SARS-CoV-2 Seroprevalence and Profiles Among Convalescents in Sichuan Province, China

Objectives: To explore the SARS-CoV-2 seroprevalence of convalescents, the association between antibody levels and demographic factors, and the seroepidemiology of COVID-19 convalescents up to March 2021.

Methods: We recruited 517 voluntary COVID-19 convalescents in Sichuan Province and collected 1,707 serum samples up to March 2021. We then report the seroprevalence and analyze the associated factors.

Results: Recent travel history was associated with IgM levels. Convalescents with recent travel history were less likely to be IgM antibody negative [OR = 0.232, 95% CI: (0.128, 0.420)]. Asymptomatic cases had approximately twice the odds of being IgM antibody negative compared with symptomatic cases [OR = 2.583, 95% CI: (1.554, 4.293)]. Participants without symptoms were less likely to be IgG seronegative than those with symptoms [OR = 0.511, 95% CI: (0.293, 0.891)]. Convalescents aged 40–59 were less likely to be IgG seronegative than those aged below 20 [OR = 0.364, 95% CI: (0.138, 0.959)]. Positive IgM antibodies persisted for up to 365 days, while IgG persisted for more than 399 days.

Conclusions: Our findings suggest that recent travel history may be associated with IgM antibody levels, while age may be associated with IgG antibody levels. Infection type may be associated with both IgM and IgG antibody levels, which declined more quickly in asymptomatic cases.

INTRODUCTION

The global pandemic of coronavirus disease 2019 (COVID-19), an emerging infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has posed a serious threat to public health (1). The epidemiological and serological characteristics of patients with COVID-19 have been reported extensively (2), while few studies have paid attention to convalescents. The antibody response is crucial in eliminating viral infection (3), and specific serum antibodies against SARS-CoV-2, including immunoglobulin M (IgM) and immunoglobulin G (IgG), can provide immune protection. Understanding the seroprevalence dynamics of SARS-CoV-2 helps in assessing the immunologic status of convalescents and predicting their potential immune protection (4). In SARS-CoV-2 infection, IgM responds first to the virus, while IgG production lags behind IgM but confers more durable immunity (5), similar to the response in other coronavirus infections such as severe acute respiratory syndrome coronavirus (SARS) and Middle East respiratory syndrome coronavirus (MERS) (6). Previous studies have reported that specific IgM antibodies last only about 13 weeks in the body (7), while IgG antibodies are longer lasting, with an average of 2 years (8). The decline of IgM, as an indicator of virus clearance, suggests that antibody-positive convalescents have robust immunity against re-infection, while the reduction of IgG raises serious concerns about the robustness and persistence of immunity after recovery (9). Studying the seroprevalence of these antibodies is essential for developing vaccine and immunity strategies. Previous studies (10–13) have explored the seroprevalence of these antibodies from diverse perspectives, such as the accuracy of serological tests, immunological memory, and molecular findings. However, the associated factors and the duration of positive antibodies still need to be updated.
As the number of patients recovering from SARS-CoV-2 infection continues to rise, the duration of individual serological responses has attracted public attention (14). Most previous studies focused mainly on the acute response within several weeks after clinical onset. As the number of convalescents re-infected with SARS-CoV-2 began to rise, as evidenced by initially symptomatic cases of re-infection reported in several countries (15), clarifying the duration of the antibody response after infection is of paramount importance. Additionally, it is vital to understand whether demographic factors (such as age, gender, recent travel history, and infection type) are associated with serological responses during SARS-CoV-2 infection. Studying these associated factors contributes to our understanding of the body's response to SARS-CoV-2 at different stages. However, as most studies have focused on molecular and cellular findings (16, 17), few have taken a public health perspective covering profiles, associated factors, and so on. Therefore, we focused on the dynamics of the seroprevalence of specific antibodies and the factors associated with antibody results. By collecting and analyzing serological data on 517 SARS-CoV-2 convalescents in Sichuan Province, this study assessed whether seroprevalence was associated with demographic factors such as gender, age, infection type (symptomatic or asymptomatic), and recent travel history (with or without recent travel history). We also describe the dynamic serum changes and their durability in SARS-CoV-2 convalescents. Our study aimed to profile the demographic features of convalescents and explore the relationship between their characteristics and humoral responses, giving insight into the humoral immune responses among convalescents. Additionally, we observed differences between symptomatic and asymptomatic cases, which could offer clues for the prevention and treatment of COVID-19.

Study Design

This study is a retrospective cohort study including 517 convalescents in Sichuan Province as of March 12, 2021. All subjects joined the serological research voluntarily and gave informed consent. We collected 1,707 serum samples and the demographic characteristics of the 517 convalescents. The variables were gender, age group (<20 years, 20-39 years, 40-59 years, and ≥60 years), recent travel history (with or without recent travel history), infection type (symptomatic or asymptomatic), and antibody result (positive or negative). The antibody results were the outcomes of interest. Consenting individuals who had been diagnosed with COVID-19 and were not vaccinated were asked to undergo serological testing. We excluded individuals who were unable to travel to the designated locations for the blood draw, those with severe complications, and those on immunosuppressants. Written informed consent was provided by all study participants or their parents, and parental permission was obtained before collecting serum samples. The interval between two serum collections was at least 30 days, and each batch of serum samples was tested simultaneously by the same laboratory staff. All 1,707 serum samples were tested by the Institute of Microbiology and Analysis. The SARS-CoV-2 IgM and IgG antibodies were detected using a 2019-nCoV IgG/IgM detection kit (Maccura Biotechnology Co., Ltd, Sichuan, China).
IgM and IgG antibody responses against the RBD protein were observed, and such antibodies can neutralize the virus.

Detection of IgG and IgM

Non-anticoagulated venous blood specimens were collected from all subjects: 3 mL for children (aged below 5 years) and 5 mL for all others. Serum samples were collected, loaded into sealed bags following Class A transport packaging, refrigerated, and transported to the local CDC laboratory for serum separation. The isolated serum was stored in 1.5 mL cryovials at −20°C. A Maccura 1000 fully automated chemiluminescence immunoassay analyzer (base fluid lot number: 0520153; reagent lot numbers: 0520031 and 0520032; reaction cup lot number: 0720582) was used to test the serum on the principle of direct chemiluminescence immunoassay.

Ethical Approval

All participants gave informed consent before participation, and this study was conducted under Good Clinical Practice (GCP). This study was performed in compliance with all relevant ethical regulations. The protocol for human subject studies was approved by the Sichuan Center for Disease Control and Prevention (SCCDCIRB-2020-007).

Statistical Analysis

Descriptive statistics were used to summarize the demographic characteristics of the cohort and the main outcome variables. Median and interquartile range (IQR) were used to describe age, and frequencies and composition ratios were used for categorical variables. The Chi-squared test or Fisher's exact test was applied to compare categorical variables. Multivariable logistic regression was used to calculate odds ratios (ORs) and 95% confidence intervals (CIs). The Kaplan-Meier method was applied to seroprevalence changes over time, and the log-rank test was used to compare the positive rates of the specific antibodies IgM and IgG over time. All analyses were performed in Stata 16.0, and a p-value <0.05 was considered statistically significant.

Specific Antibody IgM and IgG Levels

The levels of the different antibodies in the 517 patients infected with SARS-CoV-2 are described in Tables 1, 2. IgM and IgG seroprevalence varied among convalescents. The majority of patients were IgG positive (417 cases) and IgM negative (392 cases). The proportion of IgM positivity in infected females (13.64%) was significantly lower than in infected males (28.65%), but the proportion of IgG positivity was similar between genders. Among cases with recent travel history, the proportions were 31.29% for IgM positivity and 81.60% for IgG positivity. Further, a discrepancy in seroprevalence was observed among age subgroups (Table 2), with the peak among convalescents aged between 20 and 60 years.

Multivariable Logistic Regression Analysis for IgM

In Table 3, the IgM antibody result (positive or negative) was taken as the dependent variable, with gender, recent travel history, and infection type as independent variables. Female, cases without recent travel history, and symptomatic cases were taken as the respective reference groups. Age groups were coded as dummy variables, with the group younger than 20 years as the reference. Previous studies have found that gender and age are related to antibody outcome levels (18). In our analysis, convalescents with recent travel history were less likely to be IgM seronegative [OR = 0.232, 95% CI: (0.128, 0.420)], and asymptomatic cases had approximately twice the odds of being IgM seronegative compared with symptomatic cases [OR = 2.583, 95% CI: (1.554, 4.293)].
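For readers who wish to reproduce this kind of analysis outside Stata, the following is a minimal sketch in Python/statsmodels of the multivariable model just described. It is not the authors' code: the variable names and the synthetic data are hypothetical, and the odds ratios and confidence intervals are obtained by exponentiating the fitted coefficients and their interval bounds.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical synthetic data standing in for the 517-convalescent cohort;
# the field names are illustrative, not the authors' actual dataset fields.
rng = np.random.default_rng(0)
n = 517
df = pd.DataFrame({
    "igm_negative": rng.integers(0, 2, n),   # outcome: 1 = IgM seronegative
    "male": rng.integers(0, 2, n),           # reference: female
    "travel": rng.integers(0, 2, n),         # reference: no recent travel
    "asymptomatic": rng.integers(0, 2, n),   # reference: symptomatic
    "age_group": rng.choice(["<20", "20-39", "40-59", ">=60"], n),
})

# Multivariable logistic regression; age dummies with "<20" as reference.
fit = smf.logit(
    "igm_negative ~ male + travel + asymptomatic"
    " + C(age_group, Treatment('<20'))",
    data=df,
).fit(disp=0)

# Odds ratios with 95% confidence intervals, in the style of Table 3.
ci = fit.conf_int()
print(pd.DataFrame({
    "OR": np.exp(fit.params),
    "95% CI low": np.exp(ci[0]),
    "95% CI high": np.exp(ci[1]),
}).round(3))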
Multivariable Logistic Regression Analysis for IgG

The IgG antibody result (positive or negative) was taken as the dependent variable, with gender, recent travel history, and infection type as independent variables (Table 3). Female, cases without recent travel history, and symptomatic cases were taken as the reference groups. Age groups were coded as dummy variables, with the group younger than 20 years as the reference. Multivariable logistic regression showed that gender and recent travel history were not correlated with IgG positivity. Participants without symptoms were less likely to be IgG seronegative than those with symptoms [OR = 0.511, 95% CI: (0.293, 0.891), P = 0.018]. Convalescents aged 40-59 years were less likely to be IgG seronegative than those aged below 20 years [OR = 0.364, 95% CI: (0.138, 0.959), P = 0.041].

The Duration of IgM and IgG

Analysis of the 1,707 serological samples showed that the overall positive rate of IgG was higher than that of IgM throughout the study period, and that the persistence of the IgG immune response was longer than that of IgM, consistent with current research (17). The longest observed duration of positive IgM antibodies was 365 days, while IgG persisted beyond 399 days, suggesting that there may be a long-term immune response after infection with SARS-CoV-2 (19). To understand the dynamics of the antibody response, we treated the occurrence of a negative antibody result as the failure event and plotted survival curves for the different antibodies in the 517 convalescents. As shown in Figure 1A, the positive rate of IgM declined over time after natural infection with SARS-CoV-2, and IgG prevalence decreased gradually after 150 days of infection (Figure 1B). We also observed a statistically significant difference between the asymptomatic and symptomatic groups in the duration of positive IgM antibodies (P < 0.05), as well as in the duration of positive IgG antibodies (P < 0.05). The overall positive rates and duration in symptomatic cases were higher and longer than in asymptomatic cases, as asymptomatic cases were more likely to become seronegative. However, the time to disappearance of the two specific antibodies still needs further observation.

DISCUSSION

The human immune response is usually measured in the blood, and IgG and IgM antibodies are regarded as markers of immune memory (20). Our study analyzed the serological outcomes of 517 COVID-19 convalescents and the factors associated with the antibody response to SARS-CoV-2, contributing practical information to the seroepidemiology of COVID-19. Our study suggested that the factors associated with being IgM antibody negative were recent travel history and infection type. In particular, cases with recent travel history were less likely to be IgM antibody negative [OR = 0.232, 95% CI: (0.128, 0.420)], probably because the virus has mutated to become more virulent and transmissible (21, 22). Asymptomatic cases had approximately twice the odds of being IgM antibody negative compared with symptomatic cases [OR = 2.583, 95% CI: (1.554, 4.293)], indicating that asymptomatic convalescents require more attention, such as regular antibody monitoring. In contrast, we found that participants without symptoms were less likely to be IgG seronegative than those with symptoms [OR = 0.511, 95% CI: (0.293, 0.891)], probably owing to the impact of mutations in SARS-CoV-2 on viral infectivity and antigenicity (22, 23).
As age has been shown to be related to antibody results (24), our study found that convalescents aged 40-59 years were less likely to be IgG seronegative than those aged below 20 years [OR = 0.364, 95% CI: (0.138, 0.959)]. Attention should be paid to younger cases, because this subpopulation lacks sufficient protective antibodies to eradicate the virus. Meanwhile, we should monitor convalescents without these protective antibodies, considering that they have a higher risk of re-infection (25). We observed that positive IgM antibodies persisted for up to 365 days, while IgG persisted for more than 399 days, which is of great significance for prevention and control. Over 90% of infected patients tested seropositive and remained so 120 days after diagnosis, suggesting a capacity to neutralize the virus (26). The duration of circulating IgG antibodies is still unclear and might depend on several factors, including the infection type and the extent of the immune response elicited upon encountering the virus (27).

As for differentiated characteristics, our findings did not fully cohere with previous studies. Though other studies indicated a sex discrepancy in seroprevalence (28), we found no solid association between gender and SARS-CoV-2 immune response outcomes; differences in innate immunity, steroid hormones, and factors related to sex chromosomes have been proposed to explain such discrepancies (29). Meanwhile, we observed that specific antibodies lasted more than 12 months, whereas a previous study reported 8 months (30), suggesting that long-term immunity exists in convalescents after natural infection. However, the duration still demands further surveillance.

We found that higher seroprevalence was present in patients aged between 20 and 60 years, deviating from previous studies. For example, a previous meta-analysis revealed that the pooled SARS-CoV-2 seroprevalence in large populations was 2.28% (1.01-3.56%), 3.22% (1.90-4.55%), 2.98% (1.59-4.36%), and 2.57% (1.39-3.76%) in people aged ≤19, 20-49, 50-64, and ≥65 years, respectively (10). Other studies showed that antibodies were often present in younger people (18-30 years old) (31), and that individuals younger than 50 years had a seroprevalence rate significantly higher than people older than 50 (32). There are several plausible explanations for such differences. First, different studies define their populations differently: some performed serological tests in the natural population (e.g., communities) (33-35), while our study was conducted in recovered patients. Second, we conducted multiple serum tests on our subjects, and a case was defined as positive if any of the tests turned positive, while other large population studies conducted serum tests only once for screening (36-38), causing differences in the age distribution. Third, the diversity of the epidemic in different regions led to different infection conditions. Finally, due to public health interventions in China, the elderly and children were mostly isolated at home, so there were only a few cases among them. Furthermore, our study showed that the seroprevalence of convalescents aged 20-60 was higher, owing to their high mobility and their large proportion among patients. As the natural immune response to SARS-CoV-2, its associated factors, and the duration of protective immunity are defined more clearly, patient-centered practical guidelines will likely emerge.
These tests may be useful for guarding public health, renewing risk management, and providing academic perspectives, but additional data are required to fully support this response (39). As SARS-CoV-2 vaccines take their place in prevention, comparison of vaccine-induced immune responses with those stimulated by natural viral infection will help clarify the immunological correlates of protection (16). Experience from SARS-CoV is expected to provide implications for the treatment, management, and surveillance of SARS-CoV-2 patients (40).

This research has several limitations. First, the research was expected to be based on continuous detection at different time points; however, owing to the lack of continuous observation of patient data, we could not report individual antibody responses continuously. Additionally, the stabilization of individuals after an initial drop in antibody levels, and the inactivation time of specific antibodies generated by natural infection with SARS-CoV-2, still require further tracking and testing. To this end, we expect this work to contribute to further long-term and continuous detection, to investigate factors strongly related to serological levels and to observe antibody dynamics over time, which may provide deeper insight into the immune response of SARS-CoV-2 convalescents and advance the development of vaccines and therapeutics. Furthermore, we expect that serological study of SARS-CoV-2 convalescents during the recovery period will improve our understanding of the immunological response to SARS-CoV-2 infection, provide an auxiliary scientific basis for the clinical development and evaluation of SARS-CoV-2 vaccines, and facilitate the continuous development of new vaccines and clinical therapeutics (41).

DATA AVAILABILITY STATEMENT

The datasets for this article are not publicly available because they will be used for further research. Requests to access the datasets should be directed to Xianping Wu, scjkwxp@163.com. The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.

ETHICS STATEMENT

This study was performed in compliance with all relevant ethical regulations, and the protocol for human subject studies was approved by the Sichuan Center for Disease Control and Prevention (SCCDCIRB-2020-007). Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

AUTHOR CONTRIBUTIONS

CL and LZ were major contributors to the formation of this manuscript: they consulted the literature, analyzed the data, and wrote the programs. HYu, XC, and CX also contributed to writing part of the manuscript. HYa, MP, and JX contributed to the testing of serum samples and provided suggestions for this manuscript. XS, YZ, and JT input and cleaned the specimen data. XD, HP, XC, TH, HL, DX, HW, WL, PZ, ZZ, and JL collected serum samples and delivered them to the laboratory. XW designed the serum antibody monitoring project and provided constructive suggestions. TZ contributed significantly to data analysis and manuscript preparation. All authors contributed to the article and approved the submitted version.

ACKNOWLEDGMENTS

A high tribute should be paid to all the colleagues participating in the Sichuan Field Epidemiology Training Program and the Standardized Training of Public Health Physicians in Sichuan Province for their contributions to data collection and manuscript review.
Computing isolated orbifolds in weighted flag varieties

Given a weighted flag variety $w\Sigma(\mu,u)$ corresponding to chosen fixed parameters $\mu$ and $u$, we present an algorithm to compute lists of all possible projectively Gorenstein $n$-folds, having canonical weight $k$ and isolated orbifold points, appearing as weighted complete intersections in $w\Sigma(\mu,u)$ or some projective cone(s) over $w\Sigma(\mu,u)$. We apply our algorithm to compute lists of interesting classes of polarized 3-folds with isolated orbifold points in the codimension 8 weighted $G_2$ variety. We also show the existence of some families of log-terminal $\mathbb Q$-Fano 3-folds in codimension 8 by explicitly constructing them as quasilinear sections of a weighted $G_2$-variety.

Introduction

This article has two main parts. In the first part, for a weighted flag variety $w\Sigma(\mu, u)$ corresponding to fixed chosen parameters $\mu$ and $u$, we give an algorithm to compute all possible families of isolated orbifolds of fixed dimension $n$ and canonical weight $k$ that are complete intersections in $w\Sigma(\mu, u)$ or some projective cone(s) over $w\Sigma(\mu, u)$. The dimension $n$ of the isolated orbifolds is required to be at least 2, due to Theorem 2.7, and $k$ is an integer such that the canonical divisor class $K_X \sim kD$ for an ample $\mathbb{Q}$-Cartier divisor $D$. In the second part, we use the algorithm to compute numerical candidate lists of different types of families of isolated 3-dimensional orbifolds whose general member is a quasilinear section of a certain weighted flag variety $w\Sigma(\mu, u)$, also called a format. In particular, we obtain lists of canonical 3-folds, Calabi-Yau 3-folds and log-terminal $\mathbb{Q}$-Fano 3-folds. The lists are obtained by implementing the algorithm in the computer algebra system Magma [BCP97].

We are interested in computing lists of well-formed and quasi-smooth projectively Gorenstein polarized $n$-folds $(X, D)$, where $X$ is polarized by an ample $\mathbb{Q}$-Cartier divisor $D$, appearing as quasilinear sections of some fixed weighted flag variety $w\Sigma(\mu, u)$. The algorithm to compute such lists of candidate families of $n$-folds is primarily based on the Hilbert series formula of [BRZ13, Thm 1.3]. The formula gives the decomposition of the Hilbert series $P_X(t) = P_I(t) + P_Q(t)$: $P_I(t)$ represents the smooth part and $P_Q(t)$ the orbifold part of the $n$-fold $(X, D)$. The term $P_Q(t)$ has the form $\sum_{Q_i \in \mathcal{B}} P_{Q_i}(t)$, where $\mathcal{B}$ is a collection of singularities (possibly with repeats) known as the basket, coming from the embedding of $X$ in some weighted projective space.

The algorithm starts by computing the Hilbert series and the canonical divisor class $\mathcal{O}_{w\Sigma}(-p)$ of some weighted flag variety $w\Sigma(\mu, u)$ of dimension $d$, where $p$ is a positive integer. Flag varieties are known to be Fano varieties, whence $-p$ is the canonical weight of $w\Sigma$. Then we find all possible $n$-fold quasilinear sections $X$ of $w\Sigma$, or of projective cone(s) over $w\Sigma$, such that $\mathcal{O}_X(K_X) = \mathcal{O}_X(kD)$. For a chosen $n$-fold model $X$, we find its Hilbert series $P_X(t)$ and the corresponding initial term $P_I(t)$. Since we are interested in quasi-smooth orbifolds, the singularities of $X$ come from the singularities of the weighted projective space containing $X$. We compute all the possible baskets $\mathcal{B}$ of isolated singularities and their contributions $P_{Q_i}(t)$ to the Hilbert series $P_X(t)$. In the last step we run through each basket to determine the multiplicities of the orbifold terms $P_{Q_i}(t)$: the values of $m_i$ in the equation

$$P_X(t) = P_I(t) + \sum_{Q_i \in \mathcal{B}} m_i P_{Q_i}(t). \qquad (1.1)$$
Then $X$ is a suitable candidate for an isolated polarized $n$-fold if $m_i \geq 0$ for all $i$. In fact, the algorithm can be used to find lists of isolated $n$-folds which may be realized as weighted complete intersections in any ambient weighted projective variety having a computable canonical divisor class and Hilbert series.

We implement the algorithm in the computer algebra system Magma to compute lists of candidate families of isolated 3-folds in two cases: the codimension 8 weighted $G_2$ variety and the codimension 3 weighted Gr(2, 5). We compute the numerical candidate families of log-terminal $\mathbb{Q}$-Fano 3-folds, Calabi-Yau 3-folds and canonical 3-folds whose general member is a weighted complete intersection in the corresponding ambient variety. The lists are computed by ordering the input parameters $(\mu, u)$ in order of increasing sum $W$ of the weights of the embedding containing $w\Sigma$. The lists do not present the full classification of such 3-folds in a chosen weighted flag variety $w\Sigma$, but are complete up to a certain value of $W$ in each case. The list of 33 candidate log-terminal $\mathbb{Q}$-Fano 3-folds has been explicitly checked by using their defining equations; 6 of them exist as actual polarized 3-folds with the desired properties, given in Theorem 4.1. The list also confirms the non-existence of terminal $\mathbb{Q}$-Fano 3-folds in the weighted $G_2$ variety. The lists of families of Calabi-Yau 3-folds and canonical 3-folds should only be considered as numerical candidate 3-folds in the $G_2$ case. Their existence can be checked by using their defining equations under their Plücker-style embeddings, and checking the type of their singularities using those equations. The case $w$ Gr(2, 5) has been included to check the algorithm against already existing lists of polarized 3-folds.

The main motivation for constructing polarized varieties as complete intersections in weighted flag varieties $w\Sigma$ comes from Mukai's linear section theorem [Muk88, Muk89]: every prime Gorenstein Fano 3-fold of genus $7 \leq g \leq 10$ is a linear section of some flag variety. The idea was first generalized to the weighted case by Corti-Reid in [CR02] to construct some 3-folds and surfaces with quotient singularities in codimension 3 and 5. They recovered the list of 69 terminal $\mathbb{Q}$-Fano 3-folds of Altınok as quasilinear sections of $w$ Gr(2, 5) or of a projective cone over it. The terminal $\mathbb{Q}$-Fano 3-folds form a bounded family of varieties and, more importantly, lie in the Mori category of varieties: they are minimal models in dimension 3. Their existence in codimension 1-4 has been established in the past; for instance see [ABR02, BKR12]. Our algorithmic approach allows one to search for terminal $\mathbb{Q}$-Fano 3-folds in higher codimensions. Previous attempts [QS11, Qur15] at constructing them as quasilinear sections of some weighted flag varieties were not successful, but using our algorithmic approach one can at least confirm the non-existence of these varieties in the corresponding weighted flag varieties. Since every quotient singularity is log-terminal [Kaw84], the $\mathbb{Q}$-Fano 3-folds computed in Section 4 are log-terminal. The class of log-terminal $\mathbb{Q}$-Fano 3-folds also forms a bounded family of algebraic varieties [Bor96]. We construct some families of isolated log-terminal $\mathbb{Q}$-Fano 3-folds as weighted complete intersections in the codimension 8 weighted $G_2$ variety.

Acknowledgements

I am grateful to Gavin Brown for some stimulating discussions which made me think about this project.
I am also thankful to Sohail Iqbal, Balázs Szendrői and Shengtian Zhou for fruitful discussions, and to Miles Reid for making available his Magma function Qorb. Thanks are also due to Klaus Altmann and the Berlin Mathematical School for their hospitality during a part of this project at the Freie Universität Berlin. Last but not least, I wish to thank an anonymous referee whose comments on an earlier version helped me a great deal to improve the presentation of the article. This research is supported by the LUMS faculty start-up research grant.

Definitions and tools

2.1 Baskets and polarized orbifolds

Let the multiplicative group $\mu_r$ of $r$-th roots of unity act on $\mathbb{A}^n$ by the diagonal representation

$$\varepsilon \cdot (x_1, \dots, x_n) = (\varepsilon^{a_1} x_1, \dots, \varepsilon^{a_n} x_n), \quad \varepsilon \in \mu_r.$$

Then the quotient $\pi: \mathbb{A}^n \to \mathbb{A}^n/\mu_r$ is called a cyclic quotient singularity of type $\frac{1}{r}(a_1, \dots, a_n)$. The cyclic quotient singularity is called isolated if $\gcd(r, a_i) = 1$ for all $1 \leq i \leq n$. We call a collection (with possible repeats) of singularities $\frac{1}{r_j}(a_{1j}, \dots, a_{nj})$ the basket of singularities, denoted by $\mathcal{B}$.

A polarized $n$-fold is a pair $(X, D)$, where $X$ is an $n$-dimensional projective algebraic variety and $D$ is a $\mathbb{Q}$-ample Weil divisor on $X$. All our polarized $n$-folds are well-formed and quasi-smooth, appearing as projective subvarieties of some weighted projective space denoted by $\mathbb{P}[w_0, \dots, w_N]$, $w\mathbb{P}^N$ or $\mathbb{P}[w_i]$. We call a weighted projective variety $X \subset \mathbb{P}[w_i]$ of codimension $e$ well-formed if the singular locus of $X$ does not contain the codimension $e + 1$ singular strata of $\mathbb{P}[w_i]$. The subvariety $X \subset \mathbb{P}[w_i]$ is quasi-smooth if the affine cone $\widehat{X} = \operatorname{Spec} R(X, D) \subset \mathbb{A}^{N+1}$ of $X$ is smooth outside its vertex 0. Thus all our polarized $n$-folds $(X, D)$ are orbifolds, i.e. they only have quotient singularities induced by the singular strata of $\mathbb{P}[w_i]$. In particular, we are interested in orbifolds with only isolated cyclic quotient singularities.

A well-formed $n$-fold is called projectively Gorenstein if

1. $H^i(X, \mathcal{O}_X(m)) = 0$ for all $m$ and $0 < i < n$;
2. the orbifold canonical sheaf of $X$ is given by $\omega_X \simeq \mathcal{O}_X(k)$.

The integer $k$ is known as the canonical weight of the $n$-fold $X$. A variety $X$ is said to have terminal (respectively log-terminal) singularities if, given a resolution of singularities $f: Y \to X$ with

$$K_Y = f^*K_X + \sum_i a_i E_i,$$

where the $E_i$ are the exceptional divisors, we have $a_i > 0$ (respectively $a_i > -1$).

Graded rings and Hilbert Series

Given a polarized $n$-fold $(X, D)$, the associated finite dimensional vector spaces $H^0(X, \mathcal{O}_X(mD))$ fit together to give rise to a finitely generated graded ring

$$R(X, D) = \bigoplus_{m \geq 0} H^0(X, \mathcal{O}_X(mD)).$$

The Hilbert series of a polarized projective variety $(X, D)$, which is the Hilbert series of the graded ring $R(X, D)$, is given by

$$P_{(X,D)}(t) = \sum_{m \geq 0} h^0(X, mD)\, t^m,$$

where $h^0(X, mD) = \dim H^0(X, \mathcal{O}_X(mD))$. We usually write $P_X(t)$ for the Hilbert series for the sake of brevity. By the standard Hilbert-Serre theorem [AM69, Theorem 11.1], $P_X(t)$ has the compact form

$$P_X(t) = \frac{H(t)}{\prod_{i=0}^{N} (1 - t^{w_i})}. \qquad (2.1)$$

The Hilbert numerator $H(t)$ is a Gorenstein symmetric polynomial of degree $q$: $t^q H(1/t) = (-1)^e H(t)$, where $e$ is the codimension of $X$. The polynomial $H(t)$ has the form $1 - \sum_j t^{b_{0j}} + \sum_j t^{b_{1j}} - \cdots \pm t^q$, where the $b_{0j}$ are the degrees of the equations, the $b_{1j}$ the degrees of the first syzygies, and so on. The degree $q$ of $H(t)$ is called the adjunction number of $X$.

Weighted flag varieties

Let $G$ be a reductive Lie group, with fixed Borel subgroup and maximal torus $T \subset B \subset G$. Let $\Lambda_W = \operatorname{Hom}(T, \mathbb{C}^*)$ be the weight lattice of $G$ and let $V_\lambda$ denote the $G$-representation with highest weight $\lambda$.
Then there is an embedding $\Sigma \hookrightarrow \mathbb{P}V_\lambda$ of the flag variety $\Sigma = G/P$, where $P = P_\lambda$ is the parabolic subgroup of $G$ corresponding to the set of simple roots of $G$ orthogonal to the weight vector $\lambda$. Let $\Lambda_W^* = \operatorname{Hom}(\mathbb{C}^*, T)$ be the lattice of one-parameter subgroups of $G$. Choose $\mu \in \Lambda_W^*$ and a non-negative integer $u \in \mathbb{Z}$ such that $\langle w\lambda, \mu \rangle + u > 0$ for all elements $w$ of the Weyl group $W$ of the Lie group $G$, where $\langle\,,\,\rangle$ denotes the perfect pairing between $\Lambda_W$ and $\Lambda_W^*$. Then we define the weighted flag variety $w\Sigma \subset w\mathbb{P}V_\lambda$ following Corti-Reid [CR02]: take the affine cone $\widehat{\Sigma} \subset V_\lambda$ of the embedding $\Sigma \subset \mathbb{P}V_\lambda$ and quotient by the $\mathbb{C}^*$-action on $V_\lambda \setminus \{0\}$ defined by

$$\varepsilon \cdot v = \varepsilon^u \big(\mu(\varepsilon) \cdot v\big), \quad \varepsilon \in \mathbb{C}^*,\ v \in V_\lambda.$$

The notation $w\Sigma$ will refer to a general weighted flag variety, and $w\Sigma(\mu, u)$ to the weighted flag variety with chosen fixed parameters $\mu$ and $u$. We use the term format for each $w\Sigma(\mu, u)$, following [BKZ]. The Hilbert series of $w\Sigma$ is given by the closed formula

$$P_{w\Sigma}(t) = \frac{\displaystyle\sum_{w \in W} (-1)^w \frac{t^{\langle w\rho, \mu \rangle}}{1 - t^{\langle w\lambda, \mu \rangle + u}}}{\displaystyle\sum_{w \in W} (-1)^w t^{\langle w\rho, \mu \rangle}}. \qquad (2.2)$$

Here $\rho$ is the Weyl vector, half the sum of the positive roots of $G$, and $(-1)^w = 1$ or $-1$ depending on whether $w$ consists of an even or odd number of simple reflections in the Weyl group $W$; and $D$ is obtained as the pullback of $\mathcal{O}_{w\Sigma}(1)$ under the embedding $w\Sigma \subset w\mathbb{P}V_\lambda$.

Remark 2.5 The Hilbert series expression (2.2) always reduces to the standard expression (2.1) after performing some simplifications; see [QS11, QS12] for details.

Hilbert series of isolated orbifolds

The integral part of our algorithm is based on the following theorem of Buckley-Reid-Zhou [BRZ13]. The theorem gives the decomposition of the Hilbert series of an isolated orbifold $(X, D)$ as a sum of two expressions: the initial term $P_I(t)$, which represents the smooth part of $P_X(t)$, and the orbifold terms $P_{Q_i}(t)$.

Theorem 2.7 [BRZ13] Let $(X, D)$ be a projectively Gorenstein quasi-smooth orbifold of dimension $n \geq 2$ and canonical weight $k$, with only isolated orbifold points, given by a basket $\mathcal{B} = \{\,Q_i = \frac{1}{r_i}(a_{i_1}, \dots, a_{i_n})\,\}$ as its only singularities. Then the Hilbert series decomposes as

$$P_X(t) = P_I(t) + \sum_{Q_i \in \mathcal{B}} P_{Q_i}(t), \qquad (2.3)$$

with the initial term $P_I(t)$ and each orbifold term $P_{Q_i}(t)$ characterised as follows:

• The initial term

$$P_I(t) = \frac{A(t)}{(1-t)^{n+1}} \qquad (2.4)$$

has numerator $A(t)$, an integral Gorenstein symmetric polynomial of degree equal to the coindex $c = k + n + 1$ of $X$, so that $P_I(t)$ equals $P_X(t)$ up to degree $\lfloor c/2 \rfloor$, and $P_I = 0$ for $c < 0$.

• Each orbifold point $Q_i$ contributes

$$P_{Q_i}(t) = \frac{B_i(t)}{(1-t)^n (1 - t^{r_i})} \qquad (2.5)$$

to the Hilbert series, where the numerator $B_i(t)$ is the unique integral Laurent polynomial supported in degrees $\lfloor c/2 \rfloor + 1, \dots, \lfloor c/2 \rfloor + r_i - 1$. The numerator $B_i(t)$ is a Gorenstein symmetric polynomial of degree $k + n + r_i$.

Example 2.8 Consider the Hilbert series of the general canonical 3-fold hypersurface of degree 7, $X_7 \subset \mathbb{P}(1, 1, 1, 1, 2)$: $X_7$ contains an isolated singular point of type $\mathcal{B} = \frac{1}{2}(1, 1, 1)$. The Hilbert series of $X_7$ is given by

$$P_{X_7}(t) = \frac{1 - t^7}{(1-t)^4 (1-t^2)}.$$

The coindex of $X_7$ is $c = k + n + 1 = 5$. The Hilbert series has a decomposition into the smooth part $P_I(t)$ and the orbifold part $P_Q(t)$. The smooth part is given by

$$P_I(t) = \frac{1 + t^2 + t^3 + t^5}{(1-t)^4},$$

and the orbifold part by

$$P_Q(t) = \frac{-t^3}{(1-t)^3 (1-t^2)}.$$

Remark 2.9 The Hilbert series of an isolated orbifold $X$, appearing as a weighted complete intersection in some format $w\Sigma(\mu, u)$ or a cone over $w\Sigma(\mu, u)$, can be computed from the Hilbert series of $w\Sigma$. In fact, the Hilbert numerator $H(t)$ does not change, and the denominator corresponds to the weights of the weighted projective space containing $X$.
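The decomposition in Example 2.8 can be checked mechanically; the following short Python/sympy sketch (not from the paper; it assumes the model of $X_7$ as a general hypersurface in $\mathbb{P}(1,1,1,1,2)$ described above) expands $P_X(t) - P_I(t) - P_Q(t)$ and confirms that it vanishes.

import sympy as sp

t = sp.symbols('t')
# Hilbert series of X_7 in P(1,1,1,1,2), its smooth part, and the
# contribution of the 1/2(1,1,1) point, as in Example 2.8
PX = (1 - t**7) / ((1 - t)**4 * (1 - t**2))
PI = (1 + t**2 + t**3 + t**5) / (1 - t)**4
PQ = -t**3 / ((1 - t)**3 * (1 - t**2))
print(sp.simplify(PX - PI - PQ))   # expect 0
# the initial term agrees with P_X up to degree floor(c/2) = 2:
print(sp.series(PX, t, 0, 3))      # 1 + 4*t + 11*t**2 + O(t**3)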
Algorithm to compute isolated orbifolds

Let $w\Sigma(\mu, u) \hookrightarrow \mathbb{P}[w_i]$ be the format of dimension $d$ corresponding to the fixed parameters $\mu$ and $u$. We aim to compute all possible candidate families of $n$-folds with isolated orbifold points and fixed canonical weight $k$, whose general member is a weighted complete intersection in $w\Sigma(\mu, u)$ or in projective cone(s) over it, denoted by $C^a w\Sigma$. We present an algorithmic approach to compute lists of such $n$-folds with isolated quotient singularities as weighted complete intersections.

The algorithm

Step 1: Compute the Hilbert series and canonical class of $w\Sigma$. We start with a fixed weighted flag variety $w\Sigma(\mu, u)$ and compute its Hilbert series $P_{w\Sigma}(t)$. Each choice of input parameters $\mu$ and $u$ leads to a codimension $e$ embedding $w\Sigma \hookrightarrow \mathbb{P}^m[w_0, \dots, w_m]$, where $e = m - d$. We choose the parameters $\mu$ and $u$ such that $w_i > 0$ for all $i = 0, \dots, m$. The Hilbert series of $w\Sigma$ has the compact form

$$P_{w\Sigma}(t) = \frac{H(t)}{\prod_{i=0}^{m} (1 - t^{w_i})}. \qquad (3.1)$$

If $w\Sigma$ is well-formed, the canonical divisor class $K_{w\Sigma}$ can be computed from these data. Since flag varieties, and therefore weighted flag varieties, are Fano varieties, $w\Sigma$ is anti-canonically polarized: $\omega_{w\Sigma} = \mathcal{O}_{w\Sigma}(-p)$, where $p$ is a positive integer, a multiple of $u$ if the Lie group corresponding to $w\Sigma$ is simple.

Step 2: Find all possible embeddings of $n$-folds $X$ with $\omega_X = \mathcal{O}(k)$. Given the embedding $w\Sigma \hookrightarrow \mathbb{P}^m[w_i]$, we find all possible $n$-folds $X$ as weighted complete intersections inside $w\Sigma$ such that $\omega_X = \mathcal{O}_X(k)$. We intersect $w\Sigma$ with generic hypersurfaces $(f_j)$ of degrees equal to some of the weights $w_i$ of the weighted projective space $\mathbb{P}[w_0, \dots, w_m]$. More generally, we can take complete intersections inside the projective cones $C^a w\Sigma$: we can add some more variables of degree $w_i$ to the coordinate ring which are not involved in any defining relation of $w\Sigma$, and construct $X$ as a quasilinear section with $\omega_X = \mathcal{O}_X(k)$. These newly added variables become involved in the defining equations of $X$ when we replace some of the variables of $\mathbb{P}^m[w_i]$ by the homogeneous forms $(f_j)$. The Hilbert numerator $H(t)$ of the Hilbert series of $C^a w\Sigma$ is the same as that of $w\Sigma$, but we need to multiply the denominator by $\prod_{l=1}^{a} (1 - t^{w_l})$, where $a$ is the number of projective cones taken over $w\Sigma(\mu, u)$. Taking each projective cone of degree $w_i$ adds $-w_i$ to the canonical class $K_{w\Sigma}$. This process gives more choices for taking quasilinear sections of an appropriate degree.

Step 3: Compute the Hilbert series and the initial term of the $n$-fold $X$. We choose one of the $n$-folds $X \hookrightarrow \mathbb{P}^s[w_{i_0}, \dots, w_{i_s}]$ from the list computed at Step 2 and calculate its Hilbert series $P_X(t)$ and the corresponding initial term $P_I(t)$, as given by (2.3). In fact, the Hilbert numerator of $P_X(t)$ is the same as in (3.1), and the denominator changes to $\prod_{j=0}^{s} (1 - t^{w_{i_j}})$. The initial term $P_I(t)$, given by (2.4), can be computed from the first $\lfloor c/2 \rfloor + 1$ terms of $P_X(t)$, where $c = k + n + 1$ is the coindex of $X$.

Step 4: Compute the isolated orbifold loci of $w\mathbb{P}^s$. As we are only interested in computing quasi-smooth candidate orbifolds, the singularities of $X$ are induced by the weights of the weighted projective space $\mathbb{P}^s[w_{i_j}]$. Therefore we find all possible contributions to the basket $\mathcal{B}$ of $n$-fold isolated cyclic quotient singularities coming from the weights of $\mathbb{P}^s[w_{i_j}]$. Each quotient singularity will be of type $\frac{1}{r_i}(a_{i_1}, \dots, a_{i_n})$. Each $n$-tuple of integers $(a_{i_1}, \dots, a_{i_n})$ is a sublist of $(w_{i_0}, \dots, w_{i_s})$, and further we require $r_i > 1$, $\gcd(r_i, a_{i_j}) = 1$ for all $j$ (so that the point is isolated), and $k + \sum_j a_{i_j} \equiv 0 \pmod{r_i}$.

Step 5: Compute the contributions of the orbifold points to $P_X(t)$. For each isolated orbifold point $Q_i := \frac{1}{r_i}(a_{i_1}, \dots, a_{i_n})$ of the basket $\mathcal{B}$, we compute its contribution $P_{Q_i}(t)$ to the Hilbert series $P_X(t)$, given by equation (2.5). We compute the list of all possible baskets, coming from the weights of $w\mathbb{P}^s$, which may contribute to the Hilbert series of $X$.
The formula (2.3) can then be written as

$$P_X(t) = P_I(t) + \sum_i m_i P_{Q_i}(t). \qquad (3.2)$$

Here $m_i$ represents the multiplicity of the isolated orbifold point $Q_i$, and these multiplicities remain the only unknowns in the equation (3.2). It is clear from the equation (3.2) that we need to solve the linear algebra problem

$$P(t) = \sum_i m_i P_{Q_i}(t), \qquad (3.3)$$

where $P(t) = P_X(t) - P_I(t)$.

Step 6: Examine the candidate $n$-fold $X$. The last step is to compute the coefficients $m_i$ appearing in (3.2). If $m_i \geq 0$ and $m_i \in \mathbb{Z}$ for all $i$, then $(X, \mathcal{O}(k))$ is a suitable candidate for a projectively Gorenstein $n$-fold with isolated quotient singularities, and we restart from Step 3 by picking another model of the $n$-fold computed at Step 2. The actual basket of singularities of the candidate variety $X$ consists of those $Q_i \in \mathcal{B}$ with $m_i > 0$.

Repeat: Steps 3-6. We repeat Steps 3-6 for all possible $n$-fold embeddings with $K_X = \mathcal{O}_X(k)$ computed at Step 2.

Remark 3.2 The given algorithm essentially describes the process of finding a list of all possible families of isolated orbifolds with fixed canonical weight $k$ whose general member may be realized as a weighted complete intersection in some prescribed format $w\Sigma(\mu, u)$. But the search for candidate varieties inside the weighted flag variety $w\Sigma$ is an infinite search, as there is no bound on the values of the input parameters $\mu$ and $u$. In principle, the algorithm does not give the complete classification of a certain type of isolated orbifolds in a given weighted flag variety, but essentially computes the complete list up to a certain value of the adjunction number of the Hilbert series, which depends on the values of the input parameters $\mu$ and $u$. We search for candidate varieties until we stop getting new examples or the computer search becomes unreasonably slow due to higher degree weights.

Implementation of the algorithm

The following remarks describe the implementation of our algorithm in detail.

1. In practice, we search for candidate orbifolds in some chosen weighted flag variety $w\Sigma$ by running a code for different values of the input parameters $\mu$ and $u$. Different values of the parameters $\mu$ and $u$ may lead to the same set of weights on the embedding $w\Sigma \hookrightarrow w\mathbb{P}V_\lambda$. For instance, the two choices of parameters $\mu_1 = (1, -1)$ and $\mu_2 = (-1, 2)$ with $u = 3$ give the same embedding of the weighted $G_2$ variety in $\mathbb{P}^{13}[1, 2^4, 3^4, 4^4, 5]$; see Section 4. To avoid repetition in the computer search, we perform the search with predetermined lists of input parameters $\mu$ and $u$, so that only distinct embeddings are searched. This allows us to check for candidate orbifolds in distinct embeddings of $w\Sigma$ in some weighted projective space. One can use a simple-minded program to compute such lists of input parameters in any computer algebra system. This can essentially be stated as Step 0 of the search process.

2. The choice of weights on $w\Sigma$, and consequently on $X$, is determined by the values of the parameters $\mu$ and $u$. We arrange the inputs and run the code in order of increasing sum of the weights of $\mathbb{P}[w_0, \dots, w_m]$, i.e. the sum of the weights of the $(i + 1)$-st input is greater than or equal to the sum for the $i$-th input. This is equivalent to ordering the inputs in order of increasing adjunction number (the degree of the Hilbert numerator $H(t)$) of $w\Sigma$. If $w\Sigma$ corresponds to a simple Lie group $G$, then a bound on $u$ automatically bounds the parameter $\mu$.

3. At Step 2 of the algorithm we compute all possible $n$-folds $X$ as weighted complete intersections inside $w\Sigma$, or inside projective cone(s) over $w\Sigma$, such that $K_X = \mathcal{O}_X(k)$.
Since the adjunction number $q$ remains unchanged through the process of taking cone(s) and quasilinear sections, there are only finitely many choices of weights $w_i$ for the embedding $X \hookrightarrow \mathbb{P}^s[w_{i_j}]$. This essentially makes the process of taking projective cones a finite one, which can easily be handled by simple algorithms. The geometry behind the construction of orbifolds imposes further conditions on the choices of the degrees $w_i$ of projective cones and quasilinear sections, which makes the implementation of the algorithm strikingly faster. The degree of a projective cone is bounded by $w_{\max} - 1$: if the degree of a cone were greater than or equal to the maximum weight $w_{\max}$ of the ambient space containing $w\Sigma$, the newly introduced variable would not appear in any of the defining equations of $w\Sigma$ and its weighted complete intersections $X$. Thus a cone of degree greater than or equal to $w_{\max}$ would not contribute to the orbifold part of $P_X(t)$, bounding the degree of the projective cone by $w_{\max} - 1$. Further, the degree of the forms intersecting with $w\Sigma(\mu, u) \subset \mathbb{P}^m[w_i]$ must be equal to one of the ambient weights of the original space containing $w\Sigma(\mu, u)$; otherwise the process of taking projective cones becomes redundant in the framework of our construction. The number of projective cones must always be less than or equal to $s$, the dimension of the ambient weighted projective space containing $X$.

4. Let

$$\mathcal{B} = \left\{\, m_i \times \frac{1}{r_i}(a_{i_1}, a_{i_2}, \dots, a_{i_n}) \,\right\}$$

be the basket of isolated orbifold points of $X \hookrightarrow w\mathbb{P}^s$, where $m_i$ represents the multiplicity of the singular point of type $\frac{1}{r_i}(a_{i_1}, a_{i_2}, \dots, a_{i_n})$. Then we define the integer $j$ to be the length of the basket $\mathcal{B}$. It is evident that $1 \leq j \leq b \leq s$, where $b$ is the number of non-trivial weights of $w\mathbb{P}^s$. Step 4 of the algorithm computes all possible distinct $n$-fold isolated orbifold points of the weighted projective space $\mathbb{P}[w_0, \dots, w_s]$ which may potentially lie on the candidate orbifold $X$, contributing to the orbifold part of $P_X(t)$. The total number $B$ of such orbifold points is usually larger than the actual admissible length $b$ of a basket on $X$, except when the weights of $w\mathbb{P}^s$ are relatively small. In implementing the algorithm, we find the set $\mathbb{B}$ of all admissible baskets on $X$; the lengths of the baskets range from 1 up to minimum($b$, $B$). We search for candidate orbifolds by running the code through all the elements of $\mathbb{B}$, solving equation (3.2) for each basket $\mathcal{B}$ in $\mathbb{B}$.

5. Step 6 of the algorithm checks for solutions $m_i$ of equation (3.2) for the given basket $\mathcal{B}$ of orbifold points. In certain cases there may be a kernel of the singular strata: there may exist a collection of orbifold points $\{Q_i\}$ in $\mathcal{B}$ such that $\sum_i P_{Q_i}(t) = 0$. For example, if $Q_1 = \frac{1}{5}(3, 3, 4)$ and $Q_2 = \frac{1}{5}(1, 2, 2)$ are two orbifold points with $k = 0$, then the corresponding orbifold terms $P_{Q_1}(t)$ and $P_{Q_2}(t)$ are linearly dependent: $P_{Q_1}(t) + P_{Q_2}(t) = 0$. In the implementation of the algorithm, we calculate all possible such combinations of the singular strata of the ambient weighted projective space $w\mathbb{P}^s$ containing the candidate orbifold $X$. From the routine computer search we can only conclude that the candidate orbifold $X$ either contains some combination of those points, with each point having equal multiplicity, or contains none of them. But one can answer this question precisely by explicitly computing the orbifold loci of $X$ using the equations.

6. The equation (3.3) leads to a matrix equation once it is evaluated at the appropriate $j$ integers, where $j$ is the length of the corresponding basket; a minimal sketch of this linear solve is given below.
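As a concrete illustration of Steps 5 and 6, the following Python sketch solves the linear system (3.3) by matching truncated power-series coefficients. It is only a model of the computation, not the paper's implementation (which is the Magma code of Appendix A), and the toy coefficient data at the end are hypothetical.

import numpy as np

def solve_multiplicities(p_coeffs, q_coeffs_list, tol=1e-9):
    # p_coeffs: first N coefficients of P(t) = P_X(t) - P_I(t).
    # q_coeffs_list: list of first-N coefficient vectors of each P_{Q_i}(t).
    # Returns the multiplicities m_i if they are non-negative integers,
    # otherwise None (the basket is rejected).
    A = np.array(q_coeffs_list, dtype=float).T   # N x j coefficient matrix
    b = np.array(p_coeffs, dtype=float)
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    # For a genuine isolated orbifold the system is consistent, so the
    # residual must vanish and each m_i must be a non-negative integer.
    if not np.allclose(A @ m, b, atol=tol):
        return None
    m_int = np.rint(m).astype(int)
    if np.any(m_int < 0) or not np.allclose(m, m_int, atol=tol):
        return None
    return m_int

# Hypothetical toy data: P(t) = 2*PQ1(t) + 1*PQ2(t), truncated to 6 terms.
PQ1 = [0, 0, 0, -1, -3, -7]
PQ2 = [0, 0, 0,  0, -1, -3]
P   = [2*a + b for a, b in zip(PQ1, PQ2)]
print(solve_multiplicities(P, [PQ1, PQ2]))      # -> [2 1]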
The solution of the corresponding matrix equation represents the multiplicities of the orbifold points of $X$. For a given isolated orbifold $(X, D)$, the polynomial equation of type (3.2), obtained after appropriately clearing the denominators appearing in $P_I(t)$ and the $P_{Q_i}(t)$, holds for every value of $t$. Therefore the corresponding matrix equation (3.3) will always have a unique solution. We are not given an isolated orbifold; rather, we are searching for one by using the numerics of the Hilbert series and of the ambient weighted flag variety. But if the numerics of the singularities and the Hilbert series correspond to some isolated orbifold in the given format, then the linear system will have a unique solution: these are exactly the cases of interest to us. Thus the list we obtain will simply be an over-list of the actual orbifolds in the given $w\Sigma(\mu, u)$.

Applications in the 3-fold case

In this section, we apply the algorithm, using the Magma code given in Appendix A, to compute lists of candidate 3-folds with $k$ equal to $-1$, 0, and 1 in two types of weighted flag varieties, having embeddings in codimension 3 and 8. More precisely, we compute lists of Fano 3-folds with isolated log-terminal quotient singularities, Calabi-Yau 3-folds with isolated canonical quotient singularities, and canonical 3-folds with terminal quotient singularities. We explicitly construct 5 new families of log-terminal $\mathbb{Q}$-Fano 3-folds as weighted complete intersections of the codimension 8 weighted $G_2$ variety.

Weighted $G_2$ variety: codimension 8

We briefly review the construction of the codimension eight weighted flag variety, a weighted homogeneous variety for the simple Lie group $G_2$. A more detailed treatment can be found in [QS11].

Figure 1: Root system of $G_2$, showing the simple roots $\alpha_1$, $\alpha_2$ and the fundamental weights $\omega_1 = 2\alpha_1 + \alpha_2$ and $\omega_2 = 3\alpha_1 + 2\alpha_2$.

Let $\alpha_1, \alpha_2 \in \Lambda_W$ be the pair of simple roots of the root system $\nabla$ of the simple Lie group $G_2$. We take $\alpha_1$ to be the short simple root and $\alpha_2$ the long one; see Figure 1. The fundamental weights are $\omega_1 = 2\alpha_1 + \alpha_2$ and $\omega_2 = 3\alpha_1 + 2\alpha_2$. The Weyl vector, the sum of the fundamental weights, is $\rho = 5\alpha_1 + 3\alpha_2$. The cone of dominant weights is spanned by $\omega_1$ and $\omega_2$. The $G_2$-representation with highest weight $\lambda = \omega_2 = 3\alpha_1 + 2\alpha_2$ is 14-dimensional. The corresponding homogeneous variety $\Sigma \subset \mathbb{P}V_\lambda$ is five dimensional, so we have a codimension 8 embedding $\Sigma^5 \hookrightarrow \mathbb{P}^{13}$. Let $\{\beta_1, \beta_2\}$ be the basis of the lattice $\Lambda_W^*$ dual to $\{\alpha_1, \alpha_2\}$. The weighted version can be constructed by taking $\mu = a\beta_1 + b\beta_2 \in \Lambda_W^*$ and $u \in \mathbb{Z}^+$, following Section 2.3. The defining equations of the weighted $G_2$ variety $w\Sigma$ can be calculated by using the decomposition of the second symmetric power $S^2(V_\lambda^*)$ of the dual representation of $V_\lambda$ as a module over the Lie algebra $\mathfrak{g}_2$; see [QS11] for the details. The 28 quadrics cut out the defining locus of $w\Sigma$, explicitly given in [QS11, Appendix A]. The weights of the ambient weighted projective space $w\mathbb{P}^{13}$ are parameterized by an integer-valued vector $\mu = (a, b)$ and $u$, and are given by

$$[\pm a + u,\ \pm b + u,\ \pm(a + b) + u,\ \pm(2a + b) + u,\ \pm(3a + b) + u,\ \pm(3a + 2b) + u,\ u,\ u]. \qquad (4.1)$$

As we need all the weights to be positive, there are finitely many choices of the parameter $\mu$ for each positive integer $u$. The adjunction number $q$, the degree of the Hilbert numerator $H(t)$, is $11u$. The sum of the weights is $14u$; if $w\Sigma$ is well-formed, then the canonical divisor class is $K_{w\Sigma} = \mathcal{O}_{w\Sigma}(-3u)$.
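The "Step 0" enumeration of distinct formats can be set up directly from (4.1). The following Python sketch (an illustration, not the paper's Magma code) lists the distinct positive weight systems of the weighted $G_2$ variety, ordered by the sum of weights $14u$; for instance, it recovers the embedding in $\mathbb{P}^{13}[1, 2^4, 3^4, 4^4, 5]$ from either of the parameter choices $\mu = (1, -1)$ or $\mu = (-1, 2)$ with $u = 3$ mentioned in the implementation remarks above.

def g2_weights(a, b, u):
    # the 14 ambient weights of formula (4.1) for mu = (a, b) and u
    pairs = [a, b, a + b, 2*a + b, 3*a + b, 3*a + 2*b]
    return sorted([u + p for p in pairs] + [u - p for p in pairs] + [u, u])

def g2_formats(u_max):
    seen = {}
    for u in range(1, u_max + 1):
        # positivity of u-a, u+a, u-b, u+b forces |a| < u and |b| < u,
        # so scanning this box is exhaustive; min(w) > 0 checks the rest.
        for a in range(-u + 1, u):
            for b in range(-u + 1, u):
                w = g2_weights(a, b, u)
                if min(w) > 0:
                    seen.setdefault(tuple(w), ((a, b), u))
    # order by sum of weights 14u, i.e. by adjunction number 11u
    return sorted(seen.items(), key=lambda kv: sum(kv[0]))

for weights, (mu, u) in g2_formats(3)[:5]:
    print(f"mu={mu}, u={u}, weights={weights}")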
We search for examples in order of increasing sum of the weights $\sum w_i = 14u$ on $w\mathbb{P}V_\lambda$, which corresponds to increasing $u$ and consequently to increasing adjunction number $q = 11u$.

Examples and lists of isolated orbifolds in codimension 8

In this section we prove the existence of some families of isolated log-terminal $\mathbb{Q}$-Fano 3-folds whose general member can be embedded in the codimension 8 weighted $G_2$ variety. We also present the lists of examples obtained for Calabi-Yau 3-folds and canonical 3-folds in the codimension 8 weighted $G_2$ variety.

Theorem 4.1 Let $w\Sigma$ be the codimension eight weighted $G_2$-variety. Then there exist 6 families of isolated log-terminal $\mathbb{Q}$-Fano 3-folds whose general member is a weighted complete intersection in $w\Sigma$ or in some projective cone(s) over $w\Sigma$, given by Table 1.

Table 1: Log-terminal $\mathbb{Q}$-Fano 3-folds in the weighted $G_2$ variety. $(\mu, u)$: input parameters; Weights: weights of the weighted projective space containing $X$; $(-K_X)^3$: degree of $X$; Basket: orbifold points of $X$; BK: basket kernel for $X$.

We check all the singular strata of $X$ by using the equations given in [QS11, Appendix A].

1/5 singularities: Since we do not take any degree five sections of $w\Sigma$, and the variable of weight 5 does not appear in any degree 2 monomial in the defining equations of $w\Sigma$, $X$ contains this point. By using the implicit function theorem we find the local transverse parameters near this point to be of degrees 4, 4 and 3. Therefore $X$ has a singular point of type $\frac{1}{5}(3, 4, 4)$.

1/4 singularities: $X$ does not contain this singular point.

The existence of the remaining log-terminal $\mathbb{Q}$-Fano 3-folds appearing in Table 1 has been established by using explicit equations and a similar analysis of the singular strata.

Remark 4.3 If $X$ is a terminal $\mathbb{Q}$-Fano 3-fold appearing as a quasilinear section of some weighted $G_2$ variety with parameters $(\mu, u)$, then the adjunction number of $X$ will be $11u$. The list of all possible codimension 8 terminal $\mathbb{Q}$-Fano 3-folds, with their baskets of singularities, adjunction numbers, degrees, etc., is available on the graded ring database page [BK]. The highest adjunction number in the database for a codimension 8 terminal $\mathbb{Q}$-Fano 3-fold which is an integer multiple of 11 is 66, so a terminal $\mathbb{Q}$-Fano 3-fold in the weighted $G_2$ variety can exist only for $u \leq 6$. But the computer search does not produce even a candidate terminal $\mathbb{Q}$-Fano 3-fold for $1 \leq u \leq 6$, leading to the following corollary.

Corollary 4.4 There does not exist a terminal $\mathbb{Q}$-Fano 3-fold which can be realized as a weighted complete intersection in the codimension 8 weighted $G_2$ variety.

Remark 4.5 Table 1 does not represent a complete classification of isolated log-terminal $\mathbb{Q}$-Fano 3-folds in the codimension 8 weighted $G_2$ variety. Certainly, it is a sublist of the complete classification of such 3-folds in the given weighted $G_2$ variety, as the computer search has been performed completely only up to $u = 7$. One should expect more examples for higher values of $u$. Similarly, Tables 2 and 3 do not represent the full classification of canonical and Calabi-Yau 3-folds, respectively. On the other hand, the computer routine returns an over-list of numerical candidates up to certain values of the adjunction number $q$. In the case of log-terminal $\mathbb{Q}$-Fano 3-folds, we get a list of 33 suggested models of candidate orbifolds up to $q = 77$, but only 6 of them exist as actual varieties with the suggested invariants and singularities.
We include the smooth Fano 3-fold of genus 10 in the list for the sake of completeness, as it already appeared in [QS11]. The existence of these candidates is established by checking through the equations of each of the 33 numerical candidates. The checking is done partly by hand and partly by computer algebra, as the number of defining equations is substantially large. The following example represents a suggested candidate orbifold which does not exist as an actual variety with the suggested numerical data.

Computations of other known lists

There are some famous lists of 3-dimensional orbifolds, for example the 95 $\mathbb{Q}$-Fano 3-fold hypersurfaces in weighted projective spaces [IF00], or the 69 families of codimension 3 $\mathbb{Q}$-Fano 3-folds in the weighted Grassmannian $w$ Gr(2, 5) format. A summary of such lists and the corresponding references can be found in Table 1 of [BKZ], where the results were obtained using a different approach than ours. Since weighted projective spaces are a particular type of weighted flag variety, those lists of orbifolds can be recovered by making slight modifications to our computer routine, except the lists of codimension 1 Calabi-Yau 3-folds of [KS00]. Theoretically, we would eventually obtain the full lists of these 3-folds as well, but the computer search gets hopelessly slow once the weights of the ambient space get larger. In theory, the algorithm can be used to find lists of orbifolds inside any ambient weighted projective variety with a computable canonical divisor class and Hilbert series. As an illustration, we recover lists of canonical, Calabi-Yau and log-terminal $\mathbb{Q}$-Fano 3-folds (including the terminal $\mathbb{Q}$-Fano 3-folds) inside the weighted Grassmannian $w$ Gr(2, 5) format and present the results in Table 4.

Table 4 presents the summary of the results obtained in the two cases: the codimension 8 weighted $G_2$ variety and the codimension 3 weighted Grassmannian Gr(2, 5). In each case the results are searched up to the adjunction number $q_{\max}$. The number $q_{res}$ represents the adjunction number at which the last numerical 3-fold was found. The column #wΣ_dis gives the number of distinct embeddings searched in the given format, and #wΣ_res is the number of the embedding where the last result was found. The column #outputs represents the number of suggested candidate 3-folds, and #results gives the number of plausible candidate 3-folds in the given weighted flag variety. In the case of $w$ Gr(2, 5), we recover the list of 18 canonical 3-folds computed in [BKZ] for $k = 1$. The famous list of 69 families of $\mathbb{Q}$-Fano 3-folds in codimension 3 is obtained as a sublist of the 403 numerical examples of log-terminal $\mathbb{Q}$-Fano 3-folds computed for $k = -1$. In the case $k = 0$, the number of Calabi-Yau 3-folds obtained up to adjunction number 71 is an over-list of the 187 such 3-folds appearing on the graded ring database page [BK], computed using a different approach than ours. For the codimension 8 weighted $G_2$ variety the adjunction number increases quite rapidly, leading to fewer cases of distinct embeddings. For $k = 1$ we obtained a list of 14 numerical candidates with 12 plausible examples, and the case $k = 0$ gives 13 candidate families of Calabi-Yau 3-folds, with 6 of them being plausible.
Table 4: Summary of results showing the number of families of log-terminal $\mathbb{Q}$-Fano 3-folds, Calabi-Yau 3-folds, and canonical 3-folds with isolated orbifold points in the two formats: codimension 8 $G_2$ and Gr(2, 5). The column #wΣ_dis gives the number of distinct embeddings searched for examples; #wΣ_res is the last embedding where an example appeared; $q_{res}$ gives the largest adjunction number for which a result was found; $q_{\max}$ gives the largest adjunction number searched; #outputs gives the number of candidates found by the computer; #results gives the number of candidates after removing those with obvious failure. The table columns are: Format, codim, $k$, #wΣ_dis, #wΣ_res, $q_{\max}$, $q_{res}$, #outputs, #results.

For $k = -1$ we get 33 numerical candidates of isolated log-terminal $\mathbb{Q}$-Fano 3-folds, with 6 of them existing as actual varieties, explicitly constructed in Theorem 4.1.

Remark 4.9 There are some other types of weighted flag varieties constructed in [QS11, Qur15] in codimensions 6, 7, and 9. Detailed lists of isolated orbifolds in those weighted flag varieties will appear elsewhere [BKQ]. In this article, we restrict our attention to the description of the algorithm and its applications to sample cases.

A Magma code to compute n-folds

This code consists of the main Magma function "Format", which uses some auxiliary functions and some extra data to produce the required lists of examples. The following is the most general form of the implementation of our algorithm. The search process can be sped up significantly by slight modifications of the code in particular cases. For example, in the case of isolated Calabi-Yau 3-folds the index of a singularity must be odd, so a minor modification of the function "Porb_Cont" speeds up the search significantly.

For the whole calculation, we run the following basic commands to start calculations after logging into Magma.

Q:=Rationals();
R<t>:=PolynomialRing(Q);
K:=FieldOfFractions(R);
S<s>:=PowerSeriesRing(Q,50);

The function Qorb calculates the contribution $P_{Q_i}(t)$ of each isolated singular point $\frac{1}{r_i}(a_1, \dots, a_n)$ to the Hilbert series $P_X(t)$ of $X$. The inputs to this function are the index of the singularity r, the weights of the local coordinates LL = [a_1, ..., a_n], and the canonical weight kx of $X$. This function is the [BRZ13] authors' own implementation of their algorithm.

function Qorb(r,LL,kx)
    L := [ Integers() | i : i in LL ];
    // the canonical weight must satisfy kx + a_1 + ... + a_n = 0 mod r
    if (kx + &+L) mod r ne 0 then
        error "Error: Canonical weight not compatible";
    end if;
    n := #LL;
    Pi := &*[ R | 1-t^i : i in LL];
    h := Degree(GCD(1-t^r, Pi));
    l := Floor((kx+n+1)/2+h);
    de := Maximum(0,Ceiling(-l/r));
    m := l + de*r;
    A := (1-t^r) div (1-t);
    B := Pi div (1-t)^n;
    // solve the Bezout identity to obtain the numerator of the orbifold term
    H,al_throwaway,be := XGCD(A,t^m*B);
    return t^m*be/(H*(1-t)^n*(1-t^r)*t^(de*r));
end function;

The function "Init_Term" computes the initial term $P_I(t)$ of the Hilbert series $P_X(t)$ of $X$, as given by (2.4). The inputs are: hs, the Hilbert series of $X$; n, the dimension of $X$; and c, the coindex of $X$.

The function "Pos_Wt" computes all possible embeddings $X \hookrightarrow w\mathbb{P}^{s-1}$ as a quasilinear section of $w\Sigma \hookrightarrow \mathbb{P}[L]$ and of all possible projective cone(s) over it, with the desired canonical divisor class. As input we use the weights of the embedding $w\Sigma \hookrightarrow \mathbb{P}[w_i]$, given as a list of integers L, and integers s and w, where w is the required sum of the weights of $\mathbb{P}^{s-1}$. As output we get lists of integer lists of length s such that their sum is w and the corresponding ambient weighted projective space $\mathbb{P}^{s-1}[w_i]$ is well-formed.

The function "Porb_Cont" calculates all the possible singularities coming from the embedding of $X$. As input it takes the list of weights of $w\mathbb{P}^{s-1}$, the canonical weight kx, and the dimension n of $X$.
The function "Baskets" computes all possible baskets which may lie on X, induced from the weights of embedding P s−1 [w i ]. The input of this function is the output from the function "Porb Cont". we know that X ֒→ P s−1 [w i ], so the maximum length (defined in the Section 3.3) of the baskets is s. The function "Bask Kernel" computes the kernel of the basket of singularities B induced from the weights of P s−1 [w i ], i.e. given a set of isolated orbifold points Q i , it computes all possible
Challenges of Downstream Processing for the Production of Biodiesel from Microalgae

The continuous reliance on fossil fuels is unsustainable, due to the depletion of global reserves and the greenhouse gas emissions associated with their use. Therefore, there are vigorous research initiatives intended to develop renewable alternatives. Microalgae are a promising alternative for biodiesel production and have received increasing attention during the last few decades. However, microalgal biodiesel is not yet sufficiently cost-effective to compete with petroleum-based conventional fuels. This is essentially because of downstream processing: harvesting the microalgal biomass and extracting the lipids are two of the most expensive steps of the overall process. Harvesting, drying, cell disruption, oil extraction, and transesterification (into biodiesel) are the processes highlighted in this review article. The techniques associated with each process present advantages and handicaps that are discussed here. Improvements that will directly affect the final production costs of microalgal biomass-based biofuels are also proposed.

Subject Headings: Bioenergy, Biotechnology, Bioprocess

Introduction

The rapid growth of the world's population and the rise of developing countries have led to increased energy needs (Harun, Danquah, and Forde 2010). This is the cause of today's scarcity of petrochemical resources as well as of environmental pollution problems. These are two critical challenges that need to be addressed by our society. As a result, many countries set national biofuel production targets and provided incentives and support to accelerate the growth of the bioenergy industry. For example, in the United States, the Renewable Fuels Standard (RFS), part of the Energy Independence and Security Act (EISA) of 2007, set an annual production goal of approximately 136 billion litres of biofuels by 2022 (U.S. Congress 2007). To avoid adverse impacts on the supply of food for human consumption, EISA further specifies that 60 of the 136 billion litres of renewable fuels produced in 2022 should be advanced biofuels, produced for example from algal biomass. However, the current production capacity of advanced biofuels is less than 37 billion litres worldwide (Yue, You, and Snyder 2014). Microalgae are leading the development of the third generation of biofuels and have recently received much attention for the commercial production of advanced biofuels, including biodiesel. Despite the wide range of advantages associated with producing biodiesel from these microorganisms (e.g. a high carbon-capture capacity, contributing to greenhouse gas (GHG) reduction), economic feasibility of the microalgae-based biofuels industry has not yet been achieved, because there is still no highly economical process that integrates the multiple steps of the downstream process: harvesting, extraction, and conversion of biomass to biodiesel (Mata, Martins, and Caetano 2012). Downstream processing represents 60% of the total biodiesel production cost, so it is of great importance to reduce these costs. Harvesting of microalgae corresponds to 20% of the total production cost of biodiesel, varying with the type of harvesting technology used and the density of the microalgal culture (Mata, Martins, and Caetano 2010). Oil extraction from dried biomass can be accomplished using various cell-rupturing techniques, including autoclaving, ultrasound, homogenization, and bead milling. Organic solvents, acids, alkalis, or enzymes can also be used for the chemical or biological breakdown of the cell wall. After the oil is extracted, it is converted into biodiesel through a transesterification reaction in methanol with an acid or an alkaline catalyst (normally alkaline). However, this reaction produces toxic chemicals and the recovery of the product is not easy (Chisti 2007). Despite the recognized difficulties that downstream processes for biodiesel production present, there is tremendous potential to improve the economics of microalgal biofuels (Rawat et al. 2013). This review discusses the techniques used to harvest microalgae, extract the lipids, and convert them to biodiesel. It also provides perspectives on the most suitable techniques according to their techno-economic feasibility.

Downstream Processing

An integrated production of biodiesel from microalgae (Fig. 1) includes a microalgal cultivation step, followed by the separation of the cells from the growth medium and subsequent lipid extraction for biodiesel production through transesterification.

Cell Harvesting

Harvesting is an extremely challenging process and one in need of research, in order to develop an appropriate and economical system for any microalga species, since a universal harvesting method does not exist (Uduman et al. 2010). The main associated problems are (1) cell size, which is typically small (3 to 30 μm), making it difficult to separate the cells from the culture medium (Mata, Martins, and Caetano 2012); (2) the fact that the cells normally carry a negative charge, which, together with excess algogenic organic material, prevents easy settling by gravity (Brennan and Owende 2010); and (3) the low concentrations of biomass found in the production systems (Mata, Martins, and Caetano 2012). All of this makes harvesting an expensive process. The most common harvesting methods include sedimentation, centrifugation, filtration, and ultrafiltration, sometimes with an additional flocculation step or with a combination of flocculation and flotation (Mata, Martins, and Caetano 2010). Harvesting normally involves two processes: bulk harvesting, whose purpose is to separate biomass from the bulk suspension (e.g. through flocculation and flotation), and thickening, which follows the bulk harvesting and aims to concentrate the slurry (either centrifugation or filtration can be used) (Golueke and Oswald 1965). Selection of the most appropriate harvesting technique depends on the properties of the microalga at stake (density and size), as well as on the specifications of the desired product (Golueke and Oswald 1965). Flocculation is the first step in the biomass recovery process and is intended to aggregate the microalgae, to facilitate and increase the efficiency of the processes that follow (Elmaleh et al. 1991). Since microalgae are negatively charged, there are repulsive forces that do not allow the cells to aggregate. However, this charge can be neutralized or reduced by the addition of chemicals, better known as flocculants. Some researchers (Weissman and Goebel 1987) compared four primary harvesting methods for the purpose of biofuel production: microstraining, belt filtering, flotation with float collection, and sedimentation. All these methods separate based on the size and density of the microalgae. Microstrainers are available in large unit sizes and offer mechanical simplicity and low cost for large-sized microalgae. However, incomplete solids removal and difficulty in handling fluctuations of solids content (Rossignol et al. 1999), as well as the need for an earlier flocculation step, are problems associated with this approach. Filter presses can be operated under pressure or vacuum and are used to recover large quantities of biomass, but filtration can be a slow process and, because of this, sometimes unsatisfactory. Larger microalgae (such as Coelastrum proboscideum and S. platensis) can easily be recovered through filtration; however, this process is not as suitable for species with smaller dimensions (such as Scenedesmus, Dunaliella, or Chlorella) (Grima et al. 2003). Other filtration processes are also suitable for biomass recovery, such as microfiltration and ultrafiltration (especially for the recovery of fragile cells); however, these processes are more expensive because of the need for membrane replacement and pumping. The same happens when applying tangential flow filtration. According to Richmond (2004), to select the most adequate harvesting procedure it is very important to take into account the quality of the desired product. For low-value products, gravity sedimentation may be used, possibly enhanced by flocculation. For high-value products (such as for food or aquaculture applications), it is often recommended to use continuously operating centrifuges that can process large volumes of biomass. Centrifuges concentrate biomass quickly and can be easily cleaned or sterilized to effectively avoid bacterial contamination or fouling of the raw product. Against this process stands the considerable associated cost. In order to choose the proper harvesting method, it is important to know which cell disruption technique will be used (the next step of the overall process, if drying is dismissed), since the acceptable moisture level will depend on it (Richmond 2004; Grima et al. 2003; Amaro, Macedo, and Malcata 2012). Taking into consideration the cost of thermal drying compared to mechanical dewatering, a combination of methods can be used, such as a pre-concentration with a mechanical dewatering step (microstrainer, filtration, or centrifugation) and then a post-concentration (with a centrifuge or thermal drying) (Mata, Martins, and Caetano 2010). The right approach is to apply first the processes leading to larger volume reductions, followed by those that are more selective (and also more expensive). Another harvesting process is the electrolytic method, which is able to separate cells without the addition of chemicals. As a downside, the power it requires produces high temperatures that can destroy the system. Another disadvantage is cathode fouling. A potential alternative for efficient mass harvesting of microalgae is flotation, which uses an oxidant to destabilize suspended microalgal cells (Engler 1985).

Biomass Drying

For biodiesel production to be an economically feasible process, shortening the processes adjacent to it is fundamental and has therefore been the starting point for several investigations. According to some authors (Xu et al. 2011), the use of wet biomass for the subsequent processes gives rise to higher-value biofuels; on the other hand, the use of dry biomass gives a higher fossil energy ratio, which is defined as the ratio between the amount of energy that goes into the final fuel product (fuel energy output) and the amount of fossil energy input (non-renewable energy) required for fuel production (a simple calculation of this ratio is sketched after the Cell Disruption section below). Several methods can be used to dry biomass, such as spray-drying (Leach, Oliveira, and Morais 1998), convective drying (Desmorieux and Decaen 2005), drum-drying, fluidized bed drying, freeze-drying (Cordero and Voltolina 1997), refractance window dehydration technology (Nindo and Tang 2007), low-pressure shelf drying, and sun-drying (Prakash et al. 1997). These processes can increase biomass shelf life. The method most economically suitable for biodiesel production would be sun-drying, but the high content of water, the necessity for a large drying surface, and the possibility of losing some of the bioreactive products make it ineffective (Li et al. 2008). As for the other techniques, although they are more efficient, they are not economically feasible for biodiesel (Mata, Martins, and Caetano 2010). The choice of the most suitable drying method, or even the decision to bypass it, will depend on the desired final product. Considering that biodiesel is a low-value product, skipping the drying process can be a possibility, not neglecting that further studies will be needed.

Cell Disruption

Cell disruption is a very important step of the overall process because most microalgae have a strong cell wall, which hinders lipid extraction and reduces the yield obtained. To overcome this problem, several methods can be used to break the cell wall. These methods can be mechanical (e.g. cell homogenizers, bead mills, ultrasound, autoclaving, and spray drying), chemical (comprising the use of chemical treatments and osmotic shock), and biological (involving the use of enzymes to degrade polysaccharides and/or proteins) (Kim et al. 2013; Mata, Martins, and Caetano 2012). Ultrasound- and microwave-assisted extractions were reported by Cravotto et al. (2008) to greatly improve the extraction of bioactive substances from microalgae, with shorter reaction times and up to 10 times lower solvent consumption. These authors also found that ultrasound worked better than microwaves, giving higher yields of oil extraction. Additionally, they showed these methods to be less toxic, more economical, and more efficient than the co-solvent extraction approaches. Other authors (Burja et al. 2007) proved the efficiency of ultrasound combined with solvents (methanol:chloroform) for breaking the cell walls of Thraustochytrium sp. ONC-T18, concluding that, of all the techniques evaluated, this was the best to extract fatty acids. Lee et al. (2010) tested autoclaving, bead-beating, microwaves, and sonication to identify the most effective technique for lipid extraction from Botryococcus sp., Chlorella vulgaris, and Scenedesmus sp. They concluded that the most effective, simple, and easy was microwave treatment. The efficiency of fatty acid extraction from Scenedesmus obliquus was assessed by Wiltshire et al. (2000), who compared a technique that included quartz sand, solvent, and ultrasound (on freeze-dried algae) with another technique combining solvents with subsequent incubation, concluding that the first one had twice the extraction efficiency and is a method that conserves the fatty acids present in the biomass without affecting the products. Another method reported (Sommerfeld et al. 2010) as being good at improving lipid extraction efficiency is electroporation. This method alters the cell membranes and walls, reducing the duration of the extraction and the use of solvent without affecting the composition of the extracted fatty acids. The bead-beating method has also been studied and proved to be better for lipid extraction from Botryococcus braunii than sonication, homogenization, French pressing, or lyophilization (Geciova, Bury, and Jelen 2002). However, this is not an easy method to scale up. Given the above, microwave treatment has proved to be the most efficient, simple, and easy-to-scale-up cell disruption method. Nonetheless, the choice of the technique to be used must take into account the species that will be used.
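As promised in the Biomass Drying section, the following sketch computes the fossil energy ratio for made-up energy figures; only the definition (fuel energy output divided by fossil energy input) comes from the text, and all numbers are hypothetical.

def fossil_energy_ratio(fuel_energy_out_mj: float, fossil_energy_in_mj: float) -> float:
    """Fossil energy ratio (FER): energy in the final fuel divided by the
    non-renewable (fossil) energy spent producing it; FER > 1 means a net
    energy gain over the fossil inputs."""
    if fossil_energy_in_mj <= 0:
        raise ValueError("fossil energy input must be positive")
    return fuel_energy_out_mj / fossil_energy_in_mj

# Hypothetical illustration only: 38 MJ of biodiesel energy per kg of oil,
# against 24 MJ of fossil energy for a drying-based route versus 17 MJ for a
# hypothetical wet route (all numbers invented for illustration).
print(f"dry route FER: {fossil_energy_ratio(38, 24):.2f}")
print(f"wet route FER: {fossil_energy_ratio(38, 17):.2f}")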
Oil Extraction

Microalgae possess two kinds of lipids: polar lipids and non-polar lipids. Polar lipids are associated with membrane structure and flexibility (e.g. phospholipids and glycolipids). Non-polar lipids are the glycerides (mono-, di-, and triglycerides) and also some vitamins, hydrocarbons, wax esters, and sterols (Benemann and Oswald 1996). The microalgal species plays a large role in the applicability and efficiency of the extraction method. For example, microalgae that lack a cell wall are more prone to shear breakage than those that contain a rigid cell wall. Other important aspects are the cell wall composition, strength, and structure (when applicable) (Mata, Martins, and Caetano 2012). The most recognized method for total lipid quantification is the Bligh and Dyer co-solvent system (Bligh and Dyer 1959; Grima et al. 1994). This technique consists of exposing the lipid-containing tissue to a miscible co-solvent mixture of methanol, chloroform, and water in the ratios of 1:2:0.8 and 2:2:1.8 before and after dilution, respectively. The homogenate obtained is then diluted in water, creating a biphasic system where the non-lipids accumulate in the water/methanol phase and the lipids accumulate in the chloroform phase. After the chloroform phase is isolated, it is evaporated, leaving an extract of purified lipid (Mata, Martins, and Caetano 2012).

The downsides of this method are that chloroform can extract more than just lipids, compromising the true lipid content (Iverson, Lang, and Cooper 2001); the toxicity of chloroform and methanol; and the difficulty of scaling up.

There has been some research on the Bligh and Dyer method to mitigate its disadvantages. Grima et al. (1994) compared seven solvent mixtures, concluding that ethanol at 96% and hexane/ethanol at 96% resulted in the highest lipid yields. Hara and Radin (1978) proposed the use of less toxic and cheaper solvents (hexane/isopropanol).

Lewis, Nichols, and McMeekin (2000) concluded that adding the solvents chloroform, methanol, and then water to the biomass, in this order of increasing polarity, increased the total amount of lipids and fatty acids extracted by about 30%. What happens is that the initial contact with chloroform, or with a mixture of chloroform and methanol, weakens the association between lipids and proteins in the cell membrane, making it permeable to water, which separates them.

Solvent extraction techniques still have some problems that need to be overcome, such as the difficulty of scale-up and the high quantity of solvents required for extraction (Benemann and Oswald 1996). In this way, some other options should be considered. Methods that employ elevated temperatures and pressures can be used. Some examples are pressurized fluid extraction (PFE) (Denery et al. 2004) or accelerated solvent extraction (ASE) (Richter et al. 1996), supercritical fluid extraction (SFE) (Cheung 1999), and subcritical water extraction (SCWE) (Eikani, Golmohammad, and Rowshanzamir 2007). All of these share the same advantage: easier access to the extracted lipids, because the higher temperature and pressure applied increase the solvation power of the solvents as well as their capacity to solubilize the lipids. Despite this, these techniques are not suitable for large scale, since the organic solvents used are expensive and the methods require considerable energy expenditure (Cooney, Young, and Nagle 2009). Richter et al. (1996) and Mulbry et al. (2009) used ASE for lipid extraction. The former reported a very short extraction time (less than 15 min) without causing thermal degradation of temperature-sensitive compounds. The latter tested ASE with three solvents (chloroform/methanol, isopropanol/hexane, and hexane) and compared it with the Folch method, concluding that the ASE method yielded higher values of total oil content. SFE is a less effective technique for extracting polar compounds from natural matrices because of the low polarity of supercritical CO2. This implies the addition of co-solvents, which are highly polar compounds that, when added in small amounts, can produce substantial changes in the solvent properties of pure supercritical CO2 (Herrero, Cifuentes, and Ibanez 2006). At the end of the process, the product can easily be separated from the solvent just by lowering the temperature and pressure to atmospheric conditions, where the fluid returns to its original gaseous state and the extracted product remains as a liquid or solid.

Various authors have tested SFE of lipids in order to assess its suitability. Canela et al. (2002) concluded that both the temperature and the pressure affected the extraction rate, with the effect of temperature prevailing over that of pressure. The extracts were rich in essential fatty acids.

The main disadvantage of SFE of lipids from microalgae is that it requires high-pressure equipment, which is expensive and energy intensive, increasing the operating costs and limiting its applicability to biofuel production from microalgae.

Lipid extraction based on the use of water at temperatures just below the critical temperature (between 100 and 374 °C) and at a pressure high enough to maintain the liquid state is possible with SCWE. The advantages and obstacles of SCWE as an effective method for the isolation of high-quality essential oils were discussed by Luque de Castro, Jiménez-Carmona, and Fernández-Pérez (1999). In this technique, the lipids are easily recovered because, when the temperature and pressure return to atmospheric, the water is no longer miscible with the lipids. The greatest advantage associated with this method is the possibility of applying it directly to harvested microalgae without the need for a dewatering step. However, like the other techniques discussed above, SCWE has the same constraints: it is an energy-intensive process and difficult to scale up.

Oil Transesterification into Biodiesel

Biodiesel is a mixture of fatty acid alkyl esters obtained by transesterification of oils or fats, whose main components are triglycerides (90-98%), together with a smaller fraction of mono- and diglycerides, free fatty acids, residual amounts of phospholipids, phosphatides, carotenes, tocopherols, sulphur compounds, and water (Bozbas 2008).
Transesterification is a reaction of three reversible steps: triglycerides are first converted to diglycerides, then to monoglycerides, and finally to esters (biodiesel) and glycerol (byproduct), through esterification with a short-chain alcohol (usually methanol), which ensures high volatility. These reagents, alcohol and oil, are used in the presence of a catalyst (usually NaOH) in a molar ratio of 6:1 to guarantee that the reaction is driven in the direction of the methyl esters (towards biodiesel), despite the theoretical molar ratio being 3:1 (Fukuda, Kondo, and Noda 2001). The relationship between feedstock mass input and biodiesel mass output is about 1:1, which means that the quantity of oil used results in the same quantity of biodiesel produced (theoretically, 1 kg of oil produces 1 kg of biodiesel) (Mata, Martins, and Caetano 2010). A short numerical sketch of these ratios follows at the end of this section.

Other catalysts can be used (instead of NaOH), such as acids (Fukuda, Kondo, and Noda 2001) and lipase enzymes (Sharma, Chisti, and Banerjee 2001). However, alkali-catalyzed transesterification is much faster (about 4,000 times) than the acid-catalyzed reaction (Fukuda, Kondo, and Noda 2001). Although the use of lipases offers important advantages, it is not currently feasible because of the high associated cost (Fukuda, Kondo, and Noda 2001).

With respect to the alcohols, others can be used instead of methanol; however, methanol is the cheapest. A consequence of the transesterification reaction is soap formation, which causes yield loss. To prevent this, the oil and alcohol must be dry and the oil should have a minimum of free fatty acids. At the end, biodiesel is recovered by repeated washing with water to remove glycerol and methanol (Chisti 2007).

An alternative to the transesterification process is in situ transesterification, which happens directly inside the biomass and facilitates the direct conversion of fatty acids to their alkyl esters, reducing processing costs by eliminating the solvent extraction step and alleviating the need for biomass drying after harvesting. It is a very promising alternative because alcoholysis of the oil while still in the biomass matrix leads to higher biodiesel yields, and waste production is also reduced (Ehimen, Sun, and Carrington 2010).
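The 6:1 molar ratio and the approximately 1:1 oil-to-biodiesel mass relationship quoted above can be turned into a quick mass-balance sketch. The average triglyceride molecular weight used below (880 g/mol) is an illustrative assumption, not a value from the text.

MW_METHANOL = 32.04      # g/mol
MW_TRIGLYCERIDE = 880.0  # g/mol, assumed average for microalgal oil (illustrative)

def methanol_required_kg(oil_kg: float, molar_ratio: float = 6.0) -> float:
    """Methanol mass needed for transesterification at the given
    methanol:oil molar ratio (the text uses 6:1 to drive the reversible
    reaction toward methyl esters; the stoichiometric minimum is 3:1)."""
    oil_mol = oil_kg * 1000.0 / MW_TRIGLYCERIDE
    methanol_g = oil_mol * molar_ratio * MW_METHANOL
    return methanol_g / 1000.0

oil = 100.0  # kg of extracted microalgal oil
print(f"methanol at 6:1: {methanol_required_kg(oil):.1f} kg")   # about 21.8 kg
print(f"expected biodiesel (~1:1 mass basis): {oil:.0f} kg")

Note that the unreacted methanol excess (roughly half of it, at 6:1) is recovered in the washing step described above rather than consumed.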
Conclusions

Most of the companies that intend to produce biodiesel from microalgae economically in the next few years do not have sufficient technical expertise to do so, because of the current limitations, essentially related to the microalgae harvesting and oil extraction processes. Investment in technological development and technical expertise is therefore needed before algal biodiesel is economically viable and can become a reality. It is essential to develop new strategies and to incorporate the positive elements of the existing approaches. New approaches should be applicable to any microalgal species without significant differences in effectiveness and must be scalable to meet the challenge of commercial biodiesel production. One of the best solutions to achieve these goals may be the direct conversion of wet microalgal biomass to biodiesel (avoiding the drying step). The techniques used for harvesting, extraction, and conversion must be environmentally sustainable while being sufficiently inexpensive to make microalgal biofuels competitive with petroleum-based transportation fuels. This work allows us to conclude that most of the techniques available for downstream processing require high energy consumption, which is critical from an economic and environmental point of view. Furthermore, the choice of the techniques for harvesting and dewatering determines the subsequent downstream unit operations, including the methods to be used for lipid extraction and possibly the transesterification reaction.

Figure 1: Integrated process for biodiesel production from microalgae.
Atypical Memory Phenotype T Cells with Low Homeostatic Potential and Impaired TCR Signaling and Regulatory T Cell Function in Foxn1Δ/Δ Mutant Mice

Foxn1Δ/Δ mutants have a block in thymic epithelial cell differentiation at an intermediate progenitor stage, resulting in reduced thymocyte cellularity and blocks at the double-negative and double-positive stages. Whereas naive single-positive thymocytes were reduced >500-fold in the adult Foxn1Δ/Δ thymus, peripheral T cell numbers were reduced only 10-fold. The current data show that Foxn1Δ/Δ peripheral T cells had increased expression of activation markers and the ability to produce IL-2 and IFN-γ. These cells acquired this profile immediately after leaving the thymus, as early as the newborn stage, and maintained high steady-state proliferation in vivo but decreased proliferation in response to TCR stimulation in vitro. Single-positive thymocytes and naive T cells also had constitutively low αβTCR and IL7R expression. These cells also displayed a reduced ability to undergo homeostatic proliferation and increased rates of apoptosis. Although the frequency of Foxp3+CD4+CD25+ T cells was normal in Foxn1Δ/Δ mutant mice, these cells failed to have suppressor function, resulting in reduced regulatory T cell activity. Recent data from our laboratory suggest that T cells in the Foxn1Δ/Δ thymus develop from atypical progenitor cells via a noncanonical pathway. Our results suggest that the phenotype of peripheral T cells in Foxn1Δ/Δ mutant mice is the result of atypical progenitor cells developing in an abnormal thymic microenvironment with a deficient TCR and IL7 signaling system.

Thymic epithelial cells (TECs) play many roles in thymocyte development, including the presentation of self-peptide-MHC complexes to the TCR for positive selection, the participation of medullary TECs in negative selection, and the elaboration of cytokines and chemokines that affect thymocyte migration and development (1, 2). Interactions between thymic stromal cells and thymocytes, and factors secreted by both types of cells, control T cell development and maturation in the thymus, which also directly affects T cell function and the size of the naive T cell pool in the periphery (3-5). Thus, the efficient production of a large, diverse, and effective peripheral T cell pool originates with and depends on thymic epithelial cells organized into a normal thymic architecture.

Once T cells leave the thymus, the number of peripheral T cells in a normal individual is regulated by homeostatic proliferation and remains relatively constant in the absence of infection (6, 7). Many factors that result in the depletion of lymphocytes (lymphopenia), such as irradiation, chemotherapy, or infection, may initiate homeostatic proliferation (8-11). Under homeostatic proliferation, the total lymphocyte number will compensate to normal levels through an increase in phenotypic and functional memory T cells. However, the composition of the T cell pool may be anomalous, especially if thymic function is compromised, because the naive T cell pool depends on thymic output for long-term maintenance (12-14). Most homeostatic proliferation studies are based on transferring T cells into lymphopenic recipients. The process of homeostatic proliferation is similar to Ag-driven activation and proliferation in that the interaction of TCR/MHC and cytokines is also required for homeostatic expansion.
After homeostatic proliferation, naive cells acquire memory T cell markers and functional properties like those of Ag-experienced memory cells (9, 15, 16). Unlike the process of Ag-driven proliferation, homeostatic proliferation displays slower progression through the cell cycle, does not up-regulate activation markers such as CD25 and CD69, and increases CD44 expression progressively (17). Recent studies indicate that competition for limited resources such as TCR signals, IL-7, thymic stromal lymphopoietin, and CD4+CD25+ regulatory T (Treg) cells is involved in homeostatic regulation (18-22). Neonatal mice are also physiologically lymphopenic (23), but the processes driving this proliferation are distinct in some ways from lymphopenia-induced or Ag-driven expansion. For example, CD4+ neonatal expansion is IL7 independent (10). This neonatal expansion has been termed "spontaneous proliferation", distinct from slow homeostatic proliferation, and leads to the generation of a memory cell repertoire of broad specificity (15). This neonatal expansion has similar characteristics to the "fast" expansion that occurs after the transfer of cells to adult constitutively immunodeficient hosts, which is distinct from slow homeostatic proliferation, is IL7 independent, and may be driven primarily by foreign Ags. In Foxn1Δ/Δ mutants, TEC differentiation is blocked at an intermediate stage of its normal progress (25), resulting in an adult thymus consisting primarily of TECs with an immature progenitor phenotype and no mature thymic cortical or medullary regions. Overall thymic cellularity in these mutants is dramatically reduced, and few CD4+ and CD8+ single-positive (SP) thymocytes are produced. In this report, we characterized the phenotype and function of peripheral CD4+ and CD8+ T cells in Foxn1Δ/Δ mice. These cells had an activated and memory-like phenotype and function that differ from those of cells stimulated by exogenous Ag or derived from homeostatic proliferation, because of their short lifespan, persistently low cell numbers, larger cell size, and expression of CD25 and CD69. Naive T cells in Foxn1Δ/Δ mice also had constitutively low cell surface TCR and IL7R expression and were hyperresponsive in both allo- and auto-MLR assays. In addition, Foxp3+CD4+CD25+ Treg cells did not have normal suppressive function. Foxn1Δ/Δ T cells display these phenotypes as early as the newborn stage. Our data indicate that these phenotypes may arise from atypical progenitors developing in the abnormal thymic microenvironment in Foxn1Δ/Δ mice.

Materials and Methods

Mice

Foxn1Δ/Δ mutant mice generated in our laboratory were described previously (25). The mice were of a mixed C57BL/6J and 129SvJ background. Adult mice were 6-10 wk old. Timed pregnancies were generated by mating homozygous mutant mice with heterozygous mice. The day a visible vaginal plug was detected was counted as embryonic day (E) 0.5 (E0.5). Adult and embryonic mice were genotyped by PCR analysis of tail DNA.

BrdU incorporation

Each mouse was injected i.p. once with 1 mg of BrdU (Sigma-Aldrich) and then fed BrdU-containing water (0.8 mg/ml). Peripheral lymphocytes were harvested from the spleen and lymph nodes (LNs) after 7 days. For BrdU detection, CD4- and CD8-surface-stained cells were fixed and permeabilized in PBS containing 1% paraformaldehyde plus 0.01% Tween 20 for 48 h at 4°C and then submitted to the BrdU DNase detection technique. FITC-anti-BrdU Ab (clone 3D4; BD Pharmingen) was used for BrdU staining.
Intracellular IL-2, IFN-γ, and Foxp3 staining

Peripheral lymphocytes (2 × 10^6) prepared from spleen and LNs were plated in 24-well plates previously coated with 10 μg/ml anti-CD3 Ab (clone 145-2C11) and cultured with or without an anti-CD28 Ab, or with PMA (Sigma-Aldrich) at 25 ng/ml plus ionomycin at 500 ng/ml, for 5 or 24 h. As a control, cells were cultured in medium alone. To inhibit the secretion of newly synthesized IL-2 and IFN-γ, stimulations were conducted in the presence of 5 μg/ml brefeldin A (Sigma-Aldrich). After stimulation, cells were harvested and stained with anti-CD4, anti-CD8, and anti-CD44 for cell surface markers, fixed in 4% paraformaldehyde at 4°C overnight, washed, and permeabilized with 0.5% saponin (Sigma-Aldrich) in Dulbecco's PBS, 1% BSA, and 0.05% NaN3. Cells were then stained with anti-IL-2 FITC or anti-IFN-γ PE Ab (BD Pharmingen), washed twice with 0.5% saponin buffer, and kept in FACS storage solution for flow cytometry analysis. Foxp3 staining was performed according to the protocol of the Treg staining kit from eBioscience.

Apoptosis analysis

Dead cells were removed from freshly isolated cells by centrifugation in gradient Lympholyte-M solution (Cedarlane Laboratories) at 2000 rpm for 15 min. Cocultured cells underwent FITC-annexin V staining according to the protocol of a kit from BD Pharmingen.

Adoptive transfer of CFSE-labeled cells

Peripheral lymphocyte suspensions were prepared from freshly isolated spleen and all LNs (mesenteric, axillary, inguinal, superficial cervical, and mandibular) from Ly5.2 or Ly5.1 genotype donor mice. Ly5.2 T cells from Foxn1Δ/Δ and Foxn1+/Δ mice were sorted by a MoFlo FACS (DakoCytomation), and Ly5.1 CD4 T cells from Bl6 mice were purified with anti-CD4 beads (Miltenyi Biotec). Purified cells were labeled by incubation with 2 μM CFSE (Molecular Probes) for 10 min at 37°C in a solution of PBS plus 0.1% BSA, and unincorporated CFSE was then quenched by adding 10% FCS medium and washing. Depending on the experiment, sublethally irradiated Bl6 Ly5.1 and RAG−/− Ly5.1 mice, or C57BL6 and nude (Ly5.2) mice, were injected retro-orbitally with CFSE-labeled cells. Five or 8 days after the transfer, peripheral lymphocytes collected from all LNs and spleen were analyzed by flow cytometry.

Proliferation and cell culture

Peripheral CD4+ T cells were purified by incubation with anti-CD4 beads and passage through a column (Miltenyi Biotec); the purity of cells from heterozygous mice was ~98% and that from homozygous mice was ~92%, as determined by flow cytometry. Purified CD4+ cells were then plated in 96-well plates coated with 10 μg/ml anti-CD3 Ab (clone 145-2C11) (BD Pharmingen) in the presence or absence of an anti-CD28 Ab (clone 37.51) (BD Pharmingen) for 48 h. For Treg cell coculture, CD25+CD4+ and CD25−CD4+ T cells were sorted by a MoFlo FACS (DakoCytomation). T cell-depleted spleen cells prepared from Foxn1+/Δ control mice were irradiated (3000 rad) for use as accessory cells (ACs). CD25−CD4+ cells (5 × 10^4) were cultured in 96-well round-bottom plates in a total volume of 200 μl/well with 5 × 10^4 ACs and 1 μg/ml soluble anti-CD3 and 2 μg/ml anti-CD28, in the presence or absence of 2.5 × 10^4 CD25+CD4+ cells, at 37°C in 5% CO2 for 72 h. For MLR, 5 × 10^5 Foxn1+/Δ and T cell-enriched Foxn1Δ/Δ cells from LNs (both ~40-50% T cells of total peripheral mononuclear cells) were cultured in 96-well round-bottom plates with the same number of irradiated ACs from BL6 (auto-MLR) or BALB/c (allo-MLR) mice at 37°C in 5% CO2 for 72 h.
Cultures were pulsed with CellTiter 96 AQueous One Solution Reagent (Promega) at 20 μl/well for the last 1-4 h of culture. As a control, cells were cultured in medium alone. We measured and recorded the absorbance at 490 nm using a 96-well plate reader.

RT-PCR and real-time PCR

To deplete thymocytes, E14.5 and E17.5 fetal thymi were individually treated with 1.35 mM 2dGuo in high-oxygen fetal thymic organ culture for 5 days. Total RNA from treated thymi was then extracted using a Micro RNA isolation kit (Stratagene). Total RNA was extracted from freshly isolated adult thymi and mesenteric LNs using TRIzol reagent (Invitrogen Life Technologies). RT-PCR for total RNA analysis was performed using reverse transcriptase (Invitrogen Life Technologies) and random primers (P(dN)7; Roche Diagnostics). For real-time PCR, primers and probes were designed and synthesized by Applied Biosystems. The IL7 primers were 5′-CATATGAGAGTGTACTGATGATCAGCA-3′ (forward) and 5′-TTGGTTCATTATTCGGGCAATTAC-3′ (reverse), and the IL7 probe was 6-FAM-TCAGTTCCTGTCATTTTGTCCAATTCATCG-TAMRA. Primers and probes were tested for the optimal dose. The 18S rRNA VIC-TAMRA primer-probe kit from Applied Biosystems was used as an endogenous control. All data were normalized to constitutive rRNA values. An Applied Biosystems 7700 Sequence Detector was used to amplify target mRNA and quantify the difference between samples, calculated according to the manufacturer's instructions.

Results

Peripheral T cell populations are severely reduced in Foxn1Δ/Δ mutant mice

We have previously reported that Foxn1Δ/Δ mice have a 50-100-fold decrease in the number of CD4+ and CD8+ SP thymocytes (25). To assess the effect of this decreased thymic output on peripheral T cell populations, we measured the numbers and composition of peripheral lymphocytes from adult spleen and mesenteric LNs in control and Foxn1Δ/Δ mutant mice. Although the total number of peripheral lymphocytes from Foxn1Δ/Δ mutant mice was not significantly different from that of control mice, the percentages of CD4+ and CD8+ T cells were reduced ~10-fold, from 18.2% and 8.4% in control mice to 2.0% and 1.1% in Foxn1Δ/Δ mutant mice, respectively (Fig. 1, a and b). In addition to the drop in total T cell number, the mean fluorescence intensity of CD4 and CD8 expression on the surface of Foxn1Δ/Δ T cells was reduced, indicating that the expression of both molecules on the T cell surface was down-regulated. This result was consistent with our previous data showing that surface TCRβ was reduced on adult SP thymocytes (25). These data showed that the decreased thymic output was not overcome in the periphery, resulting in a reduced peripheral T cell pool. We also examined other peripheral lymphoid populations. The total number of peripheral B cells increased significantly (Fig. 1b), but NK cells were not significantly affected (data not shown). Furthermore, the number of TCRγδ T cells was not significantly different between Foxn1Δ/Δ mutants and control littermate mice (data not shown), indicating that the defect primarily affects αβTCR+ T cells.

FIGURE 1. The number of peripheral CD4+ and CD8+ T cells is significantly reduced in Foxn1Δ/Δ mutant mice. a, CD4 and CD8 expression on spleen cells. b, Absolute numbers of total lymphocytes, CD4+ and CD8+ subsets, B cells, and total T cells in the periphery of wild-type (Wt), Foxn1+/Δ, and Foxn1Δ/Δ mice.

Peripheral T cells from Foxn1Δ/Δ mutant mice display an activated/memory-like phenotype

Although the peripheral T cell population in Foxn1Δ/Δ mice was reduced, the reduction was not as severe as that in SP thymocytes (10-fold vs 100-fold), suggesting that some expansion of the peripheral cell population had occurred, although not enough to reach wild-type levels. To investigate the mechanism underlying this apparent expansion of CD4+ and CD8+ T cells in Foxn1Δ/Δ mice, we analyzed markers of homeostatic proliferation in adult peripheral T cells, including CD44, CD69, and CD25. Ninety percent of CD4+ and 80% of CD8+ Foxn1Δ/Δ T cells were CD44high, whereas 27% of CD4+ and 26% of CD8+ cells were CD44high in control mice (Fig. 2a). The expression of CD69 and CD25 on Foxn1Δ/Δ T cells was also increased significantly compared with control littermate mice (Fig. 2, a and b). In addition, T cells from Foxn1Δ/Δ mutant mice were on average larger than those in control mice, as measured by forward scatter (Fig. 2c). In vivo BrdU incorporation also confirmed that T cells in Foxn1Δ/Δ mutant mice had a higher proportion of proliferating cells (Fig. 2d). These profiles differ from those of cells derived from homeostasis-driven proliferation, which are typically CD44high but do not up-regulate CD69 and CD25. To determine whether these cells represent memory cells, we analyzed the expression of CD62L (Fig. 3a). Most of the T cells from Foxn1Δ/Δ adult mice showed a CD44highCD62Llow/− effector memory (TEM) phenotype (83.7% of CD4+ and 61.4% of CD8+ T cells, compared with only 19.8% and 15.3% in control mice). In the periphery, even at the newborn stage, the majority of both CD4+ and CD8+ peripheral T cells from Foxn1Δ/Δ mice are CD44highCD62Llow/−, and the percentage of cells with this phenotype does not significantly change with increasing age, although the cell numbers do increase (Fig. 3a). In contrast, in control mice the percentage drops over time. Thus this phenotype was acquired very rapidly after cells leave the thymus. In the thymus, the percentage of CD44highCD62Llow/− cells for both genotypes is essentially zero at the newborn stage and then tracks behind the appearance of cells in the periphery. These results indicated that the memory-like phenotype is a post-thymic event and that T cells acquire the memory phenotype very soon after they emigrate from the thymus into the periphery. Because most SP thymocytes in Foxn1Δ/Δ adult mice were also CD44highCD62Llow/− (Fig. 3b), the actual number of SP thymocytes in the adult thymus is 5-10-fold lower than what we previously reported, as many of the SP cells in the adult thymus are likely recirculated CD44highCD62Llow/− peripheral T cells. This high proportion of memory-like T cells in Foxn1Δ/Δ newborn mice results in an absolute number of effector memory type T cells (CD44highCD62L−/low) that is similar to the number in control mice (Fig. 3c), followed by a 2-fold reduction in 14-day-old mice and a 3-fold reduction in adult mice.

FIGURE 3. Most peripheral T cells from Foxn1Δ/Δ mutant mice display an effector memory phenotype. a and b, Peripheral T lymphocytes or thymocytes from day 1 (NB), day 14, and adult mice were analyzed for CD44 and CD62L. Panel a shows the results from peripheral lymphocytes and panel b shows the results from thymocytes. All dot plots show an analysis of CD44 and CD62L memory markers in gated CD4+ and CD8+ cells. c, Cell numbers of the CD44highCD62Llow/− subset and total T lymphocytes in day 1, day 14, and adult mice. Wt, wild type.
Thus, the major cell population that was lost in Foxn1Δ/Δ mutant mice was naive T cells, which were decreased 100-fold in the adult periphery.

Foxn1Δ/Δ peripheral T cells have characteristics of functional memory-like T cells

High CD44 expression is a general marker for memory T cells in mice, and most peripheral T cells in Foxn1Δ/Δ mutant mice have this phenotype. However, it is not clear whether these cells are functional memory T cells that have expanded because of Ag recognition or whether they have arisen through homeostatic expansion. One of the characteristics of the functional phenotype of memory T cells is the ability to rapidly produce IL-2 or IFN-γ following activation (26). Therefore, we monitored the production of intracellular IL-2 and IFN-γ after 5 and 24 h of stimulation in culture (Fig. 4a). The ability to produce IL-2 is shown for the CD4+ subset. About 40% of total T cells (39.9%) and of CD44high CD4+ T cells (43.5%) from Foxn1Δ/Δ mice produced IL-2 under costimulation with anti-CD3 and anti-CD28 for 5 h. In contrast, only 8.7% of total T cells and 15% of CD44high CD4+ T cells from control mice produced IL-2 at 5 h. Similar results were obtained with stimulation using PMA plus ionomycin. After activation for 24 h, the production of IL-2 was greatly reduced in CD44high CD4+ cells derived from Foxn1Δ/Δ mice; only 4.7% of total T cells and 5.2% of CD44high CD4+ cells produced IL-2, even with activation by PMA. In contrast, 25% of total T cells and 33% of CD44high CD4+ T cells from control mice produced IL-2 under stimulation with PMA for 24 h. The CD8+ subset gave similar results (data not shown). The ability to produce IFN-γ was also measured and is shown for CD8+ cells. A total of 37.5% of T cells and 47.5% of CD44high CD8+ T cells from Foxn1Δ/Δ mice produced IFN-γ under stimulation with anti-CD3 plus anti-CD28 for 5 h, and the percentages increased to 63.1% and 69.5%, respectively, after activation for 24 h. Among control cells, 5.4% of total T cells and 17.5% of CD44high CD8+ T cells produced IFN-γ at 5 h, increasing to 38.5% of total T cells and 49.1% of CD44high CD8+ T cells at 24 h (Fig. 4, a and b). The results were similar for CD4+ cells. These results indicate that both CD4+ and CD8+ T cells from Foxn1Δ/Δ mice have the ability to produce cytokines even more rapidly than cells from control mice, which is consistent with a memory phenotype. We then analyzed cytokine production by cells from young mice. Although there was a high proportion of CD44high cells in the periphery of both normal and Foxn1Δ/Δ mice, the early CD44high memory-like cells from young control mice were unable to rapidly produce cytokines in response to stimulation. The cells from Foxn1Δ/Δ mice acquired the ability to produce IFN-γ 7 days after birth and had a high proportion of cells producing IFN-γ 14 days after birth (Fig. 4b and data not shown). Low levels of IL-2 were first produced 14 days after birth (Fig. 4a). In contrast, the cells from control mice could not produce IL-2 or IFN-γ even 14 days after birth. These results paralleled the more rapid appearance of CD44high memory-like cells in Foxn1Δ/Δ neonates (Fig. 3c) and further indicated that the Foxn1Δ/Δ T cells are functionally distinct from T cells that have undergone normal neonatal expansion.
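The transfer experiments in the following section count cell divisions by CFSE dilution (labeling as described in Materials and Methods). As a minimal sketch, assuming ideal two-fold dye dilution per division, the generation number can be read from mean fluorescence intensity (MFI); the MFI values below are invented for illustration.

import math

def cfse_generation(mfi_undivided: float, mfi_observed: float) -> float:
    """Estimate the number of divisions from CFSE dilution: the dye is
    partitioned roughly in half at each division, so fluorescence falls
    two-fold per generation relative to the undivided peak."""
    if mfi_observed <= 0 or mfi_undivided <= 0:
        raise ValueError("MFI values must be positive")
    return math.log2(mfi_undivided / mfi_observed)

# Illustrative values only: a peak at 1/128 of the undivided intensity
# corresponds to roughly seven divisions, the threshold used below to call
# cells "more than seven generations".
print(f"{cfse_generation(12800, 100):.1f} divisions")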
T cells from Foxn1Δ/Δ mice had a reduced capacity for homeostasis-driven proliferation

Although peripheral T cells from Foxn1Δ/Δ mice had a higher level of proliferation in vivo, these cells failed to accumulate to normal cell numbers. This result suggested that the T cells produced in the Foxn1Δ/Δ thymus were either unable to undergo homeostatic expansion efficiently or underwent increased cell death. To assess the capacity of Foxn1Δ/Δ T cells for homeostatic proliferation, we transferred CFSE-labeled T cells from control or Foxn1Δ/Δ mice into sublethally irradiated Rag−/− and Bl6 Ly5.1 mice. Five days after transfer, 38.8% of CD4+ and 30% of CD8+ cells transferred from control mice had undergone more than seven generations in Rag−/− mice and showed multiple rounds of slow homeostatic expansion, visible on the right side of the graph (Fig. 5a). As previously reported, the rapidly expanding subset was significantly reduced after transfer into an irradiated wild-type host, whereas slow homeostatic expansion was retained for control cells (Fig. 5b). In contrast, transferred cells from Foxn1Δ/Δ mice displayed a low percentage of rapidly proliferating cells in both Rag−/− and wild-type hosts (Fig. 5, a and b). Whereas CD8+ cells were able to undergo some level of slow homeostatic expansion in both hosts (although still reduced relative to controls), homeostasis-driven proliferation was nearly undetectable in CD4+ cells. Consistent with this result, a 6-fold lower percentage of Foxn1Δ/Δ T cells was recovered after transfer into Rag−/− hosts than of T cells taken from control mice (Fig. 5c). These results indicated that, despite their memory phenotype, most T cells derived from Foxn1Δ/Δ adult mice had a severely reduced ability to undergo homeostatic proliferation in response to lymphopenia. Because the percentage of CD4+CD25+ cells was increased in Foxn1Δ/Δ mice (Fig. 2b), it is possible that these cells represented Treg cells that suppressed homeostatic proliferation of Foxn1Δ/Δ T cells, both in situ and after cotransfer into lymphopenic hosts. To test this possibility, we transferred purified CD4+ wild-type Ly5.1 cells into Foxn1Δ/Δ mice. Because peripheral T cells are reduced by ~90% in nonirradiated Foxn1Δ/Δ mutant mice, transferred T cells should proliferate if they are not suppressed by endogenous CD25+CD4+ cells. Transferred cells proliferated in Foxn1Δ/Δ mice as well as they did in nude mice (Fig. 5e), indicating that the peripheral T cells present in the Foxn1Δ/Δ mice neither suppressed nor competed efficiently with the introduced cells. Furthermore, transferred wild-type cells in the Foxn1Δ/Δ mice up-regulated the expression of CD44, but not that of CD25 or CD69 (Fig. 5e), indicating that the peripheral environment in the Foxn1Δ/Δ mice can support normal homeostatic proliferation. This result confirmed that the relative increase in CD4+CD25+ cells was not responsible for the decreased homeostatic potential of T cells from Foxn1Δ/Δ mice.

Peripheral T cells derived from Foxn1Δ/Δ mutants display a reduced response to stimulation through the TCR

Signaling through the TCR is critical for normal homeostatic expansion, as well as for T cell development in the thymus and for activation, proliferation, and survival in the periphery. CD4 and CD8 molecules provide critical costimulation in these processes. Therefore, reduced cell surface expression of TCR, CD4, and CD8 molecules might directly affect TCR signaling and T cell function.
The reduced CD4 and CD8 expression on peripheral T cells in Foxn1Δ/Δ mice suggested that the previously reported reduction in surface TCRβ on SP thymocytes persisted in the periphery. Consistent with this expectation, the expression of αβTCR was reduced to 86% and 80% on peripheral CD4+ and CD8+ cells, respectively, in Foxn1Δ/Δ mice, and the mean fluorescence intensity on the surface of Foxn1Δ/Δ T cells was reduced as well (Fig. 6a). Consistent with the reduced expression of TCR, CD4, and CD8, the CD4+ T cells derived from Foxn1Δ/Δ mice were less able to respond to stimulation than those from control mice, especially under stimulation with both anti-CD3 and anti-CD28 (Fig. 6b). Because there was no significant difference between Foxn1Δ/Δ and control mice in the expression of CD28 (data not shown), the lack of an effect upon the addition of anti-CD28 indicated that the proliferation of T cells from Foxn1Δ/Δ mice is costimulatory molecule independent. Because the data shown in Fig. 2 suggested that the majority of SP thymocytes in the adult Foxn1Δ/Δ thymus were actually recirculated peripheral T cells, it was possible that our previous analysis had failed to detect high-level TCRβ expression on a much smaller number of true SP thymocytes (25). However, after excluding the CD44high recirculated peripheral cells from our analysis, the remaining SP thymocytes were still TCRlow (data not shown). These data, combined with our previously published analysis of thymocytes, suggested that T cells in Foxn1Δ/Δ mice never have high levels of αβTCR.

Reduced Treg function in Foxn1Δ/Δ mice

Treg cells are also very important for regulating the activation and proliferation of T cells in the periphery. Because the percentage of CD4+CD25+ cells was increased in Foxn1Δ/Δ mice, we analyzed the expression of Foxp3 and the function of CD4+CD25+ Treg cells. We found that the percentage of Foxp3-expressing cells among Foxn1Δ/Δ CD4+CD25+ T cells was ~20% lower than that among Treg cells from control mice (70% vs >90%; Fig. 6c). Combined with the increased percentage of CD4+CD25+ cells, this resulted in a normal frequency of Foxp3+ putative Treg cells in Foxn1Δ/Δ mice (~10% of CD4+ cells in both controls and mutants). Surprisingly, CD4+CD25+ T cells from Foxn1Δ/Δ mice were unable to suppress the proliferation of control T cells in an in vitro activation assay (Fig. 6d). This failure to display Treg function was consistent with the normal homeostatic proliferation of wild-type T cells transferred into a Foxn1Δ/Δ host. We further tested T cell function in an MLR. T cells from Foxn1Δ/Δ mice showed both alloreactivity and autoreactivity in the MLR (Fig. 6e). These autoreactive T cells could be due to a failure of thymic selection in these mice, resulting in reduced MHC restriction. These results further reflect reduced Treg activity that fails to suppress the autoreactive T cells present in the periphery of these mice.

T cells from Foxn1Δ/Δ mice have higher rates of apoptosis

To investigate whether T cells from Foxn1Δ/Δ adult mice also have a higher propensity for apoptosis, we measured the cell surface expression of annexin V on freshly isolated peripheral lymphocytes. Whereas only 6.8% of CD4+ and 3.2% of CD8+ cells in control mice were annexin V+, 21.5% of CD4+ and 13.2% of CD8+ freshly isolated cells from Foxn1Δ/Δ mice were annexin V+ (Fig. 7a). Foxn1Δ/Δ T cells were also more likely than T cells from control mice to undergo apoptosis in response to activation of the TCR in vitro (Fig. 7b). It is worth noting that the T cells from Foxn1Δ/Δ mice also had more annexin V+ cells under medium-only culture conditions in vitro, which indicated that they have a short lifespan both in vivo and in vitro.

FIGURE 7. Peripheral T cells from Foxn1Δ/Δ mutant mice are highly sensitive to the induction of apoptosis. a, Percentage of annexin V+ cells among freshly isolated, gated CD4+ (open bars) and CD8+ (filled bars) lymphocytes. Wt, wild type. b, Percentage of annexin V+ cells among gated CD4+ cells after activation. Whole lymphocytes were cultured under stimulation as indicated to the right of the histograms. Cells were recovered and stained after activation for 5 or 24 h as indicated above the histograms.

IL-7 and IL-7Rα are reduced on Foxn1Δ/Δ thymic epithelial cells and naive T cells

Both memory and homeostatically expanding T cells require IL-7 for proliferation and survival. To test whether there was a defect in IL-7/IL-7R signaling in Foxn1Δ/Δ mice, we assessed the expression of IL-7Rα and IL-7 by FACS analysis on T cells and at the mRNA level in the thymus. Overall, the expression of IL-7Rα on adult peripheral CD4+ and CD8+ cells was reduced in Foxn1Δ/Δ mice relative to control mice (Fig. 8a). In cells from control mice, IL-7Rα levels were high regardless of the CD44 expression level (Fig. 8b). When examined relative to CD44 levels, it was apparent that in the Foxn1Δ/Δ mice CD44high cells had normal, high levels of IL7Rα, with a subpopulation even exceeding normal levels (Fig. 8b). In contrast, CD44low cells had reduced levels of IL-7Rα (Fig. 8b). This result was particularly striking for CD4+ T cells. CD4+ SP thymocytes expressing high levels of heat-stable Ag (HSAhigh) showed a similarly lower expression of IL7Rα than control cells (data not shown). In CD8+ cells, CD44low cells were divided evenly into two distinct subpopulations, one similar to controls and the other low or negative for IL7R expression. These results suggest that most T cells from Foxn1Δ/Δ mice were initially IL-7Rα low and then up-regulated IL-7Rα expression along with CD44 up-regulation in the periphery. Thus, although the average IL-7Rα expression was lower than that in controls, there was a significant population of peripheral T cells with normal levels of IL-7Rα. Because the primary defect in these mice is in TEC differentiation, it is possible that the low IL-7R expression on CD44low T cells was related to low IL-7 in the thymic microenvironment. We assayed IL-7 expression by RT-PCR in purified TECs from control and Foxn1Δ/Δ mice at fetal and postnatal stages. Compared with control mice, the expression of IL-7 mRNA in Foxn1Δ/Δ fetal TECs was moderately reduced, by ~3-fold at E14.5 (Fig. 8c) and 1.3-fold at E17.5 (data not shown). In the adult thymus, IL-7 mRNA levels were further reduced (Fig. 8c). This reduction is likely to be an underestimate, as the T cell number is reduced dramatically in the Foxn1Δ/Δ thymus, resulting in a relatively greater proportion of TECs in the mutant thymus. This lower level of IL-7 in the thymic microenvironment may contribute both to defects in the intrathymic proliferation and differentiation of thymocytes and to the reduced IL-7R levels on naive T cells. IL7 mRNA levels were also reduced in adult Foxn1Δ/Δ peripheral LNs, to ~50% of controls.
Despite this reduction in peripheral IL-7 mRNA, the mRNA levels of Jak3 and Bcl2 were not significantly different in peripheral T cells from mutant and control mice (data not shown), indicating that IL-7 signaling in the periphery may not be significantly impaired.

Discussion

The primary defect in Foxn1Δ/Δ mice is a cell-autonomous effect on TEC differentiation, resulting in a highly abnormal microenvironment with no identifiable cortical or medullary regions. Although this mutation does not directly affect T cells, the abnormal thymic microenvironment in Foxn1Δ/Δ mice not only results in a reduction of the peripheral T cell pool but also changes the peripheral T cell profile and impairs normal T cell function. Other data from our laboratory have shown that T cells made in the Foxn1Δ/Δ thymus develop from an atypical CD117-low/− progenitor population via an unusual differentiation pathway that does not appear to involve transit through the double-negative (DN)2 or DN3 stages (25, 27) (S. Xiao, D.-M. Su, and N. R. Manley, unpublished data). Thus, peripheral T cells in Foxn1Δ/Δ mice appear to arise from atypical progenitors in an abnormal environment, and peripheral T cell defects could be due to interactions with abnormal TECs and/or to differing intrinsic capabilities of the atypical progenitors. The characteristics of both the CD4+ and CD8+ peripheral T cell phenotypes are likely due to a combination of reduced thymic output, abnormal thymic selection, and the intrinsic properties of these cells. One of the most striking aspects of the phenotype is the constitutively low αβTCR levels, which are present even on SP thymocytes. This reduced expression is not likely to be a result of low MHC expression in the thymus, for two reasons. First, although MHC class II levels are significantly reduced in the Foxn1Δ/Δ thymus, class I levels are relatively normal (our unpublished data), and both CD4+ and CD8+ cells have low TCR levels. Second, even very low MHC levels should not result in this phenotype, because in Ii−/− mice the few CD4+ T cells that are made are TCR-high (28). The T cells are also not anergic: even thymocytes and newly made SP T cells are TCR-low, can rapidly make cytokines after stimulation, and have high proliferation rates. We interpret this phenotype as evidence for an intrinsic defect in the T cells themselves due to development from atypical progenitors; i.e., these cells are unable to up-regulate TCR in response to positive selection or other inducements. A second striking aspect of the phenotype is the apparent failure of Foxp3+ CD4+CD25+ T cells to have the suppressive activity normally associated with Treg cells. This is particularly surprising given that Foxp3 expression in CD4+CD25+ T cells is sufficient to confer Treg function (29-32). Although the mechanism underlying this failure in regulatory activity is unclear, the most likely conclusion is that this phenotype also represents an intrinsic inability of these T cells to develop normally. Alternatively, there could be a previously unappreciated, required contribution from the microenvironment during thymocyte development that makes CD4+ T cells competent to become Treg cells in response to the cell-autonomous expression of Foxp3, and that is absent in these mice.
The reduced ability of both CD4+ and CD8+ cells from Foxn1Δ/Δ mice to undergo slow homeostatic proliferation, even after transfer to a lymphopenic host and in the absence of Treg function, suggests that these cells are intrinsically unable to undergo this type of proliferation, which may in turn be due to their low TCR levels. Such homeostatic dysregulation has also been described in mice with TCR/MHC or IL-7/IL-7R/Jak3 signaling abnormalities, both of which are required for T cell proliferation and survival in the periphery (8, 33-36). In the thymus, TCR/MHC signals determine selection; thymocytes with high-affinity TCR/MHC interactions are deleted by negative selection, and thymocytes with relatively low-affinity TCR/MHC interactions are positively selected and maintain the peripheral T cell pool (37). In Foxn1Δ/Δ mutant mice the expression of both MHC II and TCR is comparatively low on TECs and T cells, respectively (Ref. 25, this report, and our unpublished data). Thus, thymocytes with high-affinity TCR/MHC interactions might be positively selected and might not be efficiently deleted by negative selection in the Foxn1Δ/Δ thymus because of the absence of a mature medulla (25) (negative selection for CD4 T cells is reduced in Foxn1Δ/Δ mutant mice; our unpublished data). Consequently, these CD4+ T cells with high-affinity TCR/MHC interactions would be highly sensitive to the same peptides presented by normal MHC-II molecules in the periphery. Thus, Foxn1Δ/Δ CD4+ T cells are easily triggered and activated in the periphery, resulting in an activated/memory phenotype. This failure of selection could also lead to an increased incidence of autoimmune T cells, as evidenced by the autoreactivity in the MLR assay. In contrast, these activated effector cells do not develop into true memory T cells and survive only a short time. This could explain both why peripheral T cell levels never reach wild-type levels and why few Foxn1Δ/Δ CD4+ T cells are recovered after transfer into a lymphopenic host. Consistent with these results, we found that both freshly isolated and activated Foxn1Δ/Δ T cells easily undergo apoptosis in vitro. In contrast, CD8+ peripheral T cells from Foxn1Δ/Δ mice are able to undergo at least some slow homeostatic proliferation when introduced into a lymphopenic host. A contributing factor may be that the thymic selection of CD8+ T cells, which is dependent on MHC-I, might be more normal than that of Foxn1Δ/Δ CD4+ T cells, because the expression of MHC-I is not reduced in Foxn1Δ/Δ mice (our unpublished data). The low capability for homeostatic proliferation of CD8+ cells is likely due to the low expression of TCR, which, in turn, diminishes the signaling from TCR-peptide/MHC-I complexes necessary for proliferation and survival. Thus, in the case of CD4+ cells, the primary effect is on selection and is therefore intrathymic, whereas for CD8+ cells the primary effect may be on their behavior in the periphery. The transfer experiment results are consistent with these conclusions. Further study of thymic selection in Foxn1Δ/Δ mice is ongoing in our laboratory. The low levels of IL-7R expression on both SP thymocytes and naive peripheral T cells may reflect their development in an IL-7-poor thymic environment at both the fetal and adult stages. This situation could contribute to the reduced proliferation of thymocytes in these mutants.
Unlike the constitutively low TCR expression, once these T cells encounter elevated IL-7 in the periphery, a substantial fraction of them readily up-regulate IL-7R, suggesting that the low level of receptors on naive cells may be an environmental rather than an intrinsic defect in these cells. However, the fact that a significant percentage of naive T cells does not up-regulate IL-7R in the periphery may be due in part to lower IL-7 levels in the lymph nodes. Because Foxn1 is not expressed in the LN, we cannot explain this phenotype directly, although it raises the intriguing possibility that peripheral IL-7 levels may be influenced by T cells themselves. Interestingly, γδTCR cells are present in relatively normal numbers, even though their development depends on IL-7 (38-41). We have previously reported a similar phenotype for Hoxa3+/− Pax1−/− mice (38-43). The current results further suggest either that low levels of IL-7 are sufficient for γδTCR cell development or that the high levels of IL-7 encountered in a small microenvironment in these mutants are sufficient for their development. Other aspects of the peripheral T cells in these mice are likely to be secondary to the reduced thymic output in the Foxn1Δ/Δ mutant mice, beginning at the neonatal stage. Neonates, regardless of genotype, have a functionally lymphopenic environment that allows homeostatic expansion, wherein naive cells produced from the neonatal thymus proliferate and acquire the phenotype and function of activated or memory T cells (17, 23, 44). In wild-type mice this proliferation is balanced by a continuous thymic output of new naive cells, which constitute at least a third of peripheral T cells even in newborns and, by 2 wk postnatally, represent the vast majority of peripheral T cells. In contrast, nearly all peripheral T cells (but not thymocytes) in Foxn1Δ/Δ newborns have the CD44-high CD62L-low/− phenotype and maintain this phenotype into adulthood. This is true for both CD4+ and CD8+ T cells, although the effect is more pronounced for CD4+ cells. This result suggests that the few naive cells produced by the Foxn1Δ/Δ thymus have an enhanced ability to immediately acquire a memory phenotype after thymic exit. In addition, the further production of naive cells is significantly reduced in Foxn1Δ/Δ mutant mice. Thus, the peripheral environment remains functionally lymphopenic in Foxn1Δ/Δ mutant mice. Ordinarily, this should trigger homeostatic expansion to restore the total T cell number, resulting in the acquisition of memory T cell markers and functional properties (9, 12, 16, 18, 36, 45). However, naive T cells in adult Foxn1Δ/Δ mutant mice (both the CD4+ and CD8+ subsets) may not have undergone correct positive and negative selection, owing to low TCR levels and an abnormal thymic environment. Thus, when they encounter the peripheral environment they respond by becoming similar to activated effector memory T cells arising after exposure to foreign or endogenous Ags (24, 46), rather than by homeostasis-driven proliferation. The question remains: how does the defect in TEC differentiation and the resulting abnormal thymic microenvironment produce these peripheral T cell phenotypes? Data from our laboratory indicate that the failure of TEC differentiation results in a complete block of the commonly known thymocyte development pathway, from c-kit+ progenitors via the DN3 stage to double-positive and SP cells, at least in the postnatal thymus (S. Xiao, D.-M. Su, and N. R. Manley, unpublished data).
These studies suggest that any T cells produced from the Foxn1Δ/Δ adult thymus likely arise from progenitors that are normally present in the wild-type thymus but do not normally make T cells, or at least not efficiently. It is unclear whether the production of T cells from this atypical developmental pathway is dependent on the Foxn1Δ/Δ microenvironment or can occur with low efficiency even in wild-type mice. These cells, when transferred to a wild-type thymus, are not recovered, presumably because of the huge competitive advantage of the canonical pathway (27). However, it is possible that a small number of T cells come from this pathway even in wild-type mice. Foxn1Δ/Δ mice may therefore be an experimental model in which the T cell-generating potential of these atypical progenitors is revealed. In this case, the peripheral T cells generated in adult Foxn1Δ/Δ mice may represent a cohort of T cells that are normally produced in small numbers but have properties distinct from those of normal T cells, and may therefore represent a novel class of peripheral T cells.
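As an aside on how percentages like those quoted for Fig. 7a are typically obtained from flow-cytometry event data, the following is a minimal sketch with invented event arrays and gating thresholds; it is not the authors' analysis pipeline, only an illustration of the gating arithmetic.

```python
# Minimal sketch (invented data, not the study's pipeline) of the gating
# arithmetic behind percentages such as "21.5% of CD4+ cells were annexin V+"
# (Fig. 7a). Thresholds and intensity distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

def pct_annexin_pos(cd4_intensity, annexin_intensity,
                    cd4_gate=1000.0, annexin_gate=500.0):
    """Percentage of annexin V+ events among CD4+-gated events."""
    cd4_pos = cd4_intensity > cd4_gate            # gate on CD4 expression
    annexin_pos = annexin_intensity > annexin_gate
    return 100.0 * np.mean(annexin_pos[cd4_pos])

# toy event data: 10,000 flow-cytometry events per mouse
n_events = 10_000
cd4 = rng.lognormal(mean=7.5, sigma=0.8, size=n_events)
annexin_control = rng.lognormal(mean=5.5, sigma=0.9, size=n_events)
annexin_mutant = rng.lognormal(mean=6.2, sigma=0.9, size=n_events)

print(f"control: {pct_annexin_pos(cd4, annexin_control):.1f}% annexin V+")
print(f"mutant:  {pct_annexin_pos(cd4, annexin_mutant):.1f}% annexin V+")
```

In real analyses the gates are set per instrument and per stain, often on biexponentially transformed intensities rather than the raw values used here.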
On the relevance of center vortices to QCD

Philippe de Forcrand and Massimo D'Elia

In a numerical experiment, we remove center vortices from an ensemble of lattice SU(2) gauge configurations. This removal adds short-range disorder. Nevertheless, we observe long-range order in the modified ensemble: confinement is lost and chiral symmetry is restored (together with trivial topology), proving that center vortices are responsible for both phenomena. As for the Abelian monopoles, they survive but their percolation properties are lost.

PACS numbers: 11.15.Ha, 12.38.Aw, 12.38.Gc, 11.30.Rd

The essential non-perturbative properties of QCD are confinement and chiral symmetry breaking (χSB). It has been observed through numerical lattice simulations that these two properties persist in the quenched theory up to a critical temperature T_c ~ 220 MeV, where confinement is lost; chiral symmetry appears to be simultaneously restored [1]. The disorder which leads to the area law for the Wilson loop thus seems to be somehow tied to the existence of a chiral condensate. Although effective mechanisms have been proposed to explain confinement or χSB, no successful common explanation is yet available. Two effective descriptions of QCD have been receiving a lot of attention: one considers instantons as the effective degrees of freedom (d.o.f.), the other chromomagnetic monopoles. Instantons are natural candidates to explain χSB: each instanton is associated with a zero mode of the Dirac operator [2], and there must be an accumulation of zero eigenvalues to obtain a quark condensate [3]. Above T_c, the vanishing of the condensate must correspond to a qualitative change in the instanton ensemble (see, e.g., [4]). On the other hand, it is unlikely that instantons play a significant role in confinement (see [5] for a recent discussion). An attractive mechanism for confinement is dual superconductivity of the QCD vacuum [6]. Considerable evidence for this dual Meissner effect has been accumulated on the lattice, including a disorder parameter demonstrating the condensation of chromomagnetic monopoles below T_c [7]. This condensation has been observed directly after gauge-fixing to "Maximal Abelian Gauge" [8], and the idea of "Abelian dominance" has emerged, according to which the Abelian d.o.f. of the Yang-Mills field encode all its long-distance (IR) properties. Indeed, χSB and its restoration have been observed in the Abelian sector [9]. The Abelian dominance scenario has some flaws, however: it does not explain the breaking of the adjoint string, and the Abelian string tension differs slightly from the Yang-Mills one [10]. Moreover, d.o.f. more elementary than Abelian monopoles, embedded in them and solely responsible for the physics assigned to them, cannot be ruled out. The idea of center vortices, which initially failed due to their misidentification [11], has been successfully revived, first as an embedded model inside the Abelian sector [12], then without reference to Abelian projection [13].
Center vortices are exposed by gauge-fixing: after a gauge transformation which brings each lattice link as close as possible to a center element of the gauge group, vortices consist of defects in the center-projected gauge field. The idea of center dominance is again that the center d.o.f. encode all the IR physics. The density of center vortices seems to be a well-defined continuum quantity [13,14]; the center string tension appears to more or less match the original one; and an explanation for the behaviour of the adjoint potential has been proposed [15]. However, chiral symmetry has not yet been studied in this context. An additional problem with Abelian and center dominance is that the relevant d.o.f. are identified only after gauge-fixing. Gauge-fixing non-Abelian fields is notoriously ambiguous, and different Gribov copies produce different Abelian monopoles or center vortices, whose properties, like the string tension, differ slightly. For this reason, we are a priori suspicious of effective models which involve gauge-fixing, and so we designed a simple numerical experiment to disprove the center-dominance scenario. For simplicity, we consider the gauge group SU(2), with center Z2. Our starting point is an ensemble of lattice gauge fields representative of the continuum. We identify center vortices in this ensemble, and construct from it a modified ensemble where all center vortices have been removed by flipping the sign of a subset of SU(2) gauge links. This operation introduces a lot of disorder in the gauge field. Nonetheless, these disordered gauge fields now have a trivial, vortex-free center projection, and so according to the credo of center dominance they should not confine. That is, our introduction of short-range disorder should at the same time bring long-range order. To our surprise, this is indeed what happens. One may then ask if the spectral properties of the Dirac operator are not also dominated by the center components of the gauge field. In that case, our modified ensemble should show no sign of χSB, since its center projection is the trivial (perturbative) vacuum. Indeed, this is what we observe: removal of center vortices causes both loss of confinement and restoration of chiral symmetry. The next intriguing question regards the fate of Abelian monopoles as center vortices are removed. They do not disappear; on the contrary, the introduction of short-range disorder increases their number. However, we observe the complete disappearance of monopole current loops winding around the periodic lattice: we can thus identify these as the fundamental objects associated with confinement in the Abelian sector, apparently influenced by the underlying center d.o.f. Finally, we investigated the effect of the Gribov copies which caused our initial skepticism. We repeated our experiment on the same SU(2) ensemble, but introduced a systematic tolerance in the gauge condition to be satisfied before identifying the center vortices to be removed. Although the location and number of the center vortices we removed varied appreciably, the modified SU(2) ensemble was always non-confining and chirally symmetric.

The numerical experiment - We start from an ensemble of SU(2) lattice gauge fields obtained by Monte Carlo using the standard Wilson plaquette action. To identify center vortices, we gauge-fix our configurations in order to bring each SU(2) gauge link U_µ(x) as close as possible to an element of the center Z2 = {+1, −1}.
We therefore try to iteratively maximize, as in [13],

Q({U_µ}) = Σ_{x,µ} |Tr U_µ(x)|²,   (1)

where this gauge is called the "direct maximal center gauge." The gauge-fixed SU(2) links, denoted U_µ^GF(x), are then projected to Z2 elements Z_µ(x) using

Z_µ(x) = sign Tr U_µ^GF(x).   (2)

Plaquettes in the Z2-projected theory with value −1 represent defects of the Z2 gauge field called P-vortices [12]. Numerical evidence has been presented [12,13] showing that plaquette-like P-vortices signal the presence of macroscopic, physical excitations, called center vortices, in the unprojected original SU(2) configuration. Consider then the modified SU(2) configuration made of gauge links U'_µ(x) constructed as

U'_µ(x) = Z_µ(x) U_µ(x).   (3)

The gauge transformation which maximizes Q({U_µ}) in (1) also gives the same maximum to Q({U'_µ}), so that the modified gauge-fixed links U'_µ^GF(x) are simply Z_µ(x) U_µ^GF(x). Therefore, we instantly know the center projection of the modified links: Z'_µ(x) = sign Tr U'_µ^GF(x) = Z_µ(x)² = +1. Every modified configuration U' thus projects onto the trivial Z2 vacuum: all center vortices have been removed. Our ensemble consists of about 1000 SU(2) configurations, on a 16^4 lattice at β = 2.4. To maximize Q({U_µ}) (Eq. 1) we use standard overrelaxation, stopping when the change ε in Q (Eq. 5) from one gauge-fixing sweep to the next becomes negligible. In Fig. 1 we show the distribution of SU(2) plaquette values on the original and modified ensembles. It is apparent that under the sign flip Eq. (3), many SU(2) plaquettes acquire a negative value. The modified ensemble has an increased action, i.e. more short-range disorder.

Results - In Fig. 2 we present our results for the Creutz ratios χ_{R,R} ≡ −ln[⟨W_{R,R}⟩⟨W_{R−1,R−1}⟩ / ⟨W_{R,R−1}⟩²], constructed from averages ⟨W_{R,T}⟩ of R by T Wilson loops on the original and modified ensembles. For large R, χ_{R,R} tends to the string tension σ. On the modified ensemble, the Creutz ratios clearly decrease and tend to zero. Despite the increased short-range disorder, long-range order has been created and confinement has been lost. This is even clearer if one looks directly at the Wilson loop values. In Fig. 3 we show −ln⟨W_{R,T}⟩ as a function of T. For a fixed R, points at successively larger T form a line whose asymptotic slope is the value of the static potential V(R). The lines corresponding to the modified ensemble are parallel, indicating that V(R) does not grow with R: the string tension has vanished. Fig. 4 illustrates our study of chiral symmetry on the original and modified ensembles. As is well known, χSB cannot occur on a finite lattice. Therefore, we measure ⟨ψ̄ψ⟩(m_q) = ⟨Tr(D̸ + m_q)^{-1}⟩ for a range of quark masses m_q where finite-size effects are small, and extrapolate to m_q → 0. In the original ensemble, ⟨ψ̄ψ⟩ clearly extrapolates to a non-zero value, which signals χSB. In the modified ensemble, the extrapolated value is zero within errors: center-vortex removal restores chiral symmetry. We expect then the instanton content of the Yang-Mills field to be modified also. To check this, we use improved cooling [17] to measure the topological charge of the modified field: the striking result is that the removal of center vortices always leads to the trivial topological sector. We therefore have clear evidence for "center dominance": in our modified ensemble, where the center-projected field is the trivial vacuum (all links equal to 1), the Yang-Mills field shows the IR properties of the trivial vacuum, i.e., no confinement, no χSB and no topology. The IR properties of the Yang-Mills field appear to be determined by its center projection. On the other hand, a large number of studies now support the alternative scenario of "Abelian dominance."
We use our approach of center-vortex removal to directly assess the relationship between these two scenarios. In a first experiment, we construct the Abelian projection of our original SU(2) ensemble by gauge-fixing to the Maximal Abelian Gauge in the usual way [8], then identify and remove center vortices from the Abelian sector. While the original Abelian-projected ensemble shows confinement, with a string tension similar to the non-Abelian one, the modified Abelian-projected configurations do not confine. Therefore, we find no contradiction between "Abelian dominance" and "center dominance." The latter simply appears more fundamental because of the greater reduction of the number of d.o.f. In a second experiment, we look at clusters of Abelian monopole currents, whose percolation has been identified as the signal for confinement [18], obtained from the original and modified ensembles. We find that the removal of center vortices changes the distribution of monopole cluster sizes in a crucial way (see Fig. 5): whereas in the original ensemble each configuration contains typically one very large, percolating monopole cluster and many very small ones, the modified ensemble gives a more homogeneous size distribution, with a handful of large clusters per configuration; these are the remnants of the very large one, broken into pieces by the vortex removal. Some of them still percolate, even though confinement has disappeared. Therefore, we are led to associate confinement with a more specific feature of the monopole clusters: monopole current loops which wind around the periodic lattice. Such loops can be found frequently in the original, confining ensemble, but never in the modified, non-confining one. We conclude that: (i) on a finite lattice, confinement manifests itself in the Abelian sector by the presence of monopole current loops with non-trivial topology; (ii) center-vortex removal, which destroys confinement, always finds the "weak links" of these non-trivial loops and breaks them into trivial pieces. Now let us consider the issue of gauge-fixing ambiguities, which was the reason for our initial skepticism about the center-vortex idea. These ambiguities come from the structure of Q (Eq. 1), which has many local maxima, any of which can be selected by a local iterative maximization algorithm. Each local maximum, or Gribov copy, will have its own set of P-vortices, differing in number and location. The proposal of [12] is that, no matter which Gribov copy one chooses, P-vortices are the traces of physical center vortices and are roughly located at their center. This argument may account for P-vortices differing in location but not in number. To study this question in more detail, we magnified the effect of gauge-fixing ambiguities by stopping our iterative algorithm early, as soon as ε (Eq. 5) < 10^{-2}. Thus we not only explore a different basin of attraction of Q, but we do not even stop at a local maximum. One effect of this partial gauge-fixing is expected: the density of P-vortices increases from ρ ≈ 5.5% to ≈ 7.4%, i.e. shows an increase δρ ≈ 1.9%. The string tension measured in the Z2-projected ensemble increases accordingly: whereas the Z2 string tension after "complete" gauge-fixing (σa² ~ 0.075) is a little larger than, but compatible with, the non-Abelian string tension (0.0708(11) [16]; see Fig. 2), it jumps to ≈ 0.12 after partial gauge-fixing.
What is remarkable is that this increase δσ ≈ 0.045 is similar to that obtained by placing the surplus δρ of P-vortices at random, uncorrelated locations: δσ ≈ −ln(1 − 2δρ). This makes it plausible that the center projection always captures the core d.o.f. relevant for the IR properties, plus a varying amount of unrelated noise [20]. Indeed, it has been argued that gauge-fixing is not even necessary for center projection [19]. In our description, center gauge-fixing acts as a UV noise-filtering device, with different Gribov copies letting through different noise components. Further evidence for this is obtained by removing from the original SU(2) ensemble the center vortices identified after partial gauge-fixing only. Just as for "complete" gauge-fixing, we observe that confinement is lost, chiral symmetry restored, and the topology trivial. The only difference is that the modified ensemble now has much more short-range disorder. In conclusion, we have shown that removal of center vortices from an SU(2) Yang-Mills ensemble causes the loss of confinement and the restoration of chiral symmetry. One may ask about the connection of the modified ensemble {U'} to the physics of the original SU(2) theory. Note that only plaquettes of {U'} at the locations of P-vortices differ from those of {U}: hence, as a → 0, their proportion goes to zero as a², since the density of P-vortices is physical [13,14]. Therefore, rewriting Eq. (3) as U_µ(x) ≡ Z_µ(x) × U'_µ(x), we see that the original field {U} has been factorized into a (maximally) central part {Z} and a quotient, {U'}, whose field strength differs from the original one only on defects of codimension 2. Nevertheless, this small difference alters the physics dramatically: {U'} has perturbative properties, so that all the non-perturbative, IR physics must be carried by {Z}, which by definition encodes the center vortices. It would be desirable, of course, to formulate an effective action for the center-projected theory. Ref. [21] considers an extension of the Nambu-Goto action, where the fundamental d.o.f. are the 2-dimensional random surfaces dual to the P-vortices. Ref. [22] instead proposes to consider center monopoles and their world-lines. We suggest identifying a "minimum spanning tree" of negative Z2 links responsible for the P-vortices: perhaps only a subset of them form the essential d.o.f. governing the IR properties. Finally, our vortex-removal procedure can be used to study properties of non-confining non-Abelian fields and effects of center-symmetry breaking. For instance, removing time-like center disorder only would be similar to raising the temperature above T_c.
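To make the projection and removal steps of Eqs. (2) and (3) concrete, here is a minimal sketch on a toy two-dimensional lattice. It is not the production code used above: gauge fixing to the direct maximal center gauge is assumed to have been done already, so random SU(2) links stand in for gauge-fixed ones, and the lattice size is illustrative.

```python
# Minimal sketch (toy 2D lattice, not the production code) of center
# projection, Eq. (2), and vortex removal, Eq. (3). Gauge fixing to the
# direct maximal center gauge is assumed to have been done already.
import numpy as np

rng = np.random.default_rng(0)
L = 8  # toy lattice extent; the paper uses a 16^4 lattice at beta = 2.4

def random_su2():
    """A random SU(2) matrix, drawn uniformly from S^3."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3], a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

# links[x, y, mu] is the SU(2) link leaving site (x, y) in direction mu
links = np.array([[[random_su2() for mu in range(2)]
                   for y in range(L)] for x in range(L)])

def center_project(U):
    """Eq. (2): Z = sign Tr U, the Z2 element closest to the link."""
    return np.sign(np.trace(U).real)

project = np.vectorize(center_project, signature='(n,n)->()')
Z = project(links)

# Eq. (3): flip the sign of links whose projection is -1. The modified
# ensemble then projects onto the trivial Z2 vacuum, Eq. (4).
links_mod = Z[..., None, None] * links
assert np.all(project(links_mod) == 1.0)

def p_vortex(Z, x, y):
    """A Z2 plaquette with value -1 is a P-vortex."""
    return (Z[x, y, 0] * Z[(x + 1) % L, y, 1]
            * Z[x, (y + 1) % L, 0] * Z[x, y, 1]) < 0

density = np.mean([p_vortex(Z, x, y) for x in range(L) for y in range(L)])
print(f"P-vortex density before removal: {density:.3f}")
```

Because the Z2 elements are their own inverses, the sign flip in Eq. (3) leaves the gauge-fixing functional Q unchanged, which is why the center projection of the modified links is known without re-running the gauge fixing.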
Influence of running shoes on muscle activity

Studies on the paradigm of the preferred movement path are scarce, and as a result, many aspects of the paradigm remain elusive. It remains unknown, for instance, how muscle activity adapts when differences in joint kinematics, due to altered running conditions, are of low / high magnitudes. Therefore, the purpose of this work was to investigate changes in muscle activity of the lower extremities in runners with minimal (≤ 3°) or substantial (> 3°) mean absolute differences in the ankle and knee joint angle trajectories when subjected to different running footwear. Mean absolute differences in the integral of the muscle activity were quantified for the tibialis anterior (TA), peroneus longus (PL), gastrocnemius medialis (GM), soleus (SO), vastus lateralis (VL), and biceps femoris (BF) muscles during over-ground running. In runners with minimal changes in 3D joint angle trajectories (≤ 3°), muscle activity was found to change drastically when comparing barefoot to shod running (TA: 35%; PL: 11%; GM: 17%; SO: 10%; VL: 27%; BF: 16%), and minimally when comparing shod to shod running (TA: 10%; PL: 9%; GM: 13%; SO: 8%; VL: 8%; BF: 12%). For runners who showed substantial changes in joint angle trajectories (> 3°), muscle activity changed drastically in barefoot to shod comparisons (TA: 39%; PL: 14%; GM: 16%; SO: 16%; VL: 25%; BF: 24%). It was concluded that a movement path can be maintained with small adaptations in muscle activation when running conditions are similar, while large adaptations in muscle activation are needed when running conditions are substantially different.

Introduction

In the last four decades, scientific discussions on running biomechanics and running injuries have been dominated by two paradigms: the "impact force" paradigm and the "pronation" paradigm [1]. In short, these paradigms suggest that higher magnitudes of impact forces and / or pronation that may occur during running are harmful to the human body and may lead to the development of running injuries. Consequently, advancements in running shoes, shoe inserts, and orthotics have aimed to reduce impact forces [2] and / or to re-align ankle kinematics [3]. Despite the vast financial investment in the development of these products, however, running injury rates have remained relatively unchanged [4-6]. This lack of epidemiological evidence led recent publications to question the validity of these paradigms, arguing that they were derived from an inappropriate functional understanding of running biomechanics [7]. As a result, new paradigms have been proposed, aiming to redirect future studies to the functional aspects of running by focusing on the effect of internal forces, their influence on running biomechanics, and how they can be impacted by different running shoes [1,8,9]. It is important to note that these novel paradigms do not suggest that the interpretation and analysis of traditional variables (e.g., ground reaction force, joint kinematics) is frivolous. Instead, these novel paradigms aim to provide new perspectives on running biomechanics that are based on a functional understanding of running. One of these newly proposed paradigms - the preferred movement path paradigm - suggests that runners are likely to maintain a consistent movement path (i.e., movement trajectories) when changing between reasonably similar shoes (e.g., cushioned shoe vs. motion control shoe).
It was speculated that the locomotor system aims to maintain this preferred movement path as it may be associated with reduced energy demands, lower joint and tissue loading, and / or a lower risk of injury [10]. Potential implications have been investigated by a recent study [11], which showed that the loss in cartilage volume after a prolonged run could be reduced in runners who wore footwear that facilitated the runner's natural joint motion. Consequently, footwear constructions that do not support a preferred movement path may be harmful to the locomotor system and may potentially cause an increased energy / muscle activity demand and / or an increased risk of injury. The preferred movement path of a given runner, however, is not expected to be constant. Rather, it may depend on factors such as fatigue, training status, the presence of injury, and / or substantial changes in footwear constructions. For instance, a preferred movement path may be different in a running shoe compared to a worker's boot. It was reported, for example, that more than 80% of runners exhibited changes of less than 3° in ankle and knee joint kinematics when running in two similar shoe conditions [10]. Conversely, for a more dramatically different comparison (running barefoot vs. shod), most participants (91%) changed their segment trajectories by more than 3°. It appears, therefore, that small changes in footwear constructions do allow runners to maintain a consistent movement path, while larger modifications may force adaptations in gait patterns. Many aspects of the preferred movement path remain elusive. It is unclear, for instance, how the locomotor system is able to maintain a consistent movement path despite changing footwear constructions. Furthermore, the role of footwear constructions with respect to their beneficial and / or detrimental effects on a runner's preferred movement path remains unknown. It has been proposed that muscle activation patterns play an important role in the underlying principles that govern a runner's preferred movement path [10]. One can speculate that adaptations in muscle activity would allow the locomotor system to maintain a movement path that is preferred when boundary conditions (e.g., footwear constructions, occurrence of injuries, etc.) change. Consequently, footwear constructions that reduce muscular activity without forcing a runner to change their preferred movement path may be beneficial (i.e., reduce injury risks and / or energy demands). However, when the locomotor system is forced to adopt a novel preferred movement path, such as when changing from barefoot to shod running (where kinematic changes are substantial), one would expect muscle activity to change drastically in order to accommodate this new situation. Previous studies have already highlighted some changes in muscle activation when comparing barefoot to shod running [12,13]. During barefoot running, for example, the activity of the plantarflexors (gastrocnemius medialis / lateralis, and soleus) was shown to increase before heel strike [14], and the tibialis anterior has been shown to increase during the stance phase [15]. It appears evident, therefore, that muscle activation strategies are altered when kinematic differences are substantial. These outcomes, however, have yet to be investigated through the lens of the preferred movement path paradigm.
It is currently unknown how muscle activation changes when a movement path is maintained (i.e., small kinematic differences) as opposed to when a novel movement path is adopted (i.e., large kinematic differences). As a result, the purpose of this work was to investigate changes in lower extremity muscle activation in runners with minimal or substantial (≤ 3° or > 3°) mean absolute differences in the ankle and knee joint angle trajectories when subjected to different running footwear. Specifically, mean absolute changes in the integral of muscle activation for the tibialis anterior (TA), peroneus longus (PL), gastrocnemius medialis (GM), soleus (SO), vastus lateralis (VL), and biceps femoris (BF) were quantified in six footwear comparisons.

Participants

Thirty-three heel-to-toe runners ([mean ± SD]: 17 men: age 31.6 ± 9.9 yrs, mass 77.3 ± 9.0 kg; and 16 women: age 28 ± 9.9 yrs, mass 60 ± 7.6 kg) took part in this study. The focus was placed on heel-to-toe running as it represents the dominant foot strike pattern amongst runners [16]. All participants were healthy (injury free for at least 6 months) and physically active recreational runners (at least 2 runs per week). The average running distance of each participant for any given run was not collected but was estimated to be between 5 and 10 km, according to conversations with the participants. All runners gave written informed consent prior to participation. This study was reviewed and approved by the University of Calgary's Conjoint Health Research Ethics Board under the number REB13-0275.

Protocol

Testing took place on a single day in an indoor laboratory at the Human Performance Laboratory of the University of Calgary. Participants performed ten running trials (approx. 10 steps per trial) at 3.3 m/s (± 15%) in three shoe conditions that varied in their material properties (Fig 1, Table 1) and barefoot along a 30 m runway.
These footwear models were selected to represent a wide range of available footwear solutions, namely a minimalist (Be), a conventionally cushioned (Rider), and a racing flat (Universe) shoe. An important difference between the designs of the Universe and the Be was that the Universe had a flat, thin outer sole with a middle groove on the outer sole heel, whereas the Be design incorporated a round outer sole and a gap space under the toe area. Each shoe model was available in multiple sizes, and two pairs were available in each size and condition. Therefore, each test shoe was either new or had been worn at most by two previous participants. The four running conditions were tested in a randomized order. Special care was taken to ensure that participants remained in their habitual rearfoot running style in all conditions by monitoring the runner directly and by confirming the presence of an impact peak and heel strike in the force and motion data.

Instrumentation

Three-dimensional (3D) marker trajectories of 16 retroreflective markers were collected using an eight-camera motion analysis system (Motion Analysis Corporation, Santa Rosa, CA, USA) operating at a sampling rate of 240 Hz. Following a previously reported setup [10], markers were placed on the right forefoot (3), hindfoot (3), shank (3), thigh (3), and on the right and left anterior and posterior superior iliac spines (4). An additional seven markers were placed on the first and fifth metatarsals, the medial and lateral malleoli and femoral epicondyles, and the greater trochanter of the right leg to collect data for a neutral standing trial. The data of the standing trial were used to define segment coordinate systems based on the anatomical landmarks, and the additional seven markers were removed for the subsequent running trials. A single force plate (Kistler, 9281CA) was synchronised with the motion analysis system and collected ground reaction force data at 2400 Hz. Additionally, timing lights were placed 1.9 m apart along the runway to monitor running speed. In addition to the kinematic and kinetic recordings, surface electromyography (EMG) data were collected at a sampling frequency of 2400 Hz from the muscle bellies of the tibialis anterior (TA), peroneus longus (PL), gastrocnemius medialis (GM), soleus (SO), vastus lateralis (VL), and biceps femoris (BF) of the same leg, using bipolar Ag-AgCl surface electrodes (Norotrode, Myotronics-Noromed Inc., Kent, WA, USA) with a diameter of 10 mm and an inter-electrode spacing of 22 mm. Prior to applying the electrodes, the skin surface was shaved, slightly abraded using sandpaper, and cleaned with an isopropyl wipe. All electrodes were placed parallel to the direction of the underlying muscle fibres based on the SENIAM guidelines [17]. Finally, a one-dimensional (1D) accelerometer (ADXL 78, Analog Devices, USA) with a measuring range of ± 50 g, sampling at 2400 Hz, was placed on the right heel and synchronized with the EMG recordings in order to detect heel strike (HS) events. A HS was defined as the first peak in acceleration due to ground impact.

Data analysis

Prior to any analysis, all data (kinematic and EMG) were visually inspected to ensure data integrity and remove trials that displayed artifacts.
Specifically, running trials that did not show a clear rearfoot strike pattern (determined via visual inspection of the kinematic data) and EMG signals with movement artifacts (determined by high intensities in the lower frequencies of the power spectrum) were removed from further analyses. As a result, the number of trials included in the analysis varied across participants; however, a minimum of five trials per running condition was ensured. Subsequently, kinematic and EMG data were analysed separately. The resulting kinematic marker trajectories and EMG intensity signals were then compared between all running conditions (Barefoot vs. Rider / Be / Universe, Rider vs. Be, Rider vs. Universe, Be vs. Universe). Analysis of the kinematic data was performed as described in [10]. Specifically, Cortex (Motion Analysis) and Visual 3D (C-Motion Inc., Germantown, MD) were used to process kinematic and kinetic data. Marker trajectories were filtered using a fourth-order low-pass Butterworth filter with a cut-off frequency of 10 Hz. Subsequently, 3D joint angles of the ankle and knee were calculated as the relative rotation between the thigh and shank segments and the shank and hindfoot segments, respectively, using an X-Y-Z Cardan rotation sequence. All joint angles were expressed relative to the standing posture and temporally normalised to the stance phase. The stance phase was defined as the period between touch down and toe-off, which were identified using a 10 N threshold in the vertical ground reaction force. Finally, using custom-written MATLAB scripts, absolute differences in kinematic movement trajectories were calculated and averaged for the ankle / knee joint over the time-normalised stance phase (0-100%) for each participant and comparison (Figs 2 and 3). For this study, runners were grouped into those who displayed mean absolute differences in movement trajectories below or equal to 3°, and runners with mean absolute differences in movement trajectories above 3°, representing a conservative threshold for clinical relevance as suggested in [10]. EMG data were processed using a custom-written MATLAB script to analyse the same step as the kinematic data, thus enabling comparisons between the two data sets. A window of 300 ms (i.e., 150 ms before to 150 ms after HS) was analysed for all participants. For each step and muscle, the raw EMG signal was subjected to a wavelet transform with 13 non-linearly scaled wavelets (centre frequencies starting at 6 Hz) [18,19]. Then, each EMG signal was normalised to the sum of the wavelets above 100 Hz (wavelets 6 to 13) of the mean of the barefoot condition. This step reduced the effect of potential movement artifacts, which are associated with lower frequency components (< 100 Hz). Subsequently, the square root of the normalised time-frequency space was summed across all frequencies to obtain the respective EMG intensity signal, of which the area under the curve (AUC) was calculated. For each participant, the mean absolute differences in the AUC were then calculated across all six comparisons (Barefoot vs. Rider / Be / Universe, Rider vs. Be, Rider vs. Universe, Be vs. Universe) and each muscle (TA, PL, GM, SO, VL, BF). The outcome was then expressed as a percentage with respect to the first of the two running conditions in each comparison (i.e., Barefoot, Rider, or Be).
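To illustrate the two derived quantities just described, the following minimal sketch uses invented arrays in place of real recordings; the wavelet-derived EMG intensity traces are assumed as given inputs, and all shapes, values, and function names are placeholders rather than study data or study code.

```python
# Minimal sketch (invented arrays, not study data) of the two derived
# quantities: (1) the mean absolute difference in stance-normalised joint
# angle trajectories, used to group runners at the 3 degree threshold, and
# (2) the mean absolute difference in EMG intensity AUC as a percentage of
# the first condition. Wavelet-derived intensity traces are assumed given.
import numpy as np

def mean_abs_angle_diff(angles_a, angles_b):
    """Mean absolute difference (deg) over 0-100% stance, averaged across
    joint angle components; inputs have shape (n_components, 101)."""
    return float(np.mean(np.abs(angles_a - angles_b)))

def auc_pct_diff(trials_a, trials_b, dt=1.0 / 2400):
    """Absolute difference in mean EMG-intensity AUC, as % of condition A."""
    auc_a = np.trapz(trials_a.mean(axis=0), dx=dt)
    auc_b = np.trapz(trials_b.mean(axis=0), dx=dt)
    return 100.0 * abs(auc_b - auc_a) / auc_a

rng = np.random.default_rng(7)

# 6 angle components (ankle / knee x 3 planes), 101 stance-phase samples
shod_a = rng.normal(0.0, 5.0, size=(6, 101))
shod_b = shod_a + rng.normal(0.0, 1.0, size=(6, 101))  # a similar condition
diff = mean_abs_angle_diff(shod_a, shod_b)
print(f"kinematic difference: {diff:.2f} deg ->",
      "<= 3 deg group" if diff <= 3.0 else "> 3 deg group")

# 5 trials x 720 samples (300 ms window at 2400 Hz) per condition
ta_barefoot = rng.random((5, 720))
ta_shod = 0.7 * rng.random((5, 720))
print(f"TA AUC difference: {auc_pct_diff(ta_barefoot, ta_shod):.1f}%")
```

Note that the study computed the AUC difference per participant before averaging across the group, whereas this sketch collapses trials directly; the arithmetic of the percentage is the same either way.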
Statistics

Wilcoxon signed-rank tests with a Bonferroni-Holm correction were used to analyse changes in the AUC in runners who showed minimal / substantial (≤ 3° / > 3°) differences in 3D joint angle trajectories, stratified by the six possible footwear comparisons. An obtained p-value smaller than the corrected alpha level indicated significant changes in muscle activation in a given comparison of running conditions.

Results

The average proportions of participants with mean absolute differences in joint kinematics of ≤ 3° and > 3° across barefoot to shod comparisons were 57% and 43%, respectively (Table 2: barefoot vs shod). In shod to shod comparisons, on average 100% of runners had mean absolute differences in joint kinematics of ≤ 3° (Table 2: shod vs shod), while no runner changed their average joint kinematics by more than 3°. In runners with kinematic differences of ≤ 3° across running comparisons, the mean absolute differences in the AUC across all muscles were 19% for barefoot to shod comparisons and 10% for shod to shod comparisons (Fig 4). Specifically, for the barefoot to shod comparisons, the mean absolute differences in the TA, PL, GM, SO, VL, and BF were 35%, 11%, 17%, 10%, 27%, and 16%, respectively. The activity of the TA differed significantly when comparing Barefoot to Be and when comparing Barefoot to Universe (p < 0.001 for both). The activity of the VL was significantly different when comparing Barefoot to Rider (p = 0.001). When comparing between shoe conditions, differences in EMG were substantially smaller. On average, the absolute differences in the TA, PL, GM, SO, VL, and BF were 10%, 9%, 13%, 8%, 8%, and 12%, respectively. Kinematic differences of more than 3° were only observed in Barefoot to Shod comparisons (Table 2). In these comparisons, the mean absolute difference in AUC across all muscles was 12% (Fig 5). The mean differences stratified by muscle were 39%, 14%, 16%, 16%, 25%, and 24% for the TA, PL, GM, SO, VL, and BF, respectively. In all three Barefoot to Shod comparisons (Barefoot vs Rider / Be / Universe) the differences in the activity of the TA and VL were significant (TA: p < 0.001, p = 0.002, p = 0.001; VL: p = 0.001, p = 0.002, p = 0.002). In the Barefoot to Universe comparison only, the changes observed in the GM were also significant (p = 0.002).

Discussion

The paradigm of the preferred movement path has been proposed as a replacement for the traditional paradigms (i.e., impact force, pronation). It aims to provide a novel perspective on running biomechanics that is based on a functional understanding of running [1]. Many aspects of the preferred movement path paradigm, however, remain disputed and unclear [20,21]. As a result, its concept will be outlined first in order to discuss the findings of this work within the scope of the novel paradigm. The paradigm of the preferred movement path suggests that individuals who perform a given task (e.g., running, jumping, etc.) subconsciously adopt a movement pattern (i.e., kinematic movement trajectories) that is preferred under the current set of constraints / boundary conditions (i.e., training status, footwear, etc.). This preferred movement pattern (or movement path) is thought to be the optimal solution (or at least very close to it) for the given task [8]. In other words, the locomotor system fine-tunes its internal parameters (i.e., muscle activation) to perform the task at hand in an optimal way.
It is important to note here that optimal does not exclusively mean most economical (i.e., reduced energy consumption). Instead, the locomotor system aims to optimize for multiple factors. Possible optimization criteria might include an increased feeling of comfort, a reduction in perceived pain, and / or a reduced risk of injury, in addition to a reduction in energy consumption. As a result, the solution to this optimization problem is the preferred movement path. While it is currently unknown how to determine a preferred movement path before a task execution, it has been speculated that observing changes in movement patterns (i.e., joint angle trajectories) may allow researchers to determine whether the preferred movement paths were similar in different interventions [10]. Following this notion, small kinematic deviations may be interpreted as the same movement path across interventions, while larger kinematic deviations may be interpreted as different preferred movement paths. For the present work, a threshold of 3° was applied to the mean absolute difference in kinematic movement trajectories across running comparisons. This threshold was selected as it represents a conservative threshold for clinical relevance and was suggested in previous work [10]. Therefore, runners who changed their movement pattern by less than or exactly 3° were considered to have had the same movement path across interventions, while runners who changed their movement pattern by more than 3° might have selected a novel (more preferred) movement path for the new intervention. For both groups (≤ 3° and > 3°), the paradigm of the preferred movement path holds specific implications with respect to the tuning of internal parameters (i.e., muscle activation) in a situation where constraints (i.e., footwear) were altered but the task (i.e., running at 3.3 m/s) remained the same. For instance, when constraints are altered marginally (i.e., shod to shod comparisons), one would expect small adaptations in internal parameters in runners who maintained the same movement path (≤ 3°). When constraints are altered drastically (i.e., barefoot to shod comparisons), however, one would expect large adaptations in internal parameters, as that is the only way a runner could maintain the same movement path in the novel situation. For runners who adopt a new preferred movement path (> 3°), one would expect largely altered internal parameters when constraints remained similar, but also when constraints are altered drastically. In the present work, participants were asked to run over-ground at 3.3 m/s in four different running conditions (Barefoot / Rider / Be / Universe). With regard to the preferred movement path paradigm, this describes the same task with altered constraints. Comparisons between footwear conditions (i.e., Rider vs. Be, etc.) are considered small changes in constraints, while comparisons between barefoot and shod describe large changes in constraints. As such, the outcomes of this study support the speculations outlined above: small adaptations in EMG in runners who maintained a movement path when switching between running shoes (Fig 4; Shod to Shod), large adaptations in EMG in runners who maintained a movement path but switched between barefoot and shod (Fig 4; Barefoot to Shod), and large adaptations in EMG in runners who adopted a novel preferred movement path (Fig 5).
While these outcomes strengthen the background of the preferred movement path paradigm, they do not provide any explanation as to why some runners maintained a preferred movement path and others did not, despite an identical task. From a functional perspective one could speculate that, based on running experience (or expertise), some runners would be more or less willing to adopt a novel movement path. Specifically, more experienced runners would be less likely to change a preferred movement pattern (even under vastly different constraints) because their current movement pattern is as close to the optimal solution as possible. Conversely, in less experienced runners there might be a slightly more optimal solution for the given task, and by adopting a novel preferred movement path they perform the movement in a more optimal fashion. The experience level of the runners who participated in this study was, unfortunately, not quantified. Stratifying the response of runners based on experience level should therefore be considered in future investigations. From a methodological perspective, the selection of a 3° threshold may present some limitations, as a threshold indicating a transition to a novel preferred movement path may be runner-specific rather than global. Previous work, for instance, has shown that joint movements which result in the least amount of resistance are highly variable amongst individuals and specific to a given specimen [22,23]. Additionally, this study combined deviations in ankle and knee joint kinematics in all three planes within a single measurement, while it has been shown that certain joint components comply better with the concept of the preferred movement path than others [10]. Further, it can be argued that the mean absolute difference in joint trajectories is not an adequate measure of change. While the current study followed a previous example [10], it would be interesting to explore other methodologies to stratify kinematic responses. A comparison across multiple methodologies, for example, may provide strengthening evidence for the paradigm. Future studies are therefore advised to re-evaluate how to determine deviations from a preferred movement path. Interpretations from this study should be made with consideration of the following limitations. First, participants were not given an adaptation period after switching between the running conditions. An extended adaptation period may have resulted in smaller deviations in joint angle trajectories, ultimately reducing the number of participants who selected a novel preferred movement path in the new running condition. Considering, however, that all participants showed minimal (≤ 3°) differences in joint kinematics across all Shod to Shod comparisons, this would strengthen the perspective of the paradigm. With respect to Barefoot to Shod comparisons, a reduction in the number of participants who changed their preferred movement path would indicate that running barefoot is not as different from running shod as initially speculated. To scrutinize this speculation, future studies might explore the effect of prolonged adaptation periods on changes in joint kinematics and investigate more drastically different footwear constructions (i.e., worker's boot, barefoot, running shoe, etc.). Second, the outcomes of this study have been discussed in the light of the preferred movement path paradigm. While the paradigm does explain the outcomes of this study, the paradigm itself is not widely accepted.
The present work and the majority of research regarding the paradigm were performed by the research team of Dr. Benno Nigg. This fact highlights a potential research bias with respect to the paradigm, and it is possible that the findings of the present work could also be interpreted differently. Finally, it was speculated that the locomotor system aims to optimize for multiple factors (i.e., energy consumption, comfort, etc.). The present work, however, did not assess any of these possible optimization factors and does not provide any evidence for this speculation. Future research is, therefore, advised to incorporate an assessment of possible optimization factors when investigating the paradigm of the preferred movement path.

Conclusion

A movement path can be maintained with small adaptations in muscle activation when running conditions are similar, while large adaptations in muscle activation are needed when running conditions are drastically different. When a movement path is not maintained, adaptations in muscle activation are drastic.
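As a closing illustration of the statistical procedure from the Methods, the sketch below runs Wilcoxon signed-rank tests across the six footwear comparisons and applies a Holm-Bonferroni correction. The paired AUC values are invented placeholders, and SciPy / statsmodels are assumed to be available; this is not the study's analysis script.

```python
# Minimal sketch (invented paired AUC values) of the statistical analysis:
# Wilcoxon signed-rank tests per footwear comparison, with a Holm-Bonferroni
# correction across the family of six comparisons.
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)
comparisons = ["Barefoot vs Rider", "Barefoot vs Be", "Barefoot vs Universe",
               "Rider vs Be", "Rider vs Universe", "Be vs Universe"]

p_values = []
for name in comparisons:
    auc_a = rng.random(19)                      # 19 runners in a group (toy)
    auc_b = auc_a + rng.normal(0.1, 0.1, 19)    # paired, slightly shifted
    p_values.append(wilcoxon(auc_a, auc_b).pvalue)

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for name, p, sig in zip(comparisons, p_adjusted, reject):
    print(f"{name}: adjusted p = {p:.4f}{' *' if sig else ''}")
```

Adjusting the p-values, as done here, is equivalent to comparing raw p-values against the Holm-corrected alpha levels described in the paper.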
2020-10-09T13:05:28.096Z
2020-10-07T00:00:00.000
{ "year": 2020, "sha1": "453082c9f7295eeb0edf81fcd889895ebcc8feda", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0239852&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d6bc0f7fbec479dc65e3031623f6b17c1853aab9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
52156996
pes2o/s2orc
v3-fos-license
Compulsory community service for doctors in South Africa: A 15-year review
Since the implementation in 1998 of the community service (CS) programme for 12 months of compulsory service for health professionals up to and including 2014, a total of 17 413 doctors of ~44 000 CS health professionals completed their year of service in public health facilities in South Africa (SA).[1] Doctors were the first to be contracted under this programme, followed by dentists, pharmacists and eventually all other health professionals, including nurses, who form the largest cohort.[2] The scheme currently employs an annual cohort of ~8 000 young professionals on 12-month contracts, who are allocated to public health facilities in different provinces according to the human resources need. While a number of studies have described the initial experience and effects of CS officers of various professional groups qualitatively,[3-7] none has yet looked at the programme longitudinally. This article focuses on medical practitioners, about whom most of the longitudinal data have been accumulated based on an annual survey of CS officers that was instituted in 2000.[3] CS in SA primarily aimed to improve the supply of professional health personnel in underserved areas, thereby improving health service provision to all South Africans.[3] The objectives, gathered from a speech by Dr Ayanda Ntsaluba, director general of health in 1998, were as follows:
• to ensure improved provision of health services to all citizens of the country
• to provide our young professionals with an opportunity to further develop their skills, acquire knowledge and develop behaviour patterns and critical thinking that would help them in their professional development and future careers.
It is significant that the two objectives appear to be given equal importance, although the former might be regarded as the main purpose of the CS year. However, as the programme was said to be 'service, not training', CS officers were allocated according to healthcare needs, as determined by the National Department of Health,[4] rather than according to available supervision.[3] Frehywot et al.,[9] in their study of compulsory service programmes worldwide, found that SA is one of 70 countries globally that implement compulsory CS.[9] They described three different types of compulsory service in different countries, as follows:
• a condition of service/state employment programme, e.g. for foreign-qualified professionals
• compulsory service with incentives, such as education, employment or living conditions
• compulsory service without incentives.
SA falls into the second category, with CS as a requirement for attaining full registration to practise publicly or privately. A second strategy in the same category, which operates in countries such as Pakistan and Peru, uses a period of service in an underserved area as a prerequisite for career advancement such as specialisation. No rigorous study has systematically compared countries that address rural and remote workforce disparities with compulsory service against those that do not have such programmes. They concluded that compulsory service programmes are a mechanism for staffing and reinforcing the health workforce, especially in areas where access to primary and essential healthcare services and systems is weak, but that this should not be the only mechanism.

In 2010 the World Health Organization (WHO) developed a comprehensive set of guidelines, based on the best available evidence, for the recruitment and retention of healthcare professionals in rural and remote areas (Table 1), which include 'regulatory' interventions such as compulsory service.[10] This places the strategy of coercion into a broader set of options for increasing the supply of health professionals in areas that are difficult to staff.[11] In light of these alternatives, it is important to analyse the implementation and subsequent effect of the CS programme in SA against the initial objectives. Since its launch in 1998, various evaluations of the CS programme in SA have been completed, most of them qualitative in nature, focusing on its effectiveness,[12-15] to the extent of calling for its revision.[16] An analysis of the first year of CS implementation, using mixed methods, revealed a situation of some confusion in the absence of more specific guidelines. Consequently, provinces were left to make their own rules, which resulted in very variable implementation.[3] This initial study developed the first version of a survey tool, which was subsequently used annually until 2015, with some modifications, and forms the basis of the current review. Although the survey instrument has been modified over the intervening years, a sufficient number of data elements has remained unchanged to allow longitudinal trends to be described and comparisons made between the outcomes and demographic information, placement sites, provinces and universities of graduation. Using the same data, a detailed study of the 2009 cohort of CS doctors developed a supervision satisfaction score (SSS) and found a high level of participant satisfaction with CS.[17]
The authors noted that participants reporting professional development during the CS year were twice as likely to report intentions to remain in rural, underserved communities. The initial 2001 study called for the establishment of a 'comprehensive policy of human resources for medically underserved areas in South Africa, with obligatory CS for doctors constituting only a part of it'.[3] It is in this context that the National Department of Health eventually developed a Human Resources for Health (HRH) strategy in 2011,[18] which included three strategic objectives directly relevant to CS: human resource management, quality of care and access in rural areas (Table 2). By the time that the HRH strategy for 2012/2013 - 2016/2017 was published in 2011, the CS programme had already been running for 13 years and was institutionalised; therefore, the complementary strategies of the broader HRH policy framework that were needed to optimise CS were not implemented for most of the period studied. A summit on CS, convened by the Foundation for Professional Development (FPD) in 2015, aimed to review the progress of the programme and make recommendations for its further development.[19] Some of the results reported in this article were reported then, and the recommendations arising from the summit workshop have been incorporated into this article, representing the collective proposals of stakeholders rather than those of the authors alone.

Table 1. Categories of interventions used to improve attraction, recruitment and retention of health workers in remote and rural areas [10]

Study objectives
This project aims to describe findings and analyse trends from surveys of CS doctors in SA between 2000 and 2014, specifically with regard to their distribution, support, feedback and career plans.

Study design
A consecutive cross-sectional descriptive study design was used, based on annual national surveys of CS doctors.

Study variables
A structured self-assessment questionnaire developed from initial qualitative research conducted in 2000 [3] was used, with some changes, adjustments and additions throughout the period. Most items requiring a subjective response were presented in the form of a 5-point Likert scale. Demographic variables included gender, race, marital status and receipt of a provincial bursary. Medical training characteristics included the university attended and the level of the hospital of internship. A number of items explored characteristics of CS placement, including whether the facility was the participant's first, second or third choice in the allocation process. Rural placement was determined by participants, who indicated whether they received a government rural allowance. Placement satisfaction included practical items such as quality of accommodation, overtime duties, personal safety, fairness of remuneration and timeous payment of salaries. Their experiences of CS were indicated, e.g. by questions related to their attitude to CS, professional development, supervision and availability of seniors. Future career plans were assessed in terms of their intention to work in the public service or private sector, to specialise, or to work overseas or in a rural or underserved community.
Data collection
National, cross-sectional data were collected from CS officers using the abovementioned survey tool, administered in the final quarter of each respective year. The cross-sectional design and methodology for the study from 2000 to 2008 made use of hard copies of the survey forms, distributed and collected via provincial and hospital CS co-ordinators throughout SA. Attention was paid to confidentiality by using sealed envelopes when the hard-copy questionnaires were collected by the hospital CS co-ordinators. For the 2009 and subsequent studies, in an attempt to improve the response rate and validity, the methodology followed that of Hatcher et al.,[17] in which participants were invited to complete the same survey instrument online, supplemented by follow-up phone calls to non-respondents, collated by an independent non-governmental organisation, Africa Health Placements. An information sheet explaining the study stipulated that completion of the survey implied consent, and that the survey was anonymous.

Additional information regarding turn-up rates was obtained directly from the National Department of Health. Turn-up rates were defined as the proportion of doctors who took up CS posts after completing their internship.

Data analysis
Data were cleaned and entered into Microsoft Access or Excel, then analysed and managed in Stata 13 (StataCorp., USA). Descriptive statistics (frequencies and proportions) were calculated for the entire cohort and the participating respondents. Not all data were available for every year, because the survey tool varied slightly as the programme progressed. The results presented below represent the data that could be directly correlated to demonstrate trends.

Research ethics
The protocol for the earlier surveys, up to 2011, was approved by the University of KwaZulu-Natal, Durban, and subsequent surveys were approved by the University of Cape Town Human Research Ethics Committee (ref. no. HREC 450/2012). The survey was anonymous, and the covering letter containing information about the study made it clear that completion of the questionnaire implied consent.

Turn-up and response rates
The total study population each year varied between 1 057 and 1 308. Turn-up rates were calculated independently of the surveys, from data provided by the National Department of Health. Fig. 1 shows a slight decline in the year-on-year turn-up rate, from 91% in 2000 to ~87% in 2013, with an average of 89%, although there were wide variations. The data were interrupted by the introduction of the 2-year internship in 2008. Response rates to the survey varied from a high of 77% in 2001 to a low of 20% in 2002, with an average of 51%, as data collection methods and availability of funding changed. For 2001, 2009, 2011 and 2013, which were used for comparative analysis below, the number of respondents was 902 (77%), 628 (48%), 668 (55%) and 648 (68%), respectively. Response rates for doctors from different universities were very similar across all years, whereas response rates within different provinces varied between 15% and 73% from 2000 to 2009, and between 42% and 51% from 2010 to 2014. (A minimal sketch of how these two rates are computed follows below.)
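For concreteness, the two rates defined above, turn-up and response, reduce to simple proportions; in the sketch below all counts are hypothetical placeholders, not the study's figures.

```python
# Minimal sketch: turn-up and response rates as simple proportions.
# All counts below are hypothetical placeholders, not the study's data.

def rate_percent(numerator: int, denominator: int) -> float:
    """Proportion expressed as a percentage."""
    return 100.0 * numerator / denominator

eligible_interns = 1200     # doctors completing internship in a given year
cs_posts_taken_up = 1068    # of those, doctors who took up CS posts
cohort_in_post = 1171       # CS doctors in post (survey denominator)
survey_respondents = 902    # completed questionnaires

print(f"turn-up rate:  {rate_percent(cs_posts_taken_up, eligible_interns):.0f}%")
print(f"response rate: {rate_percent(survey_respondents, cohort_in_post):.0f}%")
```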
Demographics and allocations
Of the total sample, 58% were female and 36% were married. While the proportion of respondents who stated coloured and Indian as race remained fairly constant, the proportion stating black as race increased from 17% in 2001 to 45% in 2012, and those stating white as race decreased from 50% to 33% over the same period. These reciprocal changes were particularly rapid in the period 2009-2012. There was a steadily increasing trend in the proportion of respondents who received a provincial study bursary, from 22% in 2009 to 42% in 2014 (no data were available before 2009). Similarly, the proportion of those allocated to rural hospitals, as measured by the payment of the rural allowance, rose from 24% in 1999, stabilising at ~50% between 2012 and 2014, with a high of 60% in 2007. The majority of CS respondents were placed at district hospitals, showing an increasing trend from 41% in 2001 to 49% in 2013, while those placed at tertiary and specialised hospitals decreased over the same period, from 18% to 15%, and regional hospital placements also decreased, from 33% to 22%.

Fig. 2 plots the number of CS doctors allocated and the number of accredited facilities in each province against the percentage of the national population and the percentage of each rural provincial population, using data from the StatsSA censuses of 2001 and 2011,[20] with the latter in decreasing order. The aim of this comparison was to show allocations in terms of relative need in rural areas. Limpopo Province received a disproportionately low number of CS doctors for its rural needs, while the Western Cape and Gauteng provinces received disproportionately high numbers of CS doctors.

Applicants to the programme were required to make five choices from a list of public healthcare facilities approved for CS by the National Department of Health. If an allocation was not made within these initial requests, a second round of five choices was made available. A third round followed for the few who were still not allocated in the second round. The results show that an average of 80% of applicants were placed in the first round; this remained fairly constant from 2001 onwards, with a range of 77-87%. An average of 70% of respondents were satisfied with the allocation process. On analysing the proportion of posts filled by the end of the second round, the rural provinces (Eastern Cape, Limpopo, Northern Cape and North West) had disproportionately high numbers of vacant CS posts, indicating that these were not popular choices. Turn-up rates in the third round of allocations were the lowest of all (H Groenewald, personal communication, 2013).

Experiences of community service
A significant majority of respondents consistently stated that they had made a difference during their year of CS (76% in 2001, rising to 91% in 2014), and that they had experienced professional development (range 72-91%). An average of 96% performed overtime duties. Of those provided with accommodation, an average of 61% were satisfied with it, but 64% felt some risk to their personal safety (Table 3).
A majority of CS doctors felt well orientated in their jobs (average 65%) and thought that they received good clinical supervision (average 52%). The latter varied according to placement site, with significantly fewer respondents in rural sites reporting good supervision than those in urban sites. However, just <50% of respondents said that their managers handled their concerns well and that they were satisfied with the support received from them. This varied significantly by province, from 39% in North West to 69% in Mpumalanga (Fig. 3).

The attitude towards CS was increasingly perceived as positive overall from 1999 to 2013 (Fig. 4). In response to the statement 'My attitude towards community service has become more negative/positive as a result of my experience this year', the majority of respondents had shifted from a neutral attitude to a positive one over the course of the 15 years.

Fig. 2. Provincial rural population (%), in decreasing order, and national population (%) compared with community service doctors (n) and facilities accredited for community service (n) (by province).[20]

Future career plans
Future work intentions varied widely on an annual basis. When comparing the average of respondents for each career intention over the last 3 years, the intention to move overseas had decreased significantly since a record figure of 43% was obtained in the 2001 survey (Table 4). However, the intention to move into the private sector and to specialise had increased somewhat, with an average of 50% of CS doctors planning to specialise immediately after completing their CS year. The intention to remain at the same health facility and to work in rural, underserved communities had remained relatively static at ~30% and ~15%, respectively.

Discussion
CS in SA has become institutionalised and has stabilised as a programme. In contrast to the first few years of uncertainty and resentment, the expectations of all current SA health science graduates are that they will undergo their CS year in a public hospital, which is likely to be in a rural area, and conversely, hospital managers have come to rely on them as human resources. Studies in other low- and middle-income countries, including Puerto Rico,[21] Indonesia,[22] Turkey,[23] and Thailand,[24] have demonstrated an improvement of staffing levels by doctors after the introduction of a period of compulsory service. As Frehywot et al.[9] assert: 'Compulsory service programmes are an instrument of social justice, an exercise in health equity, in that they enable governments to direct or augment services to geographical areas that are not well served and in communities that are not favoured by market forces and health worker preferences.'
The increase in provincial study bursaries indicates an increasing sense of ownership of the future workforce by provincial health departments, and the allocation rate of 80% in the first round is commendable. However, if the turn-up rate is an indicator of the acceptability of CS among those who are eligible, the 11% who do not take up CS annually is cause for concern, as these 120-150 young doctors represent the output of one entire medical school. It is clear that there are significant personal choices to be made at that stage in life, such as starting a family or taking a break, and a degree of flexibility is needed. A small percentage might quit the profession either temporarily or permanently, and another group might head abroad directly after internship. Some applicants who are allocated in the second or third rounds might adopt a 'wait and see' strategy and not turn up, preferring to wait unemployed for more desirable posts to become available in urban areas. The primary objective of CS, to improve the distribution of health professionals throughout the country, has been partially achieved, as rural placements have increased to ~50%, but not to the extent of the relative need in rural provinces. Clearly, the CS workforce is a reliable recruitment strategy, bringing 8 000 fresh young graduates into the public service each year to fill the posts vacated by their predecessors, but the temporary contract nature of these posts creates a situation of constant staff turnover and does little to create a stable long-term workforce. Other human resource mechanisms complementary to CS are needed to achieve this. In other words, CS is an effective recruitment strategy but, in the absence of other interventions, does nothing for the development of an effective long-term workforce. Few international studies have shown increased retention of doctors after compulsory CS, while one SA study found 16% of CS doctors in one province remaining at the same district hospital beyond the obligatory time.[14] Pathman et al.,[25] in a 9-year follow-up study of doctors in the National Health Corps in the USA, found that those who were contracted into service in rural areas as compensation for the payment of their education costs did not remain longer than their required service obligations. As previously pointed out, CS might to some extent defeat its own ends if newly qualified professionals assume that they have 'done their duty' and have compensated society for the costs of their studies after only 1 year in public service.[2] Nonetheless, the Umthombo Youth Development Foundation has raised the retention of graduates from rural areas that it supports to >70% after they have completed their year-for-year contract time, by means of effective mentoring and support.[26] It is, therefore, possible to achieve a much higher rate of retention in SA through complementary strategies.
Considering the data on career plans of CS doctors, the consistent 30% who preferred to stay on at their CS placement site, and the 15% who were prepared to work in rural or underserved areas after CS, consistent with other studies, represented an important human resource. If the latter were to be permanently employed and incentivised to create a longer-term experienced workforce, the problem of retention of professional staff in rural and remote areas could be solved in a few years by the accumulation of successive cohorts of willing professionals. It is better to have 1 doctor for 10 years than 10 doctors for 1 year each, as the continuity of relationships in medicine is not only more efficient but also leads to greater job satisfaction. In terms of achieving the first objective of CS, this hypothetical approach could be contrasted with forcing all graduates to work as so-called 'slaves of the state'.[27] However, despite the significance of the reaction to the coercive nature of CS, it has stabilised over time, as the turn-up rate of almost 90% and the rural career plans of 15% of each cohort mentioned above indicate. The year-on-year variability of career plans is difficult to explain logically, apart from fluctuations in collective aspirations as opportunities changed. Since 2001, the decrease in intentions to practise abroad has been substantial, and is probably the result of the tightening of registration requirements in other countries rather than reduced local 'push' factors.[28] There may also have been a response bias, as those who were considering leaving the country may have been reluctant to reveal their plans in the survey, despite assurances of anonymity. The 30% who preferred to stay on at their CS hospital were probably disappointed because of the fixed-term contracts, as noted above, and the increasing challenge that provincial health departments are now having in funding permanent posts under the current budgetary restrictions.

The second objective of CS is also important, i.e. the professional development of young professionals. Often CS is the first in-depth exposure that junior doctors have to rural or underserved communities, and it is a significant wake-up call to the real health needs of large numbers of South Africans. Having the skills and confidence to make a difference after the 2-year internship allows young doctors to stand on their own feet professionally and fulfil a real need, which carries the sense of professional satisfaction seen in the results. The direct exposure to the consequences of resource constraints in the public health service, including a relative lack of supervision and support, while not ideal, nevertheless serves to develop resilience in our young professionals for the challenges of future practice. By comparison, those trained in well-resourced settings do not cope as well.[29]
Attitudes towards CS have become progressively more positive over 15 years, as rated by successive cohorts of CS doctors, which is an interesting finding, as it indicates that the experience of CS has shifted significantly, and the uncertainty and resentment that surrounded CS in its early years have possibly given way to accepting it as an unavoidable part of career development. The introduction of the occupation-specific dispensation in 2007,[30] which raised doctors' salaries to relatively high levels, might be linked to the improvement in attitudes, while more indirect links could have been the parallel increase in the proportion of black graduates, provincial bursary holders and rural placements. More qualitative research is warranted to explore this phenomenon.

The level of support from managers, rated at an average of 50%, is inadequate and represents a waste of human resource potential. That <50% of the respondents in KwaZulu-Natal, Limpopo and Mpumalanga felt adequately supervised clinically and supported by management is an indictment on those seniors and managers. This young cohort of professionals could contribute their skills and energy far more effectively if they were proactively incorporated into working teams, supervised and mentored by more experienced practitioners, and supported administratively through decent housing and living conditions.

Study limitations
The response rates were reasonable for repeated surveys of this nature, but the limitations of the study include a substantial response bias. Those who completed the surveys, although demographically similar to the study population as a whole, were more likely to have been positive about their experiences; therefore, the results may reflect a more optimistic view than the reality. They also rely on self-reported feedback, which cannot easily be verified, and so the true picture of CS may be different to what is reported. The changes in the survey tool over 15 years introduced some variations, but having a single principal investigator played a part in ensuring consistency of data collection and results. Finally, a possible social desirability bias was mitigated to some extent by emphasising that the results would be collated by an organisation independent of the National Department of Health: in earlier surveys the University of KwaZulu-Natal, and in later ones Africa Health Placements.
Conclusion
This study is the first to track the experience of compulsory CS over time in any country, in order to describe the trends once CS has become institutionalised. The SA experience of CS for doctors over the first 15 years appears to have been a successively positive one, and it has largely met its original objectives of redistribution and professional development. CS has become an indispensable part of the provincial health services, particularly in rural hospitals, but also in larger urban hospitals. It is a medical workforce that managers can rely on each year without having to actively recruit, at the cost of annual orientation and management of successive cohorts of young professionals. As the results show, however, they could improve this renewable resource by giving more attention to orientation, management support and clinical supervision, and by focusing professional development opportunities on the important minority who are prepared to stay on longer than their obligatory year. As an entrenched feature of the national HRH strategy, CS still needs to be complemented by other interventions to capitalise on the potential it represents. Alternative strategies to retain doctors and other health professionals in rural and underserved areas, as suggested by the WHO, must be considered, rather than relying only on coerced junior health professionals who rotate out after a year. A stable long-term workforce can only be achieved, particularly in rural and underserved areas, through a multifaceted human resource management plan.

Recommendations
Recommendations from the Community Service for Health Professionals summit, held at the FPD, are set out in Table 5.[19]

Fig. 1. Turn-up rates by doctors for community service from internship (by year).

Table 5. Recommendations from the Community Service for Health Professionals summit [19]
• Full-time positions in the public sector should be marketed to CS officers
• The role of communities around rural and underserved facilities should be defined and promoted, including community awareness of the services and involvement in providing support
• Community members should be included in orientation programmes
• More peripheral sites should be accredited for academic rotations to ensure increased access to senior professionals for clinical support and exposure to specialties
• Training and support partners should be engaged to design and implement orientation for CS professionals (cultural, logistical and clinical) based on existing programmes
CS = community service; CSI = corporate social investment; HRH = Human Resources for Health; PPP = public-private partnership.

Table 4. Future career plans of 2001-2014 community service doctors.

Fig. 4. Responses (%) to the statement 'My attitude towards community service has become more negative/positive because of my experience this year' (by year).
2018-09-16T03:15:11.144Z
2018-08-30T00:00:00.000
{ "year": 2018, "sha1": "a0e68e7c59824ef5303f8f2298e24a9a089f9b68", "oa_license": "CCBYNC", "oa_url": "http://www.samj.org.za/index.php/samj/article/download/12418/8619", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "80646ee1a0b8e7749d09c1210ad9314ce8d5d495", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
1781058
pes2o/s2orc
v3-fos-license
Vitrification of in vitro matured oocytes collected from antral follicles at the time of ovarian tissue cryopreservation
Background: In the past few years, cryopreservation of ovarian tissue has become an established procedure proposed in many centers around the world, and transplantation has successfully resulted in full-term pregnancies and deliveries in humans. This prospective study aims to evaluate the feasibility of vitrifying in vitro matured (IVM) oocytes isolated at the time of ovarian tissue cryopreservation, to improve the efficiency of fertility preservation programs.
Methods: Oocyte-cumulus complexes were retrieved from freshly collected ovarian cortex by aspirating antral follicular fluid, and were matured in vitro for 24-48 h prior to vitrification. Oocytes were matured in a commercial IVM medium (Cooper Surgical, USA) supplemented with 75 mIU/ml FSH and 75 mIU/ml LH, and vitrified using a commercial vitrification kit (Irvine Scientific, California) in high-security vitrification straws (CryoBioSystem, France). Oocyte collection and IVM rates were evaluated according to age, cycle period and the amount of tissue collected.
Results: Immature oocyte retrieval from ovarian tissue was carried out in 57 patients between 8 and 35 years of age undergoing ovarian tissue cryopreservation. A total of 266 oocytes were isolated; 28 of them were degenerated, 200 were at germinal vesicle stage (GV), 35 were in metaphase I (MI) and 3 displayed a visible polar body (MII). The number of oocytes collected was positively correlated with the amount of tissue cryopreserved (p < 0.001) and negatively correlated with the age of the patients (p = 0.005). Oocytes were obtained regardless of menstrual cycle period or contraception. A total maturation rate of 31% was achieved, leading to the vitrification of at least one mature oocyte for half of the cohort.
Conclusions: The study showed that a significant number of immature oocytes can be collected from excised ovarian tissue, whatever the menstrual cycle phase and the age of the patient, even for prepubertal girls.

Background
Advances in cancer therapy have improved the long-term survival of patients suffering from malignancies. Thus, the number of young adults wishing to become parents following cancer treatment has significantly increased. However, cancer treatment often involves adverse side effects, including loss of gonadal function and sterility [1,2]. Chemotherapy using high doses of alkylating agents and radiotherapy by ionizing radiation reduce the primordial follicle reserve, which may trigger premature ovarian failure (POF). This represents a major concern for young patients hoping to have children. In this context, all options for maintaining or restoring fertility must be considered [3,4]. Oocytes and embryos can be vitrified after ovarian stimulation or during a natural cycle, but this strategy is not recommended for all patients [5-7]. Furthermore, the number of oocytes or embryos vitrified is often not sufficient for more than one or two transfer attempts. In the past few years, cryopreservation of ovarian tissue has become an established procedure proposed in many centers around the world in order to store a large amount of primordial follicles prior to gonadotoxic treatment [8-10]. Cryopreservation and transplantation of ovarian tissue have successfully resulted in full-term pregnancies and deliveries in humans [11].
One of the major issues regarding ovarian tissue transplantation is the risk of transmission of cancer cells that may have infiltrated the ovarian tissue before the cryopreservation procedure. In these cases, alternatives include in vitro growth of primordial follicles, but unfortunately an ovarian tissue culture system is not yet available for human application [12,13]. Furthermore, ovarian tissue cryopreservation preserves the primordial and primary follicles, but not the immature oocytes within the antral follicles, which do not survive the procedure. These oocytes could, however, be recovered and subjected to in vitro maturation (IVM) [14]. As reported by many authors, healthy infants have been born following IVM [15,16]. Vitrification is now a widely applied and highly successful approach for cryopreservation in reproductive biology, including for the storage of human oocytes [17-19]. Recently, studies reported almost 100% morphological survival rates after vitrification of in vivo aspirated mature oocytes. The authors also reported in vitro embryo development, implantation and pregnancy rates comparable to those achieved with fresh oocytes [20-22]. The present study assessed the efficiency of IVM and vitrification procedures for the immature oocytes in excised ovarian tissue, according to the age of the patients and their menstrual cycles.

Methods
The procedure was approved by the local ethics committee. It was explained to patients and informed consent was obtained.

Patients
From November 2008 to December 2010, 57 patients between 8 and 35 years of age (mean age 26), referred to Erasme Hospital for ovarian tissue cryopreservation as part of a fertility preservation program, underwent a combined oocyte vitrification procedure after counseling. Seven patients underwent oophorectomy, while the others had ovarian cortex biopsies. The indications for ovarian tissue cryopreservation were breast cancers (n = 26), hematological diseases (n = 20), gynecological diseases (n = 7), solid malignancies (n = 2) and autoimmune diseases (n = 2). The inclusion criteria for the cryopreservation of ovarian tissue were previously described [8]. Patients treated with chemotherapy before the ovarian tissue cryopreservation procedure were excluded from this study [23].

Oocyte collection and in vitro maturation
The surgical collection and freezing procedures for the ovarian tissue were described elsewhere [8]. Large biopsies of the ovarian cortex (approximately half an ovary in total) were removed by laparoscopy, except for patients treated with high-dose alkylating agents and autologous stem-cell transplantation, who underwent a unilateral oophorectomy considering their high risk of premature ovarian failure. Oophorectomy was also required in some ovarian diseases, but only a part of the cortex was designated for cryopreservation in these patients. The ovarian cortex was transported to the IVF laboratory in Leibovitz L-15 medium (Life Technologies, Merelbeke, Belgium) at 4°C within the hour. On arrival, oocyte-cumulus complexes (OCCs) were recovered by aspirating all visible antral follicles using 18-gauge syringe needles. The aspirated follicular fluid was poured directly into a Petri dish and examined for OCCs under a stereomicroscope. The ovarian specimens were transferred into a new dish containing the same medium and carefully dissected in order to obtain small slices of ovarian cortex (0.5-1 cm diameter, 1-2 mm thickness).
After dissection of the ovarian tissue, the discarded material was filtered through a cell strainer (Falcon, Cell Strainer 352350, 70 μm nylon) in a Petri dish (Falcon, Petri dishes 3004, 60 × 15 mm) containing 3-5 ml of IVM Washing Medium (Sage, IVM Kit media) at 37°C on a warm stage or plate, to prevent the OCCs from drying in the strainer. After filtering, the collected material was rinsed with pre-warmed IVM Washing Medium and transferred into a Petri dish to search a second time for OCCs under a stereomicroscope. All retrieved OCCs were scored and classified according to the oocyte nuclear stage as germinal vesicle (GV), germinal vesicle breakdown (metaphase I, MI) or metaphase II, when a first polar body was visible in the perivitelline space (MII). OCCs whose oocyte nuclear status could not be visualized because of the compact cumulus cells surrounding them were classified as GV. OCCs were washed at least three times in pre-warmed IVM Washing Medium, transferred into an Organ Tissue Culture Dish (Nunc, 176742, 4-well dishes) containing 0.5 ml IVM Maturation Medium (Sage, IVM Kit media) supplemented with 75 mIU/ml FSH and 75 mIU/ml LH, and incubated at 37°C in a 5% CO2 humidified atmosphere. The IVM Maturation Medium was prepared for equilibration at least two hours before the immature oocyte retrieval. The immature OCCs were cultured in the IVM Maturation Medium for 24 to 48 hours.

Vitrification
Twenty-four hours after IVM, all the OCCs were denuded using a 130-micron finely drawn pipette following one minute of exposure to 80 IU/ml hyaluronidase solution (Sigma-Aldrich, UK). The mature oocytes (MII) were then subjected to vitrification following a standard protocol (Irvine, Vitrification Kit media) using aseptic devices (CryoBiosystem, VHS Kit). The remaining immature oocytes (GV and MI) were kept in IVM Maturation Medium for an additional 24 hours. Forty-eight hours after IVM, the remaining oocytes that had reached MII were vitrified.

Statistical analysis
Statistical analyses were performed using the Chi-squared test, t-test and non-parametric Mann-Whitney test, as appropriate. Linear correlations between two variables were analyzed by calculation of the r-values (Pearson's product-moment correlation coefficient); the significance (two-tailed probability values) of the r coefficients was calculated on the basis of the correlation values. Values of p < 0.05 indicated statistical significance. (A sketch of this correlation analysis, on synthetic data, follows below.)

Results
In 15/57 patients, no oocytes were found (26.3%). The number of oocytes collected was positively correlated with the amount of tissue cryopreserved (p < 0.001) and negatively correlated with the age of the patients (p = 0.005). As shown in Table 1, immature oocytes were retrieved regardless of the menstrual cycle phase. The mean number of oocytes retrieved was, however, higher in prepubertal compared with post-pubertal patients (Table 1). For post-pubertal patients, no difference in the number of oocytes collected was observed between patients under or over 30 years of age (Table 2). The total IVM rate was 31% (20% after 24 h IVM and 11% after 48 h IVM). For patients with natural cycles, the IVM rate was similar whatever the phase of the menstrual cycle. No significant difference was observed in the maturation rate between patients using contraception and those in a natural cycle, or between pre- and post-pubertal patients. For 3 patients, in vitro matured oocytes (4, 1 and 5, respectively) were fertilized by ICSI, and the embryos obtained (1, 1 and 3, respectively) were vitrified.
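The Pearson correlation analysis described under 'Statistical analysis' can be sketched as follows; the per-patient arrays below are synthetic stand-ins generated for illustration, not the study data, and the effect sizes are invented.

```python
# Minimal sketch of the Pearson correlation analysis; data are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=0)
n_patients = 57
age = rng.uniform(8, 35, n_patients)                     # years
tissue = rng.integers(5, 40, n_patients).astype(float)   # cryopreserved fragments
# Invented relationship: more tissue -> more oocytes, older -> fewer.
oocytes = np.clip(0.3 * tissue - 0.2 * age + rng.normal(0, 2, n_patients),
                  0, None)

for label, x in [("tissue", tissue), ("age", age)]:
    r, p = pearsonr(x, oocytes)  # r-value and two-tailed p-value
    print(f"oocytes vs {label}: r = {r:+.2f}, p = {p:.3g}")
```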
In more than half of the patients (54%), at least one mature oocyte was vitrified after 24 or 48 hours. In 3/4 prepubertal patients, MII oocytes were vitrified, suggesting that the procedure is suitable, and more efficient, for prepubertal patients in whom antral follicles are present.

Discussion
The cryopreservation of ovarian tissue allows the preservation of a large number of primary follicles before gonadotoxic treatment. However, growing immature oocytes from antral follicles are lost during the procedure. Vitrification of in vitro matured oocytes, collected by puncture of these antral follicles in the excised ovarian tissue before cryopreservation, has been proposed as an additional technique to preserve fertility. This procedure may increase fertility restoration potential and may also be an important alternative in cases where neoplastic cells may have infiltrated the ovarian tissue, leading to a risk of disease recurrence after transplantation, as in some hematological diseases or in advanced breast cancer [24,25]. This study shows that isolated immature oocytes from antral follicles can be retrieved from the ovarian tissue biopsy, and consequently in vitro matured and vitrified, during any phase of the menstrual cycle and whether or not the patient is using oral contraception. The number of oocytes collected is correlated with the number of cryopreserved fragments and the age of the patients. However, the IVM rate is similar whatever the phase of the menstrual cycle or the age of the patient. These results suggest that the procedure can be proposed to any patient undergoing ovarian tissue cryopreservation, with the exception of those who have already begun chemotherapy. The procedure described above allows the vitrification of matured oocytes in approximately half of the patients, although the total IVM rate was lower than the one previously described using oocytes directly retrieved in vivo. In a recent study, a mean oocyte maturation rate of around 70% was achieved following direct oocyte retrieval after hCG injection during a natural cycle [26]. The recent success of in vitro oocyte maturation strategies has been attributed to the improvement of the culture media composition [27-29]. For immature oocytes retrieved from ovarian tissue, the delay related to transport and oocyte collection is a major issue and may decrease the efficiency of IVM. Revel et al. [30] first described oocyte collection during the cryopreservation of ovarian tissue in 9 patients, amongst whom 3 attempted IVM. In these patients, 5 of 8 MI oocytes were matured in vitro. Other studies including small numbers of patients, or case reports, have confirmed the feasibility of the procedure [31-33]. In our cohort, the IVM rate was highly variable from one patient to another, ranging from 0 to 100%, with an average of 31%. This result is consistent with the only previous study using a large cohort of 19 patients under 20 years of age, showing that 34% of immature oocytes collected from ovarian tissue before cryopreservation are competent to resume meiosis in vitro [34]. Healthy infants have been born following IVM [35-37], and oocyte cryopreservation by vitrification seems a promising technique that appears to be more effective than the conventional slow-freezing method for mature oocytes vitrified shortly after collection [38-40]. In this context, vitrification results in high survival rates of 89-100% and many successful live births worldwide [41-43].
Given these recent successes, and although the efficiency of this combined procedure has yet to be determined by testing the implantation potential of embryos derived from these in vitro matured and vitrified oocytes, it is reasonable to suggest this innovative and non-invasive alternative, under institutional review board supervision. Furthermore, if a male partner is present, the in vitro matured oocytes may be fertilized using the in vitro fertilization (IVF) technique, and the resulting embryos may be cryopreserved. For 3 patients, in vitro matured oocytes were fertilized and the embryos obtained were vitrified. These cases are reassuring regarding oocyte quality, as they suggest that these oocytes are competent to be fertilized.

Conclusion
This study shows that the combination of ovarian tissue cryopreservation and immature oocyte retrieval is feasible whatever the phase of the menstrual cycle, the use of oral contraception or the age of the patient. Approximately half of these patients should benefit from this combined procedure, and improvement of the IVM rate may further increase the efficiency of the procedure in the future.
2016-05-04T20:20:58.661Z
2011-11-23T00:00:00.000
{ "year": 2011, "sha1": "5755da418258da2fdf51c1bb715d2ea5b8bfb56f", "oa_license": "CCBY", "oa_url": "https://rbej.biomedcentral.com/track/pdf/10.1186/1477-7827-9-150", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e7065fcbb77bcdff6927d5c839e3bd94090fce85", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
260682296
pes2o/s2orc
v3-fos-license
Alcoholic cardiomyopathy
The individual amount of alcohol consumed acutely or chronically determines harm or benefit to a person's health. Available data suggest that one to two drinks in men and one drink in women will benefit the cardiovascular system over time, one drink being 17.6 ml of 100% alcohol. Moderate drinking can reduce the incidence and mortality of coronary artery disease, heart failure, diabetes, and ischemic and hemorrhagic stroke. More than this amount can lead to alcoholic cardiomyopathy, which is defined as alcohol toxicity to the heart muscle itself by ethanol and its metabolites. Historical examples of interest are the Munich beer heart and the Tübingen wine heart. Associated with chronic alcohol abuse but having different etiologies are beriberi heart disease (vitamin B1 deficiency) and cardiac cirrhosis as hyperdynamic cardiomyopathies, arsenic poisoning in the Manchester beer epidemic, and cobalt intoxication in Quebec beer drinker's disease. Chronic heavy alcohol abuse will also increase blood pressure and cause a downregulation of the immune system that could lead to increased susceptibility to infections, which in turn could add to the development of heart failure. Myocardial tissue analysis resembles that of idiopathic cardiomyopathy or chronic myocarditis. In the diagnostic work-up of alcoholic cardiomyopathy, the confirmation of alcohol abuse by carbohydrate-deficient transferrin (CDT) and increased liver enzymes, and of the involvement of the heart by markers of heart failure (e.g., NT-proBNP) and of necrosis (e.g., troponins or CK-MB), is mandatory. Treatment of alcoholic cardiomyopathy consists of alcohol abstinence and heart failure medication.

According to the definition of the World Health Organization (WHO), alcoholism is subgrouped into two categories: alcohol abuse and alcohol dependence [1]. This corresponds roughly with the concept of the American Psychiatric Association [2,3]. Alcohol abuse describes the psychological dependence on ethanol for adequate functioning together with occasional heavy consumption, while alcohol dependence is defined as an increased alcohol tolerance together with physical symptoms upon withdrawal. In Western countries it is estimated that up to 10% of the adult population suffers from alcoholism [4]. The highest prevalence is detected in the third to fifth decade of life, and alcoholism is seen in all races, ethnic groups, and socioeconomic strata. Germany, with a total population of 81 million inhabitants, is a permissive society with respect to the drinking of alcohol. Alcohol consumption is part of the local culture. About 40 million individuals drink alcohol. The per capita alcohol consumption of 9.7 l pure ethanol and the early onset of regular or episodic intensive drinking among young people in Germany consequently lead to high alcohol-related morbidity and mortality [5]. More than 1.8 million individuals in Germany are alcohol dependent. For an additional 1.6 million persons the use of alcohol is harmful [6,7]. In a worldwide setting, alcohol use disorders show similarities across developed countries, where alcohol is cheap and readily available [8]. The many complications of alcohol use and abuse are both mental and physical, in particular gastrointestinal [9], neurological [10,11], and cardiological [12,13].
The relationship of alcohol with heart disease or dementia is complicated by the fact that moderate alcohol consumption has been shown to be not only detrimental but, to a certain degree, also protective against cardiovascular disease [14] and protective of cognitive function in predementia. We reviewed the effects of ethanol on the cardiovascular system in 1996 [15], including aspects of inflammation [16], rhythm disturbances [17], and hypertension [18]. In 2001 we updated the data on the ambivalent relationship between alcohol and the heart [19], and in 2008 added new evidence from a larger cohort of patients with different forms of cardiomyopathy and increased alcohol intake from the German competence network on heart failure [20]. This review revisits our past work and deals with our current thinking on the epidemiology, pathophysiology, clinical characteristics, and treatments available for alcoholic cardiomyopathy.

Methods
This review assembles and selects pertinent literature on the ambivalent relationship of ethanol and the cardiovascular system, including guidelines, meta-analyses, Cochrane reviews, original contributions, and data from the Marburg Cardiomyopathy registry. Drinks as measures of alcohol are often given in ounces (oz), whereby 1 oz equals 28.35 g or 29.57 ml. For commonly consumed beverages, the amount of 100% alcohol in one drink lies between 17.6 and 17.76 ml.

A historical perspective
For more than 3000 years, alcoholic beverages have been consumed in multiple societies through the centuries and cultures. The name alcohol is much younger than the many beverages containing it. Pulverized antimony was used as eye shadow by Egyptian women and named al-Kol. In the 16th century, Paracelsus (Theophrastus Bombastus von Hohenheim) used this term for distilled liquor and called it alcohol [15]. The beneficial cardiovascular effects of alcohol have been appreciated, e.g., in medieval times, when people took advantage of the vasodilating properties of alcohol to treat angina pectoris or heart failure. Thus Hildegard von Bingen (1098-1179), one of the most prominent mystics of her time, recommended her heart wine as a universal remedy: one liter of wine was cooked for 4 min with 10 fresh parsley stems, 1 spoon of vinegar, and 300 g honey, and then filtered [11]. This recipe is still in use today. Over the centuries "the good and the bad" of alcohol were evaluated clinically and scientifically. As early as 1855, Wood incriminated alcohol as a cause of heart failure. In 1861, Friedrich reported idiopathic hypertrophy as associated with alcoholism. In 1873, Walshe described myocardial cirrhosis in alcoholics, which includes a spectrum of hepatic derangements that occur in the setting of right-sided heart failure. Conversely, cirrhosis (fibrosis) was found both in heart and liver. High cardiac output in patients with liver cirrhosis may have contributed to this cardiomyopathy in a vicious circle. The term "wine heart" (Tübinger Weinherz) was coined in 1877 by Münzinger [21], a German pathologist at Tübingen university. This entity we would nowadays call "alcoholic cardiomyopathy", with histologic features of dilatation, myofibrillar necrosis and fibrosis (Fig. 1a), and ultrastructural changes such as reduction of myofibrils and mitochondriosis with great variability in size and form (Fig. 1b; [22]). In Munich, the annual consumption of beer reached 245 l per capita per year in the last quarter of the 19th century. In 1884, the pathologist and veterinarian Otto von Bollinger (Fig. 2a) described the "Munich beer heart", with fibrosis, hypertrophy, and fatty degeneration in the postmortem cardiac tissue of alcoholics who consumed an estimated average of 432 liters of beer per year (Fig. 2b; [23]). At that time, every 10th necropsy in men at the Munich pathology institute showed cardiac dilatation and fatty degeneration, with the "Bierherz" named as its underlying cause. For comparison, the mean annual beer consumption is nowadays estimated to be 145 l per person in Bavaria and around 100 l per person in the rest of Germany [24]. In 1887, Maguire reported on 2 patients with severe alcohol consumption who benefitted from abstinence. He suggested that alcohol was poisoning the heart. In 1890, Strümpell listed alcoholism as a cause of cardiac dilatation and hypertrophy, as did Sir William Osler in 1892 in his textbook Principles and Practice of Medicine. In 1893, Graham Steell, well known for the Graham Steell murmur due to pulmonary regurgitation in pulmonary hypertension or in mitral stenosis, reported 25 cases in whom he recognized alcoholism as one of the causes of muscle failure of the heart. He found it "a comparatively common one" [25]. In his 1906 textbook The Study of the Pulse, William MacKenzie described cases of heart failure attributed to alcohol and first used the term "alcoholic heart disease" [26]. In his 1972 review article, Brigden was the first to introduce the term alcoholic cardiomyopathy [27].

Nutritional causes of "alcoholic" cardiomyopathy

Beriberi heart disease
Thiamine deficiency is a common feature in a malnourished and/or alcoholic population. Thus, the concept of beriberi heart disease dominated thinking about alcohol and the heart for decades and caused many to doubt that alcohol was actually cardiotoxic [28]. But vitamin B1 (thiamine) deficiency is accompanied by an elevated cardiac output and diminished peripheral vascular resistance [29,30]. According to its central hemodynamics, it can be classified as a hyperdynamic cardiomyopathy or high-output failure, with a cardiac output >8 l/min or a cardiac index >3.9 l/min/m² [31,32]. In contrast, alcoholic cardiomyopathy is characterized by a low cardiac output, associated with systemic vasoconstriction [4]. However, the high-output state can lead to cardiac dilation, thus representing a characteristic subentity of cardiomyopathy different from low-output dilated cardiomyopathy. Therefore, thiamine deficiency per se is just a historical nutritional anomaly in the history of alcoholic cardiomyopathy.

Manchester arsenic-in-beer epidemic
In 1900, the Manchester arsenic-in-beer epidemic was a serious food poisoning outbreak affecting several thousand people across the North-West and Midlands of England, with many cases proving fatal. The arsenic had come from glucose, in whose production a company in Leeds had used sulphuric acid. Brewers had been using this sugar, thus unknowingly poisoning the beer, and as a result their customers, for many years even prior to the epidemic [33]. Arsenic poisoning caused a multisystem disease in over 6000 cases, with more than 70 deaths [34]. The syndrome included the usual signs and symptoms of arsenic poisoning, with skin, nervous system, and gastrointestinal manifestations. Unusual in arsenic poisoning, but especially prominent in this epidemic, were the cardiovascular findings. In his clinical description, Ernest Reynolds wrote that "cases were associated with so much heart failure and so little pigmentation that they were diagnosed as beri-beri . . . ".
He also found that "undoubtedly the principal cause of death has been cardiac failure. In postmortem examinations, the only prominent signs were the interstitial nephritis and the dilated flabby heart" (p. 169, [35]). This outbreak had been the first known trace metal cardiotoxic syndrome. In 2013, the issue of arsenic in beer and wine was again prominent, when MehmetCoelhan, a researcheratthe Weihenstephan research center at the Technical University of Munich, reported at a meeting of the American Chemical Society that many of the nearly 360 beers tested in Germany had trace amounts of Alcoholic cardiomyopathy. The result of dosage and individual predisposition Abstract The individual amount of alcohol consumed acutely or chronically decides on harm or benefit to a person's health. Available data suggest that one to two drinks in men and one drink in women will benefit the cardiovascular system over time, one drink being 17.6 ml 100 % alcohol. Moderate drinking can reduce the incidence and mortality of coronary artery disease, heart failure, diabetes, ischemic and hemorrhagic stroke. More than this amount can lead to alcoholic cardiomyopathy, which is defined as alcohol toxicity to the heart muscle itself by ethanol and its metabolites. Historical examples of interest are the Munich beer heart and the Tübingen wine heart. Associated with chronic alcohol abuse but having different etiologies are beriberi heart disease (vitamin B1 deficiency) and cardiac cirrhosis as hyperdynamic cardiomyopathies, arsenic poising in the Manchester beer epidemic, and cobalt intoxication in Quebec beer drinker's disease. Chronic heavy alcohol abuse will also increase blood pressure and cause a downregulation of the immune system that could lead to increased susceptibility to infections, which in turn could add to the development of heart failure. Myocardial tissue analysis resembles idiopathic cardiomyopathy or chronic myocarditis. In the diagnostic work-up of alcoholic cardiomyopathy, the confirmation of alcohol abuse by carbohydrate deficient transferrin (CDT) and increased liver enzymes, and the involvement of the heart by markers of heart failure (e.g., NT-proBNP) and of necrosis (e.g., troponins or CKMb) is mandatory. Treatment of alcoholic cardiomyopathy consists of alcohol abstinence and heart failure medication. Schlüsselwörter Vorhofflimmern · Beriberi · Zirrhosebedingte Kardiomyopathie · Hochdruck · Myokarditis arsenic. The source was identified to be the filter of choice for wine and beer, i.e., diatomaceous earth [36]. The German word for it is Kieselguhr, a beige powder made up of the skeletons of diatoms. The trace amounts of arsenic have not been comparable to the arsenic-inbeer endemic in Manchester but may still reach up to 10-times the amount admitted for arsenic in drinking water in the European Union and the US. Quebec's beer drinker disease In the mid-1960s, another unexpected heart failure epidemic among chronic, heavy beer drinkers occurred in two cities in the USA, in Quebec, Canada, and in Belgium. It was characterized by congestive heart failure, pericardial effusion, and an elevated hemoglobin concentration. The explanation proved to be the addition of small amounts of cobalt chloride. Cobalt was used as a foam stabilizer by certain breweries in Canada and in the USA. In 1966 McDermott et al. [37] described the syndrome as myocardosis with heart failure, Kestelott et al. 
[38] added pericardial involvement and named it alcoholic pericardiomyopathy, and Morin and Daniel [39] in Quebec tracked down the etiology to cobalt intoxication to what become known as Quebec beer-drinkers cardiomyopathy. Human pathologywas first described byBonefant et al. [40]. Animal models investigated ultrastructure [41] and treatment e. g. by selenium [42]. Removal of the cobalt additive ended the epidemic in all locations. Cobalt poisoning and alcohol together acted synergistically in these patients. As the syndrome could be attributed to the toxicity of this trace element, the additive was prohibited thereafter. Not alcohol but cobalt itself recently caused severe heart failure in a 55-yearold man, who was referred to the university hospital in Marburg to rule out coronary artery disease as the cause of his heart failure. He had become almost deaf and blind, with fever of unknown cause, hypothyroidism, and enlarged lymph nodes. Both his hips had been replaced, the left side by a CoCrMo Protasul metal prosthesis. Remembering a similar case in an episode of the TV series Dr. House, the team of J. Schäfer suspected cobalt intoxication as the cause of heart failure, which clinically mimicked Quebec's beer drinker disease [43]. One should note, however, that cobalt is needed in minute amounts of 0.0003 mg/day in vitamin B12 (cobalamine) to avoid megaloblastic anemia. Cardiac cirrhosis or cirrhotic cardiomyopathy The heart and liver interact in several different ways. Acute or chronic right heart failure leads to elevation of liver enzymes most likely due to liver congestion, whereas cirrhosis due to cardiac disease is infrequent. Chronic liver disease such as cirrhosis may in turn affect the heart and the whole cardiovascular system, leading to a syndrome named cirrhotic cardiomyopathy (CCM). Thus, CCM has been introduced as an new entity separate of the cirrhosis etiology. Increased cardiac output due to hyperdynamic circulation, left ventricular dysfunction (systolic and diastolic), and certain electrophysiological abnormal findings are pathophysiological features of the disease. The underlying mechanisms might include the impaired β-receptor and calcium signaling, altered cardiomyocyte membrane physiology, elevated sympathetic nervous tone and increased activity of vasodilatory pathways [44]. In pathophysiological terms, heart failure in liver cirrhosis belongs to the hyperdynamic cardiomyopathies. Hypertension As early as in 1915, Lian [45] reported in middle-aged French servicemen during the first world war that heavy drinking could lead tohypertension. Ittookalmost 60 years before further attention was paid to the complex interaction between the heart and the peripheral vasculature in various cross-sectional and prospective epidemiologic studies, which have empirically confirmed this early report. One is aware today that alcohol may cause an acute but transient vasodilation, which may lead to an initial fall in blood pressure probably mediated by the atrial natriuretic peptide (ANP) [46]. But also short-and long-term pressor effects mediated by the renin-aldosterone system and plasma vasopressin have been described [47,48]. The long-term hypertensive effect of alcohol has been confirmed in many studies [49][50][51][52]. Remarkably, alcohol also interacts with brain stem receptors and exerts thereby central hypertensive effects [18]. The apparent threshold amount of drinking associated with higher blood pressure is approximately 3 drinks/day. 
Most studies show no increase in blood pressure with lighter drinking; several show an unexplained J-shaped curve in women with lowest blood pressures in lighter drinkers. There seems to be independence from adiposity, salt intake, education, smoking, beverage type (wine, liquor, or beer), and several other potential confounders. Clinical observation confirmed that several days to weeks of drinking show higher and weeks of abstinence lower pressures. Alcohol intake may also interfere with the drug and dietary treatment of hypertension. This altogether supports a causal relationship between alcohol consumption and a hypertensive state. Alcoholic cardiomyopathy: Cytotoxicity of alcohol on heart muscle The 1989 landmark report of Urbano-Marquez et al. [53] showed a clear relation of lifetime alcohol consumption to structural and functional myocardial and skeletal muscle abnormalities in alcoholics. The amount of consumed alcohol was large-the equivalent of >80 g alcohol/day for 20 years. Further evidence came from data on acute alcohol effects [54] and from clinical observation [55][56][57]. In 1996, cardiomyopathies were defined as diseases "affecting the myocardium with associated cardiac dysfunction" [58] and primary and secondary forms were distinguished in this context. After consumption of large quantities of alcohol over years the clinical picture of heavy alcohol drinkers could be indistinguishable from other forms of dilated or familial cardiomy- opathy. Alcohol is still suspected to be the major cause or contributory factor of secondary nonischemic dilated cardiomyopathy being involved in up to one third of all cases of dilated cardiomyopathy [59][60][61]. In alcoholic cardiomyopathy, dilation and impaired contraction of the left or both ventricles is observed [4]. Left ventricular enddiastolic diameters are increased compared to age-and weight-matched controls [62], the left ventricular mass index is increased [63], and the left ventricular ejection fraction is well below normal (<45 %). Thus, the diagnosis of alcoholic cardiomyopathy is still based on the coincidence of heavy alcohol consumption and a global myocardial dysfunction, which cannot be explained by any other underlying myocardial disease [64]. However, the prevalence of alcoholic cardiomyopathy may be underestimated, as autopsy findings reveal pathologic changes of the heart in individuals with no clinical symptoms [65], when analyzing in large cross-sectional studies. Further evidence suggests that not only ethanol but also the first metabolite acetaldehyde may directly interfere with cardiac and skeletal muscle homeostasis [53,66]. In vitro studies have further elucidated the direct effect of ethanol on electromechanical coupling, indicating a decrease in myofilament-calcium sensitivity during alcohol consumption, changes in the transmembrane action potential, the amplitude of the cytosolic calcium transients, and the shortening of the action potential duration [67][68][69][70][71]. Isolated cardiomyocytes of alcohol-fed rats did not maintain ATP levels upon energy demand due to an inadequate increase in mitochondrial ATP-synthase activity, which led altogether to further myocyte loss [72,73]. Ultrastructural disarray of the contractile apparatus [74] is associated with a depressed myofibrillar and sarcoplasmic protein synthesis in cardiac muscle after ethanol exposure [75][76][77]. This reduces contractile cardiac filaments with subsequent negative inotropic effects on heart contractility [78,79]. 
An apoptotic effect of ethanol on cardiac muscle has also been described, which could be counteracted by insulin-like growth factor (IGF)-I [80] and confirmed in later studies [81,82]. In a study in rats that were fed with two different doses of alcohol (5 mM [low alcohol], 100 mM [high alcohol] or in pair-fed nonalcohol controls for 4-5 months), caspase-3 activity as putative marker of apoptosis was decreased in the low alcohol diet, which went along with increased or normal contractility, whereas high doses of ethanol showed increased caspase activity, wall thinning, and a reduction of shortening velocity [83]. Of note, rats are a relatively alcohol resistant species. Alcohol and myocarditis Alcohol abuse coinciding with myocarditis was reported in 1902 by McKenzie [26]. In endomyocardial biopsies of alcoholics up to 30 % of patients were found to exhibit sparse lymphocytic infiltrates with myocyte degeneration and focal necrosis and increased HLA (human leukocyte antigen) or ICAM (intercellular adhesion molecule) expression (. Fig. 3; [16,84]). This may have to do with the susceptibility for infections due to a suppressed immune system in a compromised human host and also in experimental animal [85]. Ethanol can alter lymphocyte functions, inhibit neutrophil chemotaxis, and suppress the production of cytokines, which are involved in regulating acute inflammatory responses to infectious challenges [86][87][88]. Furthermore, autoimmunity and circulating autoantibodies seem to be associated in some patients with chronic alcohol consumption [16,20,84]. Coronary artery disease and atherosclerosis The beneficial heart wine as universal remedy in medieval ages by Hildegard von Bingen [11] found its later correlates in many observations at the beginning of modern medicine when coronary artery disease (CAD) and its risk factors and symptoms received more attention. Heberden [89] described angina so elegantly in 1786 and also added that "considerable relief " through "wine and spirituous liquors" could be expected. This observation led to the erroneous belief that alcohol is an immediate coronary vasodilator. Alcohol is not a direct coronary vasodilator [90]. Symptomatic relief of angina could be through the anesthetic effect of ethanol or through peripheral vasodilation, which could transiently reduce oxygen demand of the heart. In 1819 the Irish physician Dr. Samuel Black, who had a special interest in angina pectoris described what is probably the first commentary pertinent to the "French Paradox" [91]. This refers to the finding in the last century that moderate alcohol consumption could be the reason for the relatively low cardiovascular disease incidence in wine-drinking regions [92]. Renaud and de Lorgeril [93] suggested that the inhibition of platelet reactivity by wine may be one explanation for protection from CAD in France. However, there was further evidence on this and other dietary mechanisms with the observation that France and Finland have similar intakes of cholesterol and saturated fat, but consumption of vegetables and vegetable oil containing monounsaturated and polyunsaturated fatty acids is greater in France than in Finland. This inverse relation on mortality resembles in most population based studies a U-or J-shaped curve: Total abstinence has a slightly increased mortality when compared to low or moderate alcohol consumption. 
It is present in individuals with and without overt CAD, with diabetes, and with hypertension and has been underlined by a large number of studies [94,95]. The cardioprotective effect of alcohol can be attributed to the increase in total high-density lipoproteins (HDL), and especially by an increase in subfractions HDL2 and HDL3, whereas established cardiovascular risk factors like low-density lipoproteins (LDL) or lipoprotein(a) are thought to be moderately decreased [96]. Moderate alcohol intake also exerts beneficial effects on the blood coagulation system. It leads to an increase of endogenous plasminogen activators [97], or a decrease in fibrinogen concentrations [98]. In the Caerphilly prospective heart disease study, platelet aggregation induced by adenosine diphosphate was also inhibited in subjects who drank alcohol [99]. Assessing differences between various forms of alcoholic beverages it should be noted that resveratrol leads in vitro to platelet inhibition in a dosedependent manner [100] and has shown effects on all-cause mortality in a community-based study [101]. Polyphenols of red barrique wines and flavonoids have been shown to inhibit endothelin-1 synthase [102] and PDGF-induced vasoproliferation thus also contributing to cardiovascular protection [103]. Signal transduction and betareceptors In alcoholic cardiomyopathy, similar to idiopathic dilated cardiomyopathy (DCM), beta 1-adrenergic and muscarinic receptors are reduced in the myocardium itself and reduced responsiveness of the adenyl cyclase was shown, whereas catecholamine levels in the circulation may be elevated [104]. As a net effect, negative inotropism may result and contribute to heart failure. Arrhythmias and stroke Acute effects of alcohol can result in rhythm disturbances. Since this happens often on weekends and holidays, Ettinger and Regan coined the term "holiday heart syndrome", when they described 32 habitual drinkers with an additional ingestion of ethanol prior to the arrhythmia [59,105]. Atrial fibrillation was the commonest manifestation, which resolved with abstinence. In the Kaiser Permanente Study, atrial arrhythmias in 1322 persons reporting >6 drinks per day were compared to arrhythmias in 2644 matched light drinkers, showing a doubled relative risk for heavy drinkers [106]. Apart from direct cardiotoxicity, hypertension causing atrial stretch the arrhythmogenic potential of alcohol may come from the lowering the resting membrane potential [107] and the prolongation of conduction [108]. Studies of alcohol and stroke are complicated by the various contributing factors to stroke. Heavier drinkers are apparently at a higher risk of hemorrhagic stroke, whereas moderate drinking might be neutral or even result in a reduced risk of ischemic stroke. Clinical work-up for alcoholic cardiomyopathy Habitual drinkers often hide their alcohol dependence fairly effectively. They may admit drinking at social events butnot the abuse in the first contact. Patients with alcoholic cardiomyopathy, therefore, usually present with symptoms of heart failure, i. e., dyspnea, orthopnea, edema, nocturia, and tachycardia. Echocardiography may reveal a mild or severe depression of cardiac function and ejection fraction or even show hypertrophy in the beginning [109]. Heart failure symptoms maybe due toearlydiastolic ortolatersystolic dysfunction. At later stages, due to atrial fibrillation, thrombi are notuncommon in the dilated atria. Mitral regurgitation is found in up to two thirds of cases [110]. 
Atrial fibrillation and supraventricular tachyarrhythmias are common findings in 15-20 % of patients [111], whereas ventricular tachycardias are rare [112]. On ECG, unspecific abnormalities like complete or incomplete left bundle branch block, atrioventricular conduction disturbances, alterations in the ST segment, and P wave changes can be found comparable to those in idiopathic DCM [113]. On endomyocardial biopsy, a discrimination between idiopathic, chronic inflammatory and alcoholic cardiomyopathy is virtually impossible since common features such as fibrosis, hypertrophy of cardiac myocytes, and alterations of nuclei are present at light microscopy in the alcoholic cardiomyopathy [114] as well as in chronic myocarditis according to the Dallas criteria [115] or the World Heart Federation/ International Society and Federation of Cardiomyopathy (WHF/ISFC) definition of myocarditis [116]. Although the severity of histological alterations on endomyocardial biopsy correlates with the degree of heart failure in one of our studies, biopsy is not in common use for prognostic purposes [117]. Even the recovery after abstinence of alcohol is hard to predict based on morphometric evaluation of endomyocardial biopsies [118]. Laboratory findings Measuring blood alcohol concentration in an acute intoxication gives baseline information but does not permit deductions to chronic misuse. Markers for chronic alcohol consumption rely on liver enzymes such as gamma-glutamyltransferase (GGT) [119], glutamic oxalacetic transaminase (GOT), and glutamic pyruvic transaminase (GPT). Elevations of the transaminases (GOT, GPT), especially a ratio of GOT/GPT higher than 2 might be indicative of alcoholic liver disease instead of liver disease from other etiologies [120,121]. An excellent marker is carbohydrate deficient transferrin (CDT), which best detects chronic alcohol consumption alone [122,123] or in combination with the other markers such as GGT [8,124]. Markers such as ethyl sulphate, phosphatidyl ethanol, and fatty acid ethyl esters are not routinely done. For a comprehensive overview see . Table 2 with combined data from [6,8,24,28]. Biomarkers of heart failure such as NT-proBNP and of myocardial necrosis such as the troponins and CKMB indicate heart failure or myocytolysis. Is there an immediate risk of alcohol intake? In a recent meta-analysis, Mostofsky et al. [125] analyzed if independent from habitual moderate or heavy alcohol consumption an immediate risks exists following alcohol intake. Data from 23 studies with 20,457 participants showed that even with moderate consumption an immediately higher cardiovascular risk was attenuated after 24 h. It then became protective for myocardial infarction and hemorrhagic stroke with a 30 % lower risk and protective against ischemic stroke within one week. In contrast, heavy alcohol drinking continued to be associated with higher cardiovascular risk in the following day (RR =1.3-2.3) and week (RR =2.25-6.2). Prognosis and treatment Prognosis in individuals with low or moderate consumption up to one or two drinks per day in men and one drink in women is not different from people who do not drink at all. In CAD, diabetes, and stroke prevention the J-type mortality curves even indicate some benefit apart from the social "well-being". In patients with chronic alcohol use disorders and severe heart failure prognosis is poor, since continued alcohol abuse results in refractory congestive heart failure. 
Death might also be sudden due to arrhythmias, heart conduction block, and systemic or pulmonary embolism. In these patients, only early and absolute abstinence of alcohol can reverse myocardial dysfunction [56,57,126] which in a historic study by McDonald and Burch was achieved with prolonged bedrest for several months without further access to alcoholic beverages. This was an excellent result long before ACE inhibitors or betablockers were available for heart failure treatment [57]. Mortality can otherwise reach 40-50 % within a 4-5 year period in the nonabstinent patients [127], whereas after withdrawal from alcohol hemodynamic and clinical improvement or at least a slower progression of disease compared to the idiopathic form of dilated cardiomyopathy was shown [128,129]. To maintain abstinence, recent investigations suggest the benefits of adjuvant medications, e. g., naltrexone, which is an opiate receptor antagonist that blocks endogenous opioid reward and reduces alcohol-cue-conditioned reinforcement signals; acamprosate, an agent that exerts action through excitatory amino acids; by disulfiram, an aldehyde dehydrogenase inhibitor, which causes in alcohol use acetaldehyde accumulation and symptoms such as nausea, flushing, sweating, and tachycardia or by selective serotonin re-uptake inhibitors (SSRI) [8,130,131]. To treat the alcohol problem, a combined approach comprising pharmacologic and psychosocial therapy involving self-help groups or Alcoholics Anonymous is essential. Treatment of alcoholic cardiomyopathy follows the usual regimen for therapy of heart failure, including ACE inhibitors, betablockers, diuretics including spironolactone or eplerinone, and digitalis in atrial fibrillation for rate control together with anticoagulation, whenever appropriate (. Table 3). Caution for anticoagulation is warranted due to the problems of noncompliance, trauma, and overdosage especially in hepatic dysfunction. Conclusion The individual amount of alcohol consumption decides on harm or benefit. The preponderance of data suggests that drinking one to two drinks in men and one drink in women will benefit the cardiovascular system over time. More than this amount can lead to alcoholic cardiomyopathy. Moderate drinking below that threshold might even reduce the incidence of coronary artery disease, diabetes, and heart failure.
2017-08-02T07:39:29.719Z
2016-08-31T00:00:00.000
{ "year": 2016, "sha1": "3e2d6df86d36513b4aa60cb8c84d36258a4d7329", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00059-016-4469-6.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "3e2d6df86d36513b4aa60cb8c84d36258a4d7329", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231990105
pes2o/s2orc
v3-fos-license
Re-estimation of basic reproduction number of COVID-19 based on the epidemic curve by symptom onset date Previous studies have reported the basic reproduction number (R0) of coronavirus disease from publicly reported data that lack information such as onset of symptoms, presence of importations or known super-spreading events. Using data from the Republic of Korea, we illustrated how estimates of R0 can be biased and provided improved estimates with more detailed data. We used COVID-19 contact trace system in Korea, which can provide symptom onset date and also serial intervals between contacted people. The total R0 was estimated as 2.10 (95% confidence interval (CI) 1.84–2.42). Also, early transmission of COVID-19 differed by regional or social behaviours of the population. Regions affected by a specific church cluster, which showed a rapid and silent transmission under non-official religious meetings, had a higher R0 of 2.40 (95% CI 2.08–2.77). The epidemic characteristics of coronavirus disease are reported by many countries [1][2][3][4][5][6]; however, the basic reproduction number (R 0 ) of COVID-19 should be discussed with some concerns. First, the symptom onset date should be used to estimate R 0 mathematically, instead of the date of the positive test result for accurate estimation but many researchers used the diagnosed date as a proxy [3]. In addition, imported cases among confirmed cases should be identified and excluded from the calculation. Finally, it is better to use the mean generation time of COVID-19 measured through a contact trace system [7] by investigating concise contact relationships between cases, not by assumptions or imputations. Since previous studies lacked the abovementioned features, this study aimed to re-estimate the R 0 of COVID-19 through the epidemic curve of early COVID-19 transmission using the measured serial intervals, which is defined as the time interval between the onset dates of symptom(s) in the infector and the infectee, and symptom onset date of not-imported cases. Data source and analysis Data regarding the study population were derived from the COVID-19 surveillance system in Korea Centers for Disease Control and Prevention (KCDC) [7], collected by the COVID-19 National Emergency Response Center, Epidemiology, and Case Management Team. All reported symptomatic cases in the Republic of Korea from 20 January (the date of the first recorded case) until 3 August were enrolled in this study. Symptomatic cases were defined as having respiratory or systemic symptoms of COVID-19 and were confirmed by real-time reverse transcriptase-polymerase chain reaction (RT-PCR) tests [8]. Symptom onset date was defined as the first date of any symptom newly started within 1 week before the diagnosis. By investigating former cases of confirmed cases, we matched 1567 symptomatic infector-infectee pairs which are relevant, to measure the mean serial interval of the population. Personal information was deidentified before the analysis. Since the data were collected as part of rapid response disease control by the government, the requirement of institutional review board approval was waived. After collecting the frequencies of symptomatic cases daily, we used the 'R0' package of R version 3.4.2 [9] to create a distribution of generation times and estimate the R 0 using the exponential growth method, which is one of the popular methods used in infectious disease dynamics studies when the growth rate and the serial interval are available in the data [10]. 
Estimated R 0 as total and as four different regions, Daegu, Gyeongbuk, Gyeonggi and Seoul, are reported since they showed early epidemics in Korea. Since one large specific church cluster (SCC) in Daegu showed rapid and silent transmission based on broad but non-official religious meetings, they are reported separately [3]. Characteristics of the population and an epidemic curve Of the 14 423 confirmed cases in the Republic of Korea until 3 August, 10 870 (75.38%) cases were symptomatic. In total, 1347 cases were excluded because they were classified as imported Figure 1 shows the difference in epidemic curves by the date used to display frequency. Confirmed date of cases is congregated after the large screening of SCC in late February, which is hard to use as estimating R 0 by the curve. However, symptom onset dates, which is actually used in the R 0 estimation [10], show exponential and much earlier growth than the reported dates. Epidemic curves by regions that showed early transmission of COVID-19 in Korea are shown in Supplementary Figure S1. The mean serial interval of 4.02 (S.D.: 4.92) days was fitted into γ distribution, which was calculated by symptomatic infector-infectee pairs including 969 infectors and 1567 infectees in the Republic of Korea until 3 August 2020. The total R 0 was estimated as 2.10 (95% confidence interval (CI) 1.84-2.42). SCC cases had an R 0 of 2.40 (95% CI 2.08-2.77) and non-SCC cases had an R 0 of 1.92 (95% CI 1.42-2.59). Regional differences in R 0 are shown in Table 1. Seoul, which was not affected by SCCs, had an R 0 of 1.76 (95% CI 1.20-2.54). Daegu and Geyongbuk showed different R 0 values, regardless of SCC cases; SCC cases in Daegu and Geyongbuk had a higher R 0 than overall of 2.49 (95% CI 2.14-2.91) and 2.37 (95% CI 1.32-4.22), respectively. If we use the publicly reported dates instead of the onset date for the comparison, R 0 for overall was estimated as 3.14 (95% CI 3.04-3.25). Discussion The study showed that the R 0 was 2.10 (95% CI 1.84-2.42). This is lower than 3.14, which is estimated by the reported dates in the Republic of Korea for the comparison. Although estimations through mathematical models or confirmed dates may have advantages in pandemic situation when the onset date is not available [1][2][3][4][5][6], it is meaningful that these two values may be different because of the following reasons. First, imported cases for the Republic of Korea were 9.34% of total confirmed cases since we had minimal border control. If daily frequencies of COVID-19 were considered including them, transmission rate or R 0 might have been overestimated. Also, due to rapid transmission of SCC group, all members of SCC group were tested for screening, resulting in congestion of confirmed dates which was the date they were tested. Similarly, there was some screening of hospitals also. These tests may have led to an overestimation of R 0 by estimating the larger growth rate than the reality. Therefore, using symptom onset dates is essential for accuracy. Nevertheless, if patients' date of onset is not available, there should be some alternative methods [3] to adjust the estimation gap between the symptom onset and the disease confirmation for the accurate epidemiology of the disease. Higher R 0 values were estimated in SCC cases, which caused silent transmission in Daegu and Gyeongbuk as confirmations were delayed by uncooperative behaviours of the religious group [4]. 
The difference in the estimated R 0 by region indicates that the transmission characteristics of COVID-19 may differ based on various factors such as social distancing guidelines or specific clusters showing different characteristics to the general population. In conclusion, this study estimated R0 values ranging from 1.53 to 2.41 in different regions using only the onset date of domestic symptomatic cases. We suggest that the countries should not only report a case by date of symptom onset but also distinguish imported or local transmission, and perhaps even large clusters should be highlighted. Otherwise, estimates of R 0 may be higher than reality.
2021-02-23T06:16:24.134Z
2021-02-22T00:00:00.000
{ "year": 2021, "sha1": "88870a8bd5e2b7eae7d0a6ad0d7d851ce3923747", "oa_license": "CCBYNCND", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/DE68685172442170E08DA6F398514153/S0950268821000431a.pdf/div-class-title-re-estimation-of-basic-reproduction-number-of-covid-19-based-on-the-epidemic-curve-by-symptom-onset-date-div.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "47f3c258381e2aa52417d1cce467bda1038f92c0", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
68554
pes2o/s2orc
v3-fos-license
Field Evaluation and Impact on Clinical Management of a Rapid Diagnostic Kit That Detects Dengue NS1, IgM and IgG Background Dengue diagnosis is complex and until recently only specialized laboratories were able to definitively confirm dengue infection. Rapid tests are now available commercially making biological diagnosis possible in the field. The aim of this study was to evaluate a combined dengue rapid test for the detection of NS1 and IgM/IgG antibodies. The evaluation was made prospectively in the field conditions and included the study of the impact of its use as a point-of-care test for case management as well as retrospectively against a panel of well-characterized samples in a reference laboratory. Methodology/Principal Findings During the prospective study, 157 patients hospitalized for a suspicion of dengue were enrolled. In the hospital laboratories, the overall sensitivity, specificity, PPV and NPV of the NS1/IgM/IgG combination tests were 85.7%, 83.9%, 95.6% and 59.1% respectively, whereas they were 94,4%, 90.0%, 97.5% and 77.1% respectively in the national reference laboratory at Institut Pasteur in Cambodia. These results demonstrate that optimal performances require adequate training and quality assurance. The retrospective study showed that the sensitivity of the combined kit did not vary significantly between the serotypes and was not affected by the immune status or by the interval of time between onset of fever and sample collection. The analysis of the medical records indicates that the physicians did not take into consideration the results obtained with the rapid test including for care management and use of antibiotic therapy. Conclusions In the context of our prospective field study, we demonstrated that if the SD Bioline Dengue Duo kit is correctly used, a positive result highly suggests a dengue case but a negative result doesn't rule out a dengue infection. Nevertheless, Cambodian pediatricians in their daily practice relied on their clinical diagnosis and thus the false negative results obtained did not directly impact on the clinical management. Introduction The World Health Organization estimates that 50 million dengue infections occur annually and approximately 2.5 billion people live in area at high risk of infection. These areas are located in tropical and sub-tropical regions in South East Asia, Africa, Eastern Mediterranean, Western Pacific, Central and South America. The number of reported cases increased approximately 30 times over the last 50 years [1] and this could be in relation to many factors including population growth, urbanization, failure to control mosquito vectors, etc. [2]. Dengue is a viral disease transmitted by Aedes mosquitoes, principally Ae. aegypti. Dengue virus (DENV) belongs to the family Flaviviridae, genus Flavivirus. There are 4 antigenically and genetically distinct serotypes (DENV-1, -2, -3 and -4). In human, the virus can cause a spectrum of illness ranging from asymptom-atic infection or self-limiting influenza-like illness (dengue fever or DF) to life-threatening disease associated with vascular leakage, hemorrhage (dengue hemorrhagic fever or DHF), potentially leading to vascular shock (dengue shock syndrome or DSS). There is currently no specific treatment available for dengue. An early diagnosis is nevertheless very important for efficient clinical management in order to cure or prevent life-threatening complications. 
In addition, accurate and early diagnosis directs clinical attention to warning signs of an evolution to severe forms and avoids unnecessary use of antibiotics. A range of serological and virological diagnostic methods are available but most of them require specialized laboratory equipment, experienced personnel and are time consuming which is not adapted for a field and point-of-care use. Serological diagnosis by ELISA or rapid diagnostic tests (RDTs) is technically easy to perform and provides fast results but requires most of the time paired sera to definitively confirm the diagnosis [1,3]. Detection of the NS1 antigen in the blood is a recent and very popular diagnostic method. This viral protein is secreted in the blood and can be detected by ELISA or immunochromatographic tests from the first day of fever and up to 14 days after infection [4][5][6][7]. The purpose of this study was to evaluate a commercial rapid dengue diagnostic kit, the SD Bioline Dengue Duo device (Standard Diagnostic Inc., Korea), in particular in point-of-care applications, and to evaluate the impact of the results of this combined test on the clinical management decision. The SD Bioline Dengue Duo kit is composed of 2 tests designed to detect DENV NS1 antigen (first test) and anti-DENV IgM/IgG (second test) in serum, plasma or whole blood. The kit evaluation was double. Firstly, the use of the test in the field was for the first time evaluated during a prospective study in 2 Cambodian provincial hospitals. The results obtained in the hospital's laboratories were then compared with those reported with the same samples by a national reference laboratory at Institut Pasteur in Cambodia (IPC). We also investigated how the results of this point-of-care test designed to assist clinical management were perceived and subsequently incorporated into the clinical management decision of physicians from 2 hospitals during a dengue epidemic. Secondly, a more usual retrospective case-control evaluation against reference methods was performed at IPC in order to assess the kit performances in the context of a dengue-endemic South-East Asian country. Materials and Methods Patients' recruitment and samples collection for the prospective evaluation Patients were enrolled in the pediatric wards of Kampong Cham and Takeo provincial hospitals during the 2011 dengue epidemic in Cambodia i.e. between June and October 2011. Patients presenting spontaneously to these hospitals or referred by health centers with a history of fever during the previous 7 days and at least one of the following symptoms: rash or severe headache or retro-orbital pain or myalgia or joint pain or bleeding, were examined by physicians who decided whether or not the child should be hospitalized. When the number of beds available was limited, priority was obviously given to the most severe cases. In each hospital, a maximum of 10 hospitalized patients, randomly selected, were enrolled weekly. Patient's information and clinical data were collected by physicians using a specific case report form and blood samples were taken at the time of hospital admission (early/acute specimen) and discharge (convalescent/late specimen). Patients with incomplete test kit results, missing blood samples and incomplete clinical records were excluded. 
Panel of samples used for the retrospective laboratory evaluation The panel used for the retrospective laboratory evaluation of the kit performances consisted of 157 samples collected in 2011 during the field prospective evaluation and tested negative or positive by the reference methods available at IPC completed with an additional 167 samples selected from IPC's dengue laboratory's biobank (samples collected between 2008 and 2010). Positive samples were selected in order to obtain an evaluation panel as balanced as possible in terms of DENV serotypes, day of collection after onset of fever (DAOF), anti-DENV antibodies titer and immune status (primary/secondary infections). Negative samples were selected from patients presenting with a non-dengue febrile illness and also from pregnant women. Ethical aspects For the field prospective evaluation, a written consent was signed by the children's legal representatives before enrolment. This study was approved by the Cambodian National Ethics Committee. The use of stored samples from IPC's biobank was also approved by the Cambodian National Ethics Committee. Dengue diagnosis The SD Bioline Dengue Duo kits were provided by Standard Diagnostics (Kyonggi-do, Korea) and tests were performed according to the manufacturer's instructions. For the prospective study, only acute blood samples were tested with the kit in hospitals as well as at IPC. At IPC, laboratory diagnosis was based on RT-PCR, isolation of DENV after inoculation into mosquito cell lines, detection of anti-DENV IgM and measure of an increase of anti-DENV antibodies titer measured by hemagglutination inhibition assay (HIA) between acute and convalescent sera. RT-PCR was performed after viral RNA extraction from acute serum samples using QIAmp Viral RNA Mini kit (Qiagen, Hilden, Germany). Either a conventional nested RT-PCR according to Lanciotti et al. [8] protocol and modified by Reynes et al. [9] or a real-time multiplex RT-PCR based on the technique developed by Hue et al. [10] was performed. DENV was isolated on C6/36 cells and the virus serotype identified by immunofluorescence assay using monoclonal antibodies as described previously [11]. An in-house IgM capture Enzyme-Linked Immuno-Sorbent Assay (MAC-ELISA) was used to detect anti-DENV and anti-Japanese Encephalitis virus (JEV) IgM as describe previously [11]. A result was considered positive for dengue when the optical density (OD) was higher than 0.1 for the DENV IgM and when the OD of the anti-DENV ELISA was higher than the OD of the anti-JEV ELISA. HIA followed the method described by Clark and Casals [12] adapted to 96-well microtiter plate. Primary or secondary acute dengue infection was determined by HI titer according to criteria established by WHO [13]. In brief, the patient was defined as Author Summary Dengue is a potentially life-threatening viral disease. Symptoms are often not specific hence the importance to confirm the diagnosis during the early stage of the disease. Nevertheless, until recently only specialized laboratories were able to confirm dengue diagnosis. The discovery of the NS1 protein as a marker of infection has allowed the development of point-of-care tests for a rapid diagnosis confirmation. These tests have previously been evaluated by laboratories, but their performances have never been assessed in field conditions. In this study we evaluated the performance of SD Bioline Dengue Duo kit when tests were performed by hospital laboratories staff in a dengue hyper-endemic country. 
We also assessed the impact of the test results on the clinical management decision. The combination of NS1 test with antibodies detection improved the performance, though discordances on IgM and IgG results were observed between the hospitals and the national reference laboratories. Physicians treated patients according to their clinical diagnosis and did not take negative results into consideration. having a primary infection when the convalescent serum had a HI titer #2560 associated with a fourfold rise of the titer between the acute and convalescent sera (collected with a time interval of at least 7 days). When the convalescent serum had an HI titer .2560, the patient was defined as having a secondary dengue infection. All early samples were tested by PCR, viral isolation, IHA and MAC-ELISA whereas late samples were only tested by HIA and MAC-ELISA. Confirmed and suspected dengue cases were defined according to WHO guidelines [1]. A confirmed case was defined by a RT-PCR and/or a culture positive result and/or an IgM seroconversion in paired sera and/or a fourfold antibodies titer increase measured by HIA in paired sera. A probable dengue infection was defined by an HI antibody titer .2560 in paired sera without a fourfold increase or IgM positive result in the acute serum [1]. At IPC, technicians were blinded for the results of the kit evaluated as well as for the results of gold standard tests. In hospitals, the staff performing rapid diagnostic tests was blinded for the results obtained with these tests as well as for the results of the gold standard assays. Hospital case management Each clinical record contained the complete medical data recorded at the time of admission and the complete follow-up of the patient during the hospitalization (temperature, blood pressure, pulse, diuresis, medical prescriptions, etc.) until discharge. These data were anonymized by the physicians for the purpose of the analysis. Statistical analysis Statistical analysis was performed using STATA version 11.0 (StataCorp, College Station Texas, USA). Significance was assigned at P,0.05 for all parameters and were two-sided unless otherwise indicated. Uncertainty was expressed by 95% confidence intervals (CI95). For the prospective study, agreement between hospital's laboratories and IPC laboratory's data was measured by agreement percentage and Kappa (k) coefficient. For the prospective study, sensitivity and specificity obtained when tests were performed at hospitals were compared with those obtained at IPC with McNemar test. Positive and negative predictive values (PPV and NPV) were compared with Fisher exact test. For the retrospective laboratory study and for the analysis of medical records Fisher exact test was used. During the retrospective laboratory evaluation, sensitivity was calculated according to infecting serotype, DAOF, immune status and antibodies profiles. Four different antibodies profiles were arbitrarily defined according to HIA and MAC-ELISA results: profile 1, low HI titer (,640) and negative MAC-ELISA; profile 2, low HI titer and positive MAC-ELISA; profile 3, high HI titer ($640) and negative MAC-ELISA; profile 4, high HI titer and positive MAC-ELISA. Prospective evaluation of the SD Duo kit's performances Characteristics of the study population. A total of 162 patients were enrolled (100 patients in Takeo and 62 in Kampong Cham). The NS1 result of one patient and the IgM/IgG results of 4 others patients were not reported by hospitals. These 5 patients were therefore excluded. 
At IPC, the reference laboratory tests confirmed 85 dengue cases, 41 children were classified as probable dengue infection, and in 31 cases a dengue infection was excluded. Among the 126 patients with confirmed or probable dengue, 32 were classified as DF, 84 as DHF and 8 as DSS. Clinical, virological and demographical information of the population are summarized in Table 1. All the patients enrolled in this study survived and were discharged without complication or sequelae. Comparison of results between hospital laboratories and Institut Pasteur's laboratory. For the NS1 test, an agreement of 98.1% and a k coefficient of 0.96 were obtained (Table 2). For the IgM/IgG tests, an agreement of 68.8% and a k coefficient of 0.55 were observed (Table 3). For the combined test (antibodies and NS1) the overall sensitivity, specificity, PPV and NPV at the hospitals were 85.7%, 83.9%, 95.6% and 59.1% respectively whereas they were 94,4%, 90.0%, 97.5% and 77.1% respectively at IPC. Sensitivity was significantly higher at IPC (p-value = 0.002). Specificity, PPV and NPV were better at IPC but the differences were not statistically significant ( Table 4). The combination of all the tests significantly improved the sensitivity and NPV of the kit at hospitals (p-value sensitivity ,0.001; p-value NPV = 0.001) as well as at IPC (p-values ,0.001) with a non-significant decrease of the specificity (p-value at hospitals = 0.12, p-value at IPC = 0.5) and of the PPV (p-value at hospitals = 0.67, p-value at IPC = 1). Retrospective laboratory evaluation of the SD Duo kit's performances Samples description. A total of 166 positive samples and 120 negative samples were included in the retrospective laboratory evaluation of the kit. Eighty five and 81 positive samples as well as 31 and 89 negative samples were obtained from the 2011's prospective study and from the IPC's biobank, respectively. The 41 patients defined as suspect dengue infection by the gold standard methods during the prospective study were also included in the retrospective study. Positive samples included 86 sera with a low HI titer (,640) and 80 sera with a high HI titer ($640) (51.8% and 48.2%, respectively). The distinction between low and high HI titer groups was established during a preliminary comparative study between HI titers and SD Duo kit IgG results. In the high HI titer group, the correlation between both tests was over 70% which was considered as acceptable (data not shown). Of note, HIA does not only detect IgG but also other immunoglobulin isotypes and as such some low HI titers measured during the early phase of the infection may not contain or only very low quantities of anti-DENV IgGs. DENV serotype was identified in 87.3% of the positive samples: there were respectively 57, 35, 26 and 27 (34.3%, 21.1%, 15.7% and 16.3%) DENV-1, -2, -3 and -4 cases. Forty seven (28.3%) samples were collected 2 days after onset of fever or earlier, 67 (40.4%) between day 3 and day 4, 41 (24.7%) between day 5 and 6, and 11 (6.6%) after the 7 th day of illness (Table S1). A total of 19 and 83 patients (11.4% and 50%) were classified as having primary and secondary infections, respectively. The test's sensitivity diminished significantly also when the interval of time between onset of fever and sample collection Table 5). The sensitivity was better with DENV-1 than DENV-4 but this difference was at the limit of significance (70.2% vs 48.2%, pvalue = 0.051). No differences were observed with the other serotypes ( Table 5). 
The sensitivity improved significantly when the interval of time between onset of fever and sample collection increased (overall p-value,0.001), from 40.4% (18/47, CI95 = [26.4-55.7]) for samples collected before the 2 nd day of illness to 90.9% (10/11 CI95 = [58.7-99.8]) for those collected after the 7 th day of illness (Figure 1). The sensitivity was also significantly better in patients with secondary than primary infections (79.5% vs 42.1%, p-value = 0.003) ( Table 5). All the 41 patients with a suspicion of DENV infection tested positive either by IgM only (6/41, 14.6%), by IgG only (11/41, 26.8%) or by both IgM and IgG (26/41, 63.4%). In 3 non-dengue febrile cases and 3 anti-JEV IgM positive cases the kit gave non concordant results compared to our reference methods. In 2 non-dengue febrile cases both IgM and IgG were positives. In 1 JEV case the IgG test was positive while in 1 non-dengue febrile and 1 JEV cases, IgG test was weakly positive. In the last JEV case, the IgM test was weakly positive. Table 5). The sensitivity did not vary significantly between the serotypes when compared all together (p-value = 0.868), or 2 by 2, but also not according to the immune status (p-value = 1) or the time interval between DAOF and sample collection (p-value = 0.8) (Table 5, Figure 1). Impact of the RDTs results on clinical case management in Cambodia The medical records of 129 patients (82.2% of all patients enrolled) were provided by the two hospitals and subsequently analyzed. All the 66 patients who tested positive for acute dengue infection using the IPC gold standard test were also clinically diagnosed by the physicians as dengue cases, with or without coinfection (63 and 3 patients, respectively). One patient with a laboratory-suspected DENV infection as well as two children who tested negative were clinically diagnosed as non-dengue febrile illness (Table 1). All patients received a treatment based on WHO 2009 recommendations, i.e., intravenous fluid therapy with 0.9% saline, Ringer's lactate or Ringer's acetate with or without dextrose, paracetamol if fever and oral rehydration solution or other fluids containing electrolytes and sugar when possible. Patients in circulatory shock received dextran, O 2 and blood transfusion when necessary. Twenty-nine patients (27.7%) also received antibiotics. The prescription of antibiotics was justified by the phisicians in the medical records of 11 patients because the following diagnoses: 4 dysenteric syndromes with suspicion of typhoid fever, 3 meningitis or meningo-encephalitis, 1 suspicion of nosocomial infection, 2 pharyngitis and 1 bronchiolitis. Among the 90 patients with a positive NS1 and/or IgM and/or IgG test, 17.8% (16/90) were treated with antibiotics. Out of 39 patients who tested negative by the RDT, 13 (33.3%) also received antibiotics. The comparison of antibiotic prescription between both groups was at the limit of significance (p-value = 0.067). There was no difference in the duration of antibiotic therapy between patients with a positive test and those with a negative test (p-value = 0.216). Among the 16 positive patients who received antibiotics, only 7 (43.7%) had their antibiotic therapy stopped once the point-ofcare kit tested positive for dengue. Among the 13 patients with a negative result who received antibiotics, 8 (61.5%) had their antibiotic therapy stopped once the test was performed. The decision to maintain or discontinue the antibiotic therapy was not affected by the result of the RDT (p-value = 0.338). 
Discussion Early management of patients with dengue infection is essential to ensure a favorable evolution of the disease and prevent the occurrence of severe forms. Until recently an early confirmed diagnosis was only achievable in specialized laboratories. The discovery of the NS1 protein as an early marker for DENV infection, especially in RDT format, now allows dengue diagnosis during the early phase of the disease, even in laboratories with limited equipments and human resources. Evaluations are required to ensure that these tests are suitable for diagnosis and clinical management or epidemiological surveillance and outbreak investigations. Different methodologies can be used: laboratorybased evaluations (or retrospective evaluations) and field evaluations (or clinical-based/prospective evaluations) [14]. Retrospective evaluations are easy to perform but tend to overestimate tests accuracy. Prospective evaluations allow determination of PPV and NPV with tests performed on patients in the real clinical settings. However, accuracy of diagnostic tests estimated by prospective evaluations could be biased due to imperfect gold standard in the prospective clinical setting. In our study we combined both prospective and retrospective evaluations. The retrospective part was added in order to better understand the results obtained in the field during the prospective study. Since the two test kits of the SD Bioline Dengue Duo combo test do not give exactly the same information, the NS1 assay was initially assessed alone in the prospective as well as in the retrospective study. If a positive NS1 test can confirm a dengue diagnosis, this is not the case for IgM and IgG tests as the antibodies remain detectable for months and thus a positive result obtained on a single blood specimen is only suggestive of a dengue infection. Indeed, to confirm an acute dengue infection by serology, an IgM seroconversion or a four-fold increase of IgG antibody titers in paired sera must be demonstrated (which cannot be done with the RDT kit as result is only qualitative) [1]. By evaluating separately, but in parallel, the NS1 test and the serological kit, we estimated the ability of the test to both suggest and confirm a dengue infection. During the prospective study, the sensitivity of the SD Bioline Dengue Duo NS1 when performed at the hospitals was only 44.5% to confirm dengue infections in children hospitalized for dengue-like illness during the epidemic season. The tests were carried out in laboratories equipped for routine medical biology. Out of the 127 patients included in the prospective evaluation, 70 (54.7%) had an HI titer $640 which could probably explains such a poor sensitivity. The retrospective study helps to understand why the sensitivity was limited. It suggested that the presence of high level of anti-DENV HI antibodies in the sample was a major factor for sensitivity decrease. Indeed, while a sensitivity .80% was obtained with samples containing no or low HI antibodies titer (,640), the sensitivity dropped to 37% when the HI titer was $640. Almost 86% of the samples with a high HI titer issued from patients with a secondary infection. Since HI titer reflects mainly IgG response, the poor sensitivity observed during secondary infections is probably directly linked to the high IgG titer. Similar observations were already made by other authors. 
In Vietnam, the same NS1 test demonstrated a sensitivity of 24.6% for samples positive for IgG by GAC-ELISA and a sensitivity of 77.3% in sera negative for IgG [15]. In Colombia, Osario et al. reported an even lower sensitivity (IgG negative: 65.6%, IgG positive: 15.6%) [16]. Of note, the methods used for IgG detection in these evaluations were all different and rather than giving the real performance of the kit, the data indicate a global trend to a lower sensitivity when IgG titers increase. Table 4. Performances of the kit against confirmed and probable dengue cases in hospitals and IPC. As others [16,17], we observed that the sensitivity of this test decreased when the window of time between onset of fever and sampling increased. This was expected since the IgG titer also increased with the time. Finally, a higher IgG titer also characterizes the secondary dengue infections and the better sensitivity of the NS1 in primary infections was also already reported [15][16][17]. The performances of the NS1 test reported here as well as by other retrospectives studies are close to those observed with other commercial NS1 RDTs [15,18]. A major value of the kit marketed by SD is the combination of the NS1 test with an anti-DENV antibodies detection kit. Indeed, the serological results improved the sensitivity by compensating for the loss of sensitivity usually observed with the NS1 test when used alone in the presence of specific anti-DENV antibodies. During the prospective evaluation, we demonstrated that the addition of IgM and IgG results to the NS1 data was only associated with a slight non-significant decrease of the specificity. However, this result should be interpreted with caution as the number of negative patients included was relatively small. In addition, the relatively low overall performance of the IgM/IgG test could well be partially due to imperfect gold standard tests. In the retrospective study, we did not observed any cross-reactivity with Chikungunya virus, Orientia tsutsugamushi or Plasmodium sp.. However when evaluating the SD Bioline Dengue Duo kit, Blacksell et al. [18] reported 12.2% (10/82) of cross-reactivity with Chikungunya virus, 12.5% (1/8) with Orientia tsutsugamuhi and 100% (1/1) with Plasmodium sp. When evaluating only the IgM part of the kit, Hunsperger et al. [19] reported around 35% of IgM cross-reactivity with malaria as well as some false positive results with leptospirosis, tuberculosis and West-Nile infections. During the prospective evaluation, the PPV value of the NS1 test was 98.2%, suggesting that the probability to correctly confirm a dengue infection was very high when the test was positive. When the test was used in combination, the PPV decreased only very slightly (NS1/IgM: 96.9%; NS1/IgM/IgG: 95.6%). Consequently, the NPV observed when the tests were performed in the hospitals was only 29% for the NS1 test alone and 56.8% for the combination test. In other words, the probability of truly exclude a dengue infection when the tests were negatives was low. These PPV and NPV results should be regarded with caution as they depend on the dengue disease prevalence that can be extremely different in other contexts and epidemiological situations. In this prospective study, the prevalence of dengue infection was very high (80.3%, 126/157) because the evaluation was performed during the peak epidemic season and only involved dengue suspect patients. 
A high prevalence of dengue infection among hospitalized suspect patients is common in Cambodia (87.8% on average between 2000 and 2008) and in neighboring countries like Vietnam (86.2% during a DENV-4 epidemic in 2002) [20,21]. On the samples collected during the prospective study, the results of the tests performed by technicians in hospital laboratories, or by health workers who had not received any specific training in the use of the kits, were compared with the results reported by the staff of the national reference laboratory at IPC. This comparison demonstrated moderate agreement for the serological tests and excellent agreement for the NS1 test. Indeed, 49 discordant results between the hospitals and IPC were observed with the IgM/IgG test, of which 34 (69.3%) were positive at IPC but negative at the hospitals, while 13 (26.5%) were negative at IPC but positive at the hospitals. These discrepancies could be explained if the reading was made before the recommended 15 minutes (leading to false negative results) or after the correct time (leading to the appearance of nonspecific bands), or by problems with the interpretation of weak signals (faint bands). To evaluate whether the issue was the interpretation of the faint bands, these data were removed from the analysis, and a better agreement percentage and kappa coefficient were obtained (82.0% vs 68.8% and 0.73 vs 0.55). A problem of reproducibility could also have accounted for some of the discrepancies observed. Nevertheless, in the case of poor reproducibility, an equal number of discrepancies should have been observed in each laboratory, which was not the case in our study. Moreover, all tests were from the same manufacturing lot.
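The agreement statistics just quoted, percent observed agreement and the kappa coefficient, can be computed from the paired qualitative reads as follows; the 2x2 cell counts here are hypothetical, chosen only to echo the asymmetric discordance pattern described (more IPC-positive/hospital-negative reads than the reverse).

```python
# Sketch: percent agreement and Cohen's kappa for paired binary reads
# (hospital vs reference laboratory). Counts are hypothetical.

def agreement_and_kappa(both_pos, hosp_pos_only, ipc_pos_only, both_neg):
    n = both_pos + hosp_pos_only + ipc_pos_only + both_neg
    po = (both_pos + both_neg) / n                 # observed agreement
    hosp_pos = (both_pos + hosp_pos_only) / n      # marginal positivity rates
    ipc_pos = (both_pos + ipc_pos_only) / n
    pe = hosp_pos * ipc_pos + (1 - hosp_pos) * (1 - ipc_pos)  # chance agreement
    return po, (po - pe) / (1 - pe)

po, kappa = agreement_and_kappa(both_pos=60, hosp_pos_only=13,
                                ipc_pos_only=34, both_neg=50)
print(f"agreement {po:.1%}, kappa {kappa:.2f}")
```

On the usual Landis-Koch scale, a kappa of 0.41-0.60 reads as moderate and 0.61-0.80 as substantial agreement, which matches the improvement (0.55 to 0.73) seen once faint bands were excluded.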
During a malaria RDT evaluation, misinterpretation of weak signals in the field had already been reported [22]. It was also reported that health workers in the field tend to read the results before the time recommended by the manufacturer [22,23]. Despite its relative ease of use, the performance of the IgM/IgG RDT is obviously partially person-dependent, hence the importance of providing specific training, or at least very clear pamphlets that can guide health workers in their interpretation and expose the risks of false results when the recommendations are not strictly followed. In contrast, very good agreement was observed with the NS1 test, since the bands in this immunochromatographic device almost always appear very clearly. As RDTs have a significant cost, promoting the use of these kits only makes sense if the health workers can perform the tests in good conditions, which seems to be sometimes challenging in intensive care units and pediatric wards that are often unable to cope during peak epidemics. Knowing these constraints and limitations, the manufacturer should be encouraged to correct, if possible, the reading issues of the serological test. The outcomes of the patients who were wrongly tested negative by the kit were a matter of concern, as RDTs are designed for rapid diagnosis and to assist physicians in their decisions. Dengue is a life-threatening disease that requires specific clinical care. The analysis of the medical records demonstrated that physicians ignored the negative results and followed their clinical instinct, as all patients who tested negative by RDT received intravenous fluid therapy, which is recommended in patients with warning signs [1] but is also often administered in mild cases to prevent complications. Similar observations have been made in the context of malaria RDT use: between 54% and 85% of patients with negative malaria RDT results were treated with anti-malaria drugs in Nigeria, Tanzania, Burkina Faso, the Philippines and Laos [22][23][24][25]. There are probably several reasons why physicians did not consider the negative results obtained with the RDT: the habit of relying mostly on clinical intuition, explained by frequently limited access to laboratory tests; some mistrust of a new test; the difficulty of understanding the kinetics of the immune response during dengue infection and the significance of the NS1, IgM and IgG test results; high confidence in clinical diagnosis when children present to pediatric wards with dengue-like symptoms during the epidemic season (especially since the national virological surveillance usually confirms more than 80% of the clinical dengue diagnoses) [20]; the fear that a misdiagnosed dengue infection will evolve towards DHF or DSS, even though these complications are relatively easy to prevent with simple clinical management; etc. The SD Bioline Dengue Duo test could have better utility in smaller medical care structures, like health care centers and dispensaries, where the proportion of dengue among all febrile diseases is lower (e.g., 12% of all febrile episodes in Kampong Cham province, 2006-2008) [11] and where the routine hematology (e.g., hematocrit, platelet count) that could help orient the diagnosis is often unavailable. One of the advantages of performing a rapid confirmatory diagnosis of dengue in the context of febrile illness is to avoid the unnecessary use of antibiotics. In the context of Cambodia, it seems the RDT results did not have a significant impact on the decision to start or discontinue antibiotic therapy. In an endemic country, especially in the context of an epidemic, it seems that the sensitivity of the NS1 RDT alone is too low and that only positive results should be taken into consideration. Nevertheless, the performances of the combined kits are good, and these kits appear to be a useful tool for clinicians, as they can quickly confirm the diagnosis of dengue and therefore contribute to optimal clinical management of cases and avoid unnecessary use of antibiotics or other drugs, which is important in the context of a developing country with limited resources. In conclusion, we observed that for a patient presenting with dengue-like symptoms in a dengue-endemic/epidemic region, an NS1-positive result obtained with the SD Bioline Dengue Duo kit confirms a dengue diagnosis, an IgM- and/or IgG-positive result highly suggests a dengue infection, but a negative result does not rule out a dengue infection. We have also demonstrated that the performances of the test in the field were lower than those obtained in the more experienced hands of technicians working in a national reference laboratory. This suggests that even for a point-of-care test theoretically designed to be used by untrained staff, a significant improvement in the performance of the test can still be expected if proper training and a quality assurance program are implemented. With time, the trust of physicians will probably increase as the accuracy of the test improves.
In general, manufacturers should always bear in mind that the ultimate goal of RDTs is essentially to be used as point-of-care tests or in support of epidemiological investigations; as such, they should be easy to use and stable at room temperature, and should not pose reading difficulties, unless the manufacturers can provide proper training and organize quality programs. More prospective field evaluations are still necessary to better assess the value of using such point-of-care tests under the real conditions that justified their development and to address some of the questions and concerns raised by this study.
Development of Poly(vinyl alcohol)–Chitosan Composite Nanofibers for Dual Drug Therapy of Wounds

Current trends in localized drug delivery are emphasizing the development of dual drug-loaded electrospun nanofibers (NFs) for an improved therapeutic effect on wounds, especially infected skin wounds. The objective of this study was to formulate a new healing therapy for an infected skin wound. To achieve this goal, this study involved the development and characterization of poly(vinyl alcohol) (PVA)/chitosan nanofibers loaded with ciprofloxacin and rutin hydrate. Polymers and drugs were used in different ratios. Nanofiber morphology was studied by scanning electron microscopy, thermal stability by thermogravimetric analysis, structure by the X-ray diffraction method, and integrity by Fourier transform infrared spectroscopy. Dissolution studies were performed to check the drug release behavior of the formulations. Antibacterial studies were performed against Staphylococcus aureus and Pseudomonas aeruginosa. The wound healing efficiency of the dual drug-loaded nanofibers was measured in a full-thickness excisional wound model in rabbits. The fabricated nanofibers were smooth in morphology. According to the FTIR findings, the drugs remained intact in the nanofibers. The results of the swelling ratio and porosity studies revealed that the pore size increased as the amount of chitosan was increased up to 30%, but a further increase in chitosan concentration reduced the swelling ratio and porosity. Drug release studies of the nanofibers depicted an initial burst effect followed by controlled drug release behavior. Drug-loaded nanofibers showed better activity against S. aureus than P. aeruginosa. The antibacterial efficacy of rutin hydrate with ciprofloxacin was improved compared to that of the formulation having rutin hydrate only, likely due to an additive effect in activity. Based on the wound healing studies, the nanofibrous membranes acted as a promising wound dressing material as compared to the commercial wound healing formulation. Drug-loaded polymeric nanofibers were successfully fabricated by using an electrospinning method. These nanofibers showed an efficient ability to deliver drugs and treat infected wounds.

INTRODUCTION

With the advancements in nanotechnology, polymeric nanofibers have a variety of applications in biomedical fields, including wound dressing, drug delivery systems (DDS), and tissue engineering. Because of their small size ranging from 5 to 100 nm in diameter, large surface area, high porosity, and high drug loading capacity, they are widely used for the development of antimicrobial products.1 Both natural and synthetic polymers can be used for nanofiber production.2 To make a nanofibrous mat with good mechanical properties and high immunogenicity, it must be combined with other synthetic or natural polymers like PVA (poly(vinyl alcohol)), PCL (polycaprolactone), and PLA (poly(lactic acid)).3 The composition, porosity, and morphology of electrospun nanofibers have a vital role in prolonged drug release and prevention of the burst effect.4 Drug encapsulation into a polymer increases its efficiency for sustained release over a longer period of time.2 Burn wounds are among the most frequently occurring traumas, leading to more than 250,000 deaths worldwide.5
Burn wounds can be classified based on depth and penetration into the skin, generally as first-, second-, and third-degree wounds. A third-degree wound (full-thickness wound) is the most complex condition and is prone to the growth of microorganisms, leading to delays in the wound healing process.6 Polymeric nanofibers loaded with antibacterial compounds play a vital role in inhibiting the growth of bacteria.7 The use of chitosan in the pharmaceutical field is increasing day by day because of its antibacterial, wound healing, and bioadhesion properties.8 A transdermal drug delivery system (TDDS) is among the most efficacious and novel technologies of recent times because of its minimal side effects on the liver and gastrointestinal tract.9 Transdermal drug delivery can be achieved with the help of patches, which are widely used for targeted drug release. It allows sustained drug release across the skin, while intramuscular and intravenous infusions are invasive approaches to drug delivery that require administration of the drug under the observation of medical staff.10 Both natural (such as rutin hydrate) and synthetic (ciprofloxacin) drugs can be loaded onto a TDDS. On coming into contact with wounds, the loaded drugs interact with tissues and accelerate the process of wound healing.11 It allows limited and controlled absorption of the drug. Thus, the transdermal route is sometimes preferred over oral and injectable routes.12 Ciprofloxacin hydrochloride is a fluoroquinolone antibiotic used for treating wound infections. Ciprofloxacin has a very short half-life, so it is very difficult to achieve slow and prolonged drug release.13 It is used to treat several types of infections including joint and bone infections, typhoid, chronic bacterial prostatitis, skin infections, diarrhea, chronic sinusitis,14 and outer ocular infections.15 Mostly, it is used for treating urinary tract infections and sexually transmitted bacterial infections.16 Ciprofloxacin is widely used for treating Gram-positive and Gram-negative bacterial infections.13 Rutin hydrate (MW 610 g/mol), also known as phytomelin, is a naturally occurring flavonoid that is present in more than 70 fruits and vegetables. Rutin is a glycoside that can chemically be mixed with natural and synthetic polymers. The major drawbacks of rutin are its poor solubility and low bioavailability, which were overcome here by the PVA and chitosan polymers. The incorporation of a biologically active compound into a polymer matrix composite in an aqueous environment is an efficient method to improve limited water solubility and bioavailability.2,17 Nowadays, one of the cutting-edge techniques for developing polymer-based drug delivery carriers is electrospinning. This is due to the fact that the average diameter of polymer fibers decreases to micro- or nanometers, which increases the surface area, thereby enhancing localized drug delivery.10 The use of nanofibers for biomedical applications is increasing day by day due to their unique properties, which include bioavailability, renewability, inexpensiveness, biodegradability, and biocompatibility.18 Along with these properties, their structural resemblance to the extracellular matrix, which promotes cell adhesion and proliferation, makes them suitable candidates for wound healing and other biomedical applications.19−22 The water-permeable characteristics of PVA make it a suitable candidate for use with many drugs, and its dissolution with other natural polymers is quite easy.23
Moreover, PVA is mostly used in electrospinning due to its high mechanical resistance, fiber-forming properties, and biocompatibility.24 It supports cell adhesion, proliferation, and migration. Nonetheless, PVA, due to its bioinert nature, cannot be administered alone for full-thickness wounds. Therefore, PVA is blended with other natural or synthetic polymers. Chitosan (CS) is a chitin derivative that is biocompatible, and this biofunctional aminopolysaccharide has an effective role in wound healing.19 Its structure is mostly based on D-glucosamine units, and the intermolecular hydrogen bonding and high crystallinity of these rigid units lead to its poor solubility in organic solvents. This positively charged polysaccharide carries aliphatic amine groups in its structure. Under acidic conditions, protonation of the aliphatic amines takes place and gives a cationic polyelectrolyte. The cationic behavior, hydrogen bonding, and rigid D-glucosamine units in the structure of chitosan make it highly viscous, which is the major reason for its poor electrospinnability, because the electrostatic field cannot overcome the surface tension of the solution. To reduce this problem, chitosan must be dissolved in dilute acetic acid solution. Increasing the concentration of chitosan increases its viscosity and decreases the shear thinning effect, which makes its electrospinnability possible only up to some extent.25 To deal with all these problems, chitosan must be used with another natural or synthetic polymer that is easily electrospinnable, e.g., PVA, silk fibroin, or poly(ethylene oxide) (PEO). Mostly, PVA is used with chitosan in drug delivery. The incorporation of chitosan into PVA nanofibers improves their surface hydrophilicity.26,27 In addition, chitosan nanofibers can mimic the natural extracellular matrix (ECM), which has attracted extensive attention in tissue engineering and wound dressing applications.28 Cross-linking is an important phenomenon for increasing the stability and wet-resisting capability of synthesized electrospun nanofibers. A variety of cross-linkers and cross-linking methods are employed in the fabrication of ciprofloxacin−rutin hydrate-loaded electrospun PVA/chitosan nanofibers.29 The polymers used in this study, PVA and chitosan, are hydrophilic in nature. Nanofibrous mats formed of these polymers alone can easily be destroyed in aqueous media due to poor wet stability and weak mechanical properties.30 These hydrophilic polymers are indeed effectively used for wound dressing because they have an excellent ability to absorb the exudates secreted from wounds, but loading hydrophilic drugs onto hydrophilic polymers leads to a burst release of drugs.6 So, cross-linking could be carried out to achieve sustained drug release from the nanofibrous mat. Chemical cross-linking was performed by exposing the nanofibrous membrane to glutaraldehyde solution.15,22 Other methods of cross-linking involve physical methods, which include UV irradiation, dehydrothermal treatment, and simple heating.31 A chemical cross-linker interconnects polymer molecules, increasing mechanical properties, which may lead to decreased degradability and reduced availability of functional groups in the fabricated nanofibers. Montmorillonite (MMT) is a nanoclay that is used as a reinforcement material for increasing the mechanical characteristics and stiffness of the polymer chains.32 Due to its nontoxicity and slow drug release behavior, it has an efficient role in drug delivery.33
Therefore, the aim and novelty of this study lie in nanofibers impregnated with two compounds, ciprofloxacin and rutin, fabricated using an electrospinning method. Nanofibers offer a high surface area and porosity, enhancing drug release kinetics and bioavailability.8 The controlled release of ciprofloxacin and rutin from the nanofibers can contribute to improved therapeutic outcomes by maintaining an effective drug concentration at the target site while minimizing systemic side effects. Additionally, the nanofibers provide a versatile platform for developing advanced drug delivery systems, paving the way for innovative approaches in pharmaceutical and medical applications.

2.2. Preparation of Nanofibers. Chitosan (CS) (2% w/w) was dissolved in distilled water with a few drops of acetic acid by continuous stirring at 340 rpm at 50 °C for about 12 h (Table 1, Figure 1). PVA was dissolved in distilled water under continuous magnetic stirring at 80 °C for about 7−8 h. Then, 1% (by weight) MMT solution was added as a reinforcement to the PVA solution to obtain high mechanical strength and thermal resistance. Both polymers were mixed with continuous stirring for about 10 h at 360 rpm for complete dissolution and stability of the polymer blend. Rutin hydrate solution was prepared separately by dissolving it in ethanol, while ciprofloxacin was dissolved in distilled water with 10−12 drops of acetic acid. The drugs were loaded into the polymer blend solution under continuous magnetic stirring for about 24 h at 380 rpm at room temperature to obtain a homogenized solution. To achieve highly uniform nanofiber diameters, electrospinning was carried out under controlled atmospheric conditions. The rutin−ciprofloxacin-loaded PVA-MMT/CS nanocomposite solution was transferred into a 5 mL syringe with an 11.99 mm diameter. The distance between the needle tip and the collector was kept at 14 cm. The solution was ejected from the needle at a flow rate of 400 μL/h, and the voltage was adjusted between 14 and 15 kV. After solvent evaporation, the nanofibrous mats were collected from the collector.

X-ray Diffraction Spectroscopy (XRD). This characterization technique was performed to analyze the crystallinity of the fabricated nanofibers. The pure drugs and the nanofibers were tested using an X-ray diffractometer (Philips X'Pert PRO 3040/60).

Thermogravimetric Analysis (TGA). The thermal stability of the rutin−ciprofloxacin-loaded CPNFs was analyzed using a Q600 SDT TGA analyzer. The thermal decomposition of the drug-loaded nanofibers was measured at 10 °C/min over a temperature range of 20−700 °C. The weight loss of the sample was observed as a function of temperature under a controlled temperature supply in a nitrogen atmosphere.

2.5. FTIR Characterization of Rutin−Ciprofloxacin-Loaded PVA/CS Nanofibers. Using FTIR, both qualitative and quantitative analyses can be carried out to determine the functional or reacting sites. This analysis was carried out on KBr pellets over the range of 4000 to 500 cm−1. Through FTIR, the nature of bonds, reaction kinetics, the presence of impurities, the presence of covalent bonds, and the presence of cis/trans configurations in a compound mixture can be determined.
2.6. SEM of Rutin−Ciprofloxacin-Loaded PVA/CS Nanofibers. SEM characterization of the synthesized nanofibrous mats was performed on a JEOL JSM-6400F. The morphology, topology, and diameter of the drug-loaded nanofibrous mat can be determined by this technique. SEM images determined the changing properties of the nanofibrous sheet after the addition of drugs and their exposure to glutaraldehyde solution.

2.7. Solubility Study. Distilled water and organic solvents including DMF, DMSO, methanol, and ethanol were used to check the solubility of the drugs. A total of 10 mg of drug was added to each solvent and stirred continuously at 50 rpm until a clear solution was obtained. The absorbance of both drugs was determined using a UV spectrophotometer.

2.8. Calibration Curve. Stock solutions of both drugs were prepared by dissolving 10 mg of drug in 10 mL of a methanol:distilled water mixture. From this stock solution, various dilutions in the range of 0.1−1 μg/mL were prepared by taking the required amount of stock solution and further diluting it with 10 mL of fresh solvent. The absorbance of the different dilutions was measured using a UV spectrophotometer at 278 nm for ciprofloxacin hydrochloride and 360 nm for rutin hydrate. Finally, calibration curves were drawn using MS Excel.

2.9. Swelling Ratio and Porosity Studies. Swelling ratio and porosity studies were performed to assess the fluid absorption and permeation capabilities, the porosity study serving to check how porous the nanofiber formulations were. Porosity was calculated from the following quantities: W2, the weight of the wet sample; W1, the weight of the dry fiber; V1, the volume of solvent before adding the fiber; V2, the volume of solvent after removing the fiber; and ρ, the density of the solvent in g/mL at 25 °C (for water, ρ = 0.9970 g/mL).
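The porosity formula itself did not survive extraction, so the sketch below is a hedged reconstruction: the swelling ratio uses the standard gravimetric form, the pore volume follows from the absorbed-solvent mass and the solvent density defined above, and the total (apparent) scaffold volume is passed in explicitly because the exact volume term of the original expression is unknown.

```python
# Hedged sketch of the Section 2.9 calculations. W1/W2 are the dry/wet
# weights (g); rho is the solvent density (g/mL; water = 0.9970 at 25 °C).
# The porosity expression is an assumption, since the source formula was
# lost: the total scaffold volume is therefore an explicit argument.

RHO_WATER = 0.9970  # g/mL at 25 °C

def swelling_ratio_percent(w_dry, w_wet):
    # Standard gravimetric swelling ratio.
    return (w_wet - w_dry) / w_dry * 100.0

def porosity_percent(w_dry, w_wet, total_volume_ml, rho=RHO_WATER):
    # Pore volume estimated as the volume of absorbed solvent.
    pore_volume_ml = (w_wet - w_dry) / rho
    return pore_volume_ml / total_volume_ml * 100.0

# Hypothetical measurement: a 50 mg mat absorbing 180 mg of water,
# with an apparent wet-mat volume of 0.25 mL.
print(swelling_ratio_percent(0.050, 0.230))   # -> 360.0 (%)
print(porosity_percent(0.050, 0.230, 0.25))   # -> ~72.2 (%)
```

The V1 and V2 readings presumably feed the volume term of the original formula; since that mapping cannot be recovered with certainty, it is left to the caller here.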
2.10. Drug Release Studies. Using a type II dissolution apparatus, drug release studies were performed in a dissolution medium containing 10% methanol at pH 6.8, maintained at 37 ± 0.05 °C and stirred at 50 rpm. Sampling was done at predetermined time points, and the samples were tested using a UV spectrophotometer at 278 nm for ciprofloxacin and 360 nm for rutin. Dissolution studies were carried out in triplicate. The drugs were quantified using the calibration curves.

2.11. In Vitro Antibacterial Assay. The antibacterial activity of the fabricated nanofibers was checked by an agar well diffusion test, and the zone of inhibition (ZOI) was studied against Staphylococcus aureus and Pseudomonas aeruginosa isolates. These isolates were acquired from a microbiology laboratory. Agar medium was prepared and poured into Petri plates, and the samples were placed on the medium. Finally, the plates were incubated under standard conditions.

2.12. In Vivo Wound Healing Studies. The Research Ethics Committee of COMSATS University Islamabad (CUI), Lahore Campus, approved all protocols for this animal study (approval number: 897/CUI/PHM-2021). All the animal experiments were conducted in accordance with the ARRIVE Guidelines and the U.K. Animals (Scientific Procedures) Act 1986 and associated guidelines for the care and use of laboratory animals. Antibacterial and wound healing studies were performed at the Department of Pharmacy, COMSATS University Islamabad, Lahore Campus. The wound healing property of the nanofibers was checked against a marketed formulation (Quench). For this purpose, 12 rabbits weighing 1−2 kg were selected. The posterior dorsal region of the rabbits was trimmed and shaved to obtain clear skin and washed with ethanol to minimize the chance of rashes. The rabbits were anesthetized with xylazine and ketamine hydrochloride injections prior to wound creation. Four wounds were created on the four legs of each rabbit with the help of a red-hot iron (wound area 1−2 cm2), and bacterial strains were then seeded on each wound. On one wound, drug-loaded nanofibers were applied. The second wound received drug-free nanofibers. On the third wound, the marketed formulation (Quench) was applied, and the fourth wound was kept open and untreated. The following formula was used to check the healing of the wounds: wound closure (%) = [(A0 − A)/A0] × 100, where A0 is the original area of the wound and A is the wound area after a fixed time.

2.13. Statistical Analysis. The paired t test was applied to compare the findings using SPSS version 22.0, with the level of significance set at 0.05.

X-ray Diffraction Spectroscopy (XRD). This characterization technique was performed to check the crystallinity of the fabricated nanofibers. The XRD results in Figure 2 clearly depict the semicrystalline nature of the drug-loaded nanofibers. The XRD pattern of R1 was investigated to check whether the nanofibrous sheet is crystalline, amorphous, or semicrystalline in nature. The XRD data in Figure 3 clearly show the semicrystalline nature of the nanofibers, because the peaks are neither too sharp nor too broad. Although PVA and chitosan are crystalline in nature, the peaks are not sharp, due to the adsorbent nature of chitosan and the cross-linking of the nanofibrous sheet with glutaraldehyde, which changes the peak intensity. The peak of chitosan indicated the presence of inter- and intramolecular hydrogen bonding between the hydroxyl and amino groups, forming hydrated and anhydrous crystals.34 The nanofibers of R2 showed no sharp peak, and the intensity of the peaks diminished, which could be due to obstruction of the hydrogen bonding of cationic chitosan by ciprofloxacin and rutin hydrate.

Thermal Stability of Rutin−Ciprofloxacin Nanofibers by TGA. The thermal stability of the drug-loaded nanofibers was estimated by TGA analysis performed on the R1 sample. Depending upon the shape of the curve, one can determine how stable the material is. The TGA results of the PVA/chitosan nanofibers revealed that weight loss occurs at three different points, but the weight loss is very low, which gives another hint about the sustained release of the drug. The TGA graph of drug-loaded nanofibers for formulation R1 (rutin−ciprofloxacin) was investigated. The weight loss was measured at three different points, showing a slow decomposition of the polymers and drugs (Figures 4 and 5). The first weight loss occurred at about 50−120 °C and was due to the loss of bound water present in all the nanofiber formulations. The second weight loss started at 125−130 °C, at which point the decomposition of the polysaccharide chains of both polymers (PVA and chitosan) started and the drugs started melting.

FTIR Analysis of Drug-Loaded CPNFs.
The synthesized nanofibers were further subjected to FTIR analysis. In the IR spectra of the nanofibers loaded with both drugs (ciprofloxacin HCl and rutin hydrate), an OH peak appeared at 3376.06 cm−1, while for chitosan it was observed at 3346.49 cm−1. In the case of chitosan, other peaks appear, including C−H stretching at 2937 cm−1, C=O at 1652 cm−1, C−H bending at 1356 cm−1, and C−O at 1059.72 cm−1. Different peaks for ciprofloxacin were observed at different wavenumbers: the NH peak appeared at 3340.57 cm−1, OH at 2935.20 cm−1, C=O at 1705.52 cm−1, C−F within the range of 1400−1000 cm−1, and C−O at 1270.87 cm−1. The hydroxyl group of rutin hydrate showed a peak at 3334.66 cm−1, C=O at 1656.05 cm−1, and C−O at 1296.29 cm−1. The FTIR results of both polymers, the drugs, and the R1 formulation, which contains both polymers and drugs with MMT, are shown in Figure 6.

Physical Characteristics of Rutin−Ciprofloxacin Nanofibers by SEM. SEM micrographs of the R1 formulation were acquired at 10 kV. The images at high magnification revealed the smooth morphology of nanofibers with diameters in the nanorange (Figure 7), but they also showed bead formation, which is due to chitosan. Chitosan has gelling properties, so it frequently blocks the needle tip of the electrospinning machine and causes spitting onto the sheet. Increasing the amount of chitosan aggravates this problem, which in turn affects the fiber diameter. The SEM results indicated that these electrospun nanofibers are arranged in a crisscross manner, showing spaces among the nanofibers. In addition, the image scale confirms that the diameters of these nanofibers are in the nanorange.

3.5. Pharmaceutical Studies. 3.5.1. Calibration Curves of Rutin Hydrate and Ciprofloxacin Hydrochloride. The calibration curves of rutin hydrate and ciprofloxacin hydrochloride were plotted between the drug concentration in different dilutions and its absorbance. The findings showed that the linear regression coefficients were R2 = 0.99462 for ciprofloxacin hydrochloride and R2 = 0.9938 for rutin hydrate.
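For readers reproducing the Section 3.5.1 fit outside MS Excel, a least-squares calibration line and its R² take only a few lines; the concentration-absorbance pairs below are hypothetical placeholders, while the quoted R² values (0.99462 and 0.9938) are the paper's.

```python
# Sketch: UV calibration curve by least squares, with R^2.
# The data points are hypothetical placeholders.
import numpy as np

conc = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])          # ug/mL dilutions
absorbance = np.array([0.012, 0.025, 0.048, 0.071, 0.097, 0.118])

slope, intercept = np.polyfit(conc, absorbance, 1)        # A = m*C + b
pred = slope * conc + intercept
r2 = 1 - np.sum((absorbance - pred) ** 2) / np.sum((absorbance - absorbance.mean()) ** 2)
print(f"A = {slope:.4f}*C + {intercept:.4f}, R^2 = {r2:.4f}")

# An unknown sample is then quantified as C = (A_sample - b) / m.
```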
3.5.2. Swelling Ratio. This study was used to check the water uptake capacity of all the nanofiber formulations. According to Figure 8, as the amount of chitosan increases in formulations R1 to R3, the swelling ratio also increases; however, this trend is not seen in formulation R4, likely due to bead formation. Chitosan used in higher amounts (such as 40%) leads to bead formation, which reduces the number of pores on the nanofiber surface, resulting in a reduced swelling ratio. PVA is permeable to water, and chitosan is also a good absorbent. So, mixing both polymers increases water uptake because of the increase in hydrophilic groups in the mixture. Thus, the swelling ratio first increases and then decreases when chitosan is increased beyond 30%. The swelling ratios of R5, R6, and R7 are comparable to one another because these formulations have the same amount of chitosan. R7 is a blank formulation and shows more water uptake than R6, which could be due to the presence of 20% rutin hydrate in R6, which is insoluble in water.

3.5.3. Determination of Porosity. The porosity study is shown in Figure 9. The increase in swelling promotes porosity. Like the swelling behavior, R3 has the highest (P < 0.05) porosity. There was an increase in porosity from formulations R1 to R3, while the porosity decreased in the other formulations (Figure 9). Formulation R4 contained the maximum amount of chitosan, but the use of more than 30% chitosan in the formulations led to bead formation in the nanofibers, which resulted in a decreased number of pores. Formulation R5, containing 20% ciprofloxacin, showed more porosity than R6, having 20% rutin hydrate. This behavior is likely because of the differing miscibility of the two drugs. Blank formulation R7 showed more porosity than R1, R5, and R6, although all of them had the same amount of PVA and chitosan; this difference could be due to the presence of drugs having different solubilities.

3.5.4. Drug Release Studies. Formulation R1 permitted the fastest release of ciprofloxacin, followed by R2, R3, and so on. This decrease in the ciprofloxacin release rate was associated with a proportional increase in chitosan concentration. However, this increase in the amount of chitosan did not exert a significant retarding effect on the release of rutin. This difference in the release behavior of the two drugs is due to their different solubilities: ciprofloxacin is more water-soluble than rutin,4 and thus it was released at a faster rate. In addition, a burst release of both drugs occurred in the first 15 min, followed by a very slow release phase. The various formulations released about 25−79% of drug during the initial 15 min. This swift release could be attributed to the hydrophilic nature of the polymers (chitosan and PVA) used in the fabrication of the nanofibers (Figure 10). Similar findings have been reported in previous publications.5,6

3.5.5. In Vitro Antibacterial Assay. The antibacterial activity of the drug-loaded NFs was evaluated by a dynamic contact assay and the agar well diffusion method (Figure 11A,B) at the MIC (6 μg/mL) against different strains of S. aureus and P. aeruginosa. In the dynamic contact assay method, both Gram-positive and Gram-negative bacteria were incubated with the different membrane species for about 12 to 24 h. All the formulations showed greater activity against S. aureus than against P. aeruginosa (Figure 12). The antibacterial activity of the PVA/chitosan nanofibers against S. aureus is more pronounced than that against P. aeruginosa because the outer membrane structure of Gram-negative bacteria limits the permeability of both drugs. Formulation R3 (cipro−rutin ratio 1:1) showed the highest ZOI against S. aureus. Although there is no specific trend in the ZOI findings, the increased concentration of chitosan, as in formulations R3 and R4, resulted in significantly (P < 0.05) improved activity against both bacterial strains compared to formulations R1 and R2. Since chitosan acts as a natural antibiotic, increasing the amount of chitosan increases the antibacterial activity. Chitosan is polycationic in nature, so its main action is to destroy the bacterial cell wall, which hinders the delivery of nutrients into the cell and promotes the leakage of intracellular components.
In the case of P. aeruginosa, the nanofibers did not show promising results, and changing the amount of chitosan did not have any impact, which is attributed to bacterial resistance. The results show that the antibacterial efficacy of rutin hydrate combined with ciprofloxacin is much better than that of the formulation having only 20% rutin hydrate. As expected, the pure PVA scaffold completely dissolved by the end of the 24 h incubation, and a nonsignificant (P > 0.05) inhibition zone was observed.35 In contrast, a neat inhibition zone was observed for the drug-loaded PVA/chitosan nanofibers. In the case of the non-drug-loaded PVA/chitosan nanofibers, a narrow inhibition zone was observed, which is due to the frequently reported antibacterial activity of chitosan.36 This study evaluated the antibacterial activity of the dual drug-loaded nanofibers against P. aeruginosa and S. aureus, revealing that the antibacterial activity of the drug-loaded nanofibers was greater (4 times more) than that of the free drug.

3.6. Wound Healing Studies. The wound healing process is a combination of four stages, namely, hemostasis, inflammation, proliferation, and remodeling.37 The process of hemostasis is initiated by vascular constriction followed by coagulation of the blood to slow the flow of blood from the injured tissue. The second phase is the inflammation phase, in which nutrient-rich blood is supplied to the site of injury, resulting in swelling of the tissues. This phase also includes the angiogenesis process, which involves proteins and cytokines, the signaling molecules involved in preventing infection.38 The proliferation phase involves the formation of new cells, leading to granulation of new cells and tissues and the formation of collagen by fibroblast cells at the site of injury.4 As new cells form around the wound, a scar is formed, while collagen is involved in the remodeling and closure of the wound.35 This is known as the remodeling phase.39,40 The wound healing performance was checked with the help of a scale after different time periods (Table 2, Figures 13 and 14).

The percentage area of wound closure was measured for 17 days. Wounds were treated with the antibacterial cream and with drug-loaded and blank NFs. Healing was faster in wounds treated with drug-loaded and blank NFs than in Quench-treated wounds and open wounds, because chitosan itself has antibacterial properties and the addition of drugs increases the healing properties of the NFs.37 The wound was fully closed after 17 days with the drug-loaded formulation. Drug-loaded NFs inhibit bacterial growth by killing the bacteria; the blank NFs also showed good results after the drug-loaded NFs, which is likely due to the presence of chitosan, which acts as a wound healing accelerator.38 The process of wound healing progresses steadily when the drug is incorporated into the nanofibers. So, PVA−chitosan NFs with the addition of ciprofloxacin HCl and rutin hydrate act as an excellent wound dressing.
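As a small reproducibility aid, the closure metric from Section 2.12, closure (%) = (A0 − A)/A0 × 100, is sketched below; the day and area values are hypothetical, not the study's measurements.

```python
# Sketch: percent wound closure, (A0 - A) / A0 * 100.
def wound_closure_percent(a0_cm2, a_cm2):
    return (a0_cm2 - a_cm2) / a0_cm2 * 100.0

a0 = 2.0                                  # original wound area, cm^2 (hypothetical)
for day, area in [(0, 2.0), (7, 1.1), (17, 0.0)]:
    print(f"day {day:2d}: {wound_closure_percent(a0, area):5.1f}% closed")
```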
CONCLUSIONS AND PERSPECTIVE

In summary, this research study provided insights into the potential applications of ciprofloxacin- and rutin-loaded nanofibers. The study highlighted the successful synthesis of dual drug-loaded nanofibers using the electrospinning method and emphasized the optimal conditions for electrospinning. The cost-effectiveness of the rutin−ciprofloxacin PVA−CS nanofibers was demonstrated, making them an excellent dressing material with efficacious antibacterial activity compared to the standard. The cell viability of the synthesized nanofibers was high, as evaluated in cell culture. Moreover, the dissolution studies revealed a burst release of the drugs within 15 min, which depends on the solubilities of ciprofloxacin and rutin hydrate, and the dissolution of rutin was proportional to the increase in the amount of chitosan. Finally, in vivo animal studies revealed the progression of wound healing over 17 days. Overall, the rutin−ciprofloxacin dual drug-loaded PVA−chitosan nanofibrous films were evaluated to have great potential for biomedical applications because of their antibacterial activity, biocompatibility, and full-thickness burn wound healing capability.

Figure 2. XRD graph of the R1 formulation in which both drugs are present in the nanofibers.
Figure 5. First derivative of the weight loss in TGA, showing a weight loss around 350 °C.
Figure 6. Comparison of FTIR spectra of PVA, chitosan, ciprofloxacin hydrochloride, and rutin hydrate with formulations R2 and R7, indicating complete loading of the drugs in the nanofibers.
Figure 9. Determination of the porosity of different formulations of nanofibers.
Figure 10. Drug release behavior of various formulations.
Figure 11. Optical images of the antibacterial activity of NFs against (A) S. aureus and (B) P. aeruginosa.
Figure 14. Relationship between wound closure and treatment days in the wound healing studies.
Table 1. Formulations of nanocomposite samples upon adding a constant amount of MMT (0.01 g).
Effect of probiotic administration on the intestinal microbiota: current knowledge and potential applications

Although it is now known that the human body is colonized by a wide variety of microbial populations in different parts (such as the mouth, pharynx and respiratory system, the skin, the gastro- and urogenital tracts), many effects of the complex interactions between the human host and microbial symbionts are still not completely understood. The dysbiosis of the gastrointestinal tract microbiota is considered to be one of the most important contributing factors in the development of many gastrointestinal diseases such as inflammatory bowel disease, irritable bowel syndrome and colorectal cancer, as well as systemic diseases like obesity, diabetes, atherosclerosis and non-alcoholic fatty liver disease. Fecal microbial transplantations appear to be promising therapies for dysbiosis-associated diseases; however, probiotic microorganisms have been growing in popularity due to increasing numbers of studies proving that certain strains present health-promoting properties, among them the beneficial balance of the intestinal microbiota. Inflammatory bowel diseases and obesity are the pathologies for which there are the most studies showing this beneficial association, using animal models and even human clinical trials. In this review, the association of the human gut microbiota with human health will be discussed, along with the benefits that probiotics can confer on this symbiotic activity and on the prevention or treatment of associated diseases.

INTRODUCTION

In February 2014, a MEDLINE search using the keyword "probiotics" crossed with "microbiota" or "microbiome" would return over 1294 articles. Of these, almost 75% (962) were published in the last 5 years (between 2009 and the beginning of 2014), showing that the association between probiotics and microbiota is not only recent but is also gaining the attention of scientists from around the world. The objective of this review is to give an overview of the most recent studies that have shown that the use of probiotics can modify the human microbiota and in turn can help in the prevention or treatment of a growing number of diseases that can be caused by a dysbiosis in the microbiota composition.
DEFINITIONS

Before going any further, it is important to clearly define the two terms described in this review: probiotics and microbiota. The most commonly accepted definition of probiotics was published by the World Health Organization/Food and Agriculture Organization in 2001, which stated that probiotics are "live microorganisms which when administered in adequate amounts confer a health benefit to the host" [1]. However, according to the International Scientific Association for Probiotics and Prebiotics (ISAPP), a non-profit scientific organization dedicated to advancing the science of probiotics and prebiotics, the term probiotic is commonly misused both commercially, when the term is featured on products with no substantiation of human health benefits, and scientifically, where the term has been used to describe bacterial components, dead bacteria or bacteria with uncharacterized health effects in humans (http://www.isapp.net/Portals/0/docs/ProbioticDefinitionClarification.pdf). The ISAPP does not provide a new definition for probiotics; it simply points out the important elements contained in the FAO/WHO definition. This being said, they clarify that a probiotic must: (1) be alive when administered; (2) have undergone controlled evaluation to document health benefits in the target host; (3) be a taxonomically defined microbe or combination of microbes (genus, species and strain level); and (4) be safe for its intended use.

Although the terms are sometimes used synonymously, "microbiome" and "microbiota" describe either the collective genomes of the microorganisms that reside in an environmental niche or the microorganisms themselves, respectively [2][3][4]. The term "microflora" is an equivalent term for "microbiota" that was used in the past and still appears in recent articles. The term "microbiota" is thus "the microscopic living organisms of a region" or "the microorganisms of a particular site, habitat, or geological period" according to Dorland's Medical Dictionary for Health Consumers (2007) and the Oxford Dictionary, respectively.

A key concept that has gained a lot of attention is "the human holobiont". In this theory, humans did not evolve as a single species; instead, they evolved with a complex associated microbiota, building a kind of "superorganism" or holobiont [5]. The human superorganism is a conglomerate of mammalian and microbial cells, with the latter estimated to outnumber the former by ten to one and the microbial genetic repertoire (microbiome) estimated to be approximately 100 times greater than that of the human host [6]. The association between the host and its microbiota (also referred to as the symbiote) provides a mutually beneficial relationship. It has recently been shown that the symbiote not only protects the host from pathogens but also decreases immune disorders by immunomodulation; while the host provides shelter and nutrients to the symbiote, the symbiote in turn improves various body functions, such as digestion, to provide essential nutrients to the host [7].
Although it is now known that the human body is colonized by a wide variety of microbial populations in different parts of the body (such as the mouth, pharynx and respiratory system, the skin, and the gastro- and urogenital tracts), many effects of these complex interactions are still not completely understood. In this review, the association of the human gut microbiota with human health will be discussed, along with the benefits that probiotics can confer on this symbiotic activity and on the prevention or treatment of associated diseases.

DISEASES

The human body harbors over 10¹⁴ microorganisms in the gastro-intestinal tract (GIT), literally 10 times more than the cells of the entire human body itself. In the past, it was thought that this microbiota was useful to the host because it could contribute nutrients and energy via the fermentation of non-digestible dietary components in the large intestine. Now, it is recognized that the microbiota is also extremely important to human health, due to the emergence of studies showing that a dysbiosis of the GIT microbiota can cause diseases, or that in certain diseases there is an observable change in the composition of this microbiota.

According to a recent review, a healthy microbiota is defined by high diversity and an ability to resist change under physiological stress; in contrast, microbiota associated with disease is defined by lower species diversity, fewer beneficial microbes and/or the presence of pathobionts [8]. In that review, diet-induced dysbiosis was described as a contributing factor in the development of gastrointestinal diseases like inflammatory bowel disease, irritable bowel syndrome and colorectal cancer (CRC), as well as systemic diseases like obesity, diabetes, atherosclerosis and non-alcoholic fatty liver disease (NAFLD) (Figure 1).
The close proximity of the GIT microbiota to the mucosa and gut lymphoid tissue helps explain why a balanced microbiota is likely to preserve mucosal health, whereas an unbalanced composition, as seen in dysbiosis, may increase the prevalence of diseases not only of the mucosa but also elsewhere in the body, due to the strong interactions with the gut immune system, the largest immune organ of the body [9]. Such abnormalities have been pinpointed as etiological factors in a wide range of diseases, including autoimmune disorders, allergy, irritable bowel syndrome, inflammatory bowel disease, obesity, and colon cancer. The intestinal mucosa is the body's first line of defense against pathogenic and toxic invasions from food. After ingestion, orally administered antigens encounter the GALT (gut-associated lymphoid tissue), a well-organized immune network that protects the host from pathogens and prevents ingested proteins from hyperstimulating the immune response through a mechanism called oral tolerance. The main mechanism of protection given by the GALT is the humoral immune response mediated by secretory IgA (s-IgA), which prevents the entry of potentially harmful antigens while also interacting with mucosal pathogens without potentiating damage. The stimulation of this immune response could thus be used to prevent certain infectious diseases that enter the host through the oral route. Numerous studies, recently reviewed [10], have shown that certain probiotic strains can increase s-IgA and modulate the production of cytokines (mediators produced by immune cells) that are involved in the regulation, activation, growth, and differentiation of immune cells.

Inflammatory bowel diseases (IBD), such as Crohn's disease (CD) and ulcerative colitis (UC), as well as irritable bowel syndrome (IBS), can arise from the disruption of immune tolerance to the gut commensal microbiota, leading to chronic intestinal inflammation and mucosal damage in genetically predisposed hosts [11,12]. The gut microbiota composition and activity of IBD patients are abnormal, with a decreased prevalence of dominant members of the human commensal microbiota (i.e., Clostridium clusters XIVa and IV, Bacteroides, bifidobacteria) and a concomitant increase in detrimental bacteria (i.e., sulphate-reducing bacteria, Escherichia coli) [13]. Enterobacteria and Bacteroides species have been implicated as important factors in the observed dysbiosis and in the development and recurrence of IBD [14]. The observed dysbiosis is concomitant with defective innate immunity and bacterial killing (i.e., reduced mucosal defensins and IgA, malfunctioning phagocytosis) and an overaggressive adaptive immune response (due to ineffective regulatory T cells and antigen-presenting cells), which are considered the basis of IBD pathogenesis.
Changes in the equilibrium of the intestinal microbiota were also associated with the presence of CRC. A comparative study of the stool microbiome of healthy individuals and CRC patients showed that butyrate-producing bacterial species were under-represented in the CRC samples, and this finding was correlated with proportionately lower amounts of butyrate and higher concentrations of acetate in the stools of CRC patients compared to healthy individuals [15]. These results agree with the conception that butyrate is a microbial metabolite reported to have anti-tumorigenic effects, which were associated with the decrease of colonic inflammation, the reinforcement of the colonic barrier and the decrease of oxidative stress [16]. Similar results were recently observed using a 1,2-dimethylhydrazine (DMH)-induced colon cancer model in rats. The animals from the tumour group showed a reduction of butyrate-producing bacteria such as Roseburia and Eubacterium in the gut microbiota. This experimental work also showed that DMH-induced carcinogenesis was associated with a decrease of other beneficial species such as Ruminococcus and Lactobacillus in the gut microbiota of the rats [17]. New studies continue to show the differences in the intestinal microbiota between healthy individuals and CRC patients. In this sense, it was described that a reduction of biodiversity and richness of the microbial community, with increases of Bacteroides, was associated with colon cancer [18]. The exact mechanisms by which these changes in the intestinal microbiota can be related to colon carcinogenesis are largely unknown. It was demonstrated that in CRC patients, in addition to the modification of intestinal metabolites, changes in the intestinal microbiota influence the host's immune response. In this sense, it was demonstrated that IL-17C has an important role in microbiota-mediated tumorigenesis [19]. IL-17C was upregulated in human CRC samples and also in mouse models of CRC. IL-17C was induced in the intestinal epithelial cells by the dysregulated microbiota and promoted the survival of these cells, contributing to tumorigenesis.

A detailed microbiota analysis of a well-characterized cohort of infants with food allergy (FA) showed that dysbiosis of the fecal microbiota with several FA-associated key phylotypes, but not the overall microbiota diversity, may play a pathogenic role in FA [20]. In this study, the proportions of the abundant Bacteroidetes, Proteobacteria, and Actinobacteria phyla were significantly reduced, while the Firmicutes phylum was highly enriched in the FA group.

Recent studies have suggested that an imbalance of the intestinal microbiota may be involved in the development of obesity and type 2 diabetes mellitus (T2DM). A recent review stated that a high-fat diet may induce dysbiosis, which can result in a low-grade inflammatory state, obesity and other metabolic disorders, and that modifying this diet can play a role in T2DM management due to positive intestinal microbiota modulation [21]. Also, a metagenome-wide association study showed that patients with type 2 diabetes were characterized by a moderate degree of gut microbial dysbiosis, a decrease in the abundance of some universal butyrate-producing bacteria and an increase in various opportunistic pathogens, as well as an enrichment of other microbial functions conferring sulphate reduction and oxidative stress resistance [22].

In another study, it was suggested that the obesity epidemic in the United States may be partly driven by the mass exposure of Americans to foods containing low-residue antimicrobial agents that can alter the composition of the gut microbiota [23]. Studies that link microbiome-modifying early life events to subsequent obesity risk provide some indirect evidence to support a causal role for the gut microbiota in the pathogenesis of obesity [24]. Published data have proposed that dysbiosis of the gut microbiota (at the phylum, genus, or species level) affects host metabolism and energy storage, stating that, among the mechanisms involved, metabolic endotoxemia (higher plasma LPS levels), gut permeability and the modulation of gut peptides (GLP-1 and GLP-2) have been proposed as putative targets [25]. The mechanisms by which the gut microbiota affects metabolic disorders such as obesity, diabetes, and cardiovascular diseases have been proposed to act by two major routes: (1) the innate immune response to the structural components of bacteria [e.g., lipopolysaccharide (LPS)], resulting in inflammation; and (2) bacterial metabolites of dietary compounds (e.g., SCFA from fiber), which have biological activities that regulate host functions [26]. The concept of crosstalk, the biochemical exchange between host and microbiota, is also important to understand obesity, since it maintains the metabolic health of the superorganism and its dysregulation is a hallmark of the obese state [27]. Since the GIT and liver are connected by the portal venous system, the liver is more vulnerable to translocation of bacteria, bacterial products, endotoxin or secreted cytokines present in the GIT [8]. An obesogenic microbiota can alter liver function by stimulating hepatic triglyceride production and modulating systemic lipid metabolism, which indirectly impacts the storage of fatty acids in the liver [28]. A recent systematic database search demonstrated that common mechanisms are involved in many of the local and systemic manifestations of NAFLD that can lead to an increased cardiovascular risk, and in IBS, leading to microbial dysbiosis, impaired intestinal barrier and altered intestinal motility [29].

Studies in patients and animal disease models are shedding new light on the critical roles of the microbiota, metabolome and host responses in primary and recurrent Clostridium difficile (C. difficile) infection (CDI), which is the leading cause of antibiotic-associated diarrhea and pseudomembranous colitis in the healthcare setting [30]. In a recent study, culture-independent pyrosequencing was used to compare the distal gut microbiota of individuals with CDI, subjects with C. difficile-negative nosocomial diarrhea (CDN), and healthy control subjects [31]. This genomic analysis revealed significant alterations of organism lineages in both the CDI and CDN groups, which were accompanied by marked decreases in microbial diversity and species richness, driven primarily by a paucity of phylotypes within the Firmicutes phylum. Normally abundant gut commensal organisms, including the Ruminococcaceae and Lachnospiraceae families and butyrate-producing C2 to C4 anaerobic fermenters, were significantly depleted in the CDI and CDN groups.

These examples of the effects of microbiota dysbiosis are just a few of the most recent studies published on the subject and show the immense lack of knowledge of the effect of the holobiont on human health. Correcting this dysbiosis is now the aim of many groups due to the diverse diseases that are directly or indirectly associated with this imbalance of the symbiotic microbiota.

MICROBIOTA AND DISEASE

Fecal transplantation and synthetic microbiome transplants are being considered as promising therapies for dysbiosis-associated diseases. Fecal microbial transplantation (FMT) is the process of transplantation of fecal bacteria from a healthy donor into a host with disease. Clinical criteria for inclusion and exclusion of both donor and recipient should be applied to limit the risk associated with this procedure and increase the chances of success [32]. Fecal transplantation represents a therapy with a high potential of success, and it has been mostly studied in the treatment of chronic gastrointestinal infections [33]. The effectiveness of FMT was remarkable for recurrent C. difficile infection. Recently, it was reported that FMT was effective in improving clinical symptoms and eliminated fecal C. difficile toxins in a study of 27 patients with recurrent C. difficile infection who were given a single session of FMT [34]. This effect was associated with increased microbial diversity in all the patients, and the effectiveness was also associated with the correction of the metabolism of bile salts that is disrupted in patients with recurrent C. difficile infection [35].

Considering that microbial dysbiosis is associated with many intestinal and non-intestinal diseases, FMT was considered for the treatment of different disorders, including IBS, IBD, insulin resistance, multiple sclerosis, obesity, and heart diseases [36]. However, its use remains controversial in patients with IBD [37]. There is a study showing the safety and positive clinical response after FMT in children and young adults with UC [38]. A totally different response was also reported in the case of a patient with UC (quiescent for more than 20 years) who was treated with FMT for a C. difficile infection and developed a flare of UC, indicating the need to be cautious in the use of this procedure in patients with IBD [39]. The value of characterizing not only the composition but also the temporal dynamics of the microbiota for a better understanding of FMT efficacy in the treatment of UC was also suggested [40].

The current knowledge shows that FMT has a high potential to be used [41], but controlled trials of FMT in specific disorders, complemented by animal models of fecal transplantation in which variables can be controlled and manipulated, are needed before FMT can be more widely accepted and applied clinically. Concerns over donor-derived infections (especially viral infections that are not normally detected) also exist, and it is difficult to quantify the true risk. The possibility of modifying the transplantation of whole microbial communities from a healthy donor stool by another methodology has also recently been suggested, in which specific fecal microorganisms grown in vitro could afterwards be transplanted [42]. The discovery of these commensal microorganisms will lead to the development of new probiotics that can replace FMT as applied today.
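Several of the comparisons summarized above are quantified through community metrics such as species richness and diversity (e.g., the reduced richness reported in CDI [31] or the increased diversity after FMT [34]). As a hedged illustration, the sketch below computes species richness, the Shannon diversity index, and a phylum-level Firmicutes/Bacteroidetes ratio from a small abundance table; the taxa and counts are invented for demonstration and are not data from the cited studies.

```python
import math

# Hypothetical read counts per taxon for one stool sample (illustrative only).
counts = {
    "Faecalibacterium (Firmicutes)": 420,
    "Roseburia (Firmicutes)": 180,
    "Bacteroides (Bacteroidetes)": 650,
    "Prevotella (Bacteroidetes)": 90,
    "Escherichia (Proteobacteria)": 30,
}

total = sum(counts.values())

# Species richness: number of taxa observed at least once.
richness = sum(1 for c in counts.values() if c > 0)

# Shannon diversity index H' = -sum(p_i * ln(p_i)).
shannon = -sum((c / total) * math.log(c / total) for c in counts.values() if c > 0)

# Phylum-level Firmicutes/Bacteroidetes ratio, a coarse dysbiosis marker.
firmicutes = sum(c for name, c in counts.items() if "Firmicutes" in name)
bacteroidetes = sum(c for name, c in counts.items() if "Bacteroidetes" in name)
fb_ratio = firmicutes / bacteroidetes

print(f"richness={richness}, shannon={shannon:.3f}, F/B ratio={fb_ratio:.2f}")
```

Lower richness and Shannon values in a patient sample relative to healthy controls would be consistent with the dysbiosis patterns described above, although the clinical interpretation of any single metric remains study-dependent.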
EFFECT OF PROBIOTIC ADMINISTRATION ON THE INTESTINAL MICROBIOTA AND DISEASE

Probiotic microorganisms have been growing in popularity due to increasing numbers of studies proving that certain strains present health-promoting properties, among them the beneficial balance of the intestinal microbiota, which can also be associated with other benefits to the host (Figure 2A). The most commonly used probiotic strains are members of the Lactobacilli, Enterococci and Bifidobacteria groups [43]. Lactic acid bacteria (LAB) represent a heterogeneous group of microorganisms that are present in the normal diet of many people and also in the gastrointestinal and urogenital tract of animals, and some of these are claimed to be probiotics. Although most of the studies about probiotics have mainly focused on bacteria, there are also many reports showing the potential of probiotic yeasts. In this context, Ianiro et al [44] reviewed the role of the "gut mycome" and demonstrated that intestinal yeasts fulfill an important role in health maintenance. Selected yeast strains, especially from Saccharomyces boulardii, were reported as probiotic, and their beneficial effects against different types of diarrhea were demonstrated using experimental animal models [45] and also in human trials [46,47]. Currently, many products containing LAB or other probiotic microorganisms are available on retail shelves throughout the world because of the increased consumer demand for healthier natural foods that can improve overall well-being.

EFFECTS OF PROBIOTICS ON INTESTINAL DISEASES

It has been shown that LAB and other probiotic microorganisms can counteract inflammatory processes in the gut by stabilizing the microbial environment and the permeability of the intestinal barrier, and by enhancing the degradation of enteral antigens and altering their immunogenicity [48]. Lactobacillus reuteri (L. reuteri) was used to prevent colitis in IL-10 knock-out (KO) mice and to increase the number of lactobacilli in the gastrointestinal tract [49]. The normalization of Lactobacillus levels was obtained by oral administration of a prebiotic and rectal swabbing with L. reuteri in neonatal IL-10 KO mice. In a placebo-controlled trial, orally administered L. salivarius UCC118 reduced the prevalence of colon cancer and mucosal inflammatory activity in IL-10 KO mice by modifying the intestinal microbiota in these animals, with reductions in C. perfringens, coliform, and enterococcus levels in the probiotic-fed group [50]. The administration of yoghurt with potential probiotic strains decreased inflammation by modulation of the host immune response in a trinitrobenzene sulphonic acid-induced mouse model of IBD. This effect was related to beneficial changes in the large intestine microbiota of the mice, with increases in the bifidobacteria population [51].

The translation of the potential use of probiotics to IBD patients remains uncertain [52] and, even though some authors reported their effectiveness against specific pathologies and the modification of the GIT microbiota is one of the benefits attributed to them, there are only few reports in which the fecal microbial composition of the patients was evaluated. A randomized, double-blind, placebo-controlled trial evaluated the effect of a probiotic mixture containing L. acidophilus, L. plantarum, L. rhamnosus, Bifidobacterium breve (B. breve), B. lactis, B. longum, and Streptococcus thermophilus in patients with IBS [53].
The fecal flora composition was analyzed by polymerase chain reaction denaturing gradient gel electrophoresis (DGGE), and it was reported that the therapeutic effect of this probiotic mixture was associated with the stabilization of the intestinal microbiota. Another study showed that probiotic supplementation (Ecologic 825, Winclove, Amsterdam, the Netherlands) to patients with UC and severe pouchitis restored the mucosal barrier, which was correlated with the bacterial diversity of the mucosal pouch microbiota [54].

Regarding the use of probiotic yeasts, the influence of the administration of Saccharomyces boulardii on the composition of the fecal microbiota was evaluated in a human microbiota-associated mouse model. The animals received antibiotic treatment that induced modifications in the intestinal microbiota. The administration of the probiotic yeast was related to a quicker return to the initial level for the Clostridium coccoides-Eubacterium rectale and Bacteroides-Porphyromonas-Prevotella groups compared to the control animals without any special administration, and this effect was suggested as a possible mechanism by which S. boulardii beneficially affects humans with antibiotic-associated diarrhea [55].

The use of probiotic S. boulardii was also examined in humans but, as was explained for probiotic bacteria, not many studies in humans analyzed the modification of the intestinal microbiota. Regarding IBD patients, it was reported that S. boulardii was effective in reducing the symptoms of the disease, and this was related to the improvement of the intestinal microbiota composition [56]. S. boulardii was also evaluated for the treatment of diarrhea-predominant IBS, and its effect was compared to mesalazine [57]. It was reported that all the treatments improved the symptoms of the patients; however, mesalazine alone or its combination with S. boulardii was more effective than the treatment with the probiotic yeast alone. A recent work demonstrated that probiotic S. boulardii, associated with conventional treatment, improved the quality of life of patients with diarrhea-dominant IBS [58]. This effect was associated with an anti-inflammatory profile of cytokines in blood and tissues of patients that received the probiotic compared to the placebo group.

NON-INTESTINAL DISEASES

The use of probiotics to beneficially affect the GIT microbiota was also evaluated in non-intestinal diseases (Figure 2B). Recent studies suggested that the GIT microbiota might play a critical role in the development of obesity, and LAB were pointed out as candidates for an anti-obesity effect [59]. A review of 61 original articles showed that the main effect observed at the microbiota level (usually accompanied by weight loss) after probiotic or prebiotic administration in obese hosts was associated with increases in bifidobacteria populations [60].
Studies in diet-induced obese mice showed that the supplementation of L. curvatus HY7601 and L. plantarum KY1032 reduced obesity and modulated proinflammatory and fatty acid oxidation-related genes in the liver and adipose tissue; this effect was associated with modulation of the gut microbiota [61]. The relative abundance of four species belonging to the Ruminococcaceae and Lachnospiraceae families of the order Clostridiales and phylum Firmicutes was decreased by the high-fat diet and increased in mice receiving the probiotic treatment. It was also observed that other GIT microbial species not associated with changes caused by the high-fat diet were affected in mice that received probiotics, most notably the relative abundance of endogenous Bifidobacterium pseudolongum.

VSL#3 is a mixture containing eight different strains of probiotic bacteria that was evaluated against different diseases, including the prevention and treatment of obesity and diabetes in several mouse models. This effect was associated with the modulation of the gut microbiota-short chain fatty acid (SCFA)-hormone axis [62]. VSL#3 supplementation induced changes in the microbiota that were associated with an increase in the levels of butyrate, and it was demonstrated in vitro that this SCFA stimulated the release of GLP-1 from intestinal cells. The hormone GLP-1 reduces food intake and improves glucose tolerance.

Recently, the beneficial effect of L. coryniformis CECT5711 was demonstrated in a high-fat diet-induced mouse model. Probiotic administration to obese mice induced marked changes in microbiota composition and reduced metabolic endotoxemia by decreasing the LPS plasma level [63].

The effect of probiotics in humans was also observed; however, as was explained for other pathologies, there are not many articles that evaluate the intestinal microbiota. A clinical trial with the probiotic bacterium L. salivarius Ls-33 was conducted in obese adolescents to investigate the impact on the fecal microbiota [64]. Ratios of the Bacteroides-Prevotella-Porphyromonas group to bacteria belonging to the Firmicutes were significantly increased after administration of Ls-33; however, these changes were not related to effects on their metabolic syndrome.

A randomized, double-blind, placebo-controlled study was conducted in order to evaluate the effects of a probiotic capsule combined with herbal medicine in the treatment of obesity [65]. In this trial, each probiotic capsule contained viable cells of Streptococcus thermophilus, L. plantarum, L. acidophilus, L. rhamnosus, B. lactis, B. longum, and B. breve. It was reported that probiotic administration prevented endotoxin production, which can lead to the GIT microbiota dysbiosis associated with obesity. The gut B. breve population showed a negative correlation with the endotoxin level.

NAFLD is a disease linked to obesity, and the beneficial role of probiotics was also reported here [66]. Recently, it was shown that L. rhamnosus GG protected against NAFLD in a mouse model [67]. The effect was associated with increased total bacterial numbers, including the phyla Firmicutes and Bacteroidetes, in the distal small intestine. This result was in concordance with a previous one that reported modulation of the microbiota in the small intestine with a concomitant anti-obesity effect in mice that received L. rhamnosus GG and L. sakei NR28 [68].
The human GIT microbiota has also been related to a possible cardiovascular risk. GIT microbiota profiles were not only associated with metabolic diseases, but also with the flux of metabolites derived from the microbial metabolism of choline, phosphatidylcholine and L-carnitine, which contribute directly to cardiovascular disease. In this sense, probiotics were reported among the dietary strategies to modulate the GIT microbiota or its metabolic activities [69-71]. The improvement of disease biomarkers, especially plasma cholesterol levels, appears to be possible after probiotic administration to lower cardiovascular risk. In this sense, it was shown that the administration of a probiotic soy product containing Enterococcus faecium CRL 183 and L. helveticus 416, supplemented or not with isoflavones, was associated with an improved cholesterol profile and inhibition of atherosclerotic lesion development in a rabbit model [72]. The authors reported that Enterococcus spp., Lactobacillus spp. and Bifidobacterium spp. were negatively correlated with total cholesterol, non-HDL-cholesterol, and lesion size. The intake of the probiotic soy product significantly increased these bacterial species in the fecal microbiota.

EFFECTS OF PROBIOTICS IN HEALTHY HOSTS

There are also reports that showed the potential of probiotics in healthy hosts, maintaining a balanced microbiota, which, as explained above, is an important key to health. The consumption of a probiotic product containing L. coryniformis CECT5711 and L. gasseri CECT5714 was analyzed in 30 children with no gastrointestinal pathology [73]. An increase in faecal lactobacilli counts was shown at the end of the experimental protocol, and these findings were associated with an enhanced defence against gastrointestinal aggressions and infections and an enhanced immune function, with increased IgA concentration in faeces and saliva. A recent work reported a clinical trial that included 40 participants with no known digestive diseases. Laminaria japonica, a widely used ingredient in seaweed kimchi, and LAB from traditional fermented Korean food were given to volunteers, and this was related to increases in the number of some of the administered LAB species in their GIT microbiota [74].

CONCLUSION

The dysbiosis of the gastrointestinal tract microbiota is considered to be one of the contributing factors in the development of certain gastrointestinal and non-gastrointestinal diseases. Fecal transplantations appear to be promising therapies for dysbiosis-associated diseases; however, controlled trials of FMT in specific disorders are needed before FMT can be more widely accepted and applied clinically. The possibility of modifying the traditional FMT by using specific probiotic fecal microorganisms was also reported and would be a better alternative from safety and therapeutic points of view.

Recent reports showed the potential of the administration of specific probiotic strains to improve the balance of the GIT microbiota that is altered in different diseases, IBD and obesity being the pathologies for which there are more studies showing this association using animal models and even human clinical trials. The importance of probiotic consumption in healthy hosts was also demonstrated because of its relationship with a beneficial balance in GIT microbial populations, which is also associated with improved defense against gastrointestinal aggressions and infections and the enhancement of the host's immune function.
However, as was explained for FMT, there are not enough human trials in which the application of probiotics as biotherapeutic agents was evaluated in double-blinded, large-scale clinical trials. These assays are very important before the medical community will accept the addition of probiotics as supplements for specific patients with diseases associated with gut microbial dysbiosis as a viable alternative to FMT.

Figure 2 Effect of probiotic administration on gastrointestinal tract dysbiosis (A) or healthy individuals (B) and the interaction of the gastrointestinal tract microbiota with the host. GIT: Gastrointestinal tract; FMT: Fecal microbial transplantation.
2018-04-03T01:41:03.675Z
2014-11-28T00:00:00.000
{ "year": 2014, "sha1": "3421515db04117968027f39c53d8763952db66a3", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3748/wjg.v20.i44.16518", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "3421515db04117968027f39c53d8763952db66a3", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256435387
pes2o/s2orc
v3-fos-license
Tire Slip H∞ Control for Optimal Braking Depending on Road Condition

Tire slip control is one of the most critical topics in vehicle dynamics control, being the basis of systems such as the Anti-lock Braking System (ABS), Traction Control System (TCS) or Electronic Stability Program (ESP). The highly nonlinear behavior of the tire-road contact makes it challenging to design robust controllers able to find a dynamically stable solution in different working conditions. Furthermore, road conditions greatly affect the braking performance of vehicles, which is lower on slippery roads than on roads with a high tire friction coefficient. For this reason, by knowing the value of this coefficient, it is possible to change the slip-ratio tracking reference of the tires in order to obtain the optimal braking performance. In this paper, an H∞ controller is proposed to deal with the tire slip control problem and maximize the braking forces depending on the road condition. Simulations are carried out in the vehicular dynamics simulator software CarSim. The proposed controller is able to make the tire slip follow a given reference based on the friction coefficient for the different tested road conditions, resulting in a small reference error and good transient response.

Introduction

Vehicle stability under braking is essential to ensure the integrity of the vehicle's passengers and external actors. Wheel locking can affect vehicular motion, diverting the vehicle from the driver's desired trajectory or reducing the effectiveness of braking, which can lead to accidents. In many cases, these accidents and their consequences can be avoided thanks to the use of active vehicle dynamics control systems. Tire slip control by means of Anti-lock Braking Systems (ABS) has been one of the great achievements in automotive vehicle safety. Traditionally, Hydraulically Applied Brakes (HAB) have been the most common system layout in commercial vehicles. Pressure modulation in these systems is generally achieved in a stairway style, making it suitable for threshold-based, fuzzy logic and neural network control [1]. However, alternatives to these systems are now available, such as the Electro-Hydraulic Brake (EHB) or Electro-Mechanical Brake (EMB) systems. These are characterized by a faster response compared with conventional hydraulic systems [2,3] and allow a more precise and continuous control of the braking torque at the wheels. Many different control strategies have been proposed to address ABS control. Rule-based algorithms compose the majority of solutions nowadays [4] but, as with fuzzy logic [5] and neural network [6,7] controllers, the large number of tuning parameters makes them extremely time-consuming options that are unable to deal with the uncertainties and disturbances of the tire-road dynamics. Moreover, none of these methodologies can assure the stability of the system. Given that brake actuator technology has significantly advanced in the last two decades, researchers have focused their efforts on more advanced control techniques to improve ABS performance. In [8], a robust Integral Sliding Mode control approach was proposed for this purpose. The main contributions of the present work are the following:

• The proposed controller is able to make the tire slip follow a given reference based on the TRFC, resulting in a small reference error and good transient response, guaranteeing system stability. Since the estimation of the TRFC is not the focus of this article, it is assumed to be known by making use of any of the most recent literature algorithms [20][21][22][23][24][25][26][27][28][29][30][31][32].
• The braking forces are maximized depending on the road condition.
• Even though a simple vehicle model was taken into consideration for the controller design, the proposed algorithm was tested in the vehicle dynamics simulator software CarSim, in which simulations were carried out for different road conditions.
• To consider the time dependency of the longitudinal velocity and the tire-road contact, a time-varying parameter approach is considered for the synthesis of the controller. These parameters are treated as pseudomeasures.
• In order to estimate the states of the vehicle and the time-varying parameters with the information obtained from on-board series-production vehicle sensors, a Kalman Filter is considered.

The rest of the article is organized as follows: in Section 2, the problem of the H∞ gain-scheduling controller and vehicle state estimation is depicted. Moreover, the braking problem and dynamics are formulated. In Section 3, the design of the proposed controller is explained. The controller is tested in Section 4 using CarSim and Simulink, and the results obtained are analyzed. Finally, the conclusions are drawn in Section 5.

Problem Formulation

In this section, the problem of the H∞ gain-scheduling controller and vehicle state estimation is depicted in Figure 1. The vehicle and friction models used for the controller are presented subsequently, and all the parameters used are shown in Appendix A. As shown in Figure 1, a Kalman Filter algorithm is used to estimate the braking tire force of each wheel and the longitudinal velocity of the vehicle. These estimations are then used to calculate the longitudinal slip on each wheel and for the model used by the H∞ controller. To simplify the algorithm, the TRFC is supposed to be obtained by some estimation method [20][21][22][23][24][25][26][27][28][29][30][31][32], and the optimal tire slip that maximizes the braking force is calculated by means of the Burckhardt friction model. Finally, the H∞ controller generates the necessary braking pressure for each wheel in order to minimize the error between the optimal and current longitudinal slip.

Vehicle and Friction Models

In this section, the vehicle and friction models used for the controller are presented. A single-corner model [33] is used to represent the dynamics of the wheel during braking. It is assumed that the vehicle only moves in the longitudinal direction during the braking maneuver, as in Figure 2. The dynamics of the single-corner vehicle model depicted in Figure 2 can be expressed as in [33]:

J ω̇ = R F_x − T_b, m v̇_x = −F_x, (1)

where J is the moment of inertia of the wheel, m is the equivalent mass of the single-corner vehicle model and R is the effective radius of the wheel; ω is the rotational velocity of the wheel, T_b is the braking torque applied on the wheel, v_x is the longitudinal velocity of the vehicle and F_x is the force originated by the tire-road contact. This force can be determined by means of the expression

F_x = µ F_z, (2)

where F_z is the vertical load and µ is the instantaneous tire-road friction coefficient. For the case of straight-line braking, it is considered that µ only depends on the tire slip:

λ = (v_x − ω R) / v_x, (3)

with λ ∈ [0, 1] and λ = 1 meaning that the wheel is locked. In this work, the Burckhardt friction model is used to characterize the tire-road contact behavior. This model allows one to obtain the instantaneous friction coefficient for different road conditions as a function of the tire slip:

µ(λ) = c1 (1 − e^(−c2 λ)) − c3 λ, (4)

where the values of the coefficients c1, c2 and c3 only depend on the road condition, resulting in different friction curves [34], as in Figure 3.
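As a hedged illustration of how the Burckhardt curve of Equation (4) is used to derive the tracking reference, the sketch below evaluates µ(λ) and searches numerically for the slip that maximizes it; the coefficients are typical dry-asphalt values from the literature, not necessarily those behind the paper's Table 1.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def burckhardt_mu(lam, c1, c2, c3):
    """Burckhardt friction model: mu(lambda) = c1*(1 - exp(-c2*lambda)) - c3*lambda."""
    return c1 * (1.0 - np.exp(-c2 * lam)) - c3 * lam

# Illustrative coefficients (roughly dry asphalt); real values depend on road condition.
c1, c2, c3 = 1.28, 23.99, 0.52

# Optimal slip: maximize mu over lambda in [0, 1].
res = minimize_scalar(lambda lam: -burckhardt_mu(lam, c1, c2, c3),
                      bounds=(0.0, 1.0), method="bounded")
lam_opt = res.x
mu_max = burckhardt_mu(lam_opt, c1, c2, c3)
print(f"lambda_opt = {lam_opt:.3f}, mu_max = {mu_max:.3f}")
```

The optimum is also available in closed form, λ_opt = (1/c2) ln(c1 c2 / c3), which the numerical search should reproduce; either route yields the per-road-condition reference slip.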
By using the Burckhardt friction model, it is simple to know the value of the longitudinal tire slip that maximizes the braking force, shown in Table 1. The braking torque is taken as proportional to the hydraulic pressure, T_b = k_b P_b, where P_b is the pressure of the hydraulic system and the constant k_b comes from the geometry and friction characteristics of the brake. Combining these relations with Equations (1)-(3) yields the slip dynamics

λ̇ = −(1/v_x) [(1 − λ)/m + R²/J] F_x + (R k_b / (J v_x)) P_b. (5)

In Equation (5), both F_x and v_x are pseudomeasure time-varying parameters estimated by a Kalman Filter algorithm presented later in the document. To facilitate the design of the controller, the following time-varying parameters are defined from the tire force and the longitudinal velocity, ρ1 = F_x and ρ2 = 1/v_x, where both time-varying parameters ρ1 and ρ2 are bounded within an upper and a lower bound, denoted by an overbar and an underbar, respectively. By taking x = [λ], u = [P_b] and ρ = [ρ1 ρ2] from Equation (5), the dynamics of the longitudinal tire slip can be characterized by

ẋ = A(ρ) x + B(ρ) u + d, (7)

where A(ρ) and B(ρ) gather the slip-dependent terms of Equation (5) and d is considered as the disturbances.

Controller Design

In this section, the proposed H∞ controller synthesis is presented, as well as the proposed algorithm for the vehicle state estimation.

Controller Design Objectives

The main objective of the controller is to make the tire slip ratio follow the desired reference r = [λ_opt] that maximizes the braking force according to the Burckhardt model, shown in Table 1. Then, the state space of the system expressed in Equation (7) can be augmented with a newly defined state ζ = ∫₀ᵗ (λ − λ_opt) dt and η = [λ ζ]^T. The dynamics of the augmented system (Equation (9)) follow by appending ζ̇ = λ − λ_opt to Equation (7). The controlled output of the system is z = Gη, where G = [0 1]. The gain control law proposed for the system in Equation (9) is of the form

u_c(t) = K(t) η (12)

and results in a generalized proportional-integral controller whose integral term works towards eliminating the error with respect to the reference signal, minimizing the error with respect to the optimal slip ratio.

Stability Analysis

In order to minimize the controlled output, the H∞ performance inequality is chosen as in [35],

∫₀^∞ zᵀz dt ≤ γ1² ∫₀^∞ dᵀd dt + γ2² ∫₀^∞ rᵀr dt, (13)

and it must be fulfilled for any bounded disturbance d and reference signal r, where γ1 is the H∞ performance index and γ2 is a weighting factor.

Theorem 1. For a given state feedback gain K, the closed-loop system defined in (9) is asymptotically stable and guarantees the H∞ performance described in Equation (13).

Proof. Choose a Lyapunov function of the form V = ηᵀPη and require V > 0 and V̇ < 0 with P ≻ 0 (16a), where A_c = A + BK is the closed-loop system matrix. Now, a cost function ∆ combining V̇ with the performance terms is defined. To guarantee that the inequality of Equation (14) holds, the cost function defined in Equation (17) must satisfy ∆ < 0. By expressing ∆ in matrix form and applying Schur's complement to Equation (19), Equation (14) is ensured to be satisfied, so the proof is concluded.

Gain-Scheduling Feedback Gains Design

As the closed-loop plant of the system is expressed as a function of the time-varying parameters ρ in Equation (9), a polytopic system is generated for describing the dynamics of the system [36]:

η̇ = Σ_{i=1}^{N} α_i(ρ) (A_i η + B_i u), (20)

where α_i(ρ) are the weighting gains that satisfy Σ_{i=1}^{N} α_i(t) = 1, α_i(t) > 0 and N = 4 for each of the four vertices that represent the four linear submodels of the generated polytope, as shown in Figure 4. These vertices are built from the upper and lower bounds of F_x and v_x, and the weighting gains α(t) are calculated online from the values of ρ(t) so that they remain nonnegative and sum to one.
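The paper's exact weighting expressions did not survive extraction here, but for a two-parameter box polytope the standard construction is a bilinear (tensor-product) interpolation of the normalized parameters, as sketched below; the bound values are placeholders, not the contents of Table 2.

```python
import numpy as np

def polytope_weights(rho1, rho2, rho1_min, rho1_max, rho2_min, rho2_max):
    """Convex weights alpha_i for the 4 vertices of a box in (rho1, rho2).

    Vertex order: (min,min), (min,max), (max,min), (max,max).
    The weights are nonnegative and sum to 1 (bilinear interpolation).
    """
    t1 = (rho1 - rho1_min) / (rho1_max - rho1_min)  # normalized rho1 in [0, 1]
    t2 = (rho2 - rho2_min) / (rho2_max - rho2_min)  # normalized rho2 in [0, 1]
    t1, t2 = np.clip(t1, 0.0, 1.0), np.clip(t2, 0.0, 1.0)
    return np.array([
        (1 - t1) * (1 - t2),
        (1 - t1) * t2,
        t1 * (1 - t2),
        t1 * t2,
    ])

# Example with placeholder bounds: F_x in [0, 8000] N and a velocity-related rho2.
alpha = polytope_weights(rho1=3500.0, rho2=10.0,
                         rho1_min=0.0, rho1_max=8000.0,
                         rho2_min=3.0, rho2_max=19.44)
assert abs(alpha.sum() - 1.0) < 1e-12 and (alpha >= 0).all()
```

Any construction with these convexity properties keeps the scheduled plant inside the polytope, which is what the vertex-wise LMI conditions below rely on.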
The values of ρ1 and ρ2 can be obtained online and, through them, the final feedback controller gain K can be obtained as a linear combination of the feedback gains K_i of the submodels using K(t) = Σ_{i=1}^{N} α_i(t) K_i. With the polytopic system in Equation (20) and the gain control law in Equation (12), the controller is asymptotically stable, and the H∞ conditions in Equation (13) are ensured if there exist a positive definite matrix Q, a matrix M and a γ1 > 0 that satisfy the corresponding LMI; the state feedback gain of each submodel of the corresponding vertex of the polytope is then obtained as K_i = M_i Q⁻¹. Proof is shown in [36]. In addition, another constraint is used to limit the maximum control output signal so that the maximum pressure supported by the hydraulic system is not exceeded, thus limiting the braking torque. The limitation of the output signal is performed as in [37], where, given positive definite matrices Q and M and a positive scalar X, the maximum control output of the system in Equation (9) can be limited using an additional LMI constraint with X ≤ P_b,max. The objective controller gains are found by solving the minimization problem of the H∞ performance index γ1 subject to these LMIs.

State Variable Estimation through a Kalman Filter

It is necessary for the control feedback to know the values of the states and the values of ρ to calculate the gains α_i of the polytope. Therefore, F_x, v_x and λ have to be estimated. For this purpose, a Kalman Filter is used to estimate the longitudinal velocity and the tire braking forces [38], because it allows the estimation of states of a linear system that cannot be measured directly, in this case the tire forces. As the tire forces of every wheel of the vehicle are needed, the estimation is performed by applying Equation (29) to all the wheels of the vehicle:

m_t v̇_x = −Σ_i F_x,i, J ω̇_i = R F_x,i − T_b,i, (29)

where m_t is the total mass of the vehicle, T_b,i is the braking torque and F_x,i is the braking tire force of the i-th wheel. From Equation (29), the following state-space model is derived. All the measurement signals can be obtained using inertial or velocity sensors. The longitudinal acceleration a_x can be measured by an Inertial Measurement Unit (IMU) [39], while the angular velocity of each wheel ω can be measured with Wheel Pulse Transducers (WPTs) [40]. Even though the longitudinal velocity v_x can be measured with an odometer, this can lead to imprecise results; therefore, an estimation of v_x seems to be the best choice. By augmenting the system with the tire forces, the new state-space variable vector is x_f = [v_x ω_fl ω_fr ω_rl ω_rr F_x,fl F_x,fr F_x,rl F_x,rr]^T, and the state equation of the KF is written in discrete form, where the time variation of the tire forces is defined using the random walk model, as in [38]. The KF algorithm has two steps: the time update step and the measurement update step. In the time update step, an estimation of the state variables is made using the dynamic equations of the system; in the measurement update step, the algorithm uses the measurements to correct the estimation made in the time update step. The process noise w_k is considered to have zero mean and covariance Q_k, the measurement noise v_k is considered to have zero mean and covariance R_k, and P_k is the states' covariance. Through these estimations, the tire slip of the wheels can be calculated using Equation (35); the tire slip is estimated from the measured angular velocity of the wheels and the estimated longitudinal velocity:

λ_i = (v̂_x − ω_i R) / v̂_x. (35)
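A minimal sketch of one cycle of such a discrete Kalman Filter is shown below; the matrices A, B, H, Q and R are placeholders standing for the discretized model with random-walk force states, not the paper's exact parameterization.

```python
import numpy as np

def kf_step(x, P, u, z, A, B, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : previous state estimate and covariance
    u    : input vector (e.g., braking torques)
    z    : measurement vector (e.g., four wheel speeds and a_x)
    """
    # Time update (prediction) using the system dynamics.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Measurement update (correction).
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

For the estimator described above, the state vector would stack v_x, the four wheel speeds and the four tire forces, with the force rows of A set to identity to encode the random-walk assumption.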
Simulation Set Up

This section shows the conditions and results of the simulations performed to test the operation of the H∞ controller designed in the previous section, which is used to control the slip of the four tires of the vehicle. Simulations are carried out in the vehicle dynamics software CarSim, which allows running simulations with a 27-DOF vehicle model [41]. The controller and state estimator are implemented in Matlab-Simulink. Since during the braking process the vertical load is not the same on both axles of the vehicle, due to the load transfer from the rear wheels to the front wheels, one controller is calculated for the rear wheels and another for the front wheels, considering that both the left and right wheels of the same axle work under identical conditions. The gains of the controller are obtained by solving the LMI minimization problem using the Robust Control Toolbox. The limit values for the parameters ρ1 and ρ2 are defined in Table 2. The velocity range considered is 3-19.44 m/s. The minimum force on the tire is 0 N, and the maximum for the front tires occurs when the friction coefficient is maximum, considering load transfer. For the case of the rear tires, the maximum forces are calculated when only the static load is considered (Equation (36)), where L is stated in Table 3. The friction coefficient considered in Equation (36) is the maximum for the road considered in the simulations, µ_max = 1.00. The feedback gains and the H∞ performance index for the front and rear braking controllers are calculated by choosing a weighting factor γ2 = 1 in order to take the disturbances into account, as shown in Equation (13). The resulting gain matrices are reported in Equation (37). The initial, process and measurement covariances for the Kalman Filter are P_0 = Q_k = diag(10⁻⁷, 10⁻¹, 10⁻¹, 10⁻¹, 10⁻¹, 5·10², 5·10², 5·10², 5·10²) and R_k = diag(10⁻⁵, 10⁻⁵, 10⁻⁵, 10⁻⁵, 10⁻³), (38) where R_k is the covariance considered on the sensor signals. In order to test the performance of the designed controller, simulations are performed using the vehicular dynamics software CarSim, considering a C-Class vehicle model. This category includes series-production vehicles such as the Audi A3, Fiat Bravo or Opel Astra, among others. During the simulation, errors in the sensor measurements are considered. The controller and estimator are implemented in the Simulink environment (Figure 1). The controller is tested in different road conditions, in which the vehicle always starts at a velocity of 70 km/h and starts braking at 0.1 seconds along a straight path. The cut-off speed of the controller is 3 m/s; below this velocity the actuator applies the maximum allowable pressure, as wheel locking at very low velocities does not compromise the braking maneuver. In all simulations, it is assumed that the friction coefficient µ_max is known, and no error in its estimation is assumed. Hence, the slip reference λ_opt is obtained by comparing the estimate of µ_max with the closest value from Table 1. The coefficient of friction µ_max is also considered the same for all the wheels; thus, the same reference is always provided to all the controllers. The results are compared with those obtained with a PID controller with gains K_P = 10, K_D = 0.5 and K_I = 600 under the same simulation conditions.
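Putting the scheduling pieces together, a hedged sketch of the online control law follows: the scheduled gain is the α-weighted combination of the vertex gains, applied to the augmented state η = [λ ζ]ᵀ and saturated at the hydraulic limit. The vertex gains and pressure limit below are placeholders, not the values of Equation (37).

```python
import numpy as np

def scheduled_pressure(alpha, K_vertices, lam, zeta, p_max):
    """Gain-scheduled state-feedback braking pressure u_c = K(t) * eta.

    alpha      : convex weights for the 4 polytope vertices (sum to 1)
    K_vertices : 4 vertex gains, each of shape (1, 2)
    lam, zeta  : current slip and integrated slip-tracking error
    p_max      : maximum hydraulic pressure
    """
    K = sum(a * Ki for a, Ki in zip(alpha, K_vertices))  # K(t) = sum_i alpha_i(t) K_i
    eta = np.array([lam, zeta])                          # augmented state [lambda, zeta]
    p_cmd = (K @ eta).item()
    return min(max(p_cmd, 0.0), p_max)                   # respect actuator saturation

# Placeholder vertex gains and a sample call (illustrative values only).
K_vertices = [np.array([[-40.0, -900.0]]) for _ in range(4)]
alpha = np.array([0.25, 0.25, 0.25, 0.25])
u = scheduled_pressure(alpha, K_vertices, lam=0.15, zeta=0.002, p_max=15e6)
```

In a full loop, ζ would be integrated online from the estimated slip and the reference (ζ ← ζ + (λ − λ_opt)·dt), with the weights α recomputed at every step from the estimated ρ.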
Braking with Constant µ_max

The braking maneuver is simulated with the following road conditions: • Road condition 1: road with µ_max = 1.00, emulating a dry asphalt road. • Road condition 2: road with µ_max = 0.40, emulating a wet cobblestone road. • Road condition 3: road with µ_max = 0.20, emulating a snowy road. The results of these simulations can be seen in Figures 5-13. For simplicity, only the results relative to the wheels of the left side of the vehicle are shown. In Figures 5, 6, 8, 9, 11 and 12, it can be seen that the designed controller manages to make the longitudinal tire slip reach the given reference for the three different road conditions tested better than the PID controller does, especially in the case where the friction coefficient is high, where the proposed controller presents less steady-state error. The settling time is approximately 0.1 seconds in all the simulations, being faster than the PID controller in all situations.

Braking Test with Changing µ_max

In Figures 14-16, a snowy stretch on the road where the vehicle brakes is simulated. It can be seen that when the sudden friction change occurs, the controller prevents the slip from increasing too much, thus stopping the wheel from locking. In addition, the controller makes the slip of both the front and rear tires follow the reference λ_opt, even though the tires of each axle enter the snowy section at different time instants. The entry into and exit of the car from the snowy patch are pointed out in Figures 14 and 16 with discontinuous lines. Again, the proposed controller performs better than the PID controller, as it has a faster response and minimizes the error further.

Braking Distance Comparison

The braking distances obtained using the designed controller are compared with the ones obtained using a PID controller and the default braking ABS that CarSim uses. This system activates and deactivates the brake pressure to maintain the tire slip between two values, 0.1-0.15 for the front wheels and 0.05-0.1 for the rear wheels. The results are shown in Table 4.

Conclusions and Future Works

In this work, an H∞ gain-scheduling controller able to optimize vehicle braking in an emergency situation was developed, aiming to achieve the optimal longitudinal slip value from the Burckhardt tire model that maximizes the braking force for different road conditions. The controller was validated through braking simulations under different road conditions using CarSim and Simulink. It was observed that the controller is able to follow the reference under different road conditions and with a reduced response time. In addition, its robustness against the variations that occur in the system during braking was verified, avoiding wheel locks. As part of future work, communication delays must be taken into account, and an event-triggering mechanism should be applied to reduce the network communication loads and actuator chattering, leading to a more complete and realistic braking control.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations The following abbreviations are used in this manuscript: ABS Anti-lock Braking System
2023-02-01T16:03:38.966Z
2023-01-27T00:00:00.000
{ "year": 2023, "sha1": "e05204cfb3dd423f928879457a9ff5cc8ff3e114", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/23/3/1417/pdf?version=1674946753", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a9312dcca7c665a003233bc6db420145b628d2fa", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
255950175
pes2o/s2orc
v3-fos-license
Characterization of the expression and immunological impact of the transcriptional activator CREB in renal cell carcinoma

The non-classical human leukocyte antigen (HLA)-G is a strong immunomodulatory molecule. Under physiological conditions, HLA-G induces immunological tolerance in immune-privileged tissues, while under pathophysiological situations it contributes to immune escape mechanisms. Therefore, HLA-G could act as a potential immune checkpoint for future anti-cancer immunotherapies. Recent data suggest an aberrant expression of the cAMP response element binding protein (CREB) in clear cell renal cell carcinoma (ccRCC), which is correlated with tumor grade and stage. Furthermore, preliminary reports demonstrated a role of CREB as a control variable of HLA-G transcription due to CREB binding sites in the HLA-G promoter region. This study investigates the interaction between CREB and HLA-G in different renal cell carcinoma (RCC) subtypes and its correlation with clinical parameters. The direct interaction of CREB with the HLA-G promoter was investigated by chromatin immunoprecipitation in RCC cell systems. Furthermore, the expression of CREB and HLA-G was determined by immunohistochemistry using a tissue microarray (TMA) consisting of 453 RCC samples of distinct subtypes. Staining results were assessed for correlations with clinical parameters as well as with the composition of the immune cell infiltrate. There exists a distinct expression pattern of HLA-G and CREB in the three main RCC subtypes. HLA-G and CREB expression were the lowest in chromophobe RCC lesions. However, the clinical relevance of CREB and HLA-G expression differed. Unlike HLA-G, high levels of CREB expression were positively associated with the overall survival of RCC patients. A slightly but significantly elevated number of tumor-infiltrating regulatory T cells was observed in tumors with high CREB expression. Whether this small increase is of clinical relevance has to be further investigated. An interaction of CREB with the HLA-G promoter could be validated in RCC cell lines. Thus, for the first time the expression of CREB and its interaction with HLA-G in human RCCs has been shown, which might be of clinical relevance.

RCC comprises several histological subtypes, including ccRCC, papillary RCC (pRCC), chromophobe RCC (chRCC) and several rare subtypes [1][2][3]. Interestingly, the incidence of RCC varies worldwide, with the highest occurrence in North America and the Czech Republic [4]. RCC risk factors include obesity, smoking, diabetes mellitus and hypertension, among others [5]. Unfortunately, around 25-30% of RCC patients are diagnosed at a locally advanced or even at a metastatic stage, which negatively affects further therapy options and success. Recent studies demonstrate a decreasing RCC mortality between 1992 and 2015 in the USA. This might be attributed to improved diagnostics, like advanced abdominal imaging, and/or to changes in the prevalence of RCC risk factors [6], as well as to a growing range of therapeutic options. Treatment of RCCs is multidisciplinary. Besides surgical resection and radiotherapy, numerous medical treatments for metastatic disease have been approved in the last years, including VEGF and VEGF-R inhibitors [7,8], mTOR inhibitors, as well as immune checkpoint inhibitors (ICI) [9]. Furthermore, first approaches with adoptive cell therapy (ACT) for RCC patients are under investigation but, despite the advances seen in melanoma, the reproducible generation of RCC tumor-infiltrating lymphocytes (TILs) has been challenging [9].
Interestingly, the response rates to immunotherapies strongly vary between RCC patients, suggesting that the local composition of the tumor microenvironment and an altered expression of immune modulatory molecules might be crucial factors. The latter include the expression of PD-L1 and of the non-classical human leukocyte antigen (HLA) class Ib molecules HLA-G and HLA-E, as well as the downregulation of the classical HLA class Ia molecules mediated by an impaired expression of antigen processing machinery (APM) components [10][11][12][13]. Furthermore, cytokines secreted by Th1, Th2 and Th17 cells can also promote the progression of RCC [14]. Recent studies identified several microRNAs (miRs) regulating CREB in RCCs, including miR-22-3p, miR-26a-5p, miR-27a-3p, and miR-221-3p [15].

Due to alternative splicing, HLA-G exists as membrane-bound and soluble protein isoforms [16]. Both isoforms contribute to the composition and immune modulatory functions of the local tumor microenvironment [12]. Under physiological conditions, its expression is restricted mainly to immune-privileged tissues including the cornea, testis, and the chorion. In contrast, a pathological HLA-G expression was found in many solid and hematopoietic tumors with an inter-tumoral and intra-tumoral heterogeneity [17]. Furthermore, high HLA-G expression levels were associated with immune tolerance and inhibition of anti-tumoral immune responses [18,19]. This was mediated by binding of HLA-G to the inhibitory lymphocyte receptors immunoglobulin-like transcript (ILT)2 and ILT4 and to the killer immunoglobulin-like receptor KIR2DL4 present on NK cells and CTLs [20]. The frequency of HLA-G expression in RCC lesions has been extensively investigated. Using a tissue microarray (TMA) consisting of 453 RCC lesions and matched normal kidney epithelium, a membranous HLA-G expression was found in 49.9% and a cytoplasmic HLA-G expression in 38.1% of cases, but the staining intensity varied strongly. Furthermore, HLA-G expression was associated with the tumor grade: WHO grade 3 tumors often exhibited a stronger HLA-G staining than lower-grade tumors. While the NK cell and CD4+ T cell infiltration did not vary, a significant difference in CD3+ and CD8+ cytotoxic T cells between HLA-G+ and HLA-G− RCC lesions was observed [21]. HLA-G expression can be regulated by transcriptional, epigenetic as well as post-transcriptional mechanisms [22]. In addition, an alternative pathway of HLA-G gene transactivation mediated by the cAMP response element-binding protein (CREB) has been reported in the literature [23]. This transactivation by CREB is unusual and differs from the gene activation of the classical HLA genes mediated by NF-κB, IRF1 and the class II transactivator (CIITA). CREB is a 43 kDa transcription factor, which binds after phosphorylation to the cAMP responsive element (CRE), a sequence that is localized in several gene promoters. Indeed, CREB functions are implicated, among others, in the regulation of cell proliferation, apoptosis, cell cycle progression and metastasis [24]. CREB knockdown in RCC cell lines suppressed RCC proliferation and decreased tumor formation in nude mice [25]. In RCC lesions and in RCC cell lines, an increased CREB expression has been demonstrated [26], which was associated with a better survival of RCC patients [15]. RCC is highly promoted by the composition of the tumor microenvironment (TME). Mutations of the von Hippel-Lindau (VHL) gene are common in ccRCC.
In this study, the role of the transcriptional transactivator CREB in the gene expression of the RCC-relevant immune inhibitory molecule HLA-G was investigated at the molecular level in RCC cell lines and in a large cohort of RCC lesions by immunohistochemistry (IHC). CREB, HLA-G and HLA-E staining results were tested for associations with clinical parameters. In addition, the tumor immune cell infiltrate composition was investigated for different levels of CREB expression.

Cell lines and cell culture The HLA-G-positive choriocarcinoma cell line JEG-3 was purchased from the American Type Culture Collection (ATCC, Manassas, USA), and a set of five established RCC cell lines was also employed.

Tissue microarray and immunohistochemistry Details of the tissue microarray (TMA) construction and composition were published previously [21]. In short, the TMA consisted of samples from 453 formalin-fixed, paraffin-embedded RCC tissues, which had been re-evaluated by two experienced pathologists with respect to RCC subtype and WHO grade as defined by the 2004 World Health Organization (WHO) classification (NLM ID: 101240923). After pathological review, a representative area per tumor had been transferred to recipient paraffin blocks, each capable of holding up to sixty tissue punches. Five µm sections from the resulting eight TMA blocks were stained by conventional immunohistochemistry. The expression of HLA-G, HLA-E, CREB and immune cell markers has already been studied earlier on this TMA [15,21,27], and details of the staining procedures have already been published. The authors reused these data to study the correlations with CREB expression. From 2008, all patients gave informed consent, while the Ethics Commission in Erlangen waived the need for informed individual consent for samples obtained before 2008. The study is based on the approvals of the Ethics Commission of the University Hospital Erlangen (No. 3755) and was conducted according to the principles expressed in the Declaration of Helsinki.

Chromatin immunoprecipitation (ChIP) ChIP assays were performed using a kit (Pierce™ Agarose ChIP Kit) from ThermoFisher (Waltham, USA) according to the manufacturer's instructions. Briefly, 2 × 10⁶ cells per assay were cross-linked in a 1% formaldehyde solution for 10 min at room temperature, and the reaction was terminated with glycine (final concentration: 125 mM). Nuclei of cross-linked cells were isolated, and DNA fragmentation was achieved by micrococcal nuclease digestion. Antibodies against CREB1 (48H2, CST) and IgG (ThermoFisher) as isotype control were employed for immunoprecipitation overnight at 4 °C. ChIP samples were washed and eluted, cross-links were reversed with NaCl, and the samples were treated with proteinase K. Purification of the DNA was achieved by a column-based approach. Isolated DNA was subjected to (qRT-)PCR analysis.

Quantitative PCR To calculate the percent-of-input results of the ChIP samples, DNA was analyzed by qRT-PCR using GoTaq qPCR reagents.

Western blotting Total protein was extracted from the different cell lines using RIPA buffer (25 mM Tris-HCl pH 7.6, 150 mM NaCl, 1% NP-40, 1% sodium deoxycholate, 0.1% SDS plus protease inhibitor cocktail), and the protein concentration was determined employing the Pierce™ BCA Protein Assay Kit (ThermoFisher). For Western blot analysis, 25 µg of protein per sample was separated by SDS-PAGE and transferred onto a nitrocellulose membrane by semi-dry blotting.
For the detection of proteins, the anti-HLA-G antibody MEM-G/9 (Novus Biologicals), the anti-CREB1 antibody 48H2 (CST) and the anti-β-actin mAb ab8227 (Abcam) were used, while a horseradish peroxidase-conjugated goat anti-mouse/rabbit antibody (CST) was employed as a secondary antibody. Chemiluminescent blots were imaged with the LAS-3000 Imaging System (Fuji).

Statistical analyses Statistical tests on immunohistochemical results were performed using IBM SPSS Statistics 21 and 24 (IBM Corporation). Two-sided exact Chi-square tests (Pearson's or Fisher's two-sided exact tests, as appropriate) or the Kruskal-Wallis test were applied to test for correlations between tumor characteristics and tumor marker expression, or between immune cell infiltrate characteristics and tumor marker expression, respectively. Associations of staining intensity with overall survival were calculated by the log-rank test. Differences were regarded as significant at p < 0.05.

Results In vitro experiments demonstrated a CREB-mediated induction of HLA-G transcription in extravillous cytotrophoblasts [23]. Previous studies analyzed the HLA-G and HLA-E expression as well as the tumor-infiltrating immune cell composition in a large cohort of RCC lesions using a TMA [20,21,27]. In the current study, the impact of the transcriptional activator CREB on HLA-G expression was investigated in the RCC specimens. As recently shown, immunohistochemical staining of the RCC TMA with a pan-CREB mAb demonstrated a highly variable CREB expression in the RCC lesions, ranging from negative, weak and medium to high expression [15] (Fig. 1). The three major RCC subtypes, ccRCC, pRCC and chRCC, were separately analyzed for HLA-G and CREB expression by evaluating the staining intensity distribution. The different RCC subtypes exhibited significantly distinct HLA-G and CREB expression (Fig. 2, p = 0.001 (membranous HLA-G), p = 0.002 (cytoplasmic HLA-G), p < 0.001 (CREB)). The frequency of HLA-G expression was highest in ccRCCs (55.9% for membranous and 39.7% for cytoplasmic HLA-G) and pRCCs (39.8% for membranous and 58.7% for cytoplasmic HLA-G) when compared to chRCCs (16.2% for membranous and 10.3% for cytoplasmic HLA-G). Furthermore, the frequency of CREB expression was comparable to that of HLA-G expression. Medium and high CREB-expressing tumors were more frequently found in ccRCCs (57.0%) and pRCCs (58.1%) and were rare in chRCCs (11.1%). Since a coordinated HLA-G and CREB expression was suggested by the immunohistochemical staining, the underlying molecular mechanisms were analyzed. In silico analyses revealed CREB-binding sites within the HLA-G promoter (Fig. 3a). To investigate whether the reported direct interaction of CREB with the HLA-G promoter sequence in extravillous cytotrophoblasts is also functional in the RCC cell system, ChIP was performed applying lysates from the RCC cell line MZ2862RC expressing both CREB and HLA-G (Fig. 3b). ChIP of the HLA-G promoter sequences including the CREB-binding sites using the anti-CREB antibody revealed a strong enrichment, which was almost equal to that of the known CREB-regulated RRM2 promoter, which was used as a positive control. This is shown by an exemplary agarose gel demonstrating an enrichment of the HLA-G promoter region with the anti-CREB mAb when compared to the respective isotype control (Fig. 3c). These results suggest that CREB affects the expression of HLA-G by binding to its promoter.
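The percent-of-input quantification mentioned in the Methods follows a standard ChIP-qPCR formula; the sketch below applies it to hypothetical Ct values (the input fraction and Ct numbers are illustrative, not the study's data).

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """Percent-of-input for a ChIP-qPCR sample.

    ct_ip          : Ct of the immunoprecipitated sample
    ct_input       : Ct measured on the diluted input chromatin
    input_fraction : fraction of chromatin kept as input (e.g., 1%)
    """
    # Adjust the input Ct so it represents 100% of the chromatin.
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Hypothetical values: anti-CREB IP vs. IgG control on an HLA-G promoter amplicon.
print(percent_input(ct_ip=26.5, ct_input=24.0))  # anti-CREB enrichment
print(percent_input(ct_ip=31.8, ct_input=24.0))  # IgG isotype background
```

A strongly enriched target, like the HLA-G promoter described above, would show a markedly higher percent-of-input for the specific antibody than for the isotype control.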
To further address whether the coordinated expression of HLA-G, HLA-E and CREB has clinical relevance, their expression levels (staining intensity) were correlated with tumor grade. Analysis of HLA-E, which has no CREB-binding site in its promoter, served as a control. Due to the heterogeneous expression in the different RCC subtypes, the limited information about the tumor grade of the less frequent pRCC, and the missing validated scoring procedures for chRCC, the analyses were focused on ccRCC specimens (Fig. 4). While there was no statistically significant difference regarding HLA-E expression (p = 0.569) and membranous HLA-G expression (p = 0.279) among all tumor grades, cytoplasmic HLA-G expression was statistically significantly associated with a higher tumor grade (p = 0.012). In contrast, CREB expression was statistically significantly inversely correlated with tumor grade (p < 0.001): a lower CREB expression in ccRCCs was associated with a higher tumor grade, as summarized in Table 1, suggesting that HLA-G and CREB expression have independent effects on tumor grading. To analyze an association of HLA-G, HLA-E and CREB staining intensity with the overall survival of RCC patients, the respective non-expressing tumor specimens were compared to the strongly positive specimens for HLA-G and CREB, respectively, and weak to moderate/strong specimens for HLA-E, followed by the generation of Kaplan-Meier plots (Fig. 5). There was a statistically significant (p = 0.029) correlation of high levels of CREB expression with increased overall survival, while the survival of RCC patients did not differ statistically significantly with HLA-G expression (membranous or cytoplasmic) or HLA-E expression (p = 0.965, p = 0.56 and p = 0.216, respectively). To address the question of whether the CREB expression in RCCs is linked to alterations of the tumor-infiltrating immune cell composition, the tumor-infiltrating lymphocytes were analyzed and correlated with the CREB expression levels (staining intensity) of the RCC tumors (Fig. 6). There was no difference in the presence of CD3-positive cells (p = 0.434), which were predominantly CD8-positive CTLs (p = 0.011) and to a lesser extent CD4-positive T helper cells (p = 0.141), between CREB-high and CREB-low RCCs (Fig. 6). In addition, the frequency of CD56-positive cells (p = 0.512), including NK cells and NKT cells, was not associated with CREB expression levels. In contrast, the Treg frequency was statistically significantly associated with CREB expression (p < 0.0001): the higher the CREB expression of the tumor, the higher the amount of tumor-infiltrating Tregs, although the mean absolute number of Tregs per high-power field was rather low: non-CREB-expressing tumors displayed a mean of 0.10 FoxP3+ immune cells per high-power field (median 0.00, std. deviation 0.56, minimum 0.00, maximum 4.25), while G3 tumors showed a mean of 1.10 FoxP3+ cells per high-power field (median 0.00, std. deviation 2.61, minimum 0.00, maximum 14.25).
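The grouped survival comparison above can be reproduced in principle with standard tools; the sketch below uses the lifelines package on invented follow-up data (column names and values are placeholders, not the study's dataset).

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort: follow-up in months, death event flag, CREB staining group.
df = pd.DataFrame({
    "months": [12, 34, 56, 20, 80, 45, 66, 15, 90, 38],
    "event":  [1, 0, 1, 1, 0, 0, 1, 1, 0, 0],
    "creb":   ["negative", "strong", "negative", "negative", "strong",
               "strong", "negative", "negative", "strong", "strong"],
})

neg = df[df.creb == "negative"]
pos = df[df.creb == "strong"]

# Kaplan-Meier curve per group.
kmf = KaplanMeierFitter()
for name, grp in (("CREB negative", neg), ("CREB strong", pos)):
    kmf.fit(grp.months, event_observed=grp.event, label=name)
    print(name, "median survival:", kmf.median_survival_time_)

# Log-rank test between the two groups.
res = logrank_test(neg.months, pos.months,
                   event_observed_A=neg.event, event_observed_B=pos.event)
print("log-rank p-value:", res.p_value)
```

With the study's real follow-up data, this workflow corresponds to the log-rank comparisons underlying the reported p-values.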
Discussion The validation of reported in vitro findings regarding clinically relevant molecules in in situ and/or in vivo systems is of great importance. The pathophysiological HLA-G expression in solid and hematopoietic diseases is an important immune evasion strategy of tumors and is linked to a certain composition of the TIL [21]. Therefore, the reported in vitro results of the suggested CREB-HLA-G interaction were analyzed here for the first time in a clinical data set of human RCC specimens.

In the RCC specimens analyzed, the expression of HLA-G and HLA-E demonstrated no statistically significant association with the overall survival of RCC patients. In contrast, the expression of the transcriptional activator CREB statistically significantly correlated with the overall survival of RCC patients when comparing strongly CREB-positive to CREB-negative tumors. This primarily contradictory finding might be explained by the pleiotropic functions of CREB or by additional mechanisms involved in the regulation of the target gene or in patients' survival. The CREB expression of RCC tumors is also accompanied by an altered composition of the TIL. Only the amount of FoxP3+ cells was statistically significantly enhanced in CREB-positive compared with CREB-negative tumors. FoxP3 expression is limited to Tregs [29], which are known to suppress anti-tumoral immune responses. Although the total amount of infiltrating FoxP3+ Tregs was very low in the RCC samples examined, the number of TILs was associated with CREB expression levels, with a concordant increase in CREB and HLA-G expression levels. This effect appears to be CREB-specific, since previous HLA-G expression studies in the same TMA did not reveal any HLA-G-dependent effect on FoxP3+ cells, but a strong statistically significant effect on CD3+ and CD8+ TILs [21]. Interestingly, Kim and co-authors identified CREB as a direct transcriptional activator of FoxP3 gene expression in murine Tregs. This interaction of CREB with the respective sequence motif within the FoxP3 promoter region was furthermore dependent on its methylation status. Indeed, methylated DNA at the CREB-binding site within the FoxP3 promoter prevented CREB binding in vivo [28]. These authors identified a TGACGTCA putative CREB site within the first intron of the FoxP3 gene, which was confirmed by the direct interaction and transcriptional activation of CREB at this site. Tregs are a subpopulation of CD4+ T cells, and FoxP3 regulates their development in the thymus and their maintenance in the periphery [29]. The number of tumor-infiltrating Tregs within the strongly CREB+ RCC tumors did not negatively affect the association of CREB expression with an increased overall survival and a lower tumor grade.

In this study, the in vitro interaction of CREB and HLA-G in RCC cell lines was proven. This raises the question of whether CREB-specific inhibitors like 666-15, which exerts an anti-cancer activity in mouse experiments both in vitro and in vivo [26], might also be employed for RCC therapy. However, the weak in vivo effects of CREB on HLA-G expression, as concluded from our data, question any possible anti-cancer effects of CREB inhibition as a consequence of a downregulated HLA-G transcription.

Conclusions The interaction of CREB and HLA-G was investigated in clinical human specimens and in different subtypes of RCC. In this study, both markers, HLA-G and CREB, showed an equally distributed expression independently of the RCC subtype. CREB expression in RCCs was inversely correlated with tumor grade, but positively correlated with overall survival and with the amount of tumor-infiltrating Tregs. However, HLA-G expression exerts opposing functions and is linked to a higher tumor grade. Therefore, further studies should investigate under which cellular conditions CREB physiologically induces HLA-G transcription, in which tissues, and with which involvement of other regulatory mechanisms.
Figure caption (truncated): ... and CREB (D) staining intensity was correlated with overall survival (OS) in RCC, demonstrating that only CREB expression was significantly (p = 0.029) associated with patients' OS. Note: No survival data were available for HLA-E-negative cases and only few data on strong cases. Therefore, for this marker, the staining categories "negative/weak" and "medium/strong" were compared.
2023-01-18T14:10:38.905Z
2020-09-29T00:00:00.000
{ "year": 2020, "sha1": "4572a63e066cddf81173f8a0856bf7f14f09b8be", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12967-020-02544-0", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "4572a63e066cddf81173f8a0856bf7f14f09b8be", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
250622811
pes2o/s2orc
v3-fos-license
Convolutional Neural Networks for the Evaluation of Chronic and Inflammatory Lesions in Kidney Transplant Biopsies
In kidney transplant biopsies, both inflammation and chronic changes are important features that predict long-term graft survival. Quantitative scoring of these features is important for transplant diagnostics and kidney research. However, visual scoring is poorly reproducible and labor intensive. The goal of this study was to investigate the potential of convolutional neural networks (CNNs) to quantify inflammation and chronic features in kidney transplant biopsies. A structure segmentation CNN and a lymphocyte detection CNN were applied on 125 whole-slide image pairs of periodic acid–Schiff– and CD3-stained slides. The CNN results were used to quantify healthy and sclerotic glomeruli, interstitial fibrosis, tubular atrophy, and inflammation within both nonatrophic and atrophic tubuli, and in areas of interstitial fibrosis. The computed tissue features showed high correlations with Banff lesion scores of five pathologists. Analyses on a small subset showed that higher CD3+ cell density within scarred regions and higher CD3+ cell count inside atrophic tubuli correlated moderately with long-term change of estimated glomerular filtration rate. The presented CNNs are valid tools to yield objective quantitative information on glomeruli number, fibrotic tissue, and inflammation within scarred and non-scarred kidney parenchyma in a reproducible manner. CNNs have the potential to improve kidney transplant diagnostics and will benefit the community as a novel method to generate surrogate end points for large-scale clinical studies.
Although much progress has been made toward the prevention of acute kidney transplant rejection, long-term graft loss remains a major issue in donor kidney transplantation. Multiple studies have demonstrated the prognostic value of inflammation and tubulitis in regions with interstitial fibrosis and tubular atrophy (i-IFTA and t-IFTA, respectively).1–4 Accurate scoring of these chronic, inflammatory parameters is therefore pivotal in strategies to prevent graft loss. The commonly used scoring system for kidney transplant biopsy assessment is the Banff classification system.5,6
The Banff classification system was the first standardized, international classification system for kidney transplant diagnostics and facilitated uniformity in the reporting of renal transplant pathology.7 It is internationally applied by kidney researchers and physicians, and it is the globally accepted quantification tool for histopathologic transplant evaluation. However, the Banff classification system has increasingly been criticized for its limited reproducibility and its suboptimal patient stratification. Multiple studies show poor to moderate interobserver agreement, specifically for the scoring of fibrotic changes and inflammatory lesions.8–12 Moreover, the Banff classification system is based on semiquantitative scoring on an ordinal scale, whereas inflammatory and chronic parameters represent a continuous spectrum and should therefore preferably be quantified on a granular, continuous scale. Quantitative assessment of transplant biopsies may be improved by the application of digital image analysis techniques.13–15 Specifically, deep learning, the use of data-driven learning systems in which multilayered (deep) neural networks are trained to generate output from input, has proven to be a powerful tool for histopathologic tissue assessment.16–19 The most widely applied neural networks in medical image analysis are convolutional neural networks (CNNs). CNN-based image analysis could benefit biopsy assessment by increasing reproducibility and efficiency. In addition, CNNs can output absolute values, which may provide more insight into the stage of ongoing pathologic processes. A second and important advantage of CNN-based image analysis is the ability to decrease interobserver variability, a major problem in any form of histologic assessment by human observers. The notable performance of CNNs on medical imaging data has resulted in an increasing number of studies focused on deep learning applications for kidney tissue. These efforts were pioneered by the segmentation and classification of the glomerulus and were expanded toward other applications, such as multiclass segmentation, the segmentation of sclerotic glomeruli and of interstitial fibrosis and tubular atrophy (IFTA), and diabetic nephropathy classification.20–24 This study investigated the potential of CNNs as quantification tools for the assessment of chronic and inflammatory lesions, going beyond the current arbitrary semiquantitative thresholds and showing the absolute quantification of tubulointerstitial inflammation as a continuous parameter in areas with and without IFTA. Ideally, CNNs can be used in addition to the Banff classification system to support kidney researchers and physicians in their studies on chronic kidney tissue changes. For this purpose, two previously developed CNNs, aimed at the segmentation of periodic acid–Schiff (PAS)-stained tissue and the detection of lymphocytes in immunohistochemistry (IHC), were used.25,26 The CNNs were retrained and applied on a cohort of PAS- and CD3-stained kidney transplant biopsy slides. Quantifications were performed on the basis of the CNN results. The reliability of the CNN-based quantifications was evaluated by assessing the correlation with the following visually scored components of the Banff classification system: glomerular count, total inflammation (ti), interstitial inflammation (i), tubulitis (t), interstitial fibrosis (ci), tubular atrophy (ct), i-IFTA, and t-IFTA.
Materials and Methods
A visual overview of this study can be found in Figure 1.
Patient Cohort and Tissue Samples
An overview of the patient cohort and tissue samples is given in Table 1. The local institutional review board waived the need for approval of using Radboudumc tissue blocks in this study (number 2016-2269).
Regions of Interest
The cortical regions were manually annotated using the automated slide analysis platform software version 1.9 (https://github.com/computationalpathologygroup/ASAP, last accessed June 13, 2022). The pathologists were asked to perform their analyses within these regions of interest, and the CNN-based quantifications were performed within these same regions. Tissue folds, subcapsular inflammation, and inflammatory infiltrates surrounding large arteries were excluded from the regions of interest.
Visual Pathologists' Assessment of the Patient Cohort Biopsies
Five pathologists, specialized in kidney transplant pathology, manually counted the number of glomeruli and scored the following Banff lesion categories on the PAS WSI according to the criteria listed in Supplemental Table 1 (based on Banff 2018 5): ti, i, t, ci, ct, i-IFTA, and t-IFTA. After a washout period of at least 4 weeks, the pathologists repeated the scoring for the Banff ti, i, t, i-IFTA, and t-IFTA categories, now using the PAS WSI in combination with the CD3 WSI. The interobserver variability was assessed for both scenarios by calculating quadratic weighted Cohen kappa coefficients. The visual glomerular counts and Banff ti, i, t, ct, ci, i-IFTA, and t-IFTA scores were compared with their equivalent tissue features quantified by CNNs (listed in Supplemental Table 2).
Structure Segmentation CNN Development
The authors previously presented a U-net architectural CNN for the multiclass structure segmentation of PAS-stained kidney sections into relevant tissue classes, such as healthy and globally sclerotic glomeruli, interstitium, and proximal, distal, and atrophic tubuli.25 For the current study, this CNN was improved by including more training data and improved post-processing techniques (see below). There was no overlap between the cases that were used for CNN development and the slides that were used in the formerly described PAS-CD3 patient cohort. A novel method was developed for the segmentation of interstitial fibrosis based on image processing of the multiclass structure segmentation results, further described in Indirect Segmentation Method for Interstitial Fibrosis and IFTA.
Ground Truth
For development of the structure segmentation network, the data set (60 WSIs) that was described in the authors' earlier publication on kidney tissue segmentation 25 was complemented with 36 additional PAS-stained transplant biopsies (Radboudumc, n = 19; Mayo Clinic, n = 17) and 3 tumor nephrectomy samples (Mayo Clinic), resulting in 99 WSIs. The slides were digitized on a Pannoramic 250 Flash II digital slide scanner (3DHISTECH; Radboudumc) or an Aperio ScanScope XT System scanner (Leica Biosystems, Germany; Mayo Clinic) at a resolution of 0.24 and 0.49 μm/pixel, respectively. The data set was annotated using the automated slide analysis platform software, applying the following predefined classes: glomeruli, sclerotic glomeruli, empty Bowman capsules, proximal tubuli, distal tubuli, atrophic tubuli, capsule, arteries/arterioles, interstitium, and border (being the basement membranes of the tubuli). All annotations were checked and corrected where necessary by a pathologist. The WSIs were randomly divided into training (n = 63), validation (n = 16), and test (n = 20) sets. The total number of annotations per tissue class is listed in Supplemental Table 3. Mayo Clinic tissue samples were scanned with institutional review board approval (numbers 17-002391 and 10-004644), and digital image file transfer was approved under institutional review board number 18-005592.
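The interobserver analysis described in Visual Pathologists' Assessment above relies on quadratic weighted Cohen kappa coefficients computed on ordinal Banff scores. A minimal sketch of that computation, assuming scikit-learn and invented scores (not study data):

```python
# Hypothetical example: quadratic weighted Cohen kappa between two readers'
# ordinal Banff lesion scores (0-3). The score vectors are invented for
# illustration; they are not data from this study.
from sklearn.metrics import cohen_kappa_score

reader_a = [0, 1, 2, 2, 3, 1, 0, 2]  # e.g., Banff i scores from reader A
reader_b = [0, 1, 1, 2, 3, 2, 0, 3]  # the same biopsies scored by reader B

# weights="quadratic" penalizes large ordinal disagreements more heavily.
kappa = cohen_kappa_score(reader_a, reader_b, weights="quadratic")
print(f"quadratic weighted kappa = {kappa:.3f}")
```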
Network Design
A U-net architecture was used as the structure segmentation network design.27 The network was trained for 95 epochs at 512 iterations per epoch with a batch size of eight patches (412 × 412 pixels at a resolution of 1.0 μm/pixel). Adam was used as the weight optimization algorithm and categorical cross-entropy as the loss function.28 Spatial and color augmentation techniques were applied to increase the network's robustness to variations in tissue morphology, staining intensity, and image quality. Before inference of the structure segmentation network, a tissue-background segmentation network was applied, separating tissue from background and removing dust particles and tissue artifacts.29
Post-Processing
Post-processing was used to optimize the structure segmentation results, applying the following steps at a pixel spacing of 1.0 μm/pixel: i) pixels classified as empty glomeruli positioned at the edge of the biopsy were removed; ii) pixels classified as border or interstitium were temporarily set to 0, grouping pixels of all other classes into discrete objects; iii) holes (ie, value-0 regions) with an area <150 pixels inside objects were filled with their dominant surrounding object label; iv) objects with an area <300 pixels were considered noise and set to the interstitium class; v) objects that consisted of more than one tubule class were assigned to the predominant tubule class, and objects that consisted of more than one glomerulus class were assigned to the predominant glomerulus class; vi) regions <50 pixels inside objects were assigned to their dominant surroundings; vii) objects classified as glomeruli with an area <2500 pixels were set to the interstitium class; and viii) pixels classified as border were labeled as interstitium, and all interstitium pixels were subsequently placed back unless they were filled during step iii. The decision to use a minimum area of 2500 pixels for glomeruli was based on the knowledge that the diameter of a complete glomerulus ranges from approximately 100 to 200 μm, depending on the level of sectioning. This corresponds to a minimum area of 7854 pixels [based on the formula: area = (diameter² × π)/4]. By using 2500 pixels as a minimum area, corresponding to a diameter of approximately 56 μm, we avoided the risk of excluding complete glomeruli.
Structure Segmentation Performance
The segmentation performance of the network was assessed by calculating the CNN's pixel-level precision, recall, and Dice score on the test set, where precision = TP/(TP + FP), recall = TP/(TP + FN), and Dice = 2TP/(2TP + FP + FN), with TP, FP, and FN denoting true-positive, false-positive, and false-negative pixels, respectively. The test set that was used to assess the performance metrics of the structure segmentation CNN was composed of PAS-stained slides from the Mayo Clinic and Radboudumc. Because the material from the current patient cohort contains PAS-stained biopsies from Radboudumc, it can be assumed that the performance on the test set corresponds with that on this patient cohort. Therefore, the performance metrics of the structure segmentation CNN were not additionally calculated for the patient cohort.
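Two of the post-processing steps listed above, filling small holes and discarding small noise objects, map directly onto standard morphological operations. A simplified sketch, assuming scikit-image and a toy class map; the study's exact fill rule assigns the dominant surrounding object label, which this version approximates with binary hole filling:

```python
# Simplified sketch of post-processing steps iii) and iv): fill holes
# <150 pixels inside objects and remove objects <300 pixels (thresholds at
# a pixel spacing of 1.0 um/pixel, as stated above). The multiclass map is
# a toy example, not a real segmentation result.
import numpy as np
from skimage.morphology import remove_small_holes, remove_small_objects

def clean_class_mask(mask: np.ndarray) -> np.ndarray:
    """Clean the binary mask of a single tissue class."""
    mask = remove_small_holes(mask.astype(bool), area_threshold=150)
    return remove_small_objects(mask, min_size=300)

segmentation = np.random.randint(0, 5, size=(512, 512))  # toy class map
ATROPHIC_TUBULI = 3  # illustrative class code
cleaned = clean_class_mask(segmentation == ATROPHIC_TUBULI)
print("cleaned pixel count:", int(cleaned.sum()))
```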
Indirect Segmentation Method for Interstitial Fibrosis and IFTA
The structure segmentation CNN was subsequently applied to the 125 PAS WSIs from the patient cohort (see Patient Cohort and Tissue Samples). Interstitial fibrosis regions were derived from the structure segmentation masks by computing distance maps for interstitial pixels with respect to atrophic tubuli and to all other structures. Pixels were assigned to the interstitial fibrosis class if they were closer to atrophic tubuli than to any other structure, under the biological assumption that interstitial fibrosis and tubular atrophy develop in tandem. This allowed for the quantification of interstitial fibrosis alone and of IFTA. Because the CNN was not directly trained on interstitial fibrosis and IFTA, Dice score, precision, and recall could not be calculated for these classes. Instead, three human observers visually estimated the percentage of interstitial fibrosis and IFTA on 20 cases from the patient cohort. Similar to the automated scoring method, the visual score was continuous, ranging from 0% to 100%, and was not limited to categories. To assess the soundness of our automatic interstitial fibrosis/IFTA scoring method, the intraclass correlation coefficient (ICC) was calculated for the percentages given by the human observers and the percentages based on CNN results.
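The assignment rule above (an interstitial pixel is fibrotic when it lies closer to atrophic tubuli than to any other structure) can be expressed with two Euclidean distance transforms. A minimal sketch, assuming SciPy and illustrative class codes:

```python
# Hypothetical sketch of the distance-map assignment described above.
# Class codes are invented for illustration; 0 = background here.
import numpy as np
from scipy.ndimage import distance_transform_edt

INTERSTITIUM, ATROPHIC_TUBULI = 1, 3  # illustrative class codes

def interstitial_fibrosis_mask(seg: np.ndarray) -> np.ndarray:
    # Distance of every pixel to the nearest atrophic-tubule pixel.
    d_atrophic = distance_transform_edt(seg != ATROPHIC_TUBULI)
    # Distance to the nearest other structure (not interstitium,
    # not atrophic tubuli, not background).
    other = (seg != 0) & (seg != INTERSTITIUM) & (seg != ATROPHIC_TUBULI)
    d_other = distance_transform_edt(~other)
    # Interstitial pixels closer to atrophic tubuli than to anything else.
    return (seg == INTERSTITIUM) & (d_atrophic < d_other)

seg = np.random.randint(0, 5, size=(256, 256))  # toy segmentation map
print(f"fibrosis area: {100 * interstitial_fibrosis_mask(seg).mean():.1f}%")
```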
Lymphocyte Detection CNN
A recently developed lymphocyte detection CNN was adapted and used for the detection of lymphocytes in this study. This network was developed in a previous study,26 in which four network architectures were trained with 171,166 manually annotated CD3+ and CD8+ lymphocytes: a fully convolutional network, a U-net, a 'you only look at lymphocytes once' network, and a locally sensitive method network. The networks were evaluated for their detection performance on lymphocytes within normal tissue, artifacts, and immune cell clusters, using IHC-stained sections originating from nine medical centers. The best performing network across all tasks, the U-net, was used in the current study. Because this network was trained on conventional IHC, it was retrained for the current study using 6237 lymphocyte annotations (15 WSIs) in restained kidney slides (PAS-CD3) in addition to the original training data. This retrained network was subsequently used for the cell detections in this study.
Image Registration
The PAS WSI and CD3 WSI pairs display the same biopsies and are therefore roughly aligned. Nevertheless, tissue deformations may occur during IHC staining, and the rescanning of the slides causes a slight alteration of the tissue's coordinates in the image. This was corrected by nonlinear image registration, using the noncommercial software HistokatFusion (Fraunhofer MEVIS lab, Bremen, Germany). The software offers a three-step registration pipeline, consisting of a manual or automated prealignment, a parametric registration computed on coarse-resolution images, and an accurate nonlinear registration.30 This allowed for an accurate spatial translation of tissue features between slides and corresponding masks.
Automatically Quantified Tissue Features
On the basis of the registered results of the structure segmentation CNN and the lymphocyte detection CNN, the following features were calculated: the number of nonsclerotic glomeruli and globally sclerotic glomeruli; the highest CD3+ cell count inside proximal or distal tubuli; the highest CD3+ cell count inside atrophic tubuli; the CD3+ cell density inside the total cortical area; the CD3+ cell density inside the cortical area excluding interstitial fibrosis; and the CD3+ cell density inside regions of interstitial fibrosis.
Correlation between Automated Feature Quantification and Visual Banff Lesion Scoring
To assess the correlation of glomerular counting performed by pathologists with automated glomerular quantification, the average ICC of the pathologists and the average ICC of the pathologists and the CNN are reported.
Correlation between Automated and Visual Scoring of Chronic Lesions and the Course of Kidney Function
In contrast to the ordinal scoring by human observers, the deep learning-based results are reported as a continuum. It should be investigated whether these continuous values hold more prognostic information than the current lesion scoring system. As an illustration of such a validation study, we assessed the correlation between manually and automatically scored chronic lesions and long-term change in kidney function. More extensive validation should be performed on a larger data set specifically designed for this purpose. The Δ estimated glomerular filtration rate (ΔeGFR) was defined as the difference between the eGFR measured 1 week before the biopsy procedure (according to the Modification of Diet in Renal Disease formula) and the eGFR measured 2 years after the biopsy procedure. These data were available for 46 cases. One biopsy sample per patient was used for these analyses. When biopsy samples from multiple time points were included from a single patient in the patient cohort, only the last sample was included (n = 39). Cases were only included if no clinical event occurred (defined as the need for a biopsy for cause) between the biopsy procedure and the eGFR measurement 2 years after the biopsy procedure (n = 29). Subsequently, 11 cases in which the biopsy for cause was obtained <60 days after transplantation were excluded, to avoid early transplantation-related lesions, such as acute tubular necrosis, distorting the analyses. This resulted in 18 eligible cases for the correlation assessment. Spearman correlation was calculated to assess the relationship between ΔeGFR and the visually scored i-IFTA, t-IFTA, ci, and ct scores. The Spearman correlation was also calculated between ΔeGFR and the automatically quantified CD3+ cell density inside fibrotic regions, CD3+ cell count per atrophic tubule, area percentage of interstitial fibrosis, and percentage of atrophic tubuli.
Validation of the Indirect Interstitial Fibrosis and IFTA Segmentation Method with Visually Estimated Percentages
The correlation of automatically generated interstitial fibrosis and IFTA percentages with percentages provided by human observers was assessed to validate the indirect segmentation method of fibrotic regions. The average ICC of three human observers for scoring interstitial fibrosis was 0.655, and the average agreement between the observers and the CNN was comparable.
This validation confirmed the rationale of the indirect interstitial fibrosis and IFTA segmentation strategy and justified the use of this method to define fibrotic tissue regions in the entire patient cohort. These regions were used to automatically include and exclude interstitial fibrotic regions in CD3+ cell density calculations and to quantify interstitial fibrosis.
Figure caption (partial): Boxed areas on the low-resolution images represent the areas depicted in the high-resolution images. C and D: The segmentation of atrophic tubuli by the structure segmentation convolutional neural network is visualized in green. E and F: Using image processing, pixels in closer proximity to atrophic tubuli than to any other structures (excluding interstitium) were assigned to the interstitial fibrosis class (green). The interstitial fibrosis (IF) percentage based on the cortical area in this figure is 1% for the nonfibrotic biopsy and 36% for the fibrotic biopsy. Scale bars: 500 μm (A and B, low resolution); 50 μm (A and B, high resolution).
Agreement between Automated Feature Quantification and Visual Banff Lesion Scoring
The results of the structure segmentation CNN and the lymphocyte detection CNN were used to quantify numerous tissue features from the patient cohort. ICCs and Spearman correlations were calculated between these features and the average Banff lesion scoring of five kidney pathologists. The mean ICC of the CNN and the panel of pathologists for glomerular counting was 0.941. As supported by Figure 4, visual assessment of the segmentation result showed highly accurate segmentations with occasional false-positive segmentations of sclerotic glomeruli. Limiting the automated glomerular count to non-sclerotic glomeruli led to a mean ICC of the CNN and the pathologists of 0.972 (Table 3). Next, the CNN assessment of interstitial fibrosis (pixel percentage), tubular atrophy (object percentage), inflammation in the total tubulointerstitium (cells/mm²), inflammation in nonfibrotic regions (cells/mm²), inflammation in fibrotic regions (cells/mm²), tubulitis (highest cell count), and tubulitis in atrophic tubuli (highest cell count) was compared with the average score of the pathologists for the following Banff categories: ci, ct, ti, i, i-IFTA, t, and t-IFTA (Table 4 and Figure 6). The highest correlation was reported for automatically assessed CD3+ cell density in the total cortical area with the mean ti score of the pathologists, followed by the CD3+ cell density in non-scarred cortical regions and the mean i score of the pathologists. Good correlations were reported for automatic and visual assessment of interstitial fibrosis and tubular atrophy, as well as for CD3+ cell density in scarred cortical regions and the mean i-IFTA score of the pathologists. The lowest correlations were reported for the highest CD3+ cell count in nonatrophic tubuli with the mean t score of the pathologists, and for the highest CD3+ cell count in atrophic tubuli with the mean t-IFTA score of the pathologists.
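The next section correlates ΔeGFR with these quantities via Spearman correlation, as defined in Materials and Methods. A minimal sketch of that computation, assuming SciPy and invented values; the sign convention for ΔeGFR (follow-up minus baseline) is an assumption, as the text defines it only as a difference:

```python
# Hypothetical sketch: Spearman correlation between delta-eGFR and an
# automatically quantified feature. All values are invented, not study data;
# delta-eGFR is taken as eGFR(+2 years) - eGFR(-1 week), an assumed sign.
from scipy.stats import spearmanr

egfr_baseline = [30, 25, 35, 28, 40, 22]   # mL/min per 1.73 m^2, -1 week
egfr_followup = [34, 20, 38, 25, 45, 18]   # mL/min per 1.73 m^2, +2 years
cd3_density_fibrosis = [120, 540, 90, 400, 60, 700]  # cells/mm^2 (invented)

delta_egfr = [f - b for b, f in zip(egfr_baseline, egfr_followup)]
rho, p_value = spearmanr(delta_egfr, cd3_density_fibrosis)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```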
Correlation between Chronic Tissue Scores and the Course of Kidney Function
The correlation of ci, ct, i-IFTA, and t-IFTA with the long-term course of kidney function was evaluated for the CNN-based quantification method and the visually assessed Banff scores. On average, an improvement of eGFR was found over time in this subset of the patient cohort. Nevertheless, moderate inverse correlations were found between ΔeGFR and the average i-IFTA score of the pathologists (ICC = −0.567; P = 0.014) (Supplemental Figure 3A), and between ΔeGFR and automatically assessed cell density inside interstitial fibrotic regions of the cortex (ICC = −0.515; P = 0.029) (Supplemental Figure 3B). The highest CD3+ cell count inside atrophic tubuli segmented by the structure segmentation CNN also inversely correlated with ΔeGFR (ICC = −0.782; P < 0.001) (Supplemental Figure 4B). A weaker inverse correlation was found between the average t-IFTA score of the pathologists and ΔeGFR compared with the correlation with the automated method (ICC = −0.568; P = 0.014) (Supplemental Figure 4A). The visual ci and ct Banff scores and the automatically assessed interstitial fibrosis area percentage and tubular atrophy percentage did not correlate with ΔeGFR.
Discussion
In this study, deep learning was used to quantify both inflammation and chronic lesions in kidney transplant biopsies. Two CNNs were applied: a structure segmentation CNN for PAS-stained kidney tissue and a lymphocyte detection CNN for CD3-stained slides. Automatically quantified total inflammation, inflammation in non-scarred cortical regions, and inflammation in areas with interstitial fibrosis correlated with Banff ti, i, and i-IFTA scoring, respectively. In addition, glomerular counts based on CNN results correlated highly with visual glomerular counts. A correlation was found between higher inflammatory cell density inside areas of interstitial fibrosis and long-term decline in eGFR. Lower kidney function also correlated with higher inflammatory cell count inside atrophic tubuli. This was in agreement with the correlations that were found for visual Banff i-IFTA and t-IFTA scoring with long-term changes in eGFR. The literature on kidney tissue segmentation using deep learning has expanded drastically over the past few years.31–34 Many of the models described in the literature were trained in a binary manner (ie, glomeruli versus nonglomeruli or tubuli versus nontubuli). The current study demonstrates a segmentation performance for healthy and globally sclerotic glomeruli comparable to that reported in the literature, despite the challenge of nonbinary segmentation.
35–37 Also, glomerular quantifications based on our CNN results correlated highly with glomerular counts performed by five pathologists. In a study by Jayapandian et al,38 multiple networks were presented for segmenting glomerular, vascular, and tubular structures. The authors are among the few to report separate segmentation performance for proximal and distal tubular segments, with impressive results. Unfortunately, atrophic tubuli were not included in that study.38 Bouteldja et al36 demonstrated a multiclass segmentation network for PAS-stained kidney tissue, showing excellent segmentation performance. However, healthy and atrophic tubuli were combined in their evaluation. The current study presents the only multiclass structure segmentation CNN developed for the segmentation and classification of the interstitium, healthy and sclerotic glomeruli, and proximal, distal, and atrophic tubuli. Such discrimination (especially between healthy and atrophic/sclerotic structures) is crucial for developing an assay that yields clinically relevant and actionable data. Interstitial fibrosis and tubular atrophy have been shown to correlate with chronic kidney disease and chronic rejection in kidney transplants. The quantification of fibrosis has been the subject of several studies.39–42 Artificial neural networks have been developed for the assessment of fibrosis in trichrome-stained kidney slides,43,44 and recently the first neural network for sclerotic glomeruli and IFTA segmentation in PAS-stained slides was presented, showing good agreement with manual annotations in deceased-donor tissue.24 In the current study, a novel approach was presented for the segmentation of interstitial fibrosis by generating an interstitial fibrosis mask based on atrophic tubuli segmentations resulting from the structure segmentation CNN. The segmentation of pixels in closer proximity to atrophic tubuli than to other structures resulted in a convincing definition of interstitial fibrotic regions. The correlation of the manual scoring of interstitial fibrosis percentage by three human observers was similar to the correlation between manual scoring and the automated method. In addition, the automated quantification of interstitial fibrosis showed high correlations with the average Banff ci lesion scores of five kidney pathologists. These results convincingly show that the presented CNN can be used as a valid quantification tool for interstitial fibrosis in kidney tissue. Although the segmentation performance for atrophic tubuli has improved significantly since earlier studies, the Dice coefficient is still relatively low compared with that of some of the other classes. The confusion matrix in Supplemental Figure S2 shows that this can largely be attributed to mix-ups with distal tubuli and interstitium. It is doubtful whether the confusion with distal tubuli can be entirely prevented, as the transition from a healthy tubule to an atrophic tubule is a continuous process. However, the false-positive atrophic tubuli segmentations inside (inflamed) interstitium possibly result from a relatively low number of inflamed interstitial regions in the training set. This can be improved in future work by expanding training data sets.
Over the past two decades, studies have demonstrated the detrimental effect of inflammation within areas of interstitial fibrosis and tubular atrophy on kidney transplant outcome.1–4,45–47 As a result, inflammatory fibrosis (i-IFTA) was introduced to the Banff lesion scoring system in 2015.6 Accurate scoring of i-IFTA requires the visual exclusion of non-scarred parenchyma, followed by an estimation of the inflammatory burden inside the scarred region. This makes i-IFTA hard to score, also considering the novelty of the category. The low interobserver agreement for the scoring of i-IFTA in the current study (with and without IHC available) emphasizes the necessity of a supporting scoring tool as presented in this study. Yi et al48 recently presented the so-called composite damage score, composed of abnormal interstitium areas, tubuli density, and areas of mononuclear leukocyte infiltration. Although the authors did not directly compare the composite damage score with i-IFTA, it was shown to be predictive of late eGFR decline and patient survival and will possibly approximate this Banff category.48 Instead of presenting an entirely new scoring system, the aim of this study was to stay close to the commonly used definitions while increasing the scoring granularity, accuracy, and reproducibility. To do so, the automatically generated segmentations and cell detections were combined using an award-winning image registration technique.49 This allowed us to calculate CD3+ cell density within scarred and non-scarred parenchyma and perform absolute CD3+ cell counts in healthy and atrophic tubuli, enabling comparison to ti, i, t, i-IFTA, and t-IFTA scores. Automatically quantified cell densities in the complete cortical area were highly correlated with the average ti scores of the pathologists. Excluding scarred regions from the analysis allowed for the calculation of an equivalent of the Banff i score, which showed a high correlation with visual scoring as well. The Banff ti and i scores and their computational equivalents require minimal segmentation of the tissue into specific compartments. This may explain why the highest interobserver agreements and the highest correlations between automated and visual assessment were found for these categories. Lower correlations were found for cell densities inside regions of interstitial fibrosis with visual i-IFTA scores. This was possibly partially due to the low interobserver agreement among pathologists. In addition, we observed false-positive tubuli detections in inflamed interstitial regions. Therefore, the automatically generated interstitial fibrosis mask will not reach these regions, causing an underestimation of i-IFTA. In return, these false-positive segmentations can lead to an overestimation of (atrophic) tubulitis.
This can be improved by including more inflamed interstitial regions during development of the structure segmentation network. Finally, the correlation between the change in kidney function and automatically and visually scored ci, ct, i-IFTA, and t-IFTA was assessed as a proof of principle. Higher serum creatinine levels at the time of the biopsy could cause an artifact when looking at ΔeGFR. To avoid this artifact, we used the eGFR measured 1 week before the biopsy for cause as a baseline. In reality, the serum creatinine levels appeared to be close at both time points [mean eGFR at minus 1 week: 29.53 mL/minute per 1.73 m² (SD, 8.70 mL/minute per 1.73 m²); mean eGFR at the time of biopsy: 28.26 mL/minute per 1.73 m² (SD, 7.99 mL/minute per 1.73 m²)]. This causes most patients to show an improvement of eGFR over time. Nonetheless, a significant inverse correlation was found between the inflammatory burden inside areas of interstitial fibrosis and the subsequent course of kidney function. This held for the automated quantifications by the CNNs and the visual lesion scoring by pathologists. This shows that the presented method can support uniform assessment of inflammatory burden inside fibrotic and nonfibrotic kidney tissue. There were some limitations in this study. First, the method presented in this article relies on the restaining of PAS-stained slides with IHC, followed by image registration. Most clinical centers will not include these methods in their routine transplant diagnostics procedure. Therefore, future studies will target the development of an inflammatory cell detection network for PAS-stained sections, aimed at macrophages, B lymphocytes, and T lymphocytes. Second, our automated method does not correct for tangential sectioning. Third, the data show a trend toward an inverse correlation between visual and automated scores of inflammation inside areas of interstitial fibrosis and tubular atrophy and the course of kidney function. However, the number of patients eligible for these analyses was too small to draw strong conclusions from these results. The predictive potential of automated quantification of specific tissue features should be assessed in a larger cohort designed for this purpose. Finally, cortical regions were manually annotated as regions of interest for visual and automated assessment. A cortex segmentation CNN is required for fully automated assessment and will therefore be developed in future work. Although this study supports a positive view toward the inclusion of CNN-based quantifications in routine transplant diagnostics, the true, short-term clinical value of this study lies in the application of CNNs for prospective kidney (transplantation) research. The results demonstrate that the presented CNNs produce reliable quantifications of (inflammatory) fibrotic regions that could be used to monitor pathologic processes in detail over time in a uniform manner. In particular, the CNN-based results can be used as surrogate end points in large-scale clinical studies, relieving pathologists from tedious scoring tasks. Predictive models often require histologic revisions of large cohorts, where uniform assessment is challenged by variation between countries, laboratories, and observers. The presented CNNs can be used to compute tissue features in a reproducible manner, which can subsequently function as input for a clinical prediction model.
The continuous output of the CNNs can be used to reevaluate the thresholds of the Banff categories, which might result in a different patient grouping and a better prognostic system. In conclusion, two CNNs were developed, applied, and combined for the segmentation of kidney tissue and the detection of CD3+ inflammatory cells. Good correlations were found for the automated quantification of glomeruli, interstitial fibrosis, and (total) inflammation with the manual scoring of their equivalent Banff lesion categories. The segmentation performance for (atrophic) tubuli should be improved to achieve better correlation with visual scoring of (atrophic) tubulitis and i-IFTA. Analyses on a small subset indicate an inverse correlation between long-term changes in eGFR and inflammation within scarred regions, based on both automated and visual assessment. Further validations are necessary to continuously assess the prospects of deep learning in kidney transplant pathology.
2022-07-19T06:17:59.371Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "1d7cb6f85cf6478da6fef4d5630f02aa3a053f8c", "oa_license": "CCBY", "oa_url": "http://ajp.amjpathol.org/article/S0002944022001985/pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "e9f4758ef46d47f8873bbef861cd434d3d21b8a6", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
255253280
pes2o/s2orc
v3-fos-license
The effect of non-dental glass fiber volume fraction on flexural strength of heat cured acrylic resin
Background: Heat-cured acrylic resin is a material often used for the manufacture of removable partial dentures in dentistry because it requires simple equipment, is relatively inexpensive, and is easy to repair. However, acrylic resin also has disadvantages, such as a low flexural strength, which can cause the denture to fracture. This study determined the effect of non-dental glass fiber volume fraction on the flexural strength of heat-cured acrylic resin. Method: This research used a post-test-only control group design. Acrylic resin specimens were divided into four groups (six samples per group): group 1 without the addition of non-dental glass fiber (0%), and heat-cured acrylic resin groups with the addition of 1%, 2%, and 3% non-dental glass fiber (groups 2, 3, and 4). Results: The average flexural strength of acrylic resin with a 2% volume fraction of non-dental glass fiber was the highest among the groups. The Mann-Whitney test showed significant differences in flexural strength between the groups (p < 0.05), except between the 0% and 3% groups. Conclusion: The volume fraction of non-dental glass fiber affects the flexural strength of heat-cured acrylic resin.
INTRODUCTION
Heat-cured acrylic resin is used to manufacture removable denture bases 1. However, acrylic resin denture bases have several weaknesses; one that often occurs is fracture 3. Fracture of a denture base may occur when it drops and hits a hard object (impact force) or when it receives regular pressure during usage (flexural force) 4. The addition of fiber to a denture base can strengthen its physical and mechanical properties 5. Glass fiber is an ideal reinforcing material for this purpose 6. However, glass fiber is limited in availability and relatively expensive. An alternative type of glass fiber that is widely available and inexpensive is non-dental glass fiber. The composition of non-dental glass fiber, as observed with an X-Ray Fluorescence Spectrometer (XRF), is largely similar to that of the E-glass fiber normally used in dentistry. 7 Several factors that influence the reinforcing effect of fiber are its volume, adhesion, and position 4. Choosing the right volume of fibers for acrylic resin can increase the strength of acrylic resin denture plates. A study found that fibers that are well positioned and have the right volume can enhance the strength of a denture 6. Based on the explanation above, this study determined the effect of non-dental glass fiber volume fraction on the flexural strength of heat-cured acrylic resin.
RESEARCH METHOD
This study used a post-test-only control group design. It involved four groups of six samples each: acrylic resin without addition of non-dental glass fiber (K1), acrylic resin with a 1% volume fraction of non-dental glass fiber (K2), acrylic resin with a 2% volume fraction of non-dental glass fiber (K3), and acrylic resin with a 3% volume fraction of non-dental glass fiber (K4). The tools used in the study were a cuvette, spatula, bowl, press, stellon pot, cement spatula, cellophane, and a ControLab Universal Testing Machine (UTM) for testing flexural strength. The materials used during the study were heat-cured acrylic resin, non-dental glass fibers, silane coupling agent, distilled water (aquades), cold mould seal (CMS), Vaseline, white dental plaster, and red wax. The non-dental glass fibers were 63 mm long and were weighed according to the volume fraction of each group: the 1% volume fraction had a mass of 0.041275 grams, the 2% volume fraction a mass of 0.08255 grams, and the 3% volume fraction a mass of 0.123825 grams.
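The reported fiber masses follow from the wax-model volume (65 mm x 10 mm x 2.5 mm) and the chosen volume fraction; back-calculating from the masses implies a glass density of about 2.54 g/cm³, which is consistent with typical E-glass but is an inference, not a value stated in the article. A minimal sketch reproducing the reported masses:

```python
# A minimal sketch reproducing the fiber masses above. The glass density of
# 2.54 g/cm^3 is inferred from the reported masses (consistent with typical
# E-glass), not stated in the article.
MOULD_MM = (65, 10, 2.5)          # specimen dimensions, mm
GLASS_DENSITY = 2.54              # g/cm^3 (inferred)

def fiber_mass(volume_fraction: float) -> float:
    """Mass of glass fiber (g) for a given volume fraction (e.g. 0.02)."""
    volume_cm3 = (MOULD_MM[0] * MOULD_MM[1] * MOULD_MM[2]) / 1000.0
    return volume_fraction * volume_cm3 * GLASS_DENSITY

for vf in (0.01, 0.02, 0.03):
    print(f"{vf:.0%}: {fiber_mass(vf):.6f} g")
# -> 1%: 0.041275 g, 2%: 0.082550 g, 3%: 0.123825 g (matches the text)
```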
The samples were created as follows. First, a mould for the acrylic resin denture base was created from white dental plaster. A red wax model measuring 65 mm x 10 mm x 2.5 mm was placed on the plaster. Once the dental plaster hardened, Vaseline was coated on its surface. Then, the antagonist half of the dental cuvette was installed on it and filled with white dental plaster dough until the dough set. After the dough had set, the dental cuvette was opened and hot water was poured onto the wax model until it melted, leaving the mould cavity. Finally, the mould was coated with CMS. The acrylic resin dough, containing the weighed fibers for each group, was then packed into the mould, and the cuvette was pressed until a metal-to-metal contact occurred. After that, the acrylic resins were cured by placing the dental cuvette in boiling water (100°C) for 20 minutes. The finishing process was then applied to the acrylic resins. Next, the acrylic resins were soaked in distilled water and kept in an incubator at 37°C for 24 hours.
RESULTS
The results are presented in Table 4. Statistical analysis using the Kruskal-Wallis test gave a significance value of 0.000 (p < 0.05), indicating a significant difference among the four data groups (Table 2). The Mann-Whitney test was conducted to identify the differences in flexural strength between each pair of groups (p < 0.05); its results can be seen in Table 3.
DISCUSSION
The results show that the addition of a 2% non-dental glass fiber volume fraction gives the highest flexural strength compared with the other groups. The addition of fiber to acrylic resin can improve its flexural strength because any pressure received by a plate made from acrylic resin and fiber will be distributed evenly 8.
CONCLUSION
Based on the study, it can be concluded that: 1. There is an effect of non-dental glass fiber volume fraction on the flexural strength of acrylic resin. 2. The flexural strength of acrylic resin with an addition of a 2% non-dental glass fiber volume fraction is the highest compared with additions of 0%, 1%, and 3% non-dental glass fiber volume fractions.
ACKNOWLEDGEMENT
The researchers give their heartfelt thanks to several parties who have supported this study, especially the Faculty of Dentistry, Sultan Agung Islamic University.
2022-12-30T16:06:24.207Z
2022-12-28T00:00:00.000
{ "year": 2022, "sha1": "27011b9ce276ef7ac399118888477bca6e80daa9", "oa_license": "CCBYSA", "oa_url": "http://jurnal.unissula.ac.id/index.php/odj/article/download/20762/7777", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "64e5250df3345971b7796ea571eb7f70ce93acf4", "s2fieldsofstudy": [ "Medicine", "Materials Science" ], "extfieldsofstudy": [] }
1779012
pes2o/s2orc
v3-fos-license
Functional properties and structural characterization of rice δ1-pyrroline-5-carboxylate reductase
The majority of plant species accumulate high intracellular levels of proline to cope with hyperosmotic stress conditions. Proline synthesis from glutamate is tightly regulated at both the transcriptional and the translational levels, yet little is known about the mechanisms for post-translational regulation of the enzymatic activities involved. The gene coding in rice (Oryza sativa L.) for δ1-pyrroline-5-carboxylate (P5C) reductase, the enzyme that catalyzes the second and final step in this pathway, was isolated and expressed in Escherichia coli. The structural and functional properties of the affinity-purified protein were characterized. As for most species, rice P5C reductase was able to use in vitro either NADH or NADPH as the electron donor. However, strikingly different effects of cations and anions were found depending on the pyridine nucleotide used, namely inhibition of NADH-dependent activity and stimulation of NADPH-dependent activity. Moreover, physiological concentrations of proline and NADP+ were strongly inhibitory for the NADH-dependent reaction, whereas the NADPH-dependent activity was mildly affected. Our results suggest that only NADPH may be used in vivo and that stress-dependent variations in ion homeostasis and NADPH/NADP+ ratio could modulate enzyme activity, being functional in promoting proline accumulation and potentially also adjusting NADPH consumption during the defense against hyperosmotic stress. The apparent molecular weight of the native protein observed in size exclusion chromatography indicated a high oligomerization state. We also report the first crystal structure of a plant P5C reductase at 3.40-Å resolution, showing a decameric quaternary assembly. Based on the structure, it was possible to identify dynamic structural differences among rice, human, and bacterial enzymes.
Introduction
Among proteinogenic amino acids, proline plays an important role in protein structure, uniquely contributing to backbone folding and stability (Ge and Pan, 2009). Moreover, in most plants (Verbruggen and Hermans, 2008) and microorganisms (Empadinhas and da Costa, 2008; Takagi, 2008), a rapid and reversible increase of the intracellular concentration of free proline to high levels has been shown in response to either osmotic, oxidative, or temperature stress (Verslues and Sharma, 2010), implying a role in stress tolerance and osmoregulation (Szabados and Savouré, 2010; Hayat et al., 2012), redox balance (Liang et al., 2013), and apoptosis (Monteoliva et al., 2014). More recently, both the level of free proline and proline metabolism in plants were also hypothesized to influence the transition to flowering (Mattioli et al., 2009), as well as pollen and embryo development (Lehmann et al., 2010; Funck et al., 2012). Although not conclusively analyzed experimentally (Funck et al., 2008), two metabolic routes leading to proline production have been proposed in higher plants. Under high nitrogen availability, the synthesis seems to proceed mainly through ornithine, an intermediate in the biosynthesis and degradation of arginine, which is converted to δ1-pyrroline-5-carboxylate (P5C) by a pyridoxal-dependent ornithine-δ-aminotransferase (da Rocha et al., 2012). Conversely, under osmotic stress conditions and/or nitrogen starvation P5C is synthesized from glutamate by P5C synthetase (Kavi Kishor et al., 1995; Turchetto-Zolet et al., 2009).
The two pathways share the last reaction, in which P5C is reduced to proline by P5C reductase (EC 1.5.1.2). Although both P5C synthetase (Székely et al., 2008) and P5C reductase (Verbruggen et al., 1993) transcripts are induced under osmotic stress conditions, only the former is believed to represent a rate-limiting step (Kesari et al., 2012). In the absence of a functional P5C reductase, both routes for proline biosynthesis are blocked, and no alternative pathway has been described. Consistently, null mutations of P5C reductase are embryo-lethal (Funck et al., 2012), and specific inhibitors of P5C reductase exert phytotoxic effects (Forlani et al., 2007) and may thus represent new active principles for weed control (Forlani et al., 2008). As P5C reductase occurs at the converging point of these two anabolic pathways, it should be subjected to fine regulation, even though it might not represent a rate-limiting step under most conditions. Indeed, when P5C reductase protein levels and intracellular proline concentrations were measured in different tissues and in osmotically stressed seedlings, the data were not in agreement with the corresponding mRNA levels (Hua et al., 1997). A complex pattern of regulation was postulated, in which differential mRNA stability, degree of polysome association, and 5′-UTR effects on translation efficiency seem to play a role (Hua et al., 2001). Yet, trans-acting factors that can bind to the P5C reductase promoter region or mRNA have not been identified. Moreover, a translation inhibition of Arabidopsis thaliana P5C reductase was found under stress conditions (Hua et al., 2001), a result that seems inconsistent with a role in stress-induced proline accumulation. The occurrence of post-translational regulative mechanisms was also proposed, but poorly investigated. In fact, plant P5C reductase has been purified only from a few plant species, such as barley (Krueger et al., 1986), soybean (Chilson et al., 1991), and spinach (Murahama et al., 2001). These enzymes showed substrate ambiguity, being able to use either NADH or NADPH as the electron donor, even if the NADH-dependent activity was inhibited by equimolar concentrations of NADP+ (Szoke et al., 1992). Moreover, a twofold stimulation of the NADH-dependent reaction by 100 mM KCl or 10 mM MgCl2 was reported for partially purified pea P5C reductase (Rayapati et al., 1989), whereas the two isozymes purified from spinach were, on the contrary, inhibited by NaCl (100-500 mM) and MgCl2 (10-100 mM) when assayed using NADPH as the co-factor (Murahama et al., 2001). Recently, we isolated and characterized P5C reductase from suspension-cultured cells of A. thaliana, where a single gene is present (Verbruggen et al., 1993; Funck et al., 2012). The purified protein was able to use either NADPH or NADH as the electron donor, with contrasting affinities and maximum reaction rates. The presence of equimolar levels of NADP+ completely suppressed the NADH-dependent activity, whereas the NADPH-dependent reaction was only mildly affected. Proline inhibited only the NADH-dependent reaction. At physiological levels, increasing concentrations of salt steadily inhibited the NADH-dependent activity, but were stimulatory of the NADPH-dependent reaction (Giberti et al., 2014). These properties suggest a complex regulation of enzyme activity by the redox status of the pyridine nucleotide pools and the levels of proline and chloride in the cytosol.
However, also due to the above inconsistencies in the literature, it was not possible to conclude whether these features are shared or not by all plant P5C reductases. Similarly, although a clear-cut preference for NADPH was evident, it was unclear whether NADH could sustain at least in part the rate of P5C reduction inside the plant cell under either physiological or stress conditions. Besides the lack of a detailed biochemical characterization of the enzyme in a diverse array of plants, our knowledge of post-translational mechanisms regulating the activity of plant P5C reductase is hampered also by the unavailability of its three-dimensional configuration. Crystal structures have been solved to date only for the enzyme of the bacterial pathogens Streptococcus pyogenes and Neisseria meningitidis (Nocek et al., 2005) and for the human isozyme 1 (Meng et al., 2006). The plant and the bacterial sequences show similarity over their entire lengths, a fact that is suggestive of a similar tertiary structure. This notwithstanding, an alignment of the deduced amino acid sequences of S. pyogenes and A. thaliana P5C reductase pointed out a moderate degree of conservation, with 33% identities, 56% conserved residues, and 3% gaps (Forlani et al., 2007). Consistently, the sensitivity of the bacterial enzyme to a group of aminobisphosphonate inhibitors was found to be strikingly higher than that of the plant enzyme (Forlani et al., 2013), showing IC50 values 2-3 orders of magnitude lower. Therefore, significant differences may exist with respect to the substrate- and effector-binding protein domains. Moreover, a broad range of oligomeric states of P5C reductase has been reported to date, extending from 125 kDa (suggesting a tetramer) to 200-340 kDa (octamer-dodecamer; references in Nocek et al., 2005). For both the S. pyogenes (Nocek et al., 2005) and the human (Meng et al., 2006) enzyme, a decameric architecture with five homodimer subunits and ten catalytic sites arranged around a peripheral circular groove has been described. Data obtained by gel permeation chromatography for plant P5C reductases were compatible with either a decameric (Murahama et al., 2001) or a dodecameric assembly (Giberti et al., 2014). In the frame of a research project for integrated genetic and genomic approaches for new Italian rice breeding strategies, we aim at a better understanding of the biochemical mechanisms underlying salt tolerance and proline accumulation in rice. Here we describe the functional characterization of rice P5C reductase. Our results confirmed in a monocotyledonous species the regulatory pattern previously found in A. thaliana. Taking one step further, the kinetic mechanisms for product inhibition were elucidated, and the regulatory effects of anions and cations were differentiated. On the whole, these results suggest that under physiological conditions only NADPH would act in vivo as the electron donor, and that a stress-induced increase in the cytosolic cation content and/or in the NADPH/NADP+ ratio would instantly enhance P5C reductase activity, with no need of transcriptional control. A three-dimensional structure of rice P5C reductase was also obtained, showing a homodecameric configuration.
Cloning and Heterologous Expression

The coding sequence of Oryza sativa P5C reductase was amplified by PCR from cDNA clone J013104L18 (Rice Genome Resource Center, National Institute of Agrobiological Sciences DNA Bank, Japan) with the primers P5CR-fw (caccATGGCGGCGCCGCCTCA) and P5CR-rev (gaggaTTAACTCTGAGAAAG), and inserted into the expression vector pET151 by directional TOPO cloning (Life Technologies, Carlsbad, CA, USA), yielding the vector pET151-OsP5CR. For heterologous expression, E. coli BL21(DE3) pLysS cells (Invitrogen) were made competent by the calcium chloride method, transformed with the vector, and selected on ampicillin-containing LB plates. After inducing the expression of P5C reductase with 1 mM isopropyl β-D-thiogalactopyranoside (IPTG) at 24 °C, the cells were lysed in a mortar with 2 g g⁻¹ alumina and resuspended in 20 mL g⁻¹ extraction buffer (50 mM Na phosphate buffer, pH 7.5, containing 200 mM NaCl, 0.5 mM DTT, and 20 mM imidazole). The His-tagged protein was purified from clarified extracts by affinity chromatography on a His-Select™ Nickel Affinity Gel column (1.5 mL bed volume, Sigma H7788). Stepwise elution was achieved by increasing concentrations of imidazole in extraction buffer. For activity assays, the purified enzyme was diluted 1:1000 with water, and an appropriate aliquot (2–5 μL) was added to the assay mixture. To remove the His-tag, aliquots (100 μg) of the preparation were treated with 1 μg of tobacco etch virus (TEV) protease (Sigma T4455), according to the cleavage protocol provided by the manufacturer.

Enzyme Assay

The physiological, forward reaction of P5C reductase was measured at 35 °C by following the P5C-dependent oxidation of NAD(P)H. Unless otherwise specified, the assay mixture contained 20 mM Tris-HCl buffer, pH 7.75, 1 mM NADH or 0.5 mM NADPH, and 1 mM DL-P5C (equivalent to 0.5 mM L-P5C; Williams and Frank, 1975) in a final volume of 0.2 mL. DL-P5C was synthesized by the periodate oxidation of δ-allohydroxylysine (Sigma H0377) and purified by cation-exchange chromatography, as described previously (Forlani et al., 1997). A limiting amount of enzyme (from 8 to 16 ng of the purified protein) was added to the pre-warmed mixture, and the decrease in absorbance at 340 nm was recorded at 20-s intervals for up to 5 min through an optical path of 0.5 cm. Activity was calculated from the initial linear rate on the assumption of a molar extinction coefficient for NAD(P)H of 6,220 M⁻¹ cm⁻¹. Linear regression analysis was computed using Prism 6 (version 6.03, GraphPad Software, Inc., USA). Protein content was determined by the Coomassie Blue method (Bradford, 1976), using bovine serum albumin (BSA) as the standard. For the purified protein, direct absorbance at 280 nm was used instead, and the concentration was calculated on the basis of a deduced molar extinction coefficient for rice P5C reductase of 14,000 M⁻¹ cm⁻¹ (http://web.expasy.org/cgi-bin/protparam/protparam).

Kinetic Analyses

To evaluate substrate affinity, invariable substrates were fixed at the same levels as in the standard assay. The concentration of L-P5C ranged from 150 to 500 μM with NADH as the electron donor, and from 100 to 225 μM with NADPH. The concentration of NADH and NADPH ranged from 50 to 350 μM. To evaluate the mechanism of the inhibition brought about by proline and NADP⁺ on the NADH-dependent activity, the NADH concentration ranged from 100 to 800 μM. When evaluating the effect of ions, the L-P5C concentration was reduced to 200 μM to minimize the carry-over of chloride anions.
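As a worked illustration of this rate calculation, the following minimal Python sketch converts an A340 time course into a specific activity, using the assay constants given above (0.2 mL final volume, 0.5 cm optical path, ε = 6,220 M⁻¹ cm⁻¹ for NAD(P)H). The absorbance readings and the enzyme amount are invented for illustration only.

import numpy as np

EPSILON = 6220.0    # M^-1 cm^-1, molar extinction coefficient of NAD(P)H
PATH_CM = 0.5       # optical path length, cm
VOLUME_L = 0.2e-3   # final assay volume, L

def specific_activity(times_s, a340, enzyme_mg):
    """Specific activity in μmol s^-1 (mg protein)^-1, i.e., μkat mg^-1."""
    # Initial linear rate of the absorbance decrease at 340 nm
    slope = np.polyfit(times_s, a340, 1)[0]       # ΔA340 per second (negative)
    rate_molar = -slope / (EPSILON * PATH_CM)     # mol L^-1 s^-1 of NAD(P)H oxidized
    rate_umol_s = rate_molar * VOLUME_L * 1e6     # μmol s^-1 in the cuvette
    return rate_umol_s / enzyme_mg

# Hypothetical readings at 20-s intervals with 12 ng (1.2e-5 mg) of enzyme:
t = np.arange(0, 120, 20)
a = 0.80 - 4e-4 * t                               # linear decrease, invented
print(specific_activity(t, a, 1.2e-5))            # ≈ 2.1 μmol s^-1 mg^-1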
All assays were performed in triplicate. KM and Vmax values, as well as the concentrations causing 50% inhibition (IC₅₀) or 50% stimulation of P5C reductase activity, KI values and their confidence intervals were estimated by non-linear regression analysis using Prism 6. Catalytic constants were calculated from Vmax values taking into account a homodecameric composition of the native holoenzyme, each monomer having a molecular mass of 29,670 Da. Gel permeation chromatography was performed by injecting 100 μL aliquots of the purified protein onto a Superose 12 HR 10/30 (Pharmacia) column that had been equilibrated with 50 mM Tris-HCl buffer, pH 7.75, containing 250 mM NaCl. Elution proceeded at a constant flow of 0.5 mL min⁻¹, with collection of 0.5-mL fractions, while monitoring the eluate at 280 nm (HPLC Detector 432, Kontron). Molecular weight markers for column calibration (Pharmacia) were bovine thyroid thyroglobulin (669 kDa), horse spleen ferritin (440 and 960 kDa), bovine liver catalase (232 kDa), rabbit muscle aldolase (158 kDa), and BSA (67 and 268 kDa). Three runs were carried out for each marker, and six runs for the purified protein. Isoelectric focusing was performed as described previously (Forlani et al., 1997), with ampholytes within the pH 3.5–10 range (Pharmacia); pI markers (Sigma) were bovine milk β-lactoglobulin A (pI 5.1), bovine erythrocyte carbonic anhydrase II (5.4 and 5.9), and bovine erythrocyte carbonic anhydrase I (6.6). After the run, individual tracks were cut from the gel and either sliced into 5-mm segments for the determination of pH, or stained for protein as above.

Protein Production for Crystallization

The coding sequence of O. sativa P5C reductase was subcloned into vector pMCSG68 according to the standard protocol described previously (Eschenfeldt et al., 2013). OsP5CR was overexpressed in BL21 Gold E. coli cells (Agilent Technologies). The bacteria were cultured with shaking at 210 rpm in Lysogeny Broth supplemented with 150 μg mL⁻¹ ampicillin at 37 °C until the OD600 reached 1.0. The temperature was then lowered to 18 °C and IPTG was added to a final concentration of 0.5 mM. The culture was grown for 18 h and the cells were pelleted by centrifugation at 4 °C. Bacteria from a 1 L culture were resuspended in 35 mL of binding buffer [50 mM Tris-HCl pH 8.0, 500 mM NaCl, 20 mM imidazole, 1 mM Tris(2-carboxyethyl)phosphine (TCEP)] and stored at −80 °C. The samples were thawed and the cells were disrupted by sonication using bursts of a total duration of 5 min, with appropriate intervals for cooling. Cell debris was pelleted by centrifugation at 18,000 × g for 30 min at 4 °C. The supernatant was applied to a column packed with 8 mL of HisTrap HP resin (GE Healthcare) connected to a VacMan (Promega), and the chromatographic process was accelerated with a vacuum pump. After binding, the column was washed five times with 40 mL of binding buffer, and the His₆-tagged protein was eluted with 20 mL of elution buffer (50 mM Tris-HCl pH 8.0, 500 mM NaCl, 300 mM imidazole, 1 mM TCEP). TEV protease (2 mg) was added to cleave the His₆-tag, and the sample was immediately transferred to a dialysis tube. The dialysis was carried out overnight at 4 °C against buffer lacking imidazole. The solution was again mixed with HisTrap HP resin to remove the His₆-tag and the His₆-tagged TEV protease.
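In the study the non-linear regressions were computed with Prism 6; as an open-source equivalent, the following sketch fits the Michaelis-Menten equation with scipy to obtain KM and Vmax. The data points are invented, chosen only to be consistent with the NADPH parameters reported in the Results (Vmax ≈ 12 μkat mg⁻¹, KM ≈ 49 μM).

import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    # v = Vmax * [S] / (KM + [S])
    return vmax * s / (km + s)

# Hypothetical NADPH concentrations (μM) and initial rates (μkat mg^-1)
s = np.array([50, 100, 150, 200, 250, 300, 350], dtype=float)
v = np.array([6.1, 8.0, 9.1, 9.6, 10.1, 10.3, 10.5])

(vmax, km), cov = curve_fit(michaelis_menten, s, v, p0=[12.0, 50.0])
vmax_err, km_err = np.sqrt(np.diag(cov))
print(f"Vmax = {vmax:.1f} ± {vmax_err:.1f} μkat mg^-1")
print(f"KM   = {km:.0f} ± {km_err:.0f} μM")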
The flow-through, containing rice P5C reductase, was concentrated to 4 mL and applied onto a HiLoad Superdex 200 16/60 column (GE Healthcare) equilibrated with 50 mM Tris-HCl buffer, pH 8.0, containing 200 mM NaCl and 1 mM TCEP. The size exclusion chromatography yielded a homogeneous protein fraction.

Crystallization, Data Collection, and Structure Solution

The sample was concentrated using Amicon concentrators (Millipore) to 14 mg mL⁻¹, as determined by measuring the absorbance at 280 nm. Crystallization screening was performed using a robotic sitting-drop vapor diffusion setup (Mosquito). Manual optimization using hanging drops gave the following final conditions: 100 mM Tris-HCl, pH 8.5, 200 mM MgCl₂, 18% polyethylene glycol 8000. The crystallization drop was composed of 4 μL of protein and 2 μL of the reservoir solution. Needle-shaped crystals appeared after 3 days at 19 °C. The crystals were washed with the reservoir solution supplemented with 20% glycerol as a cryo-protectant and vitrified in liquid nitrogen. The diffraction data were collected at the 22-ID SER-CAT beamline at the Advanced Photon Source, Argonne, USA. The diffraction images were processed with XDS (Kabsch, 2010). The structure was solved by molecular replacement in Phaser (McCoy et al., 2007) using a homology-based model of rice P5C reductase prepared with the Swiss-Model server (Biasini et al., 2014). The structure of the human homolog retrieved from the Protein Data Bank (PDB ID: 2izz) served as the template. The protein model was built using Phenix AutoBuild (Terwilliger et al., 2008). Statistics of data collection, processing, and refinement are summarized in Table 1 (values in parentheses correspond to the highest resolution shell; Rmeas = redundancy-independent R-factor; Diederichs and Karplus, 1997).

Functional Properties of Rice P5C Reductase

The cDNA of the only gene coding for P5C reductase in the japonica rice genome was subcloned into the expression vector pET151 and expressed in E. coli. The N-terminal addition of a stretch of six His residues (Supplementary Figure S1) and the adoption of a stepwise elution protocol allowed the attainment of homogeneous preparations in a single step (Supplementary Figure S2). The presence of the His₆-tag did not affect the enzymatic activity, since virtually identical results were obtained in all experiments before and after cleavage of the purified protein with TEV protease. Enzyme preparations were highly stable; if sterilized by filtration (0.22 μm pore size), no detectable loss of activity was evident after 3 months of storage at 4 °C. The maximal specific activity of the recombinant enzyme strongly depended on the electron donor used. With NADPH, a Vmax of about 12 μmol s⁻¹ (mg protein)⁻¹ was found, corresponding to a catalytic constant of 350 s⁻¹ per monomer (Table 2). With NADH instead of NADPH, a strikingly higher Vmax value was calculated, one that would result in more than 4,500 catalytic events in 1 s for a single subunit. However, the corresponding affinity was conversely lower (apparent KM values of 49 and 806 μM for NADPH and NADH, respectively), and saturating conditions were not obtained even at the highest NADH concentration tested (Figure 1). As a consequence, the estimated Vmax with P5C as the variable substrate was lower than that with NADH (Table 2), because the latter was still limiting under standard assay conditions. The use of NADH as the co-substrate also resulted in a 10-fold higher apparent KM value for P5C.
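The conversion from Vmax to the catalytic constant quoted above follows from the monomer mass used in the kinetic calculations (29,670 Da for the tagged protein). A minimal sketch of the arithmetic, which also reproduces the catalytic efficiency discussed later:

MONOMER_DA = 29670.0                  # g mol^-1, i.e., μg μmol^-1

def kcat_from_vmax(vmax_umol_s_mg):
    # Vmax [μmol s^-1 mg^-1] × monomer mass [μg μmol^-1] / 1000 [μg mg^-1]
    return vmax_umol_s_mg * MONOMER_DA / 1000.0   # s^-1 per monomer

kcat_nadph = kcat_from_vmax(12.0)     # ≈ 356 s^-1, cf. ~350 s^-1 in the text
km_nadph_M = 49e-6                    # apparent KM for NADPH, M
print(kcat_nadph / km_nadph_M)        # ≈ 7.3e6 M^-1 s^-1, cf. the Discussion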
While the NADPH-dependent activity was linear with time over the entire assay period, kinetics with NADH showed a progressive reduction of the catalytic rate as the reaction proceeded (data not shown). To verify whether this effect might be due to product inhibition, the impact of increasing levels of proline, NADP⁺ and NAD⁺ on the initial reaction velocity was assessed. In the range of concentrations tested, NAD⁺ was substantially ineffective (results not presented). On the contrary, both proline and NADP⁺ were able to inhibit the activity of P5C reductase (Figures 2A,B). However, a remarkably different sensitivity was found depending on the pyridine nucleotide used. With NADPH as the electron donor, proline was inhibitory only at concentrations exceeding 100 mM, and NADP⁺ was effective only if added at levels higher than that of NADPH in the assay mixture. On the contrary, the NADH-dependent activity was strongly reduced by the presence of either proline at concentrations in the range from 5 to 100 mM, or micromolar levels of NADP⁺, with IC₅₀ values of 48 mM and 37 μM, respectively. To obtain further information, a thorough kinetic analysis was performed.

[Figure 1 legend (fragment): Invariable substrates were fixed at the same levels as in the standard assay. At least three replicates were carried out for each concentration, and mean values ± SE are presented. Plotting of the data from Michaelis-Menten graphs as Lineweaver-Burk double-reciprocal plots allowed the calculation of affinity constants and Vmax values for the NADH- and NADPH-dependent reactions (Table 2).]

[Figure 2 legend: Product inhibition of rice P5C reductase. The effect of increasing concentrations of proline (A) and NADP⁺ (B) on the activity of the enzyme was determined using either NADH or NADPH as the electron donor. Results are expressed as percent of the activity in the absence of supplements (mean ± SE, three replicates). Non-linear regression gave IC₅₀ values for the NADH-dependent reaction of 47.7 ± 1.6 mM and 36.7 ± 1.1 μM for proline and NADP⁺, respectively; the NADPH-dependent activity was much less sensitive, with estimated IC₅₀ values of 3.44 ± 1.51 M and 3.78 ± 0.12 mM. To investigate the mechanisms of product inhibition, the affinity toward NADH (C,D) or L-P5C (E,F) was calculated in the presence of increasing concentrations of L-proline (C,E) or NADP⁺ (D,F). Lines converging on the y axis account for a competitive mechanism with respect to P5C for the inhibition by proline (E) and with respect to NADH for NADP⁺ (D), with KI values of 20.2 ± 1.0 mM and 7.73 ± 0.69 μM, respectively. Lines converging on the x axis suggest a non-competitive mechanism for proline with respect to NADH (C), with KI equal to 34.7 ± 1.2 mM. On the contrary, parallel lines for different NADP⁺ concentrations are consistent with an uncompetitive inhibition with respect to P5C (F), with a KI value of 8.48 ± 0.56 μM.]

When the effects of proline and NADP⁺ were evaluated at varying concentrations of either substrate, Lineweaver-Burk plots were consistent with an inhibition mechanism of competitive type with respect to P5C for proline (Figure 2E) and with respect to NADH for NADP⁺ (Figure 2D), and of non-competitive type for proline with respect to NADH (Figure 2C).
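The inhibition types inferred from these plots correspond to standard modifications of the Michaelis-Menten rate law; the following sketch encodes them, so that the apparent KM and Vmax behavior described above can be reproduced numerically. The functions are textbook forms, not taken from the paper, and the example values are those reported in Figure 2.

def v_competitive(s, i, vmax, km, ki):
    # Competitive (proline vs. P5C; NADP+ vs. NADH): apparent KM increases
    return vmax * s / (km * (1.0 + i / ki) + s)

def v_noncompetitive(s, i, vmax, km, ki):
    # Non-competitive (proline vs. NADH): apparent Vmax decreases
    return (vmax / (1.0 + i / ki)) * s / (km + s)

def v_uncompetitive(s, i, vmax, km, ki):
    # Uncompetitive (NADP+ vs. P5C): Vmax and KM decrease by the same factor
    return vmax * s / (km + s * (1.0 + i / ki))

# Example: relative NADH-dependent rate at 500 μM NADH with 20 mM proline,
# using KM = 806 μM and KI = 34.7 mM (Figure 2C); Vmax normalized to 1:
print(v_noncompetitive(500.0, 20e3, 1.0, 806.0, 34.7e3))  # ≈ 0.24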
Interestingly, for NADP⁺ an inhibition of uncompetitive type with respect to P5C was found (Figure 2F), showing that P5C binding is required to allow NADH to enter the active site of the enzyme. KI values for the NADH-dependent reaction were 20 mM and 8 μM for proline and NADP⁺, respectively.

Effects of Anions and Cations on the Activity of Rice P5C Reductase

Because inconsistent results have been described in the literature concerning the sensitivity of plant P5C reductase to salts, the effect of increasing concentrations of NaCl and MgCl₂ on the catalytic rate of the rice enzyme was assessed. Once again, strikingly different properties were evident depending on the electron donor (Figures 3A,B). With NADH as the co-factor, an inhibition was found with both salts at levels exceeding 20–50 mM. With NADPH, a remarkable stimulation was on the contrary shown at concentrations in the range of 5–200 mM NaCl and 0.1–70 mM MgCl₂. Above these thresholds, the activity returned to control rates and, only in the case of MgCl₂, was inhibited at levels exceeding 100 mM. To ascertain whether these effects could be ascribed to anions or cations, the effects of other chlorides and other sodium salts were also investigated. Results were plotted as a function of Cl⁻ (Figures 3C,D) or Na⁺ (Figures 3E,F) concentration when NADH (Figures 3C,E) or NADPH (Figures 3D,F) was used as the co-factor. Almost overlapping patterns for several chlorides and dissimilar patterns for various sodium salts suggest that anions are the main cause of the inhibition of the NADH-dependent activity. Conversely, different patterns with different chlorides but substantially overlapping patterns with various sodium salts were consistent with a stimulation of the NADPH-dependent activity by cations, whereby divalent cations were remarkably more effective than monovalent cations (Figure 3D).

[Figure 3 legend (fragment): the effects of adding increasing concentrations of NaCl (A) or MgCl₂ (B) to the reaction mixture were assessed using either 1 mM NADH or 0.5 mM NADPH as the electron donor. To minimize the carry-over of chloride anions from the purified preparation of the co-substrate, L-P5C levels were fixed at 0.2 mM, resulting in less than 10 mM Cl⁻ in the standard mixture. Results are expressed as percent of controls assayed in the absence of added salts (mean ± SE, three replicates). Non-linear regression gave IC₅₀ values for the NADH-dependent reaction of 165 ± 13 mM and 43.8 ± 3.9 mM for NaCl and MgCl₂, respectively. To discriminate whether anions or cations were causing the striking stimulation of the NADPH-dependent activity and the inhibition of the NADH-dependent reaction, similar experiments were also performed with increasing concentrations of KCl, NH₄Cl, and CaCl₂ (C,D), or with NaNO₃, Na₂SO₄, and NaH₂PO₄/Na₂HPO₄ (in a molar ratio of 0.292:1, resulting in a pH value of 7.75) (E,F). Results were plotted together as a function of chloride or sodium ion concentration, respectively. Almost overlapping patterns suggest that cations in the range 10⁻³–10⁻¹ M stimulate P5C reductase activity when it uses NADPH as the substrate, whereas anions inhibit the reaction if NADH acts as the electron donor.]

Based on these data, P5C reductase activity therefore appears strongly dependent on both the use of NADH vs. NADPH as the electron donor and the presence of reaction products and salts.
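The salt IC₅₀ values quoted in the Figure 3 legend can be obtained by fitting a standard dose-response curve. A sketch with invented data points, chosen only to be roughly consistent with the reported NaCl IC₅₀ of ~165 mM for the NADH-dependent reaction:

import numpy as np
from scipy.optimize import curve_fit

def dose_response(c, ic50, hill):
    # Residual activity (% of control) at inhibitor concentration c
    return 100.0 / (1.0 + (c / ic50) ** hill)

c = np.array([10, 30, 100, 165, 300, 600, 1000], dtype=float)  # mM NaCl
act = np.array([97, 88, 65, 50, 33, 18, 9], dtype=float)       # % activity, invented

(ic50, hill), _ = curve_fit(dose_response, c, act, p0=[150.0, 1.0])
print(f"IC50 ≈ {ic50:.0f} mM")   # ≈ 165 mM, cf. NaCl on the NADH-dependent activity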
To obtain further information, the activity of the purified enzyme was measured in the presence of substrate and effector concentrations similar to those reported to exist within plant cells. The results are summarized in Figure 4. Despite the fact that Vmax with NADH is more than 10-fold higher than that with NADPH, very similar rates were obtained with either co-factor added to the reaction mixture at physiological levels. The activities were not additive, suggesting a preferential use of NADPH. Notwithstanding this, the rate in the presence of both dinucleotides was slightly higher than that with NADPH alone, showing that NADH may also contribute to the overall velocity. However, when realistic levels of NADP⁺ were also present, the NADPH-dependent activity was reduced to 50%, whereas the NADH-dependent reaction was almost abolished. The presence of 15 mM proline was more inhibitory than previously observed under standard assay conditions, most likely because of the lower concentration of P5C. Interestingly, the presence of ion concentrations similar to those reported in the cell under normo-osmotic conditions enhanced the NADPH-dependent activity while inhibiting that with NADH. When all variables were included, the enzymatic activity with NADPH corresponded to about 40% of the maximal rate, whereas that with NADH was completely abolished. Consistently, the activity level with both nucleotides was, under these conditions, not significantly different from that with NADPH alone.

[Figure 4: Activity of rice P5C reductase in the presence of physiological concentrations of substrates, products, and ions. Specific activity levels of the purified enzyme were measured under conditions simulating substrate and effector levels inside the plant cell. Available literature data in nmol (g fresh weight)⁻¹ were converted into molar concentrations by assuming that in cultured plant cells the cytosol may account for about 10% of fresh weight. Assays were therefore carried out in the presence of the following concentrations: NADPH 50 μM, NADH 30 μM, NADP⁺ 250 μM, NAD⁺ 160 μM (Hayashi et al., 2005); L-P5C 100 μM (Forlani et al., 2013); Pro 15 mM (Forlani et al., 2015a); 5 mM NaCl, 20 mM KH₂PO₄/K₂HPO₄, 20 mM K₂SO₄, 5 mM NH₄NO₃, and 5 mM MgSO₄, resulting in 76 mM K⁺, 25 mM SO₄²⁻, 20 mM H₂PO₄⁻/HPO₄²⁻, 5 mM Na⁺, 5 mM Mg²⁺, 5 mM NH₄⁺, 5 mM NO₃⁻, and 5 mM Cl⁻ (Lutts et al., 1996; Taiz and Zeiger, 2010). Presented values are means ± SE over eight replicates; activities under saturating substrate conditions are quoted from Table 2.]

Structural Characterization of Rice P5C Reductase

Under denaturing conditions, P5C reductase migrated as a single band (Supplementary Figure S2) to a position corresponding to a molecular mass (30.1 ± 0.4 kDa) compatible with the mass deduced from the nucleotide sequence of the gene (28,624 Da; Supplementary Figure S1). To obtain an estimate of its relative mass under non-denaturing conditions, the TEV-cleaved protein was subjected to gel filtration chromatography, and its retention pattern was compared with that of molecular weight markers (Figure 5). The results indicated a native molecular mass of 401 ± 19 kDa, which would be consistent with an oligomer composed of 14 identical subunits. However, void volumes within the protein structure as well as any deviation from a globular shape strongly affect protein retention, and molecular masses inferred from retention patterns may be subject to significant errors (Erickson, 2009).
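For reference, the following sketch shows how such a native-mass estimate is typically derived from a gel permeation run: a calibration line of log10(molecular mass) against elution volume is fitted to the markers listed in the Methods, and the unknown is interpolated. Only the marker masses come from the text; the elution volumes are invented for illustration.

import numpy as np

# Main peaks of the calibration markers (thyroglobulin ... BSA), kDa
markers_kda = np.array([669, 440, 232, 158, 67], dtype=float)
elution_ml = np.array([9.2, 9.9, 11.0, 11.7, 13.2])     # hypothetical volumes

slope, intercept = np.polyfit(elution_ml, np.log10(markers_kda), 1)

ve_p5cr = 10.05                                          # hypothetical P5CR elution
native_kda = 10 ** (slope * ve_p5cr + intercept)
print(f"native mass ≈ {native_kda:.0f} kDa")             # ≈ 406 kDa, cf. 401 ± 19
print(f"subunits ≈ {native_kda * 1000 / 28624:.1f}")     # deduced monomer: 28,624 Da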
Indeed, a decameric composition was found when the crystal structure of O. sativa P5C reductase was determined at a resolution of 3.40 Å. The structure was determined by the molecular replacement method with a search probe created by homology modeling based on the decameric human structure (PDB ID: 2izz; Forlani et al., 2015b). Analysis of the protein crystal solvent content, the so-called Matthews coefficient calculation, indicated that two decameric assemblies are located in the asymmetric unit of the crystal lattice with P2₁ space group symmetry. Rice P5C reductase can therefore be described as a pentamer of dimers, which has a doughnut-like shape with dimensions of 112 Å × 85 Å (Figure 6). This arrangement is formed by a mutual exchange of the C-terminal domains between two neighboring protein subunits, while in contrast the nucleotide-binding (N-terminal) domains do not interact with each other and point away from the hollow core. The N-terminal domains are flexible, and only 11 out of 20 (in the two decamers) were defined well enough in the electron density maps to be reliably modeled. The hinge that allows the independent movement of the N-terminal domains is predicted to be around residues 175–180. The protein chain for which the electron density maps were of the best quality was modeled and copied by non-crystallographic symmetry (NCS) operations, and refinement was performed with secondary structure restraints for all N-terminal domains. This approach allowed us to obtain the model of a full decamer (Figure 6). The electron density for nine (out of 20 in the two decamers) N-terminal domains was poor, which is evident from the elevated R factors of the model, R/Rfree = 25/34%.

[Figure 5: Apparent molecular mass of rice P5C reductase under native conditions. Aliquots (100 μL) of the purified protein, adjusted to 10 mg mL⁻¹, were subjected to gel permeation chromatography on a Superose 12 HR 10/30 (Pharmacia) column equilibrated with 50 mM Tris-HCl buffer, pH 7.75, containing 250 mM NaCl. A molecular mass of 401 ± 19 kDa was estimated for P5C reductase; based on these results, the enzyme could consist of 13–14 subunits. Identical elution patterns were obtained at NaCl concentrations increasing up to 1 M (data not shown).]

Plant P5C Reductases Share Peculiar Properties that Can Be Functional to the Multiple Roles Hypothesized for Proline Metabolism in the Cell

Here we report an exhaustive characterization of rice P5C reductase, affinity-purified after heterologous expression in E. coli. Under saturating conditions, the purified enzyme showed a specific activity (about 160 μkat mg⁻¹) significantly higher than those of the enzymes isolated from other plant sources [8.5 μkat mg⁻¹ for barley (Krueger et al., 1986), 4.4 μkat mg⁻¹ for soybean (Chilson et al., 1991) and 5.1–11.3 μkat mg⁻¹ for spinach (Murahama et al., 2001)]. However, this exceptionally high turnover rate depended on the use of NADH as the electron donor. With NADPH instead, a Vmax value of about 12 μkat mg⁻¹ was found. In any case, even with NADPH the resulting catalytic efficiency (kcat/KM = 7.09 × 10⁶ M⁻¹ s⁻¹) is significantly higher than that reported for other enzymes of amino acid metabolism. For instance, a kcat/KM = 4.2 × 10⁴ M⁻¹ s⁻¹ has been found for E. coli γ-glutamyl kinase, the enzyme that catalyzes the first step of proline biosynthesis in bacteria (Pérez-Arellano et al., 2006).
That of P5C dehydrogenase, the enzyme that oxidizes P5C back to glutamate, ranged from 3.4 × 10⁵ to 4.2 × 10⁵ M⁻¹ s⁻¹ in rat (Small and Jones, 1990) and potato (Forlani et al., 1997), respectively. No information is available regarding this parameter for plant P5C reductases other than the similarly high value (26.7 × 10⁶ M⁻¹ s⁻¹) found for the enzyme from A. thaliana (Giberti et al., 2014). The enzyme from the human pathogen S. pyogenes also showed a high kcat/KM value (28 × 10⁶ M⁻¹ s⁻¹; Petrollino and Forlani, 2012). The extremely high efficiency of the reaction catalyzed by P5C reductase may be a consequence of the need to rapidly convert even small amounts of P5C into proline, in order to avoid possible cytotoxic effects caused by P5C accumulation (Deuschle et al., 2004; Miller et al., 2009; Senthil-Kumar and Mysore, 2012). This seems consistent with the relatively high KM(app) values found for L-P5C (Table 2), suggesting that the activity of the enzyme is far from saturated at physiological concentrations of the substrate: any increase in P5C concentration would therefore result in an equivalent increase in the rate of its utilization by P5C reductase. The significantly higher maximal reaction rate observed with NADH most likely depends on a faster NAD⁺ release from the active site. Consistently, the NADH-dependent activity was strikingly inhibited by micromolar concentrations of NADP⁺. If both pyridine nucleotides were made available, the reaction proceeded at rates only slightly higher than those in the presence of NADPH alone, confirming a preferential use of NADPH but also suggesting that both co-factors could be used alternately. Such substrate ambiguity, together with the inhibition by oxidized pyridine dinucleotides, has been interpreted as functional to the proposed role of proline synthesis in maintaining a favorable NADP⁺/NADPH ratio under stress conditions, as well as in regenerating NADP⁺ in photosynthetic tissues (Sharma et al., 2011). From this perspective, in humans the non-allosterically regulated isozyme specifically expressed in erythrocytes, PYCR1, would serve primarily for NADP⁺ generation, whereas the other, proline-sensitive isozyme, ubiquitously expressed in the other tissues, would be devoted to proline synthesis (Merrill et al., 1989). Since only a single enzyme form is present in most plants, both functions might be ensured through the emerging complex pattern of substrate preference and product inhibition, in which proline and NADP⁺ inhibit only the NADH-dependent reaction. The patterns in the Lineweaver-Burk plots (Figure 2F) accounted for an inhibition by NADP⁺ of uncompetitive type with respect to P5C. This strengthens earlier results obtained with phosphonate inhibitors of plant P5C reductase (Forlani et al., 2007, 2008) and provides experimental evidence supporting an ordered substrate binding, previously hypothesized only on the basis of the crystal structure of the bacterial enzyme (Nocek et al., 2005). The present data therefore suggest that P5C binds before NADPH. Since in the case of most NAD(P)H-dependent reductases the coenzyme binds before the substrate (Sanli et al., 2003), this is an unusual feature that might depend on the cyclic structure of both substrate and product. Most interestingly, the use of either co-factor had drastic effects on the susceptibility of P5C reductase to the presence of salts.
A twofold stimulation by 100 mM KCl or 10 mM MgCl₂ had been reported for the enzyme partially purified from pea (Rayapati et al., 1989), but in that case the activity had been evaluated with NADH as the electron donor. Conversely, the two isozymes purified from spinach were inhibited by NaCl in the 100–500 mM range, and by MgCl₂ at lower concentrations, when assayed using NADPH (Murahama et al., 2001). These contrasting results could imply a functional diversity among plant P5C reductases. However, the experiments performed in this study on the enzyme from rice, a monocot, yielded patterns overall similar to those previously obtained with the enzyme from the dicot A. thaliana (Giberti et al., 2014). Therefore, it appears more likely that such differences depend on non-uniform experimental conditions, e.g., the inclusion of different MgCl₂ levels in the standard assay mixture. Most importantly, in previous studies the possibility that significant levels of chloride and Na⁺ or K⁺ ions might be present in the standard reaction mixture as a consequence of P5C buffering has been largely underestimated. P5C is a labile compound; when synthesized by the periodate oxidation of hydroxylysine, it is purified by cation-exchange chromatography in 1 M HCl (Williams and Frank, 1975). The resulting low pH stabilizes the compound, which is routinely stored under these conditions and neutralized just before the enzymatic assay. If neutralization were achieved with sodium or potassium hydroxide, concentrations as high as 100 mM NaCl or KCl would be present in the "untreated controls." In the present study, the adoption of strictly controlled assay conditions ensured that the chloride ion concentration in controls was never higher than 15 mM, and no inorganic cations were present. Going one step further, the effect of various cations and anions was investigated. Our results clearly show that anions are the cause of the inhibition of the NADH-dependent reaction of P5C reductase, with sulfates being inhibitory at concentrations as low as 1 mM. On the contrary, the stimulation of the NADPH-dependent activity seems to depend on the presence of cations, and divalent cations were effective at lower doses than monovalent ions (Figure 3D). Divalent anions at high concentration had detrimental effects, since the Na⁺-dependent stimulation of enzyme activity was lower with sodium salts of divalent anions than with sodium salts of monovalent anions (Figure 3F). Considering the concentrations at which these effects were evident in vitro, in several cases it seems likely that they can occur in vivo and influence the resulting rate of proline synthesis.

Substrate Affinity, Product Inhibition, and Ion Effects May Unravel Substrate Ambiguity, and Represent a Likely Mechanism for in vivo Modulation of P5C Reductase Activity

Taking into account the main, if not exclusive, cytosolic localization of P5C reductase (Funck et al., 2012), the question remains as to which may be the physiological electron donor. The mechanisms of product inhibition shown in the present study for the NADH-dependent activity seem to substantially prevent the use of the non-phosphorylated co-factor.
Indeed, in the presence of NAD(P)(H) and proline concentrations similar to those reported for rice cells, the activity with both electron donors did not significantly differ from that with NADPH alone, probably because the inhibitory effect of proline was amplified by the low physiological concentration of P5C (Forlani et al., 2013), and that of NADP⁺ by the high NADP⁺/NADH ratio (Hayashi et al., 2005; Figure 4). Notwithstanding this, the ability to utilize NADH could be useful in special circumstances. The activation of proline synthesis may maintain a favorable redox balance inside the cell. On the other side, the complex pattern of co-factor preference, product inhibition, and salt effects may contribute to a fast activation of proline synthesis under salt stress conditions. Following exposure to hyperosmotic stress, the oxidative pentose phosphate pathway (OPPP) is rapidly induced, leading to cytosolic NADPH production (Baxter et al., 2007). Moreover, stress-induced inward Ca²⁺ fluxes are able to activate calmodulin-modulated NAD kinase isozymes that in turn increase the NADP(H)/NAD(H) ratio. A higher NADPH availability would enhance the activity of P5C reductase, which shows a KM(app) for NADPH very close to the intracellular concentration of the dinucleotide, and raise the carbon flux through the proline biosynthetic route. Consistently, A. thaliana plants overexpressing NAD kinase 2 showed increased levels of free proline. In the meantime, the activity of the OPPP would lower the NADP⁺/NADH ratio, relieving in part the inhibition of the NADH-dependent activity. Over a longer period, an increase in the cytosolic Na⁺ concentration, which can reach 60 mM (Anil et al., 2007), would further enhance the NADPH-fueled reaction. In this way, P5C reductase would be able to respond to wide fluctuations of P5C synthesis by the P5C synthetase isozymes without the need for transcriptional control. Changes in the levels of other ions could also modulate P5C reductase activity. For instance, the magnesium ion concentration in rice cells is estimated to range between 1 and 10 mM (Hayatsu et al., 2014), and it was found to increase up to three-fold in leaves of salt-stressed rice plants (Bertazzini et al., 2012). Such fluctuations would positively affect the catalytic rate of rice P5C reductase using NADPH as the electron donor. In any case, the overall picture supporting the occurrence of differential effects of salts on the activity of P5C reductase depending on the electron donor used (Table 3) allows previous contradictory findings to be explained. For instance, the translational inhibition of AtP5CR under stress conditions (Hua et al., 2001) and the inhibition of spinach P5C reductase by salts in the 10⁻²–10⁻¹ M range (Murahama et al., 2001) are results that, in the absence of post-translational mechanisms modulating enzyme activity and of a strong preference in vivo for NADPH, respectively, would be inconsistent with stress-induced proline accumulation.

The Three-Dimensional Structure of P5C Reductase Is Conserved Across All Kingdoms of Life, but the Rice Enzyme Reveals Dynamic Movements

Here, we also report the first crystal structure of a plant P5C reductase. Our analysis was limited to basic structural studies owing to the fact that only low-resolution (3.40 Å) data were obtained, and some problems hindered the refinement of the structure, influencing the quality of the final model.
Due to the high R-factors and the lack of electron density for a substantial part of the protein, we decided not to deposit the structure in the PDB. Nevertheless, the structural results are solid enough to resolve the uncertainties about the oligomeric state of plant P5C reductase. In previous studies, the migration of the native protein during gel permeation chromatography led to a rough estimate of 10–12 monomers in spinach (Murahama et al., 2001) and of 12–14 monomers in A. thaliana (Giberti et al., 2014). The rice enzyme also showed an elution profile consistent with a 14-mer (Figure 5). Based on the structural information obtained, a decameric arrangement might therefore be assumed for all three plant proteins. However, even though the bacterial, the human, and the plant enzymes form very similar decameric arrangements, the structure of rice P5C reductase reveals dynamic movements of the domains. During model building and refinement, several of the dinucleotide-binding domains had unclear or missing density and could not be reliably modeled, which suggests the presence of mobile elements in the crystal (Figure 6). This contrasts with previous studies on bacterial (S. pyogenes and N. meningitidis) representatives, which have shown almost identical conformations of the subunits in their substrate-free and substrate-bound structures. Because no conformational changes were observed in these proteins, it was hypothesized that they operate by a lock-and-key mechanism (Nocek et al., 2005). Similarly, no significant conformational changes were reported for the extensively studied human enzyme (Meng et al., 2006). One possible explanation of the lack of movement could be the formation of crystal lattice contacts between domains and their consequent stabilization. In fact, an inspection of the crystal contacts in the case of human P5C reductase confirmed the presence of interactions between molecules that lock the previously mentioned hinge between the two domains of the monomers (Salemme et al., 1988). The presence of conformational plasticity implies the existence of regulation mechanisms. Many allosteric systems contain semi-rigid domains or subunits interacting via flexible regions. This design allows the propagation of local events over a long distance to affect activities elsewhere (Cui and Karplus, 2008). Further biochemical and biophysical studies are required to investigate the dynamic nature of P5C reductases. However, it appears that these enzymes might be more dynamic than previously thought.

Acknowledgments. Use of the Advanced Photon Source, where the diffraction data were collected, was funded by the US Department of Energy, Office of Science, Office of Basic Energy Sciences under Contract No. W-31-109-Eng-38. MB was the recipient of a DAAD (German Academic Exchange Service) fellowship supporting a research stay in DF's laboratory.
On Branched Chain Processes, the Laws of Development of Which Are Expressed by Numerical Sequences Like Fibonacci Numbers, a New Look at Their Nature

Branched chain chemical reactions represent a special class of chemical transformations of matter, for the discovery and experimental-theoretical development of which N. N. Semenov and C. N. Hinshelwood were awarded the Nobel Prize in 1956. In nature, such processes are widespread. Objective. To investigate the nature of various numerical sequences of the Fibonacci type and to find out under what conditions they can reflect (express) the patterns of development of branched chain processes. In this work, the position is formulated that branched chain chemical reactions are a particular case of branched chain processes of any nature in different spheres, including the biological one. It is shown that many branched chain processes can generate numerical sequences of the Fibonacci, Lucas, Shannon and other types, which reflect the dynamics of their development. All these numerical sequences share the property that their formation is governed by a general recurrent law, which has been the subject of research by many well-known mathematicians. For each of these numerical sequences, other laws alternative to the recurrent one were established; however, all of them were of a particular nature, agreeing with the recurrent law only in the case of one specific sequence. Comparative analysis of many numerical sequences allowed us to find a universal law common to all types of sequences, which we term the law of "doubling with subtraction." For all numerical series whose formation follows the recurrent law, the law of "doubling with subtraction" is equally valid. The opposite is not true, since there are numerical sequences that obey only the law of "doubling with subtraction," while the recurrent law does not hold for them. This means that the new law is more fundamental: it is, in fact, the primary law, and the recurrent law is secondary. Significant differences also exist in the consequences of these two laws. For example, the increments of sequences formed according to the recurrent law and according to the law of "doubling with subtraction" differ fundamentally in their mathematical expression, although both lead, in different ways, to the same result, namely to Φ = 1.618... as the serial numbers of the sequence terms tend to infinity. In this work, each of these laws was put in correspondence with a branched chain biological process unique to it. In the case of the law of "doubling with subtraction," the process proceeded with termination of chains and with characteristic parameters: a chain length of three links and a branching factor of 2. In the case of the recurrent law, the process proceeded without chain termination, with infinite chain length and a branching factor of 2, and with some delay limiting the branching. It seems interesting that such different branched chain processes of different character are described by the same Fibonacci sequence. The processes of branched chain character corresponding to the sequences of Lucas, Shannon, and others are also discussed. Conclusions. It follows from our work that all the formal mathematics, that is, all the mathematical features and patterns related to the Fibonacci sequence, is just a description of the features and patterns inherent in the branched chain processes that actually produce this and other sequences.
The Usage of the Numerical Fibonacci Sequence in Biology and Medicine

Fibonacci (Leonardo of Pisa) was a well-known Italian mathematician, about whom G. Polya, a prominent modern mathematician and educator, said: "It was an outstanding Italian mathematician, perhaps the most brilliant of all mathematicians of the whole European Middle Ages" [1]. Among the mathematical problems that Fibonacci (FB) solved, there was one about how many pairs of rabbits might be born from a single pair under certain conditions, and he gave a solution to this problem in the form of a numerical sequence: 1, 1, 2, 3, 5, 8, 13, 21, ... etc. This numerical sequence was later called the Fibonacci numbers, the Fibonacci sequence, or simply FB. G. Polya himself used the Fibonacci sequence in his research. Besides, he paid special attention to the recurrent law of its formation, expressed in the form Fn = Fn−1 + Fn−2, n = 3, 4, 5, ..., with the initial conditions F1 = 1, F2 = 1. Polya, as a mathematician, was more interested in the mathematical side of this recurrent law. The increment of this numerical FB sequence, for n tending to infinity, has the value Fn/Fn−1 ≈ 1.618... That is why there are reasons to identify the concept of the FB numbers with the "golden ratio." The golden ratio is a rule about how parts and the whole must be related to each other in order to be in harmony, and for some reason this harmony is expressed by the irrational number 1.618... In the biological and medical literature, much attention is given to the use of the Fibonacci sequence, whose increment is connected with the golden ratio. According to this literature, works on the application of the Fibonacci sequence as a methodological basis for the first stage of clinical trials of various drugs, mainly anticarcinogenic agents, have begun to appear in large numbers in recent years. In [1] it is stated that the FB sequence and its modified version are the most used methodologies for determining the dose increments applied in the first phase of clinical drug trials. The same work gives an interesting explanation of why the FB sequence is so widely used: mainly, it was one of the first methods described for these purposes, and it is easy to understand for patients, investors and regulators (such as the Ethics Committee). Some authors believe that the FB sequence is preferable when the dose-toxicity curve is steep [2,3], according to animal toxicology. It is claimed that about 50% of current first-phase clinical trials are still conducted using the FB sequence, and the modified FB sequence is currently the most used [2]. It should be noted that the first phase of clinical trials of anticarcinogenic agents is the first stage of human testing and an important step in drug development. At this first, dose-orienting stage, clinical trials are conducted to determine the optimal recommended dose of a new compound for further testing in the second phase of trials [2]. For cytotoxic drugs, this dose corresponds to the highest dose associated with an acceptable level of toxicity, that is, to the maximum tolerated dose. From the point of view of efficiency, it is clearly important to move up to the maximum tolerated dose, and to do so quickly but safely, while it is unknown where the safe point lies. Therefore, the problem is to minimize the probability of giving patients doses that are too low or too high.
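As an illustration of how such a dose-escalation law works in practice, the following minimal Python sketch generates a schedule of dose levels. The exact multipliers of the "modified Fibonacci" scheme vary between protocols and are not specified in the works cited here; the commonly quoted sequence of decreasing increments (+100%, +67%, +50%, +40%, then +33% per later step) is assumed purely for illustration, and the starting dose is arbitrary.

def modified_fibonacci_doses(start_dose, n_levels):
    # Assumed multipliers; real protocols may use different values
    multipliers = [2.0, 1.67, 1.5, 1.4] + [1.33] * max(0, n_levels - 5)
    doses = [start_dose]
    for m in multipliers[: n_levels - 1]:
        doses.append(round(doses[-1] * m, 1))
    return doses

print(modified_fibonacci_doses(10.0, 7))
# [10.0, 20.0, 33.4, 50.1, 70.1, 93.2, 124.0]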
In the literature it is noted that the contradiction between safety and efficiency in the first phase of clinical trials is expected to be resolved with the help of new methods that are under development. In most oncological trials of the first phase it is claimed that the dose increments followed a modified FB sequence. At the same time, there is some dissatisfaction in the literature with dose escalation and the laws defining it. It is noted that even the terminology of the modified FB sequence method is vague and ill-defined, and there is no clarity about the nature of the FB sequence itself. Below, the questions of both their nature and what they reflect will be considered in detail. A law of dose escalation of the FB type or of another type should exist, at least for standardizing the conditions of first-phase testing, and we believe that our work will give a better understanding of their nature and semantic content. For example, let us mention several recent works on the application of the numerical FB sequence in first-phase clinical trials. In 2015, a large group of American researchers from various cancer research facilities in Texas (12 authors) published a work on the first phase of clinical trials of the drug Bi-shRNA STMN1 BIV against a hard-to-treat cancer type [4]. In the same year, a large group of South Korean authors published an article on the first clinical trials determining the pharmacokinetic characteristics in humans of DHP107, a new oral drug composed of lipid ingredients and paclitaxel [5]. The preparation DHP107 demonstrated efficiency, safety and pharmacokinetics comparable with intravenously applied paclitaxel as a second line of therapy in patients with advanced gastric cancer. A dose escalation law based on the modified FB sequence was used in both of these studies. The literature of recent years contains quite a lot of publications of this kind on the use of FB numbers to determine the necessary doses of medicines. In recent years, authors have also begun to focus on more fundamental things: the important role of Fibonacci numbers, and of the golden ratio associated with them, in the biological and medical spheres. Noteworthy is the work of Japanese authors "Two Golden Ratio indices in fragment-based drug discovery" [6]. Fragment-based drug discovery (FBDD) is a new scientific direction in which drugs are developed starting from fragments: low-molecular-weight ligands (~150 Da) from which highly active molecules with theoretically drug-like properties are assembled. It is a product of the latest scientific achievements and, in the opinion of some researchers, this direction seems promising [7]; the fact that the Fibonacci numerical sequence is involved in it arouses great interest. In that work, two new indices based on the golden ratio are proposed, which increase the effectiveness of drug development by the FBDD method. The authors selected thirty examples from a large number of literature sources on the successful use of FBDD and analyzed them. They analyzed the ratio of the number of heavy (non-hydrogen) atoms in the lead drug to the number of heavy atoms making up the original fragment (the scaffold part), and the same ratio of the number of heavy atoms between the scaffold part and the evolution part.
From the graphs of the frequency-of-occurrence distributions of these ratios, based on the thirty examples used, they saw a sharp peak of the maximum, corresponding to values of these ratios in the region of 1.618 in both cases. These results were then reinforced by thermodynamic data, both theoretical and experimental, obtained by the authors. In the same study, the authors discovered yet another circumstance related to the Fibonacci numbers: they found that the percentage of ligand efficiency, an important parameter used in FBDD, is determined by the value of Φ. Based on the data obtained, two indices were proposed which make the FBDD process more specific and increase its efficiency. Interestingly, based on an analysis of the merits and capabilities of this method, the study states that it opens a new era in drug development. Another interesting point of the work is that the authors wonder why the FB numbers and the golden ratio appear in the FBDD method, and they hope that this understanding will be achieved in the future. Another important example of the fundamental connection of biological and medical phenomena and processes with FB numbers can be found in [8]. In that study, the authors analyze, both theoretically and experimentally, the interaction of two drugs used in combination to inhibit ion channels. Such combinations are often used in clinical practice to investigate the mechanisms of drug action. For example, Schild analysis, which uses agonists and antagonists, can reveal a mechanism of competitive action. The authors considered two different types of channels that are blocked by two substances through reversible binding to sites located inside the pore. These channel models are terminologically defined in quantitative pharmacology as the syntopic and allotopic models [8]: the syntopic model includes one binding site for both drugs, and the allotopic model two different sites. It was assumed that the binding of any single substance is already accompanied by closure of the pore, and channels remain open if their binding sites are free. After a rigorous analysis of the inhibition of these two types of channels by each substance separately and by both together, differences between the models were determined by this criterion. As a result, a cubic equation y³ − 2y + 1 = 0 was obtained, whose solution, under certain conditions, surprisingly led to the golden ratio and to interesting consequences: the equation factors as (y − 1)(y² + y − 1) = 0, giving, besides y = 1, the roots 0.618... and −1.618... This circumstance is the more surprising because, as will be shown below, a similar cubic equation with three roots, two of which refer to the golden ratio, was obtained by analyzing the law of doubling with subtraction discovered by us. These are two examples from the literature in which solutions related to the golden ratio were obtained from cubic equations describing processes of completely different nature. The authors of [8] attempted to verify their results experimentally on 5-hydroxytryptamine type 3 (5-HT3) receptors using known channel blockers; unfortunately, the study contains no clear data on this. Another very interesting work concerns the connection of Fibonacci numbers with nucleotide frequencies in the human genome [9].
In that study, a mathematical model is presented which, based on two quite convincing assumptions, can accurately predict nucleotide frequencies in single-stranded human DNA using Fibonacci numbers. The idea of the mathematical model was based on an analogy with the Fibonacci numbers: in the ratio between two neighboring terms of the continuously increasing numerical FB sequence there always exists a limit, numerically equal to Φ. The main assumption accepted in the construction of this mathematical model is that nucleotide frequencies behave similarly as the number of nucleotides grows. There are a few further assumptions, but they are less important than the one mentioned above. This mathematical model was based on the numerical FB sequence and the recurrent law of its formation. Chargaff's second parity rule and several more plausible assumptions were also used. It is known that Chargaff's first parity rule relates to double-stranded DNA and was used by Watson and Crick to substantiate their double-stranded DNA model, whereas Chargaff's second parity rule relates to single-stranded DNA, and it is for this case that the mathematical model was developed. The data obtained from this mathematical model, applied to all 24 human chromosomes, give surprisingly accurate results for the frequencies of all nucleotides. Unfortunately, the developed mathematical model seems rather abstract, and it is difficult to see why it has such predictive power. Apparently, comprehension of this, together with the link between the meaning and the numerical FB series, still lies ahead. Nevertheless, the developed mathematical model makes it possible to raise the level of understanding of the phenomenon contained in Chargaff's second parity rule, in addition to the explanations found in [10]. Another important scientific direction, widely represented in numerous publications, concerns examples of the manifestation of the regularities of the FB sequence and the golden ratio in various medical phenomena and processes. In this context, the publication [11] deserves attention; it presents the first study of the connection of the Fibonacci cascade with the distribution of coronary artery lesions in the human heart responsible for ST-segment elevation in myocardial infarction. In this study, the appearance of FB numbers in the distribution of lesions of the coronary arteries of the human heart is demonstrated. The authors of this study believe that the predisposition to this ratio appears in nature perhaps because it optimizes the packing efficiency of structures in a confined space in such a way that unused space is minimized and the supply of energy or nutrients is optimized [11]. In the anatomical structure of living organisms and their parts, we often find confirmation of the regularities peculiar to the numerical FB sequence. In particular, an increased interest in the anatomical structure of the human body and its parts, mainly the hands, has appeared recently. It is connected, apparently, with the development of a perfect functional robotic arm, which many researchers are striving to create; they need knowledge of the regularities of the anatomical structure of the hand and its functional characteristics.
In this context we can note the work [12], which presents data on the functional proportions of the hand, approximated by the numerical Fibonacci series, obtained from 100 healthy male volunteers of Chinese origin. A statistical analysis of the data was performed, and the results were presented in confirmation of their connection with the numerical data of the FB sequence. In the American Journal of Otolaryngology there is an article about the regularities in the structure of the cochlea and other spiral forms common in nature and the arts, and their relationship to the numerical sequence of FB [13]. The law of the golden ratio is used in medical oncological practice as a method for establishing the exact location of the navel in women after surgical operations [14]; the purpose of this is aesthetic, and in that study this method is believed to compare favorably with others used for these purposes. Many other medical topics are also discussed in the literature, for example, whether there is a relationship between systolic and diastolic blood pressures and the golden ratio [15]. The golden ratio also appears in questions of psychology. At the same time, there are works whose authors directly express their dissatisfaction with the fact that the widespread use of FB numbers in the first phase of clinical trials occurs under conditions when their very nature is not clearly known, and this applies especially to the modified FB numbers. The data presented on the use of the FB numerical sequence show how significant this problem is for medicine and biology. At the same time, the questions about the nature of the Fibonacci numbers and the nature of their connection with the phenomena and processes observed in the medical and biological spheres remain open. In a number of works the opinion is expressed that they may be solved in the future. Objective: Based on the new law, which is more fundamental than the alternative recurrent law of formation of the FB numbers, to reveal the nature of the phenomena and processes that cause their appearance and which they describe, as processes of a branched chain nature.

About Branched Chain Reactions

In 1956, the Nobel Committee awarded the Nobel Prize for the scientific development of the problem of branched chain chemical reactions. One of the laureates was N. N. Semenov, whom we consider to be the founder of chemical physics and the creator of the quantitative theory of chain reactions. The second laureate, Cyril Norman Hinshelwood, was awarded the Nobel Prize, according to the encyclopedic interpretation, for the theory, kinetics and mechanisms of branched chain reactions. Academician N. N. Semenov, in his Nobel lecture read in Stockholm in connection with the awarding of the Nobel Prize in 1956, which was devoted to the discovery of a vast class of chain chemical reactions, defined and characterized branched chain reactions. This lecture was published in our country as a separate brochure and in the journal "Advances of Chemistry" [16]. At the end of the Nobel lecture, N. N. Semenov noted that many chemical reactions which we know as simple can prove to be chain or branched chain reactions when examined more deeply. The "active particle" is the fundamental concept of the theory of branched chain chemical reactions. This concept was introduced into scientific circulation by N. N. Semenov and became an encyclopedic one. In a chain reaction, primary active particles appear, causing a long series of successive transformations of the substance.
These reactions of chain continuation or development are possible because, in the interaction of the active particle with the initial substance, an active particle is formed in addition to the final substance; it in turn reacts with the starting material, and so on, until for some reason the "death" of this particle occurs, which is a chain break. An active particle which cyclically does something with the medium while not changing itself is an example of a simple, unbranched chain process. In particular, such an active particle can be the enzyme molecule in a simple enzymatic process, or any other catalyst in a catalytic process. Other leading concepts and parameters in the theory of branched chain processes, which occur under certain conditions determining their nature and mechanism of action, are the chain length and the branching factor. The chain length is the number of its links from one termination to the other, that is, how many times during this interval the active particle reacted. The branching factor is the number of active particles appearing after each act of chain continuation. If the branching factor is 1, then this is simply a chain process. When two or more active particles appear after the interaction of one active particle, the branching factor is 2 or more and the process is a branched chain one. According to the ideas of N. N. Semenov: "Such reactions are the most common type of chemical transformation of substances." This direction of branched chain reactions continued to develop successfully and found expression in the chain reactions of lipid peroxidation, which play an important role in biology and medicine. The role of the active particle is played by various radicals, in particular of peroxide nature. It should be noted that we also made some contribution to the development of this problem, both theoretically and experimentally, in particular in solving the problems of inhibiting the free radical chain process of lipid peroxidation by antioxidants, acting both by the chain termination mechanism and by the mechanism of inhibition of chain branching; we cite here one of our published works in this direction [17]. It should be noted that even now many studies are published, in particular on medical topics, in which membrane lipid peroxidation is studied, along with its role and importance for the functional state of cells, organs and tissues and the organism as a whole. Branched chain processes occurring in the organism are not limited to this. For example, the phenomenon of branched chain oxidation of oxyhemoglobin under the influence of certain factors, hydrogen peroxide with complete inhibition of catalase or nitrite ions, has been discovered [18]. Everything considered above belonged to the category of branched chain reactions of chemical nature, the direct chemical transformation of substances. However, in addition to chemical reactions associated with the transformation of substances, there are classes of other phenomena that also proceed by a branched chain mechanism: a whole layer of processes not directly related to chemistry, which were previously considered phenomena and processes of a different nature. It can be assumed that the concepts and terminology developed for branched chain chemical reactions should be fair and applicable to them too. The implementation of such non-chemical processes is inevitably associated with the existence of active particles in them.
Using the terminology of the Nobel laureate N. N. Semenov, a pair of rabbits can formally be regarded as an active particle. A pair of rabbits, like an active particle, leads the chain, continues it and generates new chains, which means that it produces a branching of the chains. This process of multiplying and increasing the number of pairs of rabbits occurs through a branched chain mechanism. Below we will consider the regularities and characteristics of such processes and show that they are responsible for the appearance of the numerical FB sequence and other numerical series. The Numerical Sequences of Fibonacci, Lucas and Shannon, Their Characteristics and the Laws That Determine Them Earlier we showed the FB sequence in the form that can be defined as canonical, and we also noted that this is not its only form. It is also represented as the series 1, 2, 3, 5, 8, 13, etc. and as the sequence 0, 1, 1, 2, 3, 5, 8, etc. For both of them the recurrent law is valid, stating that each following term, starting with the third, is equal to the sum of the two previous ones. G. D. Cassini discovered a law for the formation of the numerical Fibonacci sequence, alternative to the recurrent law, which connects three neighboring numbers of the FB sequence in the following form: F(n)^2 - F(n-1)*F(n+1) = (-1)^(n+1). After experimenting with this equation, one can make sure that it is directly valid only for the canonical form of the FB series, and in this sense it is individual. In the 19th century, the French mathematician Lucas discovered yet another law for the formation of the FB series, in two expressions, taking into account the parity of the serial number of its terms. This law is expressed as follows: F(2n) = F(n+1)^2 - F(n-1)^2; F(2n+1) = F(n+1)^2 + F(n)^2. Experimental verification, which is easy to perform by substituting numerical values into these expressions, confirms their validity, but again directly only for the FB sequence of canonical form. Further on, the mathematical formalism applied to this sequence led to the establishment of a whole series of identities and regularities inherent in it. One of them, discovered by the French mathematician Binet, takes a special place among them. Its peculiarity is that this law of formation of the FB sequence differs radically from all others, because it determines the terms of the sequence through the golden number Φ, equal to 1.618..., obtained from the solution of the golden ratio problem. It should be noted that back in 1595, Johannes Kepler, an outstanding astronomer and mathematician, noticed one important peculiarity of the numerical FB sequence: the ratio of the next term to the previous one, as the sequence increases towards infinity, tends to 1.618... In other words, a connection between the numerical FB sequence and the golden ratio was established. We see what outstanding people were dealing with this problem. In the works of other, less prominent researchers, many more laws have been established which characterize and determine the properties of the FB sequence. The French mathematician Lucas created a new series of numbers, completely different, but with the same recurrent law of formation as the FB sequence. This numerical series has the form 1; 3; 4; 7; 11; 18; 29; 47; 76; 123; etc. The first two terms are again the initial conditions, as for the FB sequence. This sequence is called the Lucas sequence, and its elements are the Lucas numbers.
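As a minimal illustration (ours, not part of the original study), the validity of Cassini's identity and of Lucas's two expressions for the canonical FB sequence can be checked with a few lines of Python:

def fib(k):
    # canonical FB sequence: F(1) = F(2) = 1
    a, b = 1, 1
    for _ in range(k - 1):
        a, b = b, a + b
    return a

for n in range(2, 10):
    # Cassini: F(n)^2 - F(n-1)*F(n+1) = (-1)^(n+1)
    assert fib(n)**2 - fib(n - 1) * fib(n + 1) == (-1)**(n + 1)
    # Lucas: F(2n) = F(n+1)^2 - F(n-1)^2 and F(2n+1) = F(n+1)^2 + F(n)^2
    assert fib(2 * n) == fib(n + 1)**2 - fib(n - 1)**2
    assert fib(2 * n + 1) == fib(n + 1)**2 + fib(n)**2

Running the same assertions with Lucas numbers in place of fib makes them fail, in line with the observation below that these laws are specific to the canonical FB form.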
An important feature of the Lucas sequence is the particular law of its formation, discovered by the mathematician Binet, using the golden number Φ and having the simplest form: L(n) = Φ^n - Φ^(-n) and L(n) = Φ^n + Φ^(-n), respectively for odd and for even n. He also established an analogous law for the numerical FB sequence, but it has a more difficult (complex) form. If the numerical sequences of FB and Lucas have the same recurrent law and yet differ cardinally from each other, this is due to their different initial conditions. The initial conditions (1, 1) and (1, 2) give the same FB sequence. The initial conditions (1, 3) give another numerical sequence, called the Lucas sequence. Taking the initial conditions (1, 4), we get a third kind of numerical sequence formed according to the recurrent law, of the form 1, 4, 5, 9, 14, 23, 37, etc. Choosing other initial conditions, (1, 5), we get the following numerical sequence: 1, 5, 6, 11, 17, 28, 45, etc. The last two sequences can be called Shannon sequences, since he was the first who drew attention to them and published them [19]. Verification of the applicability of the Cassini, Lucas and all other known laws to the Lucas sequence and the two Shannon sequences gives a negative result. This means that all these laws are specific only to the canonical form of the FB sequence. On the other hand, all known laws that establish connections between the numbers of the Lucas sequence are applicable neither to the FB sequence nor to the Shannon sequences. This means that all these laws are specific to one particular sequence only. And now we can discuss a new law, which we discovered after studying these sequences. As a result, we established a new law which differs radically from all previously known ones. The formulation of this law: "Any term of the FB sequence is determined by the doubled value of the previous term minus the value of the third preceding term". The initial conditions for it are the first three terms of the sequence. Expressed analytically, this law has the following formula: F(n) = 2*F(n-1) - F(n-3), where n = 4, 5, 6, ..., etc. is the serial number of the term of the FB sequence to be determined, F(n) is its value, n-1 is the previous term with the value F(n-1), and n-3 is the third preceding term with the value F(n-3). It is not difficult to verify the truth of this relation between the terms of any of the sequences presented here (FB, Lucas, and the two Shannon sequences). Thus, the resulting relation is a law determining the formation of the numerical sequences of FB, Lucas, the two Shannon sequences, and also many other numerical sequences. In the literature this law does not occur and is not mentioned, which allows us to call it new and to define it terminologically as the law of "doubling with subtraction" (DWS). This terminology reflects its greater fundamentality and generality in relation to all numerical sequences. Let us recall that in the literature the recurrent law is considered to be the main, fundamental one, and that it is the basis of the definition of the numerical FB sequence. About the Similarities and Differences Between the Law of "Doubling with Subtraction" and the Recurrent Law for the Fibonacci Sequence The fact that these laws give results identical to each other follows from a direct verification of the results obtained by using them; they are exactly the same for all the sequences listed above. This apparent identity is confirmed theoretically.
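As a quick numerical check (our sketch, not the authors' code), the DWS law can be verified in Python for the FB, Lucas and both Shannon sequences at once:

def extend_recurrent(first_two, length):
    # build a sequence by the recurrent law from two initial conditions
    s = list(first_two)
    while len(s) < length:
        s.append(s[-1] + s[-2])
    return s

# FB, Lucas, first and second Shannon sequences
for init in [(1, 1), (1, 3), (1, 4), (1, 5)]:
    seq = extend_recurrent(init, 12)
    # DWS law: F(n) = 2*F(n-1) - F(n-3), valid from the fourth term on
    assert all(seq[n] == 2 * seq[n - 1] - seq[n - 3] for n in range(3, 12))

The check passes for every choice of initial conditions, which is expected: subtracting F(n-1) = F(n-2) + F(n-3) from F(n) = F(n-1) + F(n-2) turns any recurrent sequence into a DWS one.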
Indeed, the mathematical expressions of these laws have the form F(n) = F(n-1) + F(n-2) and F(n) = 2*F(n-1) - F(n-3). Combining them, that is, subtracting the second equation from the first, we obtain F(n-1) = F(n-2) + F(n-3), the same recurrent law. The main goal of this article, however, is not to show a new law; our goal is more significant. We reasonably assert that the law discovered by us is the primary and fundamental law for the formation of the Fibonacci numbers in their numerical sequence, while the well-known recurrent law, which is listed everywhere as the main and determining one, is not the main one at all. It is secondary to the law that we discovered, it is its consequence, and we will justify this below. The fact that the recurrent law and our law are valid for all the sequences means that both are universal, while all other laws are particular. Therefore, we have two universal laws. Which of them is the main one, and which is secondary, can be answered as follows. We take the canonical FB sequence and arbitrarily add to each of its terms the same number, say the number "1" for simplicity. As a result we get the sequence 2, 2, 3, 4, 6, 9, 14, 22, etc., which does not obey the recurrent law, while at the same time the law that we discovered works perfectly well. It works the same way if we subtract this number "1" from the terms of the sequence instead of adding it. After subtracting, we get the sequence 0, 0, 1, 2, 4, 7, 12, 20, etc. If we remove the first two terms "0", we can see that it still does not obey the recurrent law but obeys the law discovered by us. The DWS law is also valid for other sequences, formed according to the formulas listed in Figure 1. These sequences were obtained by different modifications of the recurrent law; therefore, they are not sequences of Fibonacci type, since they no longer obey the recurrent law, yet the DWS law remains valid for them. Figure 1 (the serial numbers of the sequence on the abscissa, their values on the ordinate, the sequences shown in the columns on the right) presents: sequence 1 (row 1), the FB sequence formed by the formula F(n) = F(n-1) + F(n-2); sequence 2 (row 2), formed by the formula F(n) = F(n-1) + F(n-2) + 1; sequence 3 (row 3), formed by the formula F(n) = F(n-1) + F(n-2) - 1; sequence 4 (row 4), formed by the formula F(n) = F(n-1) + F(n-2) - 2. The last fact gives reason to assert that the law discovered by us has a general nature, while the recurrent law is secondary to it and is valid only for more specific sequences. Our next step is a comparative analysis of the recurrent law and the new law with respect to their differences and some characteristic indexes. About Specific Sequences of FB Type and Their Properties A large number of numerical sequences whose formation follows the main recurrent law, whose formula has the form F(n) = F(n-1) + F(n-2), can be attributed to a certain type of sequences, defined as FB-type sequences, along with the FB sequence itself. The question arises: is there a numerical sequence of FB type which simultaneously belongs to another type of numerical sequences, for example a geometric or arithmetic progression? This possibility exists only for a geometric progression. For such a sequence to exist, it must satisfy the condition expressed by the relation a*q^n = a*q^(n-1) + a*q^(n-2), where the terms a*q^k are simultaneously the terms of the FB-type sequence and of the geometric progression, with the index denoting the serial number, "a" is the first term, and "q" is the denominator of the geometric progression.
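The shifted sequences just described are easy to test directly; the following sketch (ours) confirms that adding a constant to every FB term breaks the recurrent law but not the DWS law:

fb = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
shifted = [x + 1 for x in fb]   # 2, 2, 3, 4, 6, 9, 14, 22, 35, 56

recurrent_ok = all(shifted[n] == shifted[n - 1] + shifted[n - 2] for n in range(2, 10))
dws_ok = all(shifted[n] == 2 * shifted[n - 1] - shifted[n - 3] for n in range(3, 10))
print(recurrent_ok, dws_ok)     # prints: False True

The same outcome is obtained for the Figure 1 variants F(n) = F(n-1) + F(n-2) + c with any constant c, since the constant cancels in the doubling-with-subtraction combination.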
Returning to the geometric progression: the last relation above is easily transformed into a quadratic equation of the form q^2 - q - 1 = 0. The resulting equation is identical to the well-known equation of the golden ratio, and its solution gives the following two roots, q1 and q2. The first root, the irrational number 1.618..., is usually denoted in the special literature by the letter Φ [6]. The other root is the irrational number 0.618... (with a minus sign), denoted by the letter φ (the opposite notation also exists). The relationship between these numbers is determined by the relation Φ*φ = 1. The presence of two roots means that in nature there are two numerical sequences that are formed according to the recurrent law, which means they belong to the FB type, and which simultaneously are geometric progressions. These two rows have the form 1, Φ, Φ^2, Φ^3, Φ^4, Φ^5, Φ^6, ... and 1, -φ, φ^2, -φ^3, φ^4, -φ^5, φ^6, ... About Special Sequences of the Class Formed by the DWS Law Consider the case where the sequence is formed according to the DWS law, whose formula is F(n) = 2*F(n-1) - F(n-3), where F(n), F(n-1) and F(n-3) are terms of the numerical sequence with the ordinal numbers n, n-1 and n-3, respectively (n > 3). The question of the possibility of the existence of a numerical sequence obtained using the DWS law and at the same time being a geometric progression is solved in a similar way: the following equation must hold: a*q^n = 2*a*q^(n-1) - a*q^(n-3), where a is the first term of the geometric progression and q is its denominator. It is clear that the value found for q will be the criterion for the possible identity between the geometric progression and the numerical sequence, meaning that the same sequence will be both a geometric progression and a numerical sequence formed by the DWS law. The equation q^n = 2*q^(n-1) - q^(n-3) can easily be transformed to q^3 - 2*q^2 + 1 = 0. The resulting cubic equation can be solved by the Cardano method, in which, by an appropriate substitution, it is reduced to an incomplete cubic equation whose solution is sought, after which the solution of the initial equation is obtained from the solution of the incomplete cubic one. In our case everything is simpler, since this cubic equation can be reduced to the form (q^2 - q - 1)*(q - 1) = 0. Three roots are easily found from this factorization: q1 ≈ 1.618, q2 ≈ -0.618, q3 = 1. All of them are real, and two of them refer to the golden ratio. This is an amazing circumstance for us, since a similar cubic equation with roots, two of which also refer to the golden ratio, was obtained in one of the studies discussed above concerning the analysis of literature data on channel inhibition [8]. This can be verified by comparing the cubic equation obtained by us, q^3 - 2*q^2 + 1 = 0, with the equation y^3 - 2y + 1 = 0 obtained by the authors of the study we mentioned before. Surprisingly, the analyzed phenomena, while being of fundamentally different nature, lead to a single outcome, the golden ratio. At the same time, our cubic equation is still somewhat different from the cubic equation obtained in the cited study: in the second term of our equation the unknown is raised to the power 2, while in theirs to the power 1. The results are also different: their first root is 0.618 where we have 1.618, and their second root is -1.618 where we have -0.618, a sort of antisymmetric pair of results. Why this happened and what kind of antisymmetric processes lie at their base deserves special attention. Apparently, there is something in common between the processes of inhibiting the channels and all the numerical sequences defined by the DWS law.
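The three roots are easy to confirm numerically; for example (our check, assuming NumPy is available):

import numpy as np
# coefficients of q^3 - 2*q^2 + 0*q + 1
print(np.roots([1, -2, 0, 1]))   # approx 1.618, -0.618 and 1.0, in some order

Substituting q = 1 into q^3 - 2*q^2 + 1 gives 1 - 2 + 1 = 0 directly, which exposes the factor (q - 1) used in the factorization above.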
This commonality between the channel-inhibition processes and the DWS sequences may be caused by a branched chain mechanism responsible for the observed phenomena. Note that, from the point of view of the chain processes that we are developing here, many things, in particular enzymes, can be considered as active particles. It should also be noted that the presence of three roots in our case means that there are three numerical sequences in nature that are formed by the DWS law and are at the same time geometric progressions. The first sequence is infinitely increasing, the second is infinitely decreasing to zero (fading), the third is the unit row. This third sequence satisfies the requirements of both the DWS law and the geometric progression, but does not satisfy the recurrent law. An interesting fact emerges: the DWS law, according to which numerical sequences are formed, actually differs significantly from the recurrent law. It is more significant because it relates to a larger range of phenomena than the recurrent law; in other words, the DWS law encompasses the recurrent law as an individual case. The Relation Between Two Neighboring Terms for Numerical Sequences of FB Type, Formed According to the Recurrent Law For numerical sequences of FB type the relation between two neighboring terms is determined as F(n)/F(n-1) = (F(n-1) + F(n-2))/F(n-1) = 1 + F(n-2)/F(n-1) = 1 + 1/(F(n-1)/F(n-2)) = 1 + 1/(1 + 1/(1 + ...)). This computational process can go on forever, forming a complex but interesting mathematical object, the value of which tends to 1.618..., to Φ. Perhaps such a mathematical structure can have importance for mathematics, representing interest as a mathematical object of study. Relations Between the Two Neighboring Members of the Sequence, Formed According to the Law of "Doubling with Subtraction" Now consider the same situation for the case when the terms of the sequence are formed according to the DWS law. The relation between the two neighboring members in this case is determined as F(n)/F(n-1) = (2*F(n-1) - F(n-3))/F(n-1) = 2 - F(n-3)/F(n-1). The resulting mathematical construction differs significantly from the structure given above in section 3.2.1. From the solutions and their results it can be seen that there are fundamental differences between the mathematical expressions that determine the relationship between any two neighboring members of the numerical FB sequence, depending on whether they are formed according to the recurrent law or the DWS law. In the first case, a certain remainder (surplus) is added to unity (call it "a"); in the second case, some other remainder is subtracted from 2 (call it "b"). Since it is easy to see that in either case the value of the relation between any two neighboring terms of the sequence is the same and tends, as their ordinal numbers grow, to the number Φ, the remainders in the mathematical expressions will be: in the first case a ≈ 0.618, in the second b ≈ 0.382. Remarkably, the remainders a and b vary in such a way that the change in the ordinal numbers of the terms of the sequence never breaks the exact equality 1 + a = 2 - b. Thereby, a comparative analysis of the DWS law discovered by us with respect to the recurrent law shows that, although they are different, they lead to the same results for sequences of the FB type. Moreover, it should be noted that the DWS law is more important than the recurrent law, because it covers a wider range of phenomena; it is the generalizing law, while the recurrent law is a particular law ensuing from it.
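The behaviour of the two remainders is easy to observe numerically (our illustration):

fb = [1, 1]
for _ in range(30):
    fb.append(fb[-1] + fb[-2])

for n in (10, 20, 30):
    a = fb[n - 2] / fb[n - 1]   # remainder added to 1 (recurrent form)
    b = fb[n - 3] / fb[n - 1]   # remainder subtracted from 2 (DWS form)
    print(n, 1 + a, 2 - b)      # both columns tend to 1.618...

At every n the two printed values agree to machine precision, since both equal the same ratio F(n)/F(n-1) written in two ways, while a tends to 0.618... and b to 0.382...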
As a result, it is reasonable to say that the DWS law and the recurrent law are different, and that the recurrent law stands as a particular case of the more fundamental and significant DWS law. Behind the abstract mathematical side of these laws, real phenomena are hidden which occur in nature and are expressed by these laws. Note that the recurrent law has long represented, and still represents, great mathematical interest as an object of investigation; many prominent mathematicians have studied it. The DWS law presents an even more significant mathematical interest, considering what we have noted above. The fact that the DWS law contains such a procedure as doubling the previous member of the sequence in order to determine the subsequent one seems significant to us, since it points to a possible relation of the resulting numerical sequence to biophysical phenomena and processes. Therefore, it seemed interesting and important to find a certain model of a branched chain process of a biophysical nature that would be described by the numerical FB sequence or other sequences of this type. Branched Chain Growth Processes of the Rabbit Pairs Number Developing According to the Numerical Fibonacci Sequence, in Accordance with the Law of "Doubling with Subtraction" or with the Main Recurrent Law In nature, a variety of phenomena are observed whose patterns of development can correspond to the numerical FB sequence and, in some cases, to the numerical sequences of Lucas, Shannon, triangular numbers and others. The nature of such phenomena can be conditioned by, and associated with, branched chain patterns or a mechanism embedded in them. Apparently, the prevalence of chemical branched chain reactions, about which N. N. Semenov spoke, can be extended to the whole variety of biological and medical processes occurring both naturally and artificially; if we look closely at them, it may turn out that they have a branched chain character. In connection with the stated position that all numerical sequences of the FB type can reflect processes of a different nature that have a branched chain character, it is important to find a certain model process that confirms this statement. In this regard, the law of "doubling with subtraction" carries a lot of content: the mechanism of a biological process of branched chain character is already visible in its form. In any case, the problem of rabbits, which Fibonacci formulated and answered in the form of the FB sequence, looks exactly this way. As an active particle in this problem, a pair of rabbits can be taken as a whole. A pair of rabbits, like an active particle, leads the chain; it continues the chain and generates new chains, producing a branching. Below we will consider the problem of rabbits formulated by Fibonacci, presented in our interpretation from the point of view of the branched chain character of its process, as well as the surprising consequences to which this problem leads. Biological Model of the Branched Chain Process of Increasing the Number of Rabbit Pairs, Forming the Numerical Fibonacci Sequence According to the Law of "Doubling with Subtraction" As already noted, the features of a hypothetical branched chain mechanism that generates the law of development of phenomena in accordance with the numerical FB sequence can be discerned in the DWS law. The process of increasing the number of rabbit pairs can be one of such phenomena.
Let us note what biophysical meaning can be contained in the expression of this law. In the formula of this law, the operation of doubling the previous value can be interpreted as a doubling of the number of active particles, which are the pairs of rabbits: the previous pairs of rabbits are the pairs which will give offspring, and as a result the number of pairs increases twofold, since all pairs give offspring. The subtraction can be identified with chain breaks after three acts of chain continuation performed by the active particles. With regard to rabbits, it can be taken that every pair of rabbits disappears from the process after its third act of reproduction. Seeing a certain biophysical meaning in the resulting relation, we constructed and described on its basis a mechanism of rabbit breeding, and carried out an analysis of its functioning together with an analysis of the obtained results. Let us note our other assumptions. We assume that the newborn pairs of rabbits mature and become capable of reproduction in definite stages, synchronously, in the same time that the parent pairs need to recover before giving new offspring. In other words, after the first round of births, parent pairs and newborns participate in the next round simultaneously. It is quite logical to number these rounds of births in order: 1st, 2nd, 3rd, etc., as we will do later on. Thus, newly born pairs of rabbits enter the next round of births together and simultaneously with the parental pairs, which means they become able to bring forth a new pair in the next round along with the parent couple. We can now describe the branched chain mechanism of rabbit breeding in accordance with these concepts and analyze the results of its functioning; further on, we will make minor clarifications. To select the branched chain mechanism according to which the process of increasing the number of rabbits during their breeding proceeds, it is necessary to determine the conditions. The main concept here, let us emphasize this, is a pair of rabbits, which can be likened to an active particle. The properties of this active particle (a pair of rabbits) determine the process of increase in the number of rabbits according to the branched chain mechanism. We endow it with the following properties. It branches the chain with a branching factor of 2, which means that each pair of rabbits can produce only one pair at a time, that is, it only reproduces itself. A pair of rabbits, like an active particle, can lead a chain of length three, which breaks after that. Such properties of the active particle, or pair of rabbits, connected with a chain length of three links and a break at its end, are determined by the subtracted second term of the DWS law. This property means that a pair of rabbits which has performed three rounds of birth disappears from the process. In Table 1, in its first functional row, the values of the corresponding terms of the canonical numerical FB sequence are written in the order of their numbers. Let us assume that this numerical sequence recorded in the first row reflects the branched chain process of increasing the total number of pairs of rabbits at each stage, in accordance with the indicated ordinal numbers. The first and second numbers are the artificially created initial conditions of the biological process for the development of the branched chain process of increasing the total number of rabbit pairs during breeding.
We take any serial number; let it be the fourth, with its value 3. If the branching factor of 2 acted alone, which would mean doubling the total number of rabbit pairs of the previous step, then in the selected fourth column of the first row there would be the number 4 instead of 3; this means that one pair is missing. Moving on to the fifth column of the first row: by the same reasoning there should be the number 6, but there is the number 5. Again, one pair is missing. In the sixth column of the first row there should, in theory, be 10, but we see 8; this time 2 pairs are missing. In the seventh column there should be 16, not 13, that is a shortfall of 3 pairs. For the eighth column, instead of 21 there should, in theory, be 26, and the shortage is 5 pairs. It is clear that the row of missing pairs (the second functional row of Table 1) also follows the numerical FB sequence and lags behind the main sequence by three serial numbers. For greater persuasiveness, we can check this on any member of the main sequence, for example on the eleventh term. Its value is 89. The value of the previous (10th) term is 55. Therefore, the value that pure doubling would give is 110, and the number of missing pairs is 21. This is exactly the value of the 8th member of the sequence, lagging behind the 11th by three numbers (see Table 1). In accordance with our logic, this can be understood as the permanent disappearance of those pairs of rabbits that have exhausted their resource for the continuation of chains, limited to a length of three links or, in the biological sense, to three acts of reproduction. A break occurs at the end of every three-link chain: the disappearance of the active particle leading this chain. In Table 1, the third functional row shows the number of newborn pairs at each step of the process. This row was filled in using the logic of branched chain processes with a branching factor of 2. According to this logic, the number of new active particles (pairs of rabbits) appearing at each step is always numerically equal to the number of all active particles (the total number of pairs of rabbits) of the previous step. It is easy to verify that the obtained numerical sequence of the number of rabbit pairs withdrawn from the process is also a Fibonacci sequence. This circumstance seems extremely interesting. Table 1 shows that the staged development of the branched chain process described here, for all three presented indicators (the total number of pairs, the newborn pairs, and the number of pairs withdrawn from the process), corresponds to the same numerical FB sequence. The difference between them is that the row of "total numbers" outstrips the row of "newborns" by one step and the row of "pairs withdrawn from the process" by three steps. Thus, the biophysical model of a branched chain process created on the basis of the DWS law reproduces the numerical FB sequence with respect to the total number of rabbit pairs during breeding. This model also implies two more important facts, regarding the number of newborn pairs and of pairs withdrawn from the process: both of these numbers also grow in accordance with the numerical FB sequence.
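The bookkeeping of Table 1 can be reproduced with a short simulation (our sketch of the model described above): at every step the newborn pairs equal the whole population of the previous step (branching factor 2), and the pairs counted three steps earlier are withdrawn (three-link chains).

steps = 12
total = [1, 1]                 # artificial initial conditions
newborn, withdrawn = [0, 0], [0, 0]
for n in range(2, steps):
    born = total[n - 1]                       # one offspring pair per pair
    gone = total[n - 3] if n >= 3 else 0      # chain break after three acts
    newborn.append(born)
    withdrawn.append(gone)
    total.append(total[n - 1] + born - gone)
print(total)       # [1, 1, 2, 3, 5, 8, 13, ...]  the FB sequence
print(newborn)     # [0, 0, 1, 2, 3, 5, 8, ...]   FB, one step behind
print(withdrawn)   # [0, 0, 0, 1, 1, 2, 3, ...]   FB, three steps behind

All three rows follow the FB sequence with the stated shifts, matching Table 1.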
The Biological Model of the Branched Chain Process of Increasing the Rabbit Pairs Number According to the Recurrent Law Which Forms the Numerical Fibonacci Sequence Giving semantic content to the FB sequence using the logic suggested above for the new DWS law allows us to look in a new way at the biophysical essence of the main recurrent law, assuming that it also reflects a particular branched chain process. Let us set the task of determining what kind of process it is that the recurrent law induces and describes. All the conditions that were used above in the model of the branched chain process forming the DWS law are kept the same. If we examine the essence of the formation of FB numbers from the same perspective, we come to an interesting conclusion: the recurrent law describes a branched chain process without chain termination, in other words, with chains of infinite length. It has, however, one important and necessary condition: a newly emerging particle (a newborn pair of rabbits) becomes able to branch the process only after skipping one step, and after that it becomes immortal, leading its chain to infinity. This is the biophysical meaning of the main recurrent law of FB sequence formation, reflecting the number of newborn rabbit pairs appearing at each step or stage of development of the entire process, with the exception of the first two initial stages. Speculating with the same logic, we get a nontrivial answer. If the next term for newborn pairs is the sum of the two preceding terms, then the resulting numerical value for newborns means that exactly the same number of mature rabbit pairs produced them. Then the total number of newborns and their parents will be exactly twice the number of newborn pairs. In addition, at the transition to this stage there are still immature pairs, that is, the newborns of the immediately preceding stage. So the total number of pairs will be determined by the expression Fn(total) = 2*Fn(new) + Fn-1(new), where Fn(new) and Fn-1(new) are terms of the FB sequence for newborn pairs of rabbits, and Fn(total) is the n-th term of the FB sequence for the total number of pairs. It is not difficult to see that the resulting sequence for the total number of pairs will also be a FB sequence. In the last expression, instead of the terms from the row of newborn pairs we can substitute the terms of the total number of pairs; then, since Fn(new) = Fn-2(total) and Fn-1(new) = Fn-3(total), we have Fn(total) = 2*Fn-2(total) + Fn-3(total) = Fn-2(total) + Fn-3(total) + Fn-2(total) = Fn-1(total) + Fn-2(total). This means that the same recurrent law of formation of the numerical sequence is obtained. However, it is necessary to conduct a more detailed analysis of the nature of the sequences for the total number of pairs and for the newborn pairs at each stage of the whole process. In the same way as was done above, we will monitor the number of newborn pairs and the total number of pairs and enter the data in Table 2. Start with the initial situation. At the initial moment there was one mature pair of rabbits and no newborn pairs yet. This moment is reflected in Table 2 by putting the number 1 in the second row of the first column and the number 0 in the first row of the first column. At the second stage a newborn pair appeared, and we reflect this by putting the number 1 in the first row of the second column.
The total number of all pairs at this moment is two, which is reflected in the cell of the second row and the second column. At the third step, one more newborn pair appears, because the pair born at the previous step has not yet reached maturity. Therefore we put the number 1 in the first row of the third column, while in the second row of the third column, where the total number is indicated, we put the number 3. At the fourth step, 2 and 5 should be written in the corresponding cells (two of the three pairs of the total number produced two newborn pairs), and so on. Table 2 shows that the staged development of the branched chain process described here, in terms of the two presented indexes (the newborn pairs and the total number of pairs), also corresponds to the same numerical FB sequence. The difference between them is that the row of "total numbers" corresponds to the FB sequence with the initial conditions (1, 2), while the row of newborn pairs corresponds to the FB sequence with the initial conditions (0, 1). We interpret this difference as a lag of the newborn-pair row behind the total-number row by two steps. Eventually, an amazing picture emerges: two fundamentally different branched chain processes proceed according to completely different laws, and both lead to the same result corresponding to the numerical FB sequence. The DWS law is generated by a branched chain process with chain termination and with chains three links long, having a branching factor of 2, which is associated with the doubling in the formulation of the law. The other branched chain process proceeds without chain termination, with chains of infinite length, also with a branching factor of 2. Such a process generates the recurrent law, which characterizes it. These different laws from different processes lead to the same effect: the numerical FB sequence. It is easy to verify from the data of Table 1 that the summation of the two FB sequences reflecting the total number of pairs and the newborn pairs leads to the sequence 1, 3, 5, 8, 13, etc., that is, for n > 3 the further numerical sequence is a FB sequence; according to Table 2, however, the result of this summation is the classical Lucas sequence. The Lucas sequence is also obtained from the data of Table 1 by summing the sequence of the total number of pairs and of the pairs withdrawn from the process: indeed, the resulting series 1, 2, 4, 7, 11, 18, 29, etc., for n > 3, is a continuation of the Lucas sequence. As a result, we have an amazing case: the Lucas sequence turns out to be obtained from the FB series and is its derivative. It is even more surprising that two branched chain processes fundamentally different in nature and content are described in a similar way by the same DWS law, of which the recurrent law is a particular version. Thus, it is established that the Lucas sequence is determined by summing two Fibonacci sequences shifted by two steps; the first Shannon sequence, for its part, is determined by summing two sequences, the Lucas and the Fibonacci, also shifted by two steps; and the second Shannon sequence is determined by summing two sequences, of which one is the first Shannon sequence and the other is the Lucas sequence, again shifted by two steps. Thus, all these sequences are determined through the Fibonacci series. This is an important proposition, indicating that all of them can reflect branched chain processes of the type considered for the FB sequence.
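The Table 2 model and its summation property can likewise be checked with a few lines (our sketch): here there is no chain termination, and a pair begins to breed only one step after its birth.

steps = 12
total, newborn = [1, 2], [0, 1]
for n in range(2, steps):
    born = total[n - 2]        # only pairs mature since the previous step breed
    newborn.append(born)
    total.append(total[n - 1] + born)
lucas = [t + b for t, b in zip(total, newborn)]
print(total)     # [1, 2, 3, 5, 8, 13, ...]  FB with initial conditions (1, 2)
print(newborn)   # [0, 1, 1, 2, 3, 5, ...]   FB with initial conditions (0, 1)
print(lucas)     # [1, 3, 4, 7, 11, 18, ...] the classical Lucas sequence

The element-wise sum of the two FB rows, shifted by two steps relative to each other, is indeed the classical Lucas sequence.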
Thus, the biophysical essence of many known numerical sequences has a similar nature: all of them can reflect the dynamics of the development of branched chain processes under certain conditions. It should be noted that the DWS law can have other forms besides F(n) = 2*F(n-1) - F(n-3), for example F(n) = 2*F(n-1) - F(n-4), F(n) = 2*F(n-1) - F(n-2) and even F(n) = 2*F(n-1) - F(n-1). To each separate form of the DWS law there corresponds a variant of a branched chain process. The form F(n) = 2*F(n-1) - F(n-3) describes a branched chain process with chain termination at a fixed length of three links. If, in the consideration of a branched chain process of biological nature, there are other conditions, for example another situation with the selection of rabbit pairs, that is, if the chain termination conditions change, then the process will be described by other numerical sequences. For example, if the chain length is not three links (not three acts of chain continuation) but two, then the dynamics of such a process will, in some cases, have the form of the numerical sequence of natural numbers. It is easy to verify this by reasoning in the same way as before. Let us show what would happen if, according to the biophysical model of the branched chain process reconstructed by us, a pair of rabbits were withdrawn from the reproductive process after 2 rounds of birth. In this case, the result of determining their number would not be expressed by the numerical FB sequence. It would still be expressed by the DWS law, but in a somewhat modified form, namely F(n) = 2*F(n-1) - F(n-2). Under the initial conditions (1, 1) this gives a simple unit sequence. Under the initial conditions (1, 2) it gives the sequence of natural numbers of the form 1, 2, 3, 4, 5, etc. Under the initial conditions (0, 1) it also gives a sequence of natural numbers, of the form 0, 1, 2, 3, 4, 5, etc. This last circumstance is of some interest. Most people, asked according to what law the sequence of natural numbers is formed, will name F(n) = F(n-1) + 1, and only few know the other form of the law, namely F(n) = 2*F(n-1) - F(n-2). And the main thing is that behind this law, which forms the natural sequence of numbers, lies the reality from which it was derived. Thus, the nature of the numerical FB sequence and of the other sequences considered is that they reflect the dynamics of the development of branched chain processes occurring under strictly defined, fixed conditions, a particular example of which is the process of breeding rabbits described by Fibonacci. Conclusion The application and use of the FB numbers and of the golden ratio associated with them in biology and medicine has become widespread. Apparently, this is only the beginning, and everything will continue to develop. In this regard, we recall the words of Warren Sturgis McCulloch, the famous neurophysiologist who proposed the formal neuron that became the prologue to the creation of artificial neural networks, which lie at the basis of artificial intelligence: "I spent two years measuring the ability of a person to bring a controlled oblong subject to the preferred form, because I did not believe that he preferred a golden ratio or that he could recognize it. He prefers and he can!" With respect to the FB numbers associated with the golden ratio, a law of their formation alternative to the recurrent one is established in this work.
This law ("doubling with subtraction", DWS) is valid for all numerical sequences for which the recurrent law is also valid. At the same time, there are numerical sequences for which the recurrent law does not work while the DWS law remains valid. This means that the DWS law is more fundamental and is, in fact, the primary law, while the recurrent law is secondary. Significant differences also exist in the consequences of these two laws. For example, the increments of sequences formed according to the recurrent law and according to the law of "doubling with subtraction" have fundamental differences in their mathematical expression, although, in different ways, they lead to the same result, namely to Φ = 1.618..., as the serial numbers of the terms of the series tend to infinity. In this work, a branched chain biological process unique to each of the two laws was found. In the case of the law of "doubling with subtraction", the process goes with termination of chains and with characteristic parameters: a chain length of three links and a branching factor of 2. In the case of the recurrent law, the process goes without chain termination, with chains of infinite length and a branching factor of 2, and with some delay limiting the branching. It seems interesting that such different branched chain processes of different character are described by the same Fibonacci sequence. Both these laws are consequences of two fundamentally different branched chain processes. Doubling with subtraction is the consequence of a branched chain process with chain termination and a three-link chain length, with a branching factor of 2. The recurrent law is the consequence of a branched chain process without chain termination, with chains of infinite length and a branching factor of 2, but with the condition that newly emerging active particles become truly active only after one step. These different consequences of different processes lead to the same result, forming the numerical FB sequence. As a result, it can be considered justified that the quintessence of many known numerical sequences of the FB type is that they reflect branched chain processes, which are ubiquitous in nature. Perhaps Fibonacci was the first to draw attention to them. Interest in various applications of the Fibonacci numerical sequences does not decrease; in support of this, two recent works can be cited [20, 21].
2019-04-22T13:12:44.865Z
2018-10-11T00:00:00.000
{ "year": 2018, "sha1": "096453dc3906c6710bc33f10f9bf8e696f33af4e", "oa_license": "CCBY", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.sjc.20180604.11.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e5c4868c2b872ad034bfed45e6a7a606080d3dab", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Mathematics" ] }
6326857
pes2o/s2orc
v3-fos-license
Adhesive arachnoiditis in mixed connective tissue disease: a rare neurological manifestation SUMMARY The overall incidence of neurological manifestations is relatively low among patients with mixed connective tissue disease (MCTD). We recently encountered a case of autoimmune adhesive arachnoiditis in a young woman with a 7-year history of MCTD who presented with severe back pain and myeloradiculopathic symptoms of the lower limbs. To the best of our knowledge, adhesive arachnoiditis in an MCTD patient has never been previously reported. We report here this rare case, with the clinical picture and supportive ancillary data, including serology, cerebral spinal fluid analysis, electrophysiological evaluation and spinal neuroimaging, that is, MRI and CT of the thoracic and lumbar spine. Her neurological deficit improved after augmenting her immunosuppressant therapy. Our case suggests that adhesive arachnoiditis can contribute to significant neurological deficits in MCTD and therefore requires ongoing surveillance. BACKGROUND Mixed connective tissue disease (MCTD) is a well-defined entity with a wide spectrum of clinical manifestations. Some patients initially diagnosed with MCTD eventually manifest symptoms more consistent with systemic lupus erythematosus (SLE) and vice versa. 1 Neurological manifestations are reported in 10% of cases of MCTD. 2 Recent studies suggest that the prevalence may be greater than previously reported, 3 4 involving the central and peripheral nervous system. The most common disorders are trigeminal neuralgia, vascular-type headache, aseptic meningitis, psychosis and convulsions. 4 5 Isolated cases of intracranial haemorrhage, cauda equina syndrome, transverse myelitis, optic neuropathy and retinal vasculitis have been reported as well. [6][7][8][9][10] Adhesive arachnoiditis is a relatively uncommon chronic pathological disorder, characterised by an inflammatory insult to the arachnoid layer of the meninges that leads to fibrosis. As a sequel, the arachnoid becomes abnormally thick and adherent to the surrounding layers of pia and dura mater. The subsequent abnormal adhesion of nerve roots to the dural sac or to each other (clumping) can produce neurological impairment. The usual symptoms of arachnoiditis are severe back pain, paraesthesia, lower limb weakness and dissociative sensory loss. Common causes are prior spinal surgery, spinal inflammation or infection such as tuberculous meningitis, trauma, haemorrhage, injection of anaesthetic agents and oil-based myelographic contrast agents. It is diagnosed on clinical grounds and supportive MRI findings. 11 12
The pathogenesis of adhesive arachnoiditis has not been fully elucidated. We present, to the best of our knowledge, the first case of adhesive arachnoiditis in an MCTD patient that resulted in myeloradiculopathic symptoms leading to significant neurological compromise. This manuscript also captures the challenges of the correct diagnosis and subsequent management of this uncommon debilitating clinical entity. CASE PRESENTATION A woman aged 33 years presented with a 2-year history of low back pain that had worsened over the last couple of months. She was a non-smoking, non-drinking professional beauty therapist who was happily married and had had two successful pregnancies with full-term normal deliveries. There was no history of rheumatological diseases in her family. She had been treated with immunosuppressants (hydroxychloroquine/azathioprine/mycophenolate mofetil), oral steroids and aspirin over the course of her disease. She had had a number of acute exacerbations requiring steroids, typified by fatigue, hair loss and arthralgia. There had been no major systemic aspects of MCTD in conjunction with the neurological symptoms in the last few years. On her recent presentation, she reported severe lower back pain radiating to the right leg, associated with pins and needles from the waist down, 2-3 episodes of faecal incontinence, poor balance, perineal and perianal numbness and globally altered sensations in both legs. She acknowledged involuntary jerking of both lower extremities at night. Clinically, she had a restricted right straight leg raise test, an absent right knee jerk and diminished bilateral ankle jerks. Sensory examination showed diminished pinprick sensation in both extremities, more pronounced on the right, extending up to the waist in a symmetrical distribution to the T10 level, with no sacral sparing. Her Romberg's test was positive. There was no abnormality in the cranial nerves, upper extremities or upper trunk. The rest of the deep tendon reflexes were well preserved, with good two-point discrimination, vibration, proprioception, pinprick and temperature sensation. Muscle bulk and tone were also preserved in the upper and lower limbs, with flexor plantar responses. On her urgent MRI of the lumbar spine, the conus medullaris and cauda equina were found to be within normal limits, without any spinal cord signal abnormality, oedema or tumour. Her immunosuppressants (azathioprine and prednisolone) were up-titrated and, given the huge impact of her symptoms limiting her activities of daily living, she received two fluoroscopy-guided caudal epidural injections 1 month apart, with partial improvement in the backache. Given the unusual nature of her symptoms, she underwent lumbar puncture, and the cerebral spinal fluid (CSF) analysis was normal for white cell count (white cells <5), glucose and proteins. She underwent right lower limb neurophysiological studies (EMG/NCS), which showed no evidence of right L4 through S1 radiculopathy or of sensorimotor polyneuropathy. On account of worsening symptoms, she had a non-contrast CT scan of the thoracolumbar spine, which documented thoracic spinal cord arachnoid calcification (figure 1) and raised concern for chronic adhesive arachnoiditis. This was further evaluated with MRI of the thoracolumbar spine, which reported abnormally distributed lower lumbar spine and thecal sac nerve roots demonstrating clumping (figures 2 and 3) and augmented the existing concern for chronic arachnoiditis. DIFFERENTIAL DIAGNOSIS Our top differential diagnosis was cauda equina syndrome.
Other possible differentials that could mimic the symptoms of our patient were spinal cord tumours, syringomyelia, complex regional pain syndrome and multiple sclerosis, which were ruled out by appropriate investigations. Failed back surgery syndrome is an important differential of adhesive arachnoiditis in the postoperative phase. TREATMENT Our patient's imaging studies were discussed with neuroradiology and, in the light of her background history of MCTD, her recent unusual neurological presentation and the results of investigations, especially the neuroimaging findings, it was decided that she likely had adhesive arachnoiditis related to her connective tissue disorder (MCTD/lupus overlap), which possibly started at the time when she initially presented with backache and had progressed since then, possibly due to scarring. While she was on azathioprine for immunomodulation, rituximab (an anti-CD20 monoclonal antibody) was added, aimed at her neurological symptoms, and it successfully stopped the progression of her symptoms. OUTCOME AND FOLLOW-UP Eighteen-month follow-up showed progressive improvement in her neurological symptoms with no recurrence. Her backache responded to analgesics. DISCUSSION In 1972, Sharp et al 13 described MCTD, an apparently distinct overlap syndrome sharing many features of SLE, scleroderma and polymyositis. As mentioned earlier, neurological manifestations are reported in 10% of cases of MCTD, 2 and recent studies suggest that the prevalence may be greater than previously reported, 3 4 involving the central and peripheral nervous system. The most common disorders are trigeminal neuralgia, vascular-type headache, aseptic meningitis, psychosis and convulsions, 4 5 and isolated cases of intracranial haemorrhage, cauda equina syndrome, transverse myelitis, optic neuropathy and retinal vasculitis have been reported as well. [6][7][8][9][10] Arachnoiditis, first described by Victor Horsley in 1909, 14 is a rare inflammatory condition characterised by thickening of the arachnoid membrane and adhesions of the dura mater that causes intractable lower back pain and various other devastating neurological complications, including cranial neuropathies, myelopathies and radiculopathies. The most common aetiological factors in the development of spinal arachnoiditis are infection, intrathecal injection of steroids or anaesthetic agents, trauma, subarachnoid haemorrhage, ionic myelographic contrast materials, multiple back surgeries and lumbar puncture. 15 16 Adhesive arachnoiditis, the most severe type of chronic arachnoiditis, results in scar tissue formation, which compresses nerve roots and disrupts both their blood supply and the normal flow of CSF. It can progress to arachnoiditis ossificans, an end-stage complication of adhesive arachnoiditis characterised by pathological ossification of the spinal arachnoid. 17 Arachnoiditis can also mimic the symptoms of other diseases, such as spinal cord tumours, cauda equina syndrome, arachnoiditis ossificans and syringomyelia. 18 MRI is the gold standard in the diagnosis of arachnoiditis; however, unenhanced CT better elucidates the presence and extent of arachnoid ossifications and is thus complementary to MRI. The pathogenesis of adhesive arachnoiditis is not clear. Burton 19 suggested it to be the end point of an inflammatory process starting with radiculitis and progressing to arachnoiditis and then adhesive arachnoiditis.
Inappropriate proliferation of arachnoid cells and the production of dense collagen deposits surrounding the nerve roots cause them to scar to the meninges. Idris et al 20 hypothesised an autoimmune-related mechanism which implicates the nervous, immune and visceral musculoskeletal systems as important adjuncts in its pathophysiology, though its association with MCTD is yet to be evaluated. As mentioned before, a number of inflammatory insults have been associated with adhesive arachnoiditis. In our patient, the underlying autoimmune connective tissue disease was the possible inciting inflammatory event, after all other possible causes had been ruled out on the basis of laboratory data, neurophysiological studies and neuroimaging. The true incidence of adhesive arachnoiditis and its relationship to MCTD remain to be accurately documented. As this is a rare disorder, there is no consensus on the standard treatment of adhesive arachnoiditis. To date, conservative approaches (medications, physical therapy, psychotherapy, epidural steroid injections) and surgical treatment options are decided on a case-by-case basis, with mixed clinical outcomes. In our case, the likely autoimmune basis of our patient's presentation guided us towards augmentation of immunosuppressive therapy and the addition of rituximab, which worked out well for our patient. Learning points ▸ Mixed connective tissue disease (MCTD), a distinct overlap syndrome, can eventually manifest symptoms more consistent with systemic lupus erythematosus (or scleroderma/polymyositis) and vice versa. ▸ Adhesive arachnoiditis could be considered in an MCTD patient presenting with myeloradiculopathic symptoms. ▸ Arachnoiditis can also mimic the symptoms of other diseases, such as cauda equina syndrome, spinal cord tumours, arachnoiditis ossificans and syringomyelia, so it is important to differentiate between arachnoiditis and other neurological manifestations of MCTD. ▸ MRI and CT scans are important diagnostic investigations in suspected adhesive arachnoiditis. ▸ Further studies will help to define the complex underlying pathophysiology of adhesive arachnoiditis, the involvement of the nervous system and the role of the immune system, as currently only limited publications are available to address this uncommon entity. Contributors MUK was the chief author of the article and undertook most of the literature review. AF was the primary treating physician of the patient and was actively involved in subsequent revision of the article. JAJD consulted on the management of the patient. Competing interests None declared. Patient consent Obtained. Provenance and peer review Not commissioned; externally peer reviewed.
2017-08-15T14:43:42.647Z
2016-12-16T00:00:00.000
{ "year": 2016, "sha1": "5926f810ff9601cb82025c442562111abc9c9300", "oa_license": "CCBYNC", "oa_url": "https://casereports.bmj.com/content/casereports/2016/bcr-2016-217418.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "5926f810ff9601cb82025c442562111abc9c9300", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236493645
pes2o/s2orc
v3-fos-license
Measuring the mass function of isolated stellar remnants with gravitational microlensing. II. Analysis of the OGLE-III data Our knowledge of the birth mass function of neutron stars and black holes is based on observations of binary systems but the binary evolution likely affects the final mass of the compact object. Gravitational microlensing allows us to detect and measure masses of isolated stellar remnants, which are nearly impossible to obtain with other techniques. Here, we analyze a sample of 4360 gravitational microlensing events detected during the third phase of the OGLE survey. We select a subsample of 87 long-timescale low-blending events. We estimate the masses of lensing objects by combining photometric data from OGLE and proper-motion information from OGLE and Gaia EDR3. We find 35 high-probability dark lenses - white dwarfs, neutron stars, and black holes - which we use to constrain the mass function of isolated stellar remnants. In the range 1-100 M_Sun, occupied by neutron stars and black holes, the remnant mass function is continuous and can be approximated as a power law with a slope of $0.83^{+0.16}_{-0.18}$, with tentative evidence against a broad gap between neutron stars and black holes. This slope is slightly flatter than the slope of the mass function of black holes detected by the gravitational wave detectors LIGO and Virgo, although both values are consistent with each other within the quoted error bars. The measured slope of the remnant mass function agrees with predictions of some population synthesis models of black hole formation. INTRODUCTION Detecting and directly measuring masses of isolated stellar remnants, especially neutron stars and black holes, is virtually impossible with traditional astrophysical methods. Our knowledge of the mass function of neutron stars and black holes is based on observations of binary systems but the binary evolution likely affects the final mass of the compact object. However, isolated neutron stars and black holes must be ubiquitous in our Galaxy. Knowledge of their mass function would give us important clues about the evolution of massive stars, core collapse and supernova mechanisms, etc. Masses of neutron stars in binary systems can be measured with precise timing observations of radio pulsars either in double neutron star or neutron star-white dwarf systems. Mass measurements are also possible for neutron stars in X-ray binaries by combining X-ray and optical observations (e.g., Özel & Freire 2016). Masses of neutron stars in double neutron-star systems peak at 1.33 ± 0.09 M_⊙, whereas those in neutron star-white dwarf binaries are more massive (typically 1.54 ± 0.23 M_⊙) (e.g., Kiziltan et al. 2013). The maximum observed mass of a neutron star is about 2.14 M_⊙ (Cromartie et al. 2020). All known stellar-mass black holes were found in binary systems - either in black hole-star (via radial velocity or in X-ray binaries) or black hole-black hole/neutron star binaries found via gravitational waves by LIGO and Virgo. The distribution of dynamical masses of black holes in X-ray binaries is consistent with a narrow Gaussian at 7.8 ± 1.2 M_⊙ (Özel et al. 2010) with an apparent absence of compact objects in the 2-5 M_⊙ range (the so-called "mass gap"; Özel et al. 2010; Farr et al. 2011). The distribution of masses of black holes in 47 compact-binary mergers from the second LIGO-Virgo Gravitational-Wave Transient Catalog (Abbott et al. 2021) is consistent with a broken power law or a power law with a Gaussian feature.
According to that study, the minimum black hole mass is lower than 6.6 M_⊙ (with 90% credibility). Belczynski et al. (2012) and Fryer et al. (2012) proposed that the mass gap may be caused by the supernova explosion mechanism that should be driven by instabilities with a rapid growth time. This hinders formation of compact objects with intermediate masses. However, some "mass-gap" objects may still be formed, for example, from mergers of neutron stars and white dwarfs. Indeed, recent discoveries indicate that objects with masses intermediate between those of neutron stars and black holes do exist. The product of the binary neutron star merger in GW170817 has a mass of $2.74^{+0.04}_{-0.01}$ M_⊙ (Abbott et al. 2017). LIGO and Virgo have also detected a coalescence of a massive black hole with a 2.50-2.67 M_⊙ "mass-gap" object in gravitational-wave signal GW190814 (Abbott et al. 2020). Thompson et al. (2019) and Jayasinghe et al. (2021) discovered ∼3 M_⊙ dark companions orbiting giant stars. Isolated dark stellar remnants may be detected in gravitational microlensing events (e.g., Paczyński 1996; Gould 2000; Mao et al. 2002; Bennett et al. 2002). However, lens mass measurements are possible only in special cases, when the Einstein timescale t_E, the microlens parallax π_E, and the relative lens-source proper motion µ are known:

$$M = \frac{\theta_{\rm E}}{\kappa\,\pi_{\rm E}} = \frac{\mu\, t_{\rm E}}{\kappa\,\pi_{\rm E}}, \qquad (1)$$

where θ_E is the angular Einstein radius and $\kappa = 8.144\,\mathrm{mas}\,M_{\odot}^{-1}$. The values of t_E and π_E can be measured (or constrained) from the light curve of the event, whereas µ is usually unknown. However, the most probable distribution of µ can be inferred from Milky Way models, which allows us to estimate the masses of and distances to lensing objects. This method was first proposed by Wyrzykowski et al. (2016) and Wyrzykowski & Mandel (2020), who searched for stellar remnants in OGLE microlensing data. However, as we explain in Mróz & Wyrzykowski (2021), the masses of compact objects inferred by Wyrzykowski & Mandel (2020) are overestimated and their "mass-gap" and black hole events are, in fact, most likely due to main-sequence stars, white dwarfs, or neutron stars. In this paper, we re-analyze a large sample of microlensing events detected in the third phase of the OGLE survey with the main aim of searching for stellar remnant candidates and measuring their mass function. Event selection The photometric data analyzed in this paper were collected during the years 2001-2009, the third phase of the Optical Gravitational Lensing Experiment (OGLE-III) survey (Udalski 2003). We selected 91 fields with the largest number of epochs, covering an area of about 31 deg² of the Galactic bulge. The vast majority of the collected images (up to ∼2500 per field) were taken through the I-band filter, closely resembling that of the standard Cousins filter. A smaller number of exposures (1 to 35 per field) were collected in the V-band. OGLE-III used a mosaic CCD camera with a field of view of 0.34 deg² mounted on the 1.3-m Warsaw Telescope located at Las Campanas Observatory, Chile. Thanks to the small pixel scale (0.26" per pixel) and superb sky conditions (typical seeing 1-1.5"), OGLE-III could detect objects as faint as I ≈ 21 in 120-s exposures in dense regions of the Galactic bulge. Several studies used OGLE-III observations of the Galactic bulge to search for gravitational microlensing events. Over 4000 events were discovered in real-time by the OGLE Early Warning System (EWS; Udalski 2003). The system was designed for detection of ongoing microlensing events. Wyrzykowski et al.
(2015) selected a sample of 3718 standard events found in the OGLE-III data (of which 1409 had not been detected before by EWS), which they used to construct maps of the mean Einstein ring crossing time and compared them with predictions of Milky Way models. An additional 59 long-timescale events exhibiting an annual microlens parallax effect were selected by Wyrzykowski et al. (2016), who searched for stellar remnant (white dwarf, neutron star, and black hole) candidates. In this paper, we analyze 3620 "class A" microlensing events detected by Wyrzykowski et al. (2015, 2016). In addition, we run the event finder algorithm of Mróz et al. (2017) on OGLE-III data and find an extra 740 events. Thus, our final sample comprises 4360 events. To select stellar remnant candidates, we apply several selection cuts (a minimal code sketch of these cuts is given below). First, we expect that microlensing events due to black holes have relatively long timescales because t_E ∝ √M. Long-timescale events are likely to exhibit light curve deviations caused by the orbital motion of Earth (the so-called annual microlens parallax effect). Even if the amplitude of the effect is too small to be reliably measured from the light curve, its value may be tightly constrained by the light curve data, which also provides useful information. Second, we use the proper motion of the source to infer the lens properties and so we select events for which the majority of the light comes from the source star (so that the proper motion of the source can be approximated as the proper motion of the baseline object, which we measure from the archival OGLE data). This is quantified by the dimensionless blending parameter f_s, which is the ratio of the source flux to the total unlensed flux of the event. In the first step, we fit all 4360 light curves with a standard point-source point-lens microlensing model with parallax. There may be up to four possible solutions describing every light curve due to inherent degeneracies (e.g., Smith et al. 2003; Gould 2004; Skowron et al. 2011). Then, we select events with at least one solution with t_E ≥ 60 d and f_s ≥ 0.8 and remove binary-lens or binary-source events, as well as events with incomplete, poorly-sampled, or low-amplitude light curves. We end up with 87 events with timescales between 60 and 300 d. In this timescale range, the detection efficiency is virtually constant. We extract the optimized light curves of selected events with the difference image analysis method (Alard & Lupton 1998; Woźniak 2000). Proper motions Out of 87 long-timescale low-blending events in our sample, proper motions of only 46 are available in the Gaia Early Data Release 3 (EDR3; Gaia Collaboration et al. 2016, 2021). However, it is known that the completeness of Gaia EDR3 is reduced in crowded areas such as the Galactic bulge (Gaia Collaboration et al. 2021; Fabricius et al. 2021). Some sources in crowded regions may have spurious astrometric solutions and their proper-motion measurements may suffer from catastrophic errors (e.g., Hirao et al. 2020; Mróz & Wyrzykowski 2021). Precise measurements of proper motions are also possible with long-term ground-based observations. We use proper-motion measurements calculated using observations collected by the fourth phase of the OGLE survey (OGLE-IV; 2010-2020; Udalski et al. 2015). Positions of stars are measured on individual frames using the astrometric OGLE pipeline and are tied to the Gaia EDR3 reference frame.
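As referenced above, the candidate selection can be illustrated with a hedged sketch (the record fields and structure below are hypothetical, not the authors' actual pipeline):

```python
# Keep events with at least one parallax solution satisfying t_E >= 60 d and
# blending f_s >= 0.8; reject binaries and poor light curves. The 60-300 d
# range of the final sample emerges from the data rather than an upper cut.
def passes_selection(event):
    if event["is_binary"] or not event["light_curve_ok"]:
        return False
    return any(sol["t_E"] >= 60.0 and sol["f_s"] >= 0.8
               for sol in event["solutions"])

example = {"is_binary": False, "light_curve_ok": True,
           "solutions": [{"t_E": 95.0, "f_s": 0.93}]}
print(passes_selection(example))  # True
```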
A detailed description of the OGLE Uranus astrometry project will be published elsewhere (Udalski et al. 2021, in preparation). OGLE proper motions are available for 68 events, 39 of which are common with Gaia EDR3. Figure 1 presents the comparison between OGLE and Gaia proper motions of these common stars, which agree well. They are listed in Table 1. In the following analysis, we use OGLE proper motions for 68 events. If OGLE measurements are not available and the Gaia EDR3 astrometric solution has a renormalized unit weight error (RUWE; Lindegren et al. 2021) smaller than 1.4, we use Gaia (6 events). For the remaining 13 events, we assume that their proper motion is consistent with that of Galactic bulge stars, (µ_l, µ_b) = (−6.12, −0.19) ± 2.64 mas yr⁻¹. This proper motion corresponds to the velocity of the Sun relative to the Milky Way center (Schönrich et al. 2010) as seen from the distance of 8 kpc, and the uncertainty corresponds to the typical velocity dispersion in the Galactic bulge (100 km s⁻¹). METHODS A detailed description of how to estimate the lens mass given the event light curve and the source proper motion is presented by Mróz & Wyrzykowski (2021). We estimate the masses of lenses using Equation 1. The values of t_E and π_E are measured (or constrained) from the light curve model, whereas µ is unknown - its value can only be constrained using prior information from the Milky Way model. Note that µ = |µ| = |µ_lens − µ_source|, where µ_lens and µ_source are the proper motions of the lens (which is unknown) and the source (which may be measured by OGLE or Gaia), respectively. Moreover, if the microlens parallax is detected in the light curve of the event, the direction of µ (∝ π_E/π_E) is also known. Our event models have eight parameters. Five of them are "standard" point-lens point-source microlensing parameters that describe the shape of the light curve. These are: the time t_0 and separation u_0 (in Einstein radius units) during the closest lens-source approach, the effective timescale of the event t_eff = t_E|u_0|, and the North and East components of the microlens parallax vector, π_E,N and π_E,E. Two parameters (µ_s,N and µ_s,E) describe the North and East components of the source proper motion vector (relative to the solar system barycenter) and are measured by either OGLE or Gaia. The final parameter is the relative lens-source proper motion µ, whose value is constrained only by the Milky Way model. Here, we assume that the source is located at a distance of 8 kpc in the Galactic bulge, we use the Milky Way model from Mróz & Wyrzykowski (2021), and we use the Kroupa mass function as our priors. As discussed by Mróz & Wyrzykowski (2021), the choice of these priors has little effect on the inferred lens mass and distance. In particular, we opt not to use Gaia parallaxes to estimate source distances as they are not accurate enough to provide meaningful constraints. Model parameters (except µ_s,N and µ_s,E) are measured in a geocentric frame that is moving with a velocity equal to that of the Earth at a fiducial time t_0,par. Every light curve may have up to four degenerate solutions (differing by the signs of u_0, π_E,N, and π_E,E). Moreover, the lens may be located either in the Galactic disk or in the bulge, so the distribution of µ may be bimodal (as shown in Figure 1 of Mróz & Wyrzykowski 2021).
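For concreteness, the mass relation of Equation 1 can be sketched as a toy calculation (this is an illustration, not the authors' code; units follow the text, with t_E in days and µ in mas yr⁻¹):

```python
KAPPA = 8.144  # mas per solar mass

def lens_mass(t_E_days, pi_E, mu_mas_yr):
    """Lens mass in M_Sun from Equation 1: M = theta_E / (kappa * pi_E),
    with theta_E = mu * t_E the angular Einstein radius in mas."""
    theta_E = mu_mas_yr * t_E_days / 365.25
    return theta_E / (KAPPA * pi_E)

# Example: a t_E = 100 d event with pi_E = 0.05 and mu = 5 mas/yr
print(lens_mass(100.0, 0.05, 5.0))  # ~3.4 M_Sun
```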
To handle possible multiple solutions, we derive posterior probability distributions with the nested sampling Monte Carlo algorithm MLFriends (Buchner 2019) using UltraNest (Buchner 2021). In nested sampling algorithms, the entire eight-dimensional parameter space is filled with a set of live points taken from the prior distributions (in the present case, we use uniform priors for all parameters but µ_s,N and µ_s,E, which are taken from Gaussian distributions). Then the live point with the lowest likelihood is removed from the set and replaced with a new one on the condition that its likelihood is larger than the likelihood of the removed point, so that the volume sampled by the live points shrinks at every iteration. The removed points are weighted by their likelihood, stored, and then used to generate the posterior distribution for all parameters. We run the sampler with a minimum of 1000 live points throughout the run and terminate the integration when the sum of weights of live points is smaller than 0.05 (frac_remain) of the sum of weights of accepted points. The main advantage of our approach is that we can simultaneously explore all possible solutions. Standard Markov chain Monte Carlo samplers (for example, emcee by Foreman-Mackey et al. (2013)) may not work well if the posterior is multi-modal. To test our algorithm, we derived posterior distributions of masses and distances to all lenses from our sample with emcee and the results were very similar for 83 of 87 events. For the remaining four events, we re-ran UltraNest with a larger number of 2000 live points, set frac_remain to 0.01, and obtained virtually identical posterior distributions of parameters as in our initial models. In all four cases, emcee did not properly sample the multi-modal posterior. For every analyzed event we derive a posterior distribution in the mass-distance space (Table 2). We use the empirical mass-absolute brightness relations for main-sequence stars (Pecaut & Mamajek 2013) and the interstellar extinction maps of Nataf et al. (2013) to derive the expected distribution of the I-band brightness of the lens. The extinction varies with distance - we assume that the extinction is proportional to the integrated density of interstellar material along the line of sight following the model of Sharma et al. (2011) and we normalize it to the Nataf et al. (2013) extinction maps. We compare the expected I-band brightness with the blended flux from the microlensing model. If a putative main-sequence lens is brighter than the blend, this indicates that the lens is dark. For every event, we calculate the probability p that the lens is not luminous.
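A schematic of the UltraNest setup described above, with a toy bimodal likelihood standing in for the eight-parameter microlensing model (the parameter names and the Gaussian mixture are illustrative only):

```python
import numpy as np
import ultranest

param_names = ["piEN", "piEE"]

def loglike(theta):
    # Two well-separated modes standing in for degenerate (+u0/-u0) solutions.
    d1 = np.sum((theta - 0.3) ** 2)
    d2 = np.sum((theta + 0.3) ** 2)
    return np.logaddexp(-0.5 * d1 / 0.01, -0.5 * d2 / 0.01)

def prior_transform(cube):
    return 2.0 * cube - 1.0  # uniform priors on [-1, 1] for both parameters

sampler = ultranest.ReactiveNestedSampler(param_names, loglike, prior_transform)
# Settings mirroring the text: >= 1000 live points; stop when the remaining
# weight fraction drops below 0.05.
result = sampler.run(min_num_live_points=1000, frac_remain=0.05)
samples = result["samples"]  # posterior draws covering both modes
```

Both modes are retained in `samples`, which is exactly the behaviour that a poorly tuned MCMC run can miss on a multi-modal posterior.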
For every event, we use nested sampling to obtain a set of K_n samples ω_nk which represents a random draw from the posterior distribution. Let us now assume that the mass function of lenses f_α(M) = dN/dM can be described by a set of parameters α. The likelihood function L_α for parameters α is

$$\mathcal{L}_\alpha \propto \prod_{n=1}^{N} \int p(\omega_n \,|\, d_n)\, \frac{p(\omega_n \,|\, \alpha)}{p_0(\omega_n)}\, d\omega_n, \qquad (2)$$

where p(ω_n|α) = f_α(M) p_0(ω_n)/g_0(M) (Hogg et al. 2010). This step is crucial for the inference of the mass function - we replace the fiducial mass function g_0(M) with the function we aim to model, f_α(M). Thus, the derived mass function does not depend on g_0(M). The integral in Equation (2) may be approximated as the sum over samples from the posterior:

$$\mathcal{L}_\alpha \approx \prod_{n=1}^{N} \frac{1}{K_n} \sum_{k=1}^{K_n} \frac{f_\alpha(M_{nk})}{g_0(M_{nk})}. \qquad (3)$$

We implicitly assume here that all lenses are drawn from the same mass function. This may not be true in general; for example, the remnant mass functions may be different in the Galactic disk and bulge. The sample of events analyzed here is too small to reliably separate these two populations. In our primary model, the mass function can be approximated as a histogram with B bins in log M:

$$f_\alpha(M) = \sum_{b=1}^{B} \exp(\alpha_b)\, s_b(\log M),$$

where s_b is the step function of bin b and $\sum_{b=1}^{B} \exp(\alpha_b) = 1$. Following Hogg et al. (2010), we assume a smoothness prior on α:

$$\ln p(\alpha) = -\frac{1}{2\sigma_\alpha^2} \sum_{b=1}^{B-1} (\alpha_{b+1} - \alpha_b)^2 + \mathrm{const}$$

(e.g., Kitagawa & Gersch 1996). We also consider a simpler model, in which the lens mass function can be expressed as a broken power law:

$$f_\alpha(M) = \begin{cases} a_0\, M^{-\alpha_0} & \text{for } 0.1 < M/M_\odot \le 1, \\ a_1\, M^{-\alpha_1} & \text{for } 1 < M/M_\odot \le 100, \end{cases}$$

where a_0 and a_1 are normalization constants; we assume flat priors on α_0 and α_1. In both cases, we use the Markov chain Monte Carlo sampler emcee (Foreman-Mackey et al. 2013) to derive the posterior distributions for the mass function parameters α. RESULTS AND DISCUSSION We derive posterior probability distributions for all 87 long-timescale low-blending events in our sample and then we fit the hierarchical model to derive the mass function of lenses. We approximate the mass function as a histogram with 30 bins in log M with a width of 0.1 dex each. The constraints on the mass function are presented in the upper panel of Figure 2; the shaded region represents the 68% credibility interval and the solid blue line marks the median of the posterior distribution of α. The mass function peaks around 1 M_⊙; this peak is a selection effect, however. Note that the analyzed sample of microlensing events contains only events with timescales longer than t_E = 60 d and so the mass function is biased toward larger masses (since t_E ∝ √M). For comparison, we also measure the combined mass function for events with timescales longer than t_E = 80 d. Both mass functions match well for masses greater than 1 M_⊙ but the latter contains fewer low-mass lenses. Thus, our combined distribution reflects the real mass function of lenses for M ≳ 1 M_⊙ but the peak and the turnover at lower masses are just a selection effect. When we fit a broken power-law model, we find the mass function slopes $\alpha_0 = -0.80^{+0.56}_{-0.73}$ for 0.1 < M < 1 M_⊙ and $\alpha_1 = 1.34^{+0.15}_{-0.12}$ for 1 < M < 100 M_⊙. It is also clear from Figure 2 that for masses larger than ≈ 20 M_⊙, the data do not have enough constraining power and the allowed credible region is large (we can provide only upper limits on the mass function in that mass range). We then select high-probability stellar remnants. In our sample, there are 35 events with a probability that the lens is dark p > 0.95, 27 events with p > 0.98, and 21 events with p > 0.99. Our constraints on the mass function of high-probability dark lenses are presented in the lower panel of Figure 2.
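The importance-sampling likelihood of Equations (2)-(3) reduces to a few lines of code; the sketch below assumes per-event posterior mass samples drawn under the fiducial prior (the function names are placeholders, not the authors' implementation):

```python
import numpy as np

def log_likelihood(alpha, mass_samples, f_alpha, g0):
    """Equation (3): product over events of the mean posterior-sample weight
    f_alpha(M) / g0(M). mass_samples is a list of arrays (one per event);
    f_alpha(M, alpha) is the trial mass function, g0(M) the fiducial one."""
    logL = 0.0
    for M in mass_samples:
        w = f_alpha(M, alpha) / g0(M)  # reweight each posterior sample M_nk
        logL += np.log(np.mean(w))     # (1/K_n) * sum_k of the weights
    return logL
```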
Among 35 events with p > 0.95, 25 objects have their proper motions measured either from OGLE or Gaia; the proper motions of the remaining 10 objects are not constrained. We checked, however, that the combined mass function of the 25 events with known proper motions is very similar to that presented in Figure 2. The sample of high-probability stellar remnants contains mostly faint events, which explains the lack of OGLE/Gaia proper motions. The shape of the mass function of high-probability dark lenses does not resemble that of the mass function of the entire sample. We measure power-law slopes of $\alpha_0 = -0.51^{+0.95}_{-1.64}$ for 0.1 < M < 1 M_⊙ and $\alpha_1 = 0.83^{+0.16}_{-0.18}$ for 1 < M < 100 M_⊙. When we restrict the sample to events with proper motions measured by either OGLE or Gaia, we find $\alpha_0 = -1.50^{+1.52}_{-2.07}$ and $\alpha_1 = 0.92^{+0.22}_{-0.20}$, respectively. The mass function slope in the range 1 < M < 100 M_⊙ is slightly flatter than the slope of the black hole mass function ($1.58^{+0.82}_{-0.86}$) inferred from the second LIGO-Virgo Gravitational-Wave Transient Catalog (Abbott et al. 2021) (their broken power-law model), although both values are consistent with each other within the quoted error bars. We can also compare the measured slope with the theoretical predictions based on population synthesis calculations by Olejak et al. (2020) using the StarTrack code (Belczynski et al. 2002, 2008). They consider isolated black holes that are formed as a result of single star evolution, disruptions of binary star systems, or mergers of compact objects. Olejak et al. (2020) provide synthetic catalogs of black holes separately for the Galactic disk and bulge. We fit a power-law model to their simulated data in the range 6-20 M_⊙ and find slopes of 0.9 and 2.2 for the Galactic disk and bulge populations, respectively. The former is consistent with our findings but the latter slope is steeper. Our sample contains objects located both in the Galactic disk and bulge but its size is too small to reliably separate these two populations. In the range 3-10 M_⊙, occupied by "mass-gap" objects and black holes, our remnant mass function is continuous with no evidence for a gap between neutron stars and black holes. This result should be treated with caution. Masses of individual lenses have relatively large uncertainties (typically 0.3-0.5 dex in log M), so one can argue that we cannot detect a narrow feature (gap) in the remnant mass function. We thus run simulations in which we assume a log-uniform mass function in the ranges 1-2 M_⊙ and 6-30 M_⊙. We draw a sample of 35 objects from that fiducial mass function and assign each mass measurement an uncertainty of 0.05, 0.3, or 0.5 dex. We then use the hierarchical Bayesian modeling to infer the mass function based on the simulated data. Results of our simulations are presented in Figure 3. We are able to recover the gap in all cases, although the shape of the mass function becomes more blurry and the credible intervals become larger as the uncertainties increase. Nonetheless, we find that the shape of the mass function is much better constrained if the sample of simulated events is larger. We thus plan to analyze a larger sample of microlensing events detected during the OGLE-IV phase, which contains four times more events than the current sample. This will enable us to provide stronger constraints on the mass function in the "mass gap" regime.
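The mass-gap recovery simulation described above can be mimicked with a short snippet (a sketch under the stated assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_gapped_masses(n=35, sigma_dex=0.3):
    """Draw n masses from a log-uniform mass function over 1-2 and 6-30 M_Sun
    (i.e., with a 2-6 M_Sun gap) and scatter log M by sigma_dex."""
    lo, hi = np.log10([1.0, 6.0]), np.log10([2.0, 30.0])
    widths = hi - lo
    # Pick an interval with probability proportional to its log-width.
    interval = rng.choice(2, size=n, p=widths / widths.sum())
    log_m = rng.uniform(lo[interval], hi[interval])
    return 10.0 ** (log_m + rng.normal(0.0, sigma_dex, size=n))

print(np.sort(draw_gapped_masses()))
```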
We cannot exclude that the mass gap is only partly filled with objects that form from mergers of binary neutron stars (e.g., Abbott et al. 2017). (Figure 3 caption: We run simulations to check if we can recover a mass gap (2-6 M_⊙) in the log-uniform mass function (upper left panel) of remnants. We draw 35 events from our fiducial mass function and assign each mass measurement the uncertainty of 0.05, 0.3, and 0.5 dex. We use our hierarchical modeling to infer the mass function using the simulated data. We are able to recover the gap in all cases.) Another source of contamination may be close binary systems of compact objects - if the orbital separation is much smaller than the size of the Einstein ring (typically a few au), such a system can be regarded as effectively a single lens for microlensing. The contamination from close binary main-sequence stars is less likely. We re-computed the dark companion probability assuming an equal-mass binary lens instead of a single star lens; this probability is higher than 88% for all objects classified as high-probability remnants (higher than 95% for 31/35 events). The main limitation of our work is the assumption that remnants and normal stars share the same velocity distribution. However, neutron stars may receive large natal kicks at birth (Hobbs et al. 2005), while there is no agreement about natal kicks of black holes (e.g., Callister et al. 2020 and references therein). If the proper motion of the lens is high enough, the Einstein timescale may be shorter than our threshold of 60 days and the event is not included in our sample. Moreover, large natal kicks may affect the determination of the lens mass, as discussed in more detail by Mróz & Wyrzykowski (2021). The amplitude of the effect depends on the geometry of individual events, the location of the lens, as well as the poorly known distribution of kick velocities. In the future, thanks to advances in precise astrometry and interferometry, it may be possible to directly measure masses (as well as velocities) of individual isolated stellar remnants. Direct mass measurements for many events will become possible thanks to precise astrometric observations by the Gaia satellite (Rybicki et al. 2018) and its planned successors (Hobbs et al. 2021). A new path for measuring masses of isolated objects is opened up by the first resolution of microlensed images by the GRAVITY interferometer (Dong et al. 2019). Although interferometric observations are currently possible only for the brightest events, the planned upgrades to the GRAVITY instrument will enable observations of dozens of fainter events. Further in the future, the planned Nancy Grace Roman Space Telescope is expected to detect hundreds of microlensing events by isolated black holes (Penny et al. 2019). Roman will provide both precise photometry and astrometry, enabling us to directly measure the mass function of isolated stellar remnants.
2021-07-30T01:16:04.575Z
2021-07-29T00:00:00.000
{ "year": 2021, "sha1": "c21e8cac3219417e50e54b46214ffc23e14633c1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c21e8cac3219417e50e54b46214ffc23e14633c1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
4443586
pes2o/s2orc
v3-fos-license
Complex multi-enhancer contacts captured by Genome Architecture Mapping (GAM) Summary The organization of the genome in the nucleus and the interactions of genes with their regulatory elements are key features of transcriptional control and their disruption can cause disease. We developed a novel genome-wide method, Genome Architecture Mapping (GAM), for measuring chromatin contacts, and other features of three-dimensional chromatin topology, based on sequencing DNA from a large collection of thin nuclear sections. We apply GAM to mouse embryonic stem cells and identify an enrichment for specific interactions between active genes and enhancers across very large genomic distances, using a mathematical model 'SLICE' (Statistical Inference of Co-segregation). GAM also reveals an abundance of three-way contacts genome-wide, especially between regions that are highly transcribed or contain super-enhancers, highlighting a previously inaccessible complexity in genome architecture and a major role for gene-expression specific contacts in organizing the genome in mammalian nuclei. Principle of the method GAM applies a concept previously used for linear genomic distance mapping 29 to measure 3D distances by combining ultrathin cryosectioning with laser microdissection and DNA sequencing. By determining the presence or absence of all genomic loci in a set of single slices collected at random orientations from a population of nuclei, GAM infers parameters of chromatin spatial organization, including genome-wide chromatin contact frequencies, radial distributions and chromatin compaction. Structurally preserved, fixed cells embedded in sucrose and frozen 30,31 are thinly cryosectioned, before isolating single nuclear profiles by laser microdissection. The DNA content of each nuclear profile is extracted, amplified and sequenced. Loci that are closer to each other in the nuclear space (but not necessarily on the linear genome) are detected in the same nuclear profile more often than distant loci (Fig. 1a, b). The co-segregation of all possible pairs of loci among a large collection of nuclear profiles sliced at random orientations is used to create a matrix of inferred locus proximities, allowing the calculation of chromatin contacts genome-wide (Fig. 1c, d). We applied GAM to mouse embryonic stem (mES) cells, where abundant data are available relating to chromatin contacts and
chromatin occupancy at enhancers and promoters 5,26,32-35 . Mouse ES cells were fixed in optimal conditions, and cryosectioned at a thickness of approximately 0.22 μm 30,31,36,37 . Each nuclear profile was isolated into a single PCR tube by laser microdissection (Extended Data Fig. 2a-c). The DNA content of each nuclear profile was extracted, fragmented, amplified using single-cell whole genome amplification (WGA) 38 , and sequenced using Illumina technology (Extended Data Fig. 2d, e). UCSC Genome Browser tracks of mapped reads from single nuclear profiles show that each nuclear profile contains a different complement of chromosomes and sub-chromosomal regions, as expected from chromatin passing in and out of each thin nuclear slice (Extended Data Fig. 2e). Efficiency of locus detection with GAM To map chromatin contacts genome-wide using GAM, we collected 471 nuclear profiles from mES cells at a mean sequencing depth of 1.1 million reads per nuclear profile (Supplementary Table 1). We selected 408 high-quality nuclear profiles (the mES-400 dataset) based on a combination of criteria (see Methods and Extended Data Fig. 3a). As the resolution of GAM is not fixed but directly dependent on the sequencing depth and on the number of nuclear profiles sequenced, we first estimated the optimal number of reads required to detect most windows. We find that 400,000 uniquely mapped reads per nuclear profile are required to detect > 80% of positive windows at 30 kb resolution (Extended Data Fig. 3b). To explore the genome coverage attained in the mES-400 dataset, we computed the detection of 30-kb windows genome-wide. Single nuclear profiles contained an average of 6 ± 4% (s.d.) of all 30-kb windows, as expected from the average proportion of the mES cell nuclear volume contained in each nuclear profile (Extended Data Fig. 3c, d; Supplementary Note 1). The equal detection of mouse chromosomes known to occupy different preferred radial positions in mES cells 35 is consistent with random collection of nuclear profiles (Extended Data Fig. 3e). Finally, comparisons with FISH confirm efficient detection of regions of around 40 kb in GAM (Extended Data Fig. 3f-h). To consider variations in window detection, we tested different normalization approaches and found that the normalized linkage disequilibrium 39 best reduced bias due to window detection frequency, GC content and mappability (Extended Data Fig. 4a, b). We find that normalized GAM matrices show fewer biases than Hi-C matrices corrected with ICE (iterative correction and eigenvector decomposition 23 ; Extended Data Fig. 4c). To test further the suitability of the mES-400 dataset to study chromatin contacts at 30 kb resolution, we measured its reproducibility by erosion and found that most contact information is already obtained with 272 nuclear profiles (correlation coefficient is 0.77, which rises to 0.89 for contacts within 3 Mb; Extended Data Fig. 5). Mapping chromatin contacts using GAM Before investigating in detail the properties of chromatin contacts detected by GAM, we tested whether the mES-400 dataset captures general features of chromatin architecture previously identified by 3C-based approaches, in particular the detection of compartments A and B (ref. 6) and of topologically associating domains (TADs 3,5 ; Fig. 2; Extended Data Figs 6, 7a-c).
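The core GAM counting step, scoring window detection and pairwise co-segregation across nuclear profiles as described above, can be sketched as follows (the array shapes are illustrative; this is not the published pipeline):

```python
import numpy as np

def cosegregation(detected):
    """detected: boolean array of shape (n_profiles, n_windows), True where a
    genomic window was called positive in a nuclear profile."""
    d = detected.astype(float)
    n_profiles = d.shape[0]
    f = d.mean(axis=0)             # detection frequency of each window
    f_ab = (d.T @ d) / n_profiles  # co-segregation frequency of window pairs
    return f, f_ab
```

Pairs of windows that lie close in the nucleus share nuclear profiles (co-segregate) more often than expected from their individual detection frequencies, which is the signal GAM exploits.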
GAM and Hi-C contact matrices are highly correlated across whole chromosomes at 1 Mb genomic resolution (0.63 Spearman's rank correlation coefficient; range 0.43 to 0.71 for individual chromosomes). Earlier Hi-C studies have used principal component analysis (PCA) to classify all genomic loci into two compartments, A and B, based on their contact preference 6 . Compartments detected by PCA in the GAM dataset overlap significantly with Hi-C-derived compartments (Fisher's exact test, P < 1 × 10⁻¹⁵; Fig. 2a), with 65% of 1-Mb windows being assigned to the same compartment, rising to 75% for the 50% of windows with the strongest compartmentalization. GAM contact matrices also independently confirm the existence of TADs 3,5 (Fig. 2b; Extended Data Fig. 7d-f). Identifying prominent interactions Chromatin is in constant local motion in the cell nucleus, and adopts different conformations both across the cell population and over time. Maps of 3D genome proximity not only measure specific physical interactions but also random contacts, which are heavily dependent on linear genomic distance. A unique feature of GAM is that the detection of genomic windows is independent of their interaction with other regions. Thus, the 'background' co-segregation frequency expected for non-interacting loci can be directly quantified across the genome for each genomic distance. We developed SLICE, a general mathematical model that identifies the interactions most likely to be specific (that is, non-random according to genomic distance) from GAM co-segregation data. SLICE calculates a 'probability of interaction' (P_i), which is an estimate of the proportion of specific interactions for each pair of loci at a given time across the cell population (Fig. 3a). SLICE is fully described in Supplementary Note 1. (Figure 1 caption, panels b-d: b, Physically proximal loci are found more frequently in the same thin nuclear section (NP, nuclear profile) than distant loci. c, Loci present in each nuclear profile are identified. d, Locus co-segregation scored in a large collection of nuclear profiles is used to infer preferred contacts, radial position and compaction of each locus.) To identify the most specific interactions in the mES-400 dataset, we applied the SLICE model genome-wide (Supplementary Note 2). For further analyses, we considered only 'prominently interacting' locus pairs that had a larger than expected P_i at a threshold of P ≤ 0.05, corresponding to locus pairs that most often co-segregate in the same slice (Extended Data Fig. 8a). As expected, P_i matrices are sparser than those of GAM co-segregation (Fig. 3b) or Hi-C ligation frequency (Extended Data Fig. 8b). These prominent chromatin contacts are therefore the best candidates to denote the bases of chromatin loops formed by specific interactions at each genomic distance. To study the influence of gene expression state on chromatin interactions, we classified each genomic 30 kb window according to its expression level in mES cells 32 and the presence of putative mES cell enhancers 34 . We found many interactions involving enhancer regions, active genes (FPKM > 1) or inactive genes (FPKM < 0.01; Extended Data Fig. 8c). The number of interactions decreases with genomic distance, as expected, but spans many tens of Mb. For example, of 4.5 million interactions involving active genes, 3.0 million span less than 60 Mb, while 1.5 million span greater than 60 Mb (Extended Data Fig. 8d).
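For the compartment call mentioned above, a standard Hi-C-style recipe takes the sign of the first principal component of the contact correlation matrix; the sketch below follows that generic approach, not necessarily the authors' exact pipeline:

```python
import numpy as np

def call_compartments(contact_matrix):
    """Assign A/B labels from the leading eigenvector of the correlation of a
    (normalized) 1-Mb contact matrix. The overall sign of the eigenvector is
    arbitrary and is conventionally fixed with an external mark such as GC
    content or gene density."""
    corr = np.nan_to_num(np.corrcoef(contact_matrix))
    eigvals, eigvecs = np.linalg.eigh(corr)
    pc1 = eigvecs[:, np.argmax(eigvals)]  # leading eigenvector
    return np.where(pc1 >= 0, "A", "B")
```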
Next, we tested whether active genes, enhancers or inactive genes interact with each other more or less frequently than expected given their genomic distribution. Notably, we find an over-representation of interactions connecting active genes and enhancers (permutation test, P < 0.002; a simplified sketch of such a test is given below), whereas inactive genes interact no more frequently than expected by chance (Fig. 3c). Further analyses verified that interactions within and between active genes and enhancer regions are a robust feature of the mES-400 dataset (Extended Data Fig. 8e-i). To examine more closely the nature of the interactions between 30-kb windows containing active genes or enhancers, we asked whether they are positioned specifically over the enhancer and/or gene of interest at 5 kb resolution (Fig. 3d). Interacting active or enhancer windows preferentially contact 5-kb windows containing the enhancer compared with 5-kb windows that lie 15 kb upstream or downstream (Extended Data Fig. 8j; paired t-test, P < 10⁻¹⁰). Preferred contacts are also seen between transcription start sites (P < 10⁻¹⁰) or transcription end sites (P < 2 × 10⁻³) and interacting enhancer or active 30-kb windows, but not for non-interacting enhancer windows (P > 0.4, Fig. 3e). Therefore, GAM identifies preferred contacts of enhancer-containing genomic loci not only with gene promoters, but also through the region downstream of the polyadenylation site traversed by RNA polymerase II before termination 40 . Taken together, these results identify interactions between regulatory sequences and active genes as major organizers of chromatin conformation. The specificity of contact detection with GAM suggests that it will be a powerful method for dissecting the cascade of enhancer-dependent events that accompany the transcription cycle 41 and the role of specific SNPs and other genomic variants in genome folding and misregulated gene expression. Detecting interacting triplets GAM can capture many additional aspects of chromatin spatial organization genome-wide, such as the radial distributions of chromosomes and sub-chromosomal regions (Extended Data Fig. 9a, b), chromatin compaction (Extended Data Fig. 9c, d) and multivalent chromatin interactions involving three or more genomic regions. We were particularly interested in exploring whether the mES-400 dataset already held enough information to reveal multivalent interactions. Detailed analyses of GAM statistics (Supplementary Notes 1 and 2) indicate that the current mES-400 dataset allows detection of triplet contacts at the resolution of hundreds of kilobases, which corresponds to the chromatin organization level of TADs. To distinguish true, simultaneous triplet interactions between TADs from the superposition of independent pairwise events that do not occur in the same cell (Fig. 4a), we extended SLICE to consider triplets and calculated a triplet score that reflects the likelihood of simultaneous, triplet interactions for each possible combination of three TADs. We further select the most likely TAD triplets by retaining only the 2% highest scoring (approximately 101,000 'top TAD triplets'; Supplementary Table 2). To assess the properties of the top TAD triplets, we classified TADs across the whole genome according to the presence of super-enhancers 34 . TADs that did not contain super-enhancers were classified in three additional categories according to their level of transcription using published GRO-seq data 33 (low-, medium- or highly transcribed; Extended Data Fig. 10b).
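As referenced above, the enrichment test can be approximated with a simple label permutation. This is a simplified sketch: shuffling window labels ignores the genomic-distance structure that the published test controls for, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(pairs, labels, cls_a, cls_b, n_perm=1999):
    """pairs: (n_pairs, 2) indices of interacting windows; labels: per-window
    class labels (e.g. 'active', 'enhancer', 'inactive')."""
    def count(lab):
        a, b = lab[pairs[:, 0]], lab[pairs[:, 1]]
        return np.sum(((a == cls_a) & (b == cls_b)) |
                      ((a == cls_b) & (b == cls_a)))
    observed = count(labels)
    null = np.array([count(rng.permutation(labels)) for _ in range(n_perm)])
    return (1 + np.sum(null >= observed)) / (1 + n_perm)
```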
Remarkably, the set of top triplets spans a large range of genomic distances and contains TADs in all four categories; for example, of around 25,000 triplets involving super-enhancers, 81% span between 30 and 116 Mb (Extended Data Fig. 10c). We found that the top TAD triplets are significantly enriched for contacts that connect three super-enhancer-containing TADs (Fig. 4c; Extended Data Fig. 10d), but not for triplets involving TADs that contain only typical enhancers (Extended Data Fig. 10e). Notably, the top TAD triplets are also enriched for contacts formed between highly transcribed TADs, or combinations of super-enhancer-containing TADs and highly transcribed TADs, consistent with previous observations that active genes co-localize 42,43 and that gene-rich R bands cluster 44 in mammalian nuclei. These observations were also confirmed using subsamples of the mES-400 dataset and found not to be a trivial consequence of A/B compartmentalization (Extended Data Fig. 10f-h). These results indicate that super-enhancer-containing and highly transcribed TADs form clusters where multiple preferred partners interact simultaneously in 3D space in mES cells, expanding on previous observations of clustering of bound Sox2 (ref. 45) and of pairwise contacts between super-enhancers detected by Hi-C 46 . Furthermore, we considered whether triplet associations between super-enhancer-containing TADs might be driven by the super-enhancers themselves and found that super-enhancer-containing 40-kb windows co-segregate more frequently with the two other super-enhancer-containing TADs in their triplet than 40-kb windows located 120 kb upstream or downstream (paired t-test, P < 10⁻⁶; Extended Data Fig. 10i, j). Next, we explored the role of the nuclear lamina in constraining triplet interactions, by scoring TAD proximity to lamina-associated domains 27 (LADs; Supplementary Table 3). Super-enhancer-containing and highly transcribed TADs that overlap or are close to LADs are involved in fewer triplet interactions (Extended Data Fig. 10k, l), indicating that TAD proximity to the nuclear lamina might restrict their access for interaction with more central enhancer clusters. To investigate the contribution of complex contacts between multiple genomic regions more globally, we calculated the genome-wide co-segregation probabilities between all window pairs or triplets in the GAM data. The scaling of these probabilities with genomic distance is not consistent with a polymer model that lacks specific interactions (the self-avoiding walk model). By contrast, we found that the observed scaling is consistent across large genomic distances with a polymer model that considers pair and triplet contacts as abundant features of chromatin folding (the strings and binders switch (SBS) model; Extended Data Fig. 11a, b), in agreement with recent simulations of specific DNA loci 47 . To explore the spatial conformation of super-enhancer-containing TAD interactions by an independent approach, we performed cryo-FISH experiments on two sets of four TADs, spanning 15 and 29 Mb respectively. Each set includes three super-enhancer-containing TADs (SE1/SE2/SE3 or SE4/SE5/SE6) and one non-interacting low-transcribed TAD (Low1 or Low2, respectively; Fig. 5a; Supplementary Table 4). In the first region (containing Low1 and SE1-3), Low1 is not expected to interact with any super-enhancer-containing TAD, whereas in the second region (containing Low2 and SE4-6), Low2 is predicted to have a pairwise interaction with SE4 (P_i = 0.12) but not with SE5 or SE6 (Fig. 5a).
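At its simplest, the triplet analysis described above starts from three-way co-segregation counts like the sketch below; the actual SLICE triplet score additionally models the expected random three-way co-segregation at each genomic distance, which this sketch omits:

```python
import numpy as np
from itertools import combinations

def triplet_cosegregation(detected, tads):
    """Fraction of nuclear profiles in which all three TADs are detected.
    detected: (n_profiles, n_tads) boolean matrix; tads: three column indices."""
    return detected[:, list(tads)].all(axis=1).mean()

def all_triplet_counts(detected):
    """Raw three-way co-segregation for every combination of three TADs."""
    n_tads = detected.shape[1]
    return {t: triplet_cosegregation(detected, t)
            for t in combinations(range(n_tads), 3)}
```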
Contact frequencies measured for six interacting and two non-interacting pairwise combinations of TADs (Fig. 5b, c) show that TADs predicted to interact by GAM contact each other more frequently by FISH (18-74%) than non-interacting TADs (8-9%). The > 50% interaction between SE4 and SE5 is particularly notable, owing to their linear separation of 19 Mb. We also measured the median physical distance between TADs (Extended Data Fig. 11c, d; Supplementary Table 5) and found interacting TAD pairs at shorter physical distances than non-interacting TADs. Finally, three-colour FISH for SE1, SE2 and SE3 identifies examples of triplet TAD clustering in the same cell (Fig. 5d). Discussion GAM is a novel, ligation-free method for capturing chromatin contacts in an unbiased manner, independent of FISH and conformation-capture technologies. Using GAM, we uncovered a complex organization of the 3D structure of chromatin in mES cells, where functional genomic regions underlie specific chromatin contacts (Extended Data Fig. 11e). Especially notable is the enrichment for pairwise chromatin interactions between enhancer elements and active genes, particularly at transcription start and termination sites (Fig. 3e). The enhancer interaction pattern mirrors the average distribution of RNA polymerase II over active genes 32 , which is of particular interest in light of recent evidence for enhancer interactions that track polymerase progression through coding regions during transcription elongation 48 . Moreover, the identification of abundant three-way TAD interactions, where multiple strong enhancers and highly transcribed regions associate simultaneously in the same nucleus, reveals that regulatory elements form higher-order contacts across large genomic regions. With larger GAM datasets containing several thousand nuclear profiles and further developments of SLICE, it will become possible to extract a variety of spatial parameters to measure, at higher resolution, pairwise, triplet and higher-multiplicity contacts, locus volume and radial positioning genome-wide, and the inter-dependency of different contacts. Most importantly, GAM requires small numbers of cells and is applicable to rare cell types specifically selected by microdissection from precious tissue samples, potentially including those obtained from biopsies of individual patients. In summary, GAM is a potentially powerful new tool in the genome biologist's repertoire that substantially expands our ability to finely dissect 3D chromatin structures, rendering many previously unanswerable questions experimentally tractable in a wider range of model systems, cell types and valuable human samples. METHODS The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment. Cell culture. The mES cells used for this study were the 46C line 49 , a Sox1-GFP derivative of E14tg2a and a gift from D. Henrique. Mouse ES cell culture was carried out as previously described 50 . In brief, cells were grown at 37 °C in a 5% CO2 incubator in Glasgow Modified Eagle's Medium, supplemented with 10% fetal bovine serum, 2 ng ml⁻¹ LIF and 1 mM 2-mercaptoethanol, on 0.1% gelatin-coated dishes. Cells were passaged every other day.
After the last passage 24 h before harvesting, mES cells were re-plated in serum-free ESGRO Complete Clonal Grade medium (Millipore). Mouse ES cells were routinely tested for mycoplasma contamination. Preparation of cryosections. Ultrathin nuclear cryosections can be produced in the absence of resin embedding, by the Tokuyasu method 51 . This method preserves cellular architecture comparable to that observed in unfixed cryosections and maximizes the retention of nuclear proteins 30,31,36,37 . 46C mES cells were prepared for cryosectioning as described previously 37 . In brief, cells were fixed in 4% and 8% freshly depolymerized (EM-grade) paraformaldehyde in 250 mM HEPES-NaOH (pH 7.6; 10 min and 2 h, respectively), pelleted and embedded in saturated 2.1 M sucrose in PBS to prevent ice crystal damage before freezing in liquid nitrogen on copper stubs. Ultrathin cryosections were cut using a Leica ultracryomicrotome (UltraCut UCT 52 with EM FCS cryounit; Leica Microsystems) at approximately 220 nm thickness, captured on sucrose-PBS drops and transferred to 1-mm PEN-membrane-covered glass slides for laser microdissection (Carl Zeiss). Sucrose embedding medium was removed by washing with 0.2-μm-filtered molecular-biology grade PBS (3 × 5 min each), then with filtered ultra-pure H2O (3 × 5 min each) and dried (15 min). In a few cases, the third PBS wash was substituted for a 5 min stain with molecular-biology-grade propidium iodide (1 μg ml⁻¹ in PBS; listed in Supplementary Table 1). The fixation protocol chosen provides optimal preservation of active RNA polymerases, nuclear components (such as TATA-binding protein) and nuclear architecture, unlike other commonly used fixation protocols using lower concentrations of formaldehyde in PBS buffers 30 . Isolation of nuclear profiles. Individual nuclear profiles were isolated from cryosections by laser microdissection using a PALM Microbeam Laser microdissection microscope (Carl Zeiss). Nuclei were identified under bright-field imaging and the laser was used to cut the PEN membrane surrounding each nucleus. Cut nuclear profiles were then catapulted using the Laser Pressure Catapult into a PCR Cap Strip filled with opaque adhesive material. One well in each strip of eight was left empty and taken through the WGA process as a negative control. Five of these negative controls were also used to make sequencing libraries as negative controls, while genomic DNA isolated from E14 mES cells and amplified using WGA was used as a positive control (Extended Data Fig. 2e; Supplementary Table 1). Whole-genome amplification. Whole-genome amplification (WGA) using the WGA4 kit (Sigma) was carried out with minor modifications to the previously described protocol 38 . Water (13 μl) was added to each of the upturned PCR lids containing an isolated nuclear profile (in this and the following steps, volumes of buffer have been increased relative to the supplier's protocol in order to cover the entire inner surface of the PCR cap lid). PK mastermix (containing 8 μl proteinase K solution, 128 μl 10× single-cell lysis and fragmentation buffer) was added to each lid (1.4 μl per lid), and 1 μl of human genomic DNA was added to a single lid without a nuclear profile to act as a positive control. The lids were pressed into a 96-well PCR plate and incubated upside down at 50 °C for 4 h. After incubation, the PCR plate was left to cool at room temperature for 5 min, before it was inverted and centrifuged at 800 g for 3 min.
The plate was heat-inactivated at 99 °C for 4 min in a PCR machine and cooled on ice for 2 min. 2.9 μl 1× single-cell library preparation buffer and 1.4 μl library stabilization solution were added to each well and the plate was incubated at 95 °C for 4 min, before cooling on ice for 2 min. 1.4 μl of library preparation enzyme was added to each reaction, then the plate was incubated on a PCR machine at 16 °C for 20 min, 24 °C for 20 min, 37 °C for 20 min and finally 75 °C for 5 min. After WGA library preparation, the PCR plate was centrifuged at 800 g for 3 min. 10× amplification master mix (10.8 μl), water (69.8 μl) and WGA DNA Polymerase (7.2 μl) were added to each well and the sample was PCR amplified using the program provided by the WGA4 kit supplier. Cryosectioning and whole-genome amplification were generally carried out in a single day, but in some cases samples were stored overnight at −20 °C midway through the protocol (Supplementary Table 1), without detectable differences in DNA extraction in controlled tests of this variable. Preparation of libraries for high-throughput sequencing. WGA-amplified DNA was purified using a Qiagen MinElute PCR Purification Kit and eluted in 50 μl of the manufacturer's elution buffer. The concentration of each sample was measured by PicoGreen quantification. Sequencing libraries were then made using either the Illumina TruSeq DNA HT Sample Prep Kit or the TruSeq Nano DNA HT kit. In both cases, samples were made up to 55 μl with resuspension buffer. For the DNA HT kit, the entire yield of the WGA reaction was used as input DNA, up to a maximum of 1.1 μg, whereas for the Nano kits a maximum of 200 ng input DNA was used. Libraries were prepared according to the manufacturer's instructions. For DNA HT kits, samples were size selected to 300-500 nt using a Pippin Prep machine (Sage Science) with EtBr-free 1.5% agarose cassettes. Samples prepared with the Nano kits were size selected to 350 nucleotides using the bead-based selection protocol outlined in the kit. Library concentrations were estimated using a Qubit 2.0 fluorometer (Thermo Fisher Scientific) and libraries were pooled together in batches of 96. Each library pool was sequenced in single-end 100 bp rapid-run mode on two lanes of an Illumina HiSeq machine. Each library has 30-bp WGA adaptors at both ends, so the flow cell was not imaged for the first 30 bp of each run (these are known as 'dark cycles'). The custom run recipe was co-developed with Illumina. High-throughput sequencing data analysis. Reads were mapped to the mm9 assembly of the Mus musculus genome using Bowtie2 with default parameters. Reads that did not map uniquely (that is, had quality scores of less than 20) or were PCR duplicates were removed. Calling positive windows in GAM samples. To assess the efficiency of locus detection and the optimal resolution to study the mES-400 dataset, we divided the mouse genome into windows of equal size (ranging from 10 kb to 1 Mb) and scored window detection amongst single nuclear profiles. The mouse genome was split into equal-sized windows using bedtools 52 , and bedtools multibamcov was used to calculate the number of reads from each nuclear profile overlapping each genomic window. A combination of two distributions was fitted to the histogram of the number of reads per window. Fitting was done separately for each nuclear profile.
A negative binomial distribution represents sequencing noise, and the parameters of the fit for this distribution were used to determine a threshold number of reads X where the probability of observing more than X reads mapping to a single genomic window by chance was less than 0.001. Such a threshold was thus independently determined for each nuclear profile, and windows were scored as positive if the number of sequenced reads was greater than the determined threshold. To obtain a robust estimate of the sequencing noise, we fit a log-normal distribution (representing true signal) simultaneously with the negative binomial, although the parameters of the log-normal are not used in determining the threshold. Nuclear profile dataset quality control. In order to exclude low-quality datasets from our analysis, we measured a number of quality metrics for each sample. The percentage of mapped reads and the percentage of non-PCR duplicate reads were measured with a custom Python script. Sequencing quality metrics (mean quality score per base, the number of dinucleotide repeats and the number of single nucleotide repeats) were determined for each sample using FastQC (http://www.bioinformatics.babraham.ac.uk/projects/fastqc). Samples were checked for contamination with Fastq-screen (http://www.bioinformatics.babraham. ac.uk/projects/fastq_screen). We expect thin sections through the nucleus to contain a characteristic proportion of the whole genome, organized in clusters and not containing all autosomal chromosomes, as shown previously 37 . Therefore, we measured the total number of windows scored positive, the number of positive windows immediately adjacent to another positive window and the number of positive chromosomes for each sample. All of these quality metrics were fed into a principal components analysis and components were identified that best discriminated our five negative controls. This analysis determined that the percentage of mapped reads was the most predictive metric. Negative controls had a maximum of 2% mapped reads, so of 471 total nuclear profiles sequenced, we excluded 63 with < 15% mapped reads to implement a conservative filter, giving a final dataset of 408 nuclear profiles. Quality values for all collected nuclear profiles can be found in Supplementary Table 1. Determining optimum GAM resolution. We conducted a statistical power analysis, which confirmed that 408 nuclear profiles are sufficient to use GAM to study chromatin organization at 30 kb resolution (Supplementary Note 2). Across the whole collection of nuclear profiles, most genomic 30-kb windows (96%) are detected in at least one nuclear profile. Calculating sequencing depth saturation point. Eroded datasets were created for each nuclear profile at each target read depth from 50,000 reads to 600,000 reads in steps of 50,000 reads by randomly removing mapped reads from the table of read depth per window per nuclear profile. Positive windows were called for each dataset and samples were compared across the eroded datasets to obtain a saturation curve, where the number of positive windows identified is plotted against the number of reads remaining after erosion. To calculate the sequencing depth saturation point, we classified samples as saturated or unsaturated by calculating a coverage estimator C_n, new to next-generation sequencing datasets, by analogy to species accumulation curves in ecology 53 (Supplementary Note 3.1).
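The per-profile thresholding step described above can be sketched as follows, with a method-of-moments negative binomial fit standing in for the full two-component (negative binomial plus log-normal) fit used in the paper:

```python
import numpy as np
from scipy import stats

def call_positive_windows(read_counts, tail_p=0.001):
    """read_counts: per-window read counts for one nuclear profile. Windows
    are called positive above the count X where P(reads > X | noise) < tail_p."""
    m, v = read_counts.mean(), read_counts.var()
    p = m / v if v > m else 0.99  # NB parameters by moments (mean/var = p)
    r = m * p / (1.0 - p)
    threshold = stats.nbinom.ppf(1.0 - tail_p, r, p)
    return read_counts > threshold
```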
CryoFISH. For localization of TADs and their distance measurements (Fig. 5; Extended Data Fig. 11c, d), we performed cryoFISH as previously described 37,54 with small modifications. Ultrathin cryosections from mES cells from the 46C line were cut at around 200–220 nm thickness, captured in sucrose–PBS drops, and transferred to glass coverslips. Cryosections were first washed (2×, 30 min total) in 2× SSC, then incubated (2 h, 37 °C) with 250 μg ml−1 RNase A (Sigma; in 2× SSC), washed (2×) in 2× SSC, permeabilized (10 min) with 0.2% Triton X-100 in 2× SSC and washed (3×) in 2× SSC. Cryosections were washed and treated (10 min) with 0.1 M HCl, washed (3×) in PBS and then incubated (15 min) in 20 mM glycine in PBS. Cryosections were then dehydrated in ice-cold ethanol (30%, 50%, 70%, 90% and 3× 100%, 3 min each), dried briefly, denatured (10 min, 80 °C) in 70% deionized formamide, 2× SSC, 0.05 M phosphate buffer pH 7.0, and then re-dehydrated as above. After a brief period of drying, coverslips were overlaid onto probe mixture on Hybrislips (Invitrogen) and sealed with rubber cement for in situ hybridization. Probes consisted of MYtags custom-labelled oligonucleotide libraries produced by MYcroarray. Probe coordinates and labels are given in Supplementary Table 4. Probe libraries were precipitated, air-dried and resuspended in deionized 100% formamide according to the manufacturer's instructions. Probes in formamide were mixed 1:1 with a 2× 'hybridization mixture' containing 20% dextran sulfate, 0.1 M phosphate buffer (pH 7.0) and 4× SSC. Probes were incubated (10 min) at 70 °C before hybridization. Hybridization was carried out at 37 °C in a moist chamber for approximately 40 h. Post-hybridization washes were as follows: 50% formamide in 2× SSC (42 °C; 3× over 25 min), 0.1× SSC (60 °C, 3× over 20 min), and 0.1% Tween-20 in 4× SSC (42 °C, 10 min). Nuclei were counterstained (45 min) with DAPI in PBS with 0.05% Tween-20, rinsed sequentially in 0.05% Tween-20 in PBS and then PBS alone. Coverslips were mounted in VectaShield (Vector Laboratories) immediately before imaging. Images from cryosections were acquired on a confocal laser-scanning microscope (Leica TCS SP8; 63× objective, NA 1.4) equipped with a 405-nm diode and a white-light laser, using a pinhole equivalent to 1 Airy disk. Images from the different channels were collected sequentially to prevent fluorescence bleed-through. For automated quantitative image analyses, images (TIFF files) were merged and each channel manually thresholded in ImageJ to define masks for nuclei and for each locus. Distances between the edges of FISH signals within each nucleus were measured using a custom Python script. The median distance measured between two non-interacting TADs (d_ni) was used to estimate the non-interacting distance for a TAD pair with a different genomic separation (d_est) using an equation that assumes homogeneous distribution of the genetic material in the nucleus (see Supplementary Note 3.2). Applying this equation to the two pairs of non-interacting TADs provided us two estimates of the expected non-interacting distances at different genomic separations (the grey shaded area in Extended Data Fig. 11d indicates the range between the larger and the smaller of these estimates).
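The exact equation is given in Supplementary Note 3.2 of the paper. Under the stated assumption of homogeneously distributed genetic material, physical distance should scale with the cube root of genomic separation; the cube-root form below is our assumption for an illustrative sketch, with hypothetical numbers.

def estimate_noninteracting_distance(d_ni_nm, sep_ni_bp, sep_est_bp):
    """Extrapolate a measured non-interacting distance to a new genomic
    separation, assuming distance ~ (genomic separation)^(1/3)."""
    return d_ni_nm * (sep_est_bp / sep_ni_bp) ** (1.0 / 3.0)

# Hypothetical values: 800 nm measured at 10 Mb separation, estimate at 40 Mb.
print(estimate_noninteracting_distance(800, 10e6, 40e6))  # ~1270 nm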
For Fig. 5b, d, images (TIFF files) were merged in Adobe Photoshop and contrast-stretched.

For measurements of detection frequency of 40-kb windows (Extended Data Fig. 3f–h), cryoFISH was performed as previously described 37,54,55 in mES OS25 cells (provided by W. Bickmore) grown as previously described 32,55, and prepared for cryosectioning as described above. Fosmid probes (see Supplementary Table 4) were obtained from BACPAC Resources. The specificity of fosmid probes was confirmed by PCR using specific primers. Probes were labelled with tetramethylrhodamine-5-dUTP by nick translation (Roche), and separated from unincorporated nucleotides using MicroBioSpin P-30 chromatography columns (BioRad). Hybridization mixtures contained 50% deionized formamide (Sigma), 2× SSC, 10% dextran sulfate, 50 mM phosphate buffer (pH 7.0), 1 μg μl−1 Cot1 DNA, 2 μg μl−1 salmon sperm DNA and 2–4 μl nick-translated probe. Probes were denatured (10 min) at 70 °C and re-annealed (30 min) at 37 °C before hybridization. Post-hybridization washes were as follows: 50% formamide in 2× SSC (42 °C; 3× over 25 min), 0.1× SSC (60 °C, 3× over 30 min), and 0.1% Tween-20 in 4× SSC (42 °C, 10 min). For probe signal amplification, sections were then incubated (30 min) with casein blocking solution (pH 7.8; Vector Laboratories) containing 2.6% NaCl, 0.5% BSA and 0.1% fish skin gelatin. The signal of rhodamine-labelled probes was amplified with rabbit anti-rhodamine antibodies (2 h; 1:500; Invitrogen) and Cyanine3-conjugated donkey antibodies against rabbit IgG (1 h; 1:1,000; Jackson ImmunoResearch Laboratories). Nuclei were stained with DAPI and coverslips were mounted with VectaShield immediately before imaging. Images were acquired on a confocal laser-scanning microscope (Leica TCS SP5; 63× oil objective, NA 1.4) equipped with a 405-nm diode and a HeNe (543 nm) laser, using a pinhole equivalent to 1 Airy disk. Images from different channels were collected sequentially to prevent fluorescence bleed-through. For image display in Extended Data Fig. 4g, raw images (TIFF files) were merged in Photoshop and contrast-stretched. Detection of individual nuclear profiles and of genomic loci within each image, of nuclear profile area and of locus coordinates was performed using an in-house supervised ImageJ script.

Calculation of linkage matrices. The detection frequency (f_A) of a given locus 'A' is the number of nuclear profiles in which A is detected divided by the total number of nuclear profiles. The co-segregation (f_AB) of a pair of loci 'A' and 'B' is the number of nuclear profiles in which both A and B are detected divided by the total number of nuclear profiles. Linkage disequilibrium (D) and normalized linkage disequilibrium (D′) are calculated as previously defined 39 (Supplementary Note 3.3). In short, linkage is the co-segregation of A and B minus the product of their individual detection frequencies. The detection frequencies of two loci can differ considerably. To normalize for these differences, we use a normalized variant of the linkage disequilibrium (Supplementary Note 3.4). Heat maps of normalized linkage between all regions on the same chromosome were calculated from normalized linkage matrices L(i,j), where each entry is the normalized linkage of i and j.

Hi-C analysis. Mouse ES cell Hi-C data from ref. 5 were mapped and corrected using the iterative correction pipeline 23 and binned in either 50-kb or 1-Mb windows. Correlations between GAM and Hi-C were calculated across whole intra-chromosomal matrices.
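The linkage computation above reduces to a few matrix operations on a boolean segregation table (rows = genomic windows, columns = nuclear profiles). The sketch below follows the definitions given in the text; the normalization uses Lewontin-style bounds on D, which is our reading of the cited definition, not a copy of the authors' code.

import numpy as np

def linkage_matrix(seg):
    seg = np.asarray(seg, dtype=float)          # windows x profiles (0/1)
    f = seg.mean(axis=1)                        # detection frequency f_A
    f_ab = (seg @ seg.T) / seg.shape[1]         # co-segregation f_AB
    return f_ab - np.outer(f, f)                # linkage D = f_AB - f_A*f_B

def normalized_linkage(seg):
    seg = np.asarray(seg, dtype=float)
    f = seg.mean(axis=1)
    d = linkage_matrix(seg)
    fa, fb = np.meshgrid(f, f, indexing="ij")
    # Theoretical bounds on D given the marginal frequencies.
    d_max = np.where(d >= 0,
                     np.minimum(fa * (1 - fb), fb * (1 - fa)),
                     np.minimum(fa * fb, (1 - fa) * (1 - fb)))
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(d_max > 0, d / d_max, 0.0)   # normalized D'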
Defining A and B compartments from GAM and Hi-C datasets. We calculated A and B compartments for GAM and Hi-C according to the previously published method 6,23. Each chromosome is represented as a matrix O(i,j) where each entry records the observed interactions between locus i and locus j. We generate a new matrix E(i,j) where each entry is the mean number of contacts for all positions in matrix O with the same distance between i and j. We divide O by E to give K(i,j), a matrix of observed-over-expected values. We then calculate the final matrix C(i,j) where each position is the correlation between column i and column j of matrix K. We then perform a principal components analysis on the correlation matrix C and extract the three components that explain the most variance. Of these three components, the one with the best correlation to GC content is used to define the A and B compartments 23.

Estimation of bias in GAM/Hi-C matrices. To examine the suitability of various normalization schemes for GAM data, we sorted all 30-kb genomic windows into ten bins on the basis of their average GC content. We then calculated an observed-over-expected (OE) matrix for each chromosome (see 'Defining A and B compartments from GAM and Hi-C datasets'). For each combination of two GC-content bins, we took the mean OE values for contacts between windows in the two bins to create a heat map of mean OE values by GC content. The same approach was then repeated, stratifying 30-kb windows by average mappability or by their detection frequency in the mES-400 dataset. To compare biases between GAM and Hi-C, we repeated the above procedure using GAM or Hi-C matrices at 50-kb resolution, and additionally stratified 50-kb windows according to the number of HindIII sites they contained.

Analysis of topologically associating domains (TADs). The list of TAD boundaries at 40-kb resolution was obtained from ref. 5. Following a method published in ref. 56, the mean normalized linkage disequilibrium was measured in a 3 × 3 window box moved at an offset of two windows from the diagonal of the linkage matrix, as a measure of long-range contacts. Depletion of long-range contacts was measured for previously defined TAD boundaries by comparing the long-range contacts at the boundaries with the long-range contacts 150 kb upstream and downstream. The statistical significance of this depletion was assessed by comparing the observed depletion of long-range contacts with the depletion measured from 5,000 randomly shuffled sets of TADs.

Extracting probabilities of interaction (P_i) from GAM data. The modelling process used to convert pair or triplet co-segregation to P_i is described in Supplementary Notes 1 and 2.
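A compact, numpy-only sketch of the compartment-calling pipeline described at the start of this section (observed-over-expected, column correlation, then PCA oriented by GC content). Eigenvectors of the symmetric correlation matrix stand in for a full PCA here, a simplification rather than the published implementation.

import numpy as np

def ab_compartments(obs, gc):
    """obs: square contact matrix for one chromosome; gc: GC content per bin."""
    n = obs.shape[0]
    exp = np.ones_like(obs, dtype=float)
    for d in range(n):                         # E(i,j): mean per diagonal
        m = np.diagonal(obs, d).mean()
        if m > 0:
            idx = np.arange(n - d)
            exp[idx, idx + d] = exp[idx + d, idx] = m
    k = obs / exp                              # K = observed over expected
    c = np.nan_to_num(np.corrcoef(k, rowvar=False))   # C(i,j)
    vals, vecs = np.linalg.eigh(c)             # PCA surrogate on symmetric C
    top3 = vecs[:, np.argsort(vals)[::-1][:3]]
    best = max(range(3), key=lambda i: abs(np.corrcoef(top3[:, i], gc)[0, 1]))
    pc = top3[:, best]
    if np.corrcoef(pc, gc)[0, 1] < 0:
        pc = -pc                               # orient so positive tracks GC
    return np.where(pc >= 0, "A", "B")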
Enrichment analysis of P_i matrices. We created three lists of genomic features: active genes, inactive genes and enhancers. The UCSC known genes list was used as a reference. All genes with FPKM > 1 were classed as active. Genes with FPKM < 0.01 were classed as inactive. FPKMs were taken from mRNA-seq datasets from ref. 32. Enhancer locations were taken from ref. 34. We next calculated which 30-kb windows overlapped any of these features and counted the number of prominent interactions, at a P value of ≤0.05, which connected 30-kb windows overlapping particular features. As a random control, we permuted the list of pairwise contacts 500 times by shifting all their genomic positions by a given random distance (thus preserving the number of significant pairwise interactions per chromosome and their distance distribution). The fold change was calculated as the observed interaction count divided by the mean of the 500 random permutations. Enrichment or depletion was scored as significant if the observed count was, respectively, greater than or smaller than all of the randomly permuted values. Similar enrichments were also observed for prominent interactions at P-value thresholds of ≤0.025 and ≤0.01. To account for the presence of PCA compartments, we subdivided 30-kb windows classified using the above scheme according to whether they were entirely contained within A or B compartments derived from Hi-C at 100-kb resolution.

Analysis of TADs interacting in triplets. To identify triplets of TADs interacting simultaneously, we calculated all possible combinations of three TADs on the same chromosome. For all such triplets, we calculated the P_i3 of all the 40-kb windows making up the TADs using SLICE. 40-kb windows were used here as the TAD positions in ref. 5 are given at 40-kb resolution. Finally, we ranked all triplets by their mean P_i3 and selected the top 2%. To predict TAD triplets using pairwise P_i values alone, we took the top 2% of TAD triplets ranked according to the minimum average P_i calculated between all pairs of TADs, that is, min(P_iAB, P_iAC, P_iBC); see equation 18 in Supplementary Note 1. Of the top triplet TADs, 41% could not be predicted using only the pairwise P_i values. For the enrichment analysis, TADs were assigned as super-enhancer-containing TADs if they overlapped any previously identified super-enhancers 34. TADs not overlapping super-enhancers were classified as low-transcription or high-transcription if they had GRO-seq coverage below the first or above the third quartile, respectively. TADs in the middle two quartiles of coverage were classified as medium-transcription. Enrichment was calculated as the observed number of each TAD triplet class (for example, SE/SE/SE) divided by the mean over 500 randomly permuted lists of TAD triplets, and was called significant if the observed count was greater than or smaller than all of the randomly permuted values. To account for the presence of PCA compartments, we subdivided TADs classified using the above scheme according to whether they were entirely contained within A or B compartments derived from Hi-C at 1-Mb resolution. To calculate the enrichment of TAD triplets involving typical enhancers (TEs), we classified all TADs according to their overlap with a published list of typical enhancers from mES cells 34. To analyse the impact of nuclear lamina association on triplet formation, we used a list of LAD regions in mES cells 27. TADs were categorized into most (top 15%) and least (bottom 15%) triplet-forming in accordance with the number of triplets in the top 2% that contained the TAD. The distances of TADs in each category to LADs were calculated using the closestBed tool 52.
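The permutation control described above can be sketched as follows. The sketch is simplified to a single chromosome with circular shifts; shifting all pairs together preserves each pair's genomic distance, as the text requires.

import numpy as np

rng = np.random.default_rng(0)

def count_hits(pairs, feature_windows):
    """Count pairs whose two windows both overlap the feature class."""
    fw = set(feature_windows)
    return sum(1 for a, b in pairs if a in fw and b in fw)

def permutation_enrichment(pairs, feature_windows, n_windows, n_perm=500):
    obs = count_hits(pairs, feature_windows)
    perm = []
    for _ in range(n_perm):
        shift = rng.integers(1, n_windows)    # one shared random shift,
        shifted = [((a + shift) % n_windows,  # preserving pair distances
                    (b + shift) % n_windows) for a, b in pairs]
        perm.append(count_hits(shifted, feature_windows))
    perm = np.asarray(perm)
    fold = obs / perm.mean() if perm.mean() > 0 else np.inf
    # Significant only if the observation lies outside all permuted values.
    significant = obs > perm.max() or obs < perm.min()
    return fold, significant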
Analysis of average linkage at 5-kb resolution. To define whether chromatin interactions of 30-kb windows are centred on features they comprise (TSS, TES or enhancers), each 30-kb window overlapping exactly one enhancer or a single TSS or TES of an active gene (FPKM > 1; length > 120 kb), but no other gene or enhancer, was subdivided into six non-overlapping 5-kb windows. Subsequently, normalized linkage disequilibrium with other interacting enhancer or active 30-kb windows (SLICE P value ≤ 0.05 of the harbouring 30-kb window to the interacting 30-kb window) was calculated for the 5-kb window overlapping the feature of interest ± three 5-kb windows upstream/downstream. This resulted in a matrix in which each row represents a single interaction between two 30-kb windows and the columns represent the linkage for the 5-kb window of interest ± three 5-kb windows upstream/downstream. To normalize for distance effects, each row was divided by its own mean. Next, we took the mean of each column to obtain the average linkage at each distance from the 5-kb window of interest. Finally, these mean values were divided by the mean of the first and last columns to obtain the average enrichment at the TSS relative to 15 kb upstream/downstream. The significance of each enrichment was calculated by performing a paired t-test between the list of linkages at the feature-containing 5-kb window and the average of the linkages measured at 15 kb upstream and downstream. As a control, non-interacting (SLICE P value > 0.05) 30-kb window pairs comprising the same features (enhancer, TSS, TES) were used. To ensure similar distance distributions, the true interactions were sorted into ten bins by their genomic distance and the control group was randomly reduced so that bin counts for each genomic distance range were the same.

Analysis of average three-way co-segregation at 40-kb resolution. To define whether SE/SE/SE triplet chromatin contacts are centred over the comprised super-enhancers, all TADs containing a single super-enhancer that was less than 40 kb in length were selected. A 40-kb window was centred over the super-enhancer, as well as ± three 40-kb windows upstream/downstream. TADs where the TAD boundary fell within any of these 40-kb windows were discarded. Next, based on all SE/SE/SE triplets that involved the selected TADs, the mean co-segregation frequencies between the super-enhancer-containing 40-kb windows and all 40-kb windows in the two partner super-enhancer-containing TADs were calculated. This was repeated for the 40-kb windows upstream/downstream of the selected super-enhancer. As described above for pairwise average linkage, we divided each resulting row by its mean, took the mean of each column and finally divided these by the average of the first/last column. The whole process was repeated for the same set of selected super-enhancer-containing TADs and their partner highly transcribed TADs in SE/high/high top triplets, as well as for non-interacting SE/SE/SE triplets spanning the same genomic distances (control). The significance of the enrichment was tested by conducting a paired t-test between the co-segregation at each SE-overlapping 40-kb window and the average co-segregation at 40-kb windows 120 kb upstream or downstream.
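Both averaging procedures above share the same normalization scheme. A minimal sketch for the pairwise case (rows = interactions, columns = the feature-centred bin ± three flanking bins):

import numpy as np

def average_profile(rows):
    """rows: interactions x 7 bins of linkage values around the feature."""
    rows = np.asarray(rows, dtype=float)
    rows = rows / rows.mean(axis=1, keepdims=True)   # remove distance effects
    profile = rows.mean(axis=0)                      # average per column
    flank = (profile[0] + profile[-1]) / 2.0         # ±15 kb reference
    return profile / flank                           # enrichment vs flanks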
Polymer modelling. We employed the strings and binders switch (SBS) model 19 to represent chromatin, modelled as a self-avoiding polymer chain on a cubic lattice. In the present case, the chain is made of n = 512 beads. Along the polymer chain, specific beads are sites of attachment for floating binding factors (binders). The polymer has a fraction, f, of binding sites and a concentration, C_m, of binders, which have an affinity, E, for those polymer sites. Here, for simplicity, we set E = 2 k_BT for all sites and f = 0.5 (for all details and the general case, see references 19 and 57), as real transcription-factor binding energies range from 2 k_BT for non-specific binding sites to 20 k_BT for specific ones. All other beads are inert, as they have no interactions apart from excluded-volume effects (and chain-length integrity constraints). Over a range of SBS parameters (f, C_m, E), the SBS polymers show two thermodynamically stable configurations, an open, randomly distributed configuration or a closed and highly compact configuration, divided by a sharp transition 19,57. The co-segregation probability obtained from GAM data does not match the prediction of the SBS model run with a low concentration of binders (which corresponds to a self-avoiding-walk polymer model), that is, a model where only steric hindrance effects among chromatin regions are present. Instead, we obtain a good match by considering SBS modelling conditions that take into account interactions between the polymer beads. In particular, the GAM data can be well fitted by considering a mixture of open and closed SBS configuration states (40% open and 60% closed; Extended Data Fig. 11a, b). The SBS model is investigated by Metropolis Monte Carlo (MC) computer simulations 58,59. Brownian molecular factors and polymer beads can randomly move from one site to a nearest-neighbour site of the lattice, maintaining single-site occupancy and polymer integrity. Binding is only permitted between adjacent particles on the lattice. The binders can form multiple bonds, up to six. MC averages are over up to 10^4 runs, each run being fully equilibrated with up to 10^12 single MC steps. We estimated co-segregation probability on polymer configurations in analogy with GAM. We cut through polymers with randomly oriented slices and scored bead co-segregation in the slices in the same fashion as we scored locus co-segregation in nuclear profiles (see Fig. 1b). The co-segregation probability is the probability for two beads at a given distance s on the polymer to be co-segregated in the same randomly oriented slice, in analogy with the probability for two loci at a given genomic distance to be co-segregated in the same nuclear profile. The co-segregation of triplets is the probability of three beads separated by distances s_1 and s_2 on the polymer to be co-segregated in the same slice (instead of two beads). We fixed the thickness of slices to correspond to the effective thickness (h_eff) of a nuclear profile at 50-kb genomic resolution (h_eff = 500 nm; see Supplementary Note 1). The genomic length corresponding to each polymer bead is 50 kb, which gives a linear length d_0 = 200 nm, roughly estimated as d_0 ≈ D_0 (s_0/G)^(1/3), where D_0 is the nuclear diameter (9 μm) and G the genome content (5.3 Gb for the mouse genome).
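As a quick check of the stated bead length, the estimate can be computed directly from the values given above:

# Worked check of d_0 ≈ D_0 (s_0/G)^(1/3) with the quoted values.
D0_nm, s0_bp, G_bp = 9_000, 50e3, 5.3e9      # 9 μm nucleus, 50 kb per bead
d0_nm = D0_nm * (s0_bp / G_bp) ** (1 / 3)
print(round(d0_nm))                          # ≈ 190 nm, i.e. roughly 200 nm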
Estimation of chromosome radial position from GAM data. Owing to the random orientation of sectioning with respect to the nucleus, the DNA content of nuclear profiles originating from different latitudes of the nucleus can be used to estimate radial distributions of genomic regions. For example, nuclear profiles cut through nuclei close to their periphery contain, by definition, a smaller proportion of the nuclear volume (or DNA content) than equatorial nuclear profiles (Extended Data Fig. 9a). Therefore, we predicted that the percentage of the genome covered by each nuclear profile could be used as a proxy for its latitude relative to the most equatorial nuclear profiles. For each nuclear profile, we calculated the coverage of each chromosome as the mean number of reads per Mb. For each chromosome, we took every nuclear profile in which that chromosome was in the top quartile of coverage and calculated the percentage of all genomic 1-Mb windows that were positive. The percentage coverage of a nuclear profile is a measure of its radius 60, and therefore the mean percentage coverage of nuclear profiles containing a given chromosome is a measure of the preference of that chromosome to appear in nuclear profiles with a large radius (as is expected of more centrally positioned chromosomes). As expected, we found that the mean percentage coverage of nuclear profiles containing chromosomes 1, 2, 9, 11 and 14 negatively correlates with their radial position, previously measured in ref. 35. Therefore, chromosomes detected in nuclear profiles with lower average DNA content occupy more peripheral positions (Extended Data Fig. 9b).

Estimation of locus volume from GAM data. We reasoned that de-condensed genomic loci should occupy larger volumes (or adopt more elongated conformations) than more condensed loci. De-condensed loci would therefore be intersected more frequently (and be detected more frequently in randomly oriented nuclear profiles) than smaller or more spherical loci (Extended Data Fig. 9c). We divided the mouse genome into 30-kb windows and calculated the number of nuclear profiles in which each window was detected (its detection frequency). We find that the detection frequency of 30-kb windows positively correlates with their coverage in a published DNase-seq dataset 26 (Spearman's correlation coefficient = 0.47, P < 10^−6; Extended Data Fig. 9d), as expected given that de-condensed chromatin ought to be more accessible to enzymatic cleavage. Furthermore, transcriptional activity has also been shown to correlate with chromatin de-condensation for individual loci 61, or globally after overexpression of structural proteins 62. Accordingly, we find that the transcriptional activity of 30-kb genomic windows (measured by GRO-seq coverage 33) is also positively correlated with their detection frequency in single nuclear profiles (Spearman's correlation coefficient = 0.27, P < 10^−6; Extended Data Fig. 9d).

Code availability. Custom Python scripts used in this project are available from http://gam.tools/papers/nature-2017.

Data availability. The GAM sequencing data are available from GEO (GSE64881). All other data are available from the corresponding authors upon reasonable request.
FORMAL AND FUNCTIONAL PECULIARITIES OF THE INTERRUPTION-REPAIRS IN SPEECH INTERACTION

The paper provides a study of the phenomenon of interruption as repair in Modern English dialogical discourse. The article outlines an analysis of the interruption-repairs from the point of view of their formal and functional characteristics. The research presents a complex methodology, which consists of the method of text interpretation, methods of deduction and synthesis, contextual, pragmalinguistic and functional methods, and the semantic method, which is aimed at studying the speech realisation of the interruption-repairs by certain verbal means. It was stated that the interruption-repairs may have varied forms: correction, repetitions (full or partial), what-queries, paraphrases, echo questions, explicit recognition of misunderstanding, requests to confirm the correctness of vision of a situation in a certain light, conjectures or beliefs. The paper introduces the following basic types of the interruption-repairs depending on their functional peculiarities: interruption-correction, asking for clarification, explanations and additions / specifications in accordance with the needs of the communicator who interrupts a partner. The results obtained illustrate that the basic model of the interruption-repairs can be depicted in the following way: the emergence of the need for repair – repair – reaction to repair. According to the results of the research, the interruption-repairs are amplified by the phenomenon of the second utterance, which depicts the reaction of the speech recipient, presupposing the semantics of consent, negation and assumption, disclosure, refutation, justification, or refusal. The conducted research helps to acknowledge that the interruption-repairs contribute to overcoming communicative failures and cognitive dissonance, which is the key to a productive, successful communicative interaction. The prospects for study consist in further investigation of age characteristics, non-verbal means of the interruption, as well as strategies and tactics involved in responding to the speech interruptions, which will allow a more detailed study of the addressee factor.

Introduction

The topicality of this scientific research is predetermined by the fact that nowadays a lot of researchers are referring to the problem of verbal interaction as a result of a growing interest in the study of a vast paradigm of communicative behavior. Communication is a two-way process, which includes both speech generation and speech perception. In the course of a communicative interaction, partners adhere to certain role norms, but the balance between communicators is not always present. Quite often, one of the partners takes on the initiative in the conversation. Under this condition, interruptions of speech may occur. The theory of repair includes self-correction and correction of another communicant 1. A characteristic feature of repairs 2 (corrections of another communicant) that are employed in the process of interruption is that they help find "mutual understanding" 3 during communicative interaction. This peculiarity of the interruption-repairs cannot be neglected, since the speakers involved in communication should make efforts to ensure that the result of the speech interaction is successful.
That is why it can be stated that the interruption-repairs require great effort, since communicative partners are supposed, firstly, to be completely immersed in the communicative process, being active listeners; secondly, to focus on those moments of communication that remain unclear; and thirdly, to contribute to the achievement of "linguistic transparency" 1 in communicative interaction. The object of the research is interruption-repairs in speech interaction. The subject of the research is presented by the formal and functional characteristics of the interruption-repairs. The aim of the paper is to outline the phenomenon of interruption as speech repair in Modern English dialogue discourse by means of characterizing its formal and functional specificities.

Methodology

The research methods include the method of text interpretation, which involves the analysis of each extract from dialogical discourse containing speech interruption; the contextual method, which is used to characterise the interruption in a particular context; the functional method, which helps to study the specific functions of each example of the interruption as speech repair; and the semantic method, which is aimed at studying the speech realisation of the interruption-repairs by certain verbal means.

Results and Discussion

The interruption-repairs (the correction of the utterance of a communicative partner in the form of the interruption) can have fairly varied forms, namely: interruptions can be in the form of correction, repetitions (full or partial), what-queries, paraphrases, echo questions, explicit recognition of misunderstanding, requests to confirm the correctness of vision of a situation in a certain light, conjectures or beliefs. We consider that such interruption-repairs help the addresser of the interruption stimulate the consciousness of the addressee in accordance with their own needs and personal vision of a particular communicative situation in which the speakers are involved. Of particular importance are the interruptions aimed at solving problems (task-oriented dialogues) 2, since constant agreement with the communicative partner does not always have a positive effect and may lead to misunderstanding instead of successful communicative cooperation. The situation of interruption as speech repair can be a part of interaction limited to the communicative dyad or triad. A communicative dyad is the interaction of two communicants, where one of them, wishing to correct the utterance of the interlocutor, interrupts him/her. Communicative dyads are associated with a specific local segment of discourse and embrace the cognitive pictures of only two communicants. The communicative triad takes place when a third communicant intervenes in a conversation, correcting the utterance of one of the speakers or completely changing the course of the conversation of both speakers. Such interruptions are marked by the imposition of three cognitive pictures. That is why the communicative triad comprises the layering of contexts. Functional peculiarities of the interruption-repairs are revealed in the following types of interruptions depending on the pragmatic aim of communication: the interruption-correction, asking for clarification 3, explanations and additions / specifications in accordance with the needs of the communicator who interrupts the partner.
The most common type of the interruption-repair is the interruption-asking for clarification, which can be defined as a kind of repair where the speaker interrupting the communicative partner expresses the need for further clarification of some information, usually due to a lack of understanding of the previous utterance. The interruption-repairs, especially of the asking-for-clarification type, represent a significant contribution to the successful development of communicative interaction, since they are potentially useful and conducive to the progress of a dialogue 4. In addition, these interruptions can have a positive effect on the results of the interactive process. Thus, such interruption-repairs perform the function of optimising the communicative process of dialogical interaction and, which is of utmost importance, contribute to reaching the interlocutors' mutual understanding, as is shown in the example:

"He was swarthy, with deep-set light brown eyes and a little mole on his cheek. His gun had a silencer on it, and-" Greenburg was looking at her in confusion. (a) "I'm sorry. I don't understand what-" (b) "The carjacker. I called 911 and-" She saw the expression on the detective's face. (c) "This isn't about the carjacking, is it?" 5.

In the example given above, the interruption-asking for clarification indicates that the detective, Mr. Greenburg, misunderstood what his partner was referring to. Therefore, this type of interruption is the result of the incompatibility of the communicants' cognitive pictures. The first part of the second interruption (b) is the interruption-explanation that demonstrates compliance with the cooperative speech strategy. The second part of the interruption (c) is the interruption-asking for clarification, which is used to reach an agreement, because at this stage the possibility of a communicative failure is strong. The formal indicator of the interruption-asking for clarification in the form of repair is an explicit recognition of misunderstanding using phrases like Sorry for interruption but… / Sorry. I don't understand… / Sorry. Can you explain… / Would you explain me, which help clarify the situation or a certain moment of communicative interaction that is difficult to comprehend within the concrete interaction, or with the help of "what-queries" 6, which signify the difficulty of understanding and the impossibility of producing a correct interpretation. Consider the following example:

"But I've been offered a scholarship and-" (a) "So what? You'll spend four years wasting your time. Forget it. With your looks, you could probably peddle your ass" 7.

The provided example of the interruption is marked by the usage of the question structure (a). The conversation is between a father and his daughter, who has made up her mind to become a teacher. Despite the fact that the daughter has received a scholarship, her father is against such a choice, as teachers receive a poor salary. Because of his convictions, the father does not listen to his daughter till the end, interrupting her. The father's goal is to understand the reasons for choosing such a career and, at the same time, to impose his own vision of the daughter's future. The interruption-correction implies a semantic link between the stimulus-utterance and the reaction-utterance at the point of non-matching of the visions of the same subject of discussion by the communicative partners, which causes the interruption. The formal indicators of such interruptions are lexical inclusions and various deictic markers.
The following example shows the lexical inclusion marriage in the stimulus-utterance and the reaction-utterance. This lexical unit demonstrates the attitude of the communicants to the relationship in which they are involved. The utterances (a) and (c) are related in meaning; moreover, the utterance (c) is a logical continuation and addition of (a). This type of the interruption-repair is often characterized by ignoring the partner's point of view (preinterruption phase (b)), as well as by the emotional excitement of the communicant. In the case of such repairs the speaker feels the need to complete the thought till the end, which is the reason for the interruption. In contrast to the previous type of interruptions, the basis of the interruptions-additions / specifications of the partner's utterance is the semantic connection of the interruption-utterance with the utterance of the communicative partner, which forms the preinterruption phase. In this case, the communicant feels the need to supplement the utterance of a communicative partner, as in the example:

00:03:30 Ben: Clothes That Fit... Is that the outfit that took over the-
00:03:33 Patty: Yeah, I think they bought one of the factories on Front Street. My daughter tells me they sell clothes on the web 10.

In this example, the connecting element, which combines the utterance of the preinterruption phase and the interruption-utterance, is the lexical unit Yeah, which expresses agreement with the communicative partner and leads to complementing the utterance of the partner, helping adhere to the Cooperative Principle. Thus, the formal indicators of such interruptions may be expressions that show consent, confirmation of the viewpoint, additions: yeah, yes, I agree, totally agree, moreover, what is more, in addition, besides, and other words / expressions-connectors that show the link between the stimulus-utterance and the reaction-utterance. The interruption-explanation is used to put emphasis on the speaker's own vision of the manner of verbal or non-verbal behavior, an explanation of the point of view, being usually limited to the specific communicative situation. The example below illustrates the interruption-explanation of non-verbal behavior while praying: 00:30:

It should be noted that the interruption-repairs are not always found in their pure form. Quite often, in conversations there occur mixed groups of functions within a single utterance, as in the following example, where the interruption (see (a) and (b)) is at the same time the interruption-asking for clarification (a) and the interruption-explanation of one's own point of view (b):

00:52:36 Charlotte: Yes, but-
00:52:37 Samantha: (a) But what? What's the problem? (b) I mean, we haven't been anywhere together since Carrie and Big's wedding blowup honeymoon disaster 12.

In the example given above, the interruption-utterance is the interruption-asking for clarification (a) in the form of a rhetorical statement, indicating that the communicant does not wait for the partner's response, but continues uttering her own thoughts (b), which acquire the form of the interruption-explanation. Let us consider another example:

(a) "Andrew, we could get contracts from some of the big companies and-" 13
(b) "That's not what we do, Tanner."
(c) "The Chrysler Corporation is looking for-"
And Andrew smiled and said, (d) "Let's do our real job."

In the illustrated fragment, the interruptions act as the repair, being the correction and explanation at the same time ((b), (d)).
From the context of the dialogue it is clear that the basic structure of the interruption in the form of the interruption-repair is the stimulus-utterance ((a), (c)) and the reaction-utterance ((b), (d)), where the reaction functions as the repair. The given basic structure of interruptions as the repair can acquire various transformations in accordance with the context of a concrete dialogical interaction. In the example provided above, the interruptions are endowed with the semantics of denial, where the addressee does not accept the suggestion of the communicative partner ((b), (d)) and offers his/her own understanding of the situation. It should be noted that the effectiveness of the interruption-repairs can be evaluated by the recipient's response to them. The reaction to the interruption can be quite varied, and the interrupter cannot fully predict how the other communicant will respond. Still, the purpose of this interruption is to achieve a communicative consonance. That is, we analyse the reaction to interruptions in the postinterruption phase. Therefore, in our study we take into consideration three phases of the interruption: the preinterruption, interruption and postinterruption phases. The first utterance is the stimulus, while the second one is responsive and reactive. The preinterruption phase is the stimulus-utterance, the interruption phase is the reaction- and the stimulus-utterance, and the postinterruption phase comprises the reaction-utterance. Thus, the situation of interruption comprises the sequence: stimulus-utterance – reaction-utterance + stimulus-utterance – reaction-utterance. Hence, the basic model of the interruption-repairs is as follows: the emergence of the need for repair – repair – reaction to repair. The interruption-repairs are amplified by "the phenomenon of the second utterance" 14. The second utterances usually depict the reaction of the speech recipient, presupposing the semantics of consent, negation and assumption, disclosure, refutation, justification, or refusal. The choice of strategies and tactics of communicative behavior in second utterances (the reaction-utterances) is predetermined by the stimulus-utterance. Consider the following example:

"I don't know if you remember me-" "I remember you," she interrupts. "How could I not remember you?" 15.

The example illustrates a situation where the reaction-utterance of the interruption is subordinated to the communicative strategy of cooperation and is marked by compliance with the Principle of Cooperation and the Principle of Politeness, and thus helps the communicant who interrupts the partner save face. The communicant of the second utterances may accept a scenario offered by a partner (cooperative interruptions), as in the above example, or reject the manner of behavior given by the communicative partner (intrusive interruptions), as in the following example: 00:12:

Thus, the main feature of the second utterance is its dependence on the previous statement. It must be noted, though, that this phenomenon does not work in situations of interference by a third person who has not participated in the communicative interaction.
Conclusions

Thus, the analysis of the implementation of the intentions of communicants who interrupt their partner with the aim of repairing the partner's utterance, as well as the analysis of the functional feasibility of the interruption-utterance, can be performed only after a complex study of the communicative situation, namely, the situation of interruption together with the context and the linguistic and extralinguistic factors. Consequently, the process of interruption should not be perceived only as a negative phenomenon, depriving a speaker of the right to complete his/her communicative step in order to demonstrate a dominant communicative position and a desire to organize the interactive process according to his/her own scenario. On the contrary, the interruptions may serve to clarify or adjust the speaker's speech depending on the needs of the communication, and to reach consensus and agreement with the interlocutor. Interruptions perform the function of repairing the speech flow, and also help overcome communicative failures and cognitive dissonance, which is the key to a productive and successful communicative interaction. The prospects for study consist in further research of age characteristics, non-verbal means of the interruption, as well as strategies and tactics involved in responding to the speech interruptions, allowing a more detailed study of the addressee factor.
Designing an Algorithm for Generating Named Spatial References

We describe an initial version of an algorithm for generating named references to locations of geographic scale. We base the algorithm design on evidence from corpora and experiments, which show that named entity usage is extremely frequent, even in less obvious scenes, and that names are normally used as the first focus on a global region. The current algorithm normally selects the Frames of Reference that humans also select, but it needs improvement to mix frames via a mereological mechanism.

Introduction

Geospatial data of public interest such as weather prediction data and river level data are increasingly made publicly available, e.g. DataPoint from the Met Office in the UK, River Level data from SEPA in Scotland and Global Forecast System data from NOAA in the US. We are interested in developing computational techniques for expressing the information content extracted from these datasets in natural language using data-to-text natural language generation (Reiter et al., 2005) techniques. For example, from precipitation prediction data corresponding to several locations across Scotland, we are developing techniques to automatically generate the statement Heavy rain likely to fall as snow on higher ground in the northeast of Scotland. An important subtask here is to automatically generate the spatial referring expression (SRE) higher ground in the northeast of Scotland to linguistically express the location of the snowing event found in the precipitation prediction data. This paper presents corpus analysis and experimental studies to guide the design of an algorithm for SRE generation. Studies of human-written SREs (Turner et al., 2010) show a broad range of descriptors such as north, east, coastal, inland, urban, and rural to specify locations. Descriptors belong to one of many perspectives on the scene, or Frames of Reference (Levinson, 2003), FoR for short, such as direction, coastal proximity, population density and altitude. Our own corpus studies (Section 2) show that geographic names are the dominant descriptors in weather forecast texts, route descriptions and river level forecast reports. Our experiment to empirically understand the extent of usage of geographical names in SREs (Section 3) also shows that names are the most used descriptors, as well as the FoR that sets the first focus on a region. Using this empirical knowledge we propose an initial version of an algorithm (Section 4) that automatically generates SREs using names as well as other descriptors.

Corpus Analysis

The first stab at the problem was a corpus analysis study. We gathered a total of 36 texts in 3 domains (route descriptions, weather forecasts, river forecasts), in 3 languages (English, Portuguese and Spanish), for 3 target audiences (general public, fishing enthusiasts, kayaking enthusiasts). We define an SRE as an adverbial (inland) or a noun phrase (the north), which ties non-spatial information to one location. Only sentences that contained at least 1 SRE were included in the corpus. For each SRE at least 1 FoR was annotated.

3. (a) Dry with sunny spells on Saturday and Sunday these mainly inland (b) with Aberdeenshire coast becoming cloudy.

Sentence 1 was extracted from a river level report for Manitoba, Canada, which seems to be aimed at the general public. In the instance, we identified 3 SREs, all of which use named entities as FoR.
Sentence 2 is a route description for drivers to reach Cambridge, England, so it is also aimed at the general public. 2a uses a cardinal direction as FoR, 2e uses the entity's type, while 2b and 2c use named entities. Sentence 3 also seems to be intended for the general public; it is extracted from a weather forecast report for Aberdeenshire, Scotland. Both SREs use coastal proximity as FoR, while 3b also includes a named entity. In total the corpus yielded 556 SREs, out of which 318 (57%) use named entities, either in isolation or combined with other FoR. It is important to remember that another 7 FoR appear in the corpus (cardinal direction, coastal proximity, population density, type, motion sequence, river segment and size), which means that names account for more than half of a total of 8 choices. With the corpus in place, it became clear that names do not compete with other FoR in a balanced manner. Because of this expressive imbalance, we were led to the suspicion that humans choose to refer to geographic regions by their names using a different strategy than when choosing other FoR. We suspect people may be more precise when they use FoR such as cardinal direction or coastal proximity, but they can be very imprecise when using names. This suspicion led us to our first hypothesis:

Hypothesis 1: People mostly use named entities to refer to locations of geographic scale, even if the fit between the named location and the located entity/event is poor.

By the above hypothesis we mean that named entities are used as spatial references also in situations where using a name as reference is not so obvious. For instance, if the named location only covers a small portion of a located entity/event, or if the located entity/event is much smaller than the named location, we suspect that most people still use the named location as reference, hence the high frequency of named entities in the corpus.

Experiment

Even though the corpus analysis returned fruitful insights, we remained with a major shortfall for designing a computational algorithm for an NLG system. We expect such an algorithm to be used in data-to-text systems, i.e. systems that write text from information stored in databases, so a data-and-text parallel corpus is more suitable to inform us what our SREG algorithm must consider. Thus we resorted to experiments with human participants to collect spatial expressions, while having full access to the data underlying the text.

Pilot

To test hypothesis 1, we designed a pilot experiment (see Figure 1), where we showed 3 different maps (conditions) of fictitious countries to 14 human participants and asked them to describe where in those countries they could see a patch of rain. Both the no-name condition and the good-fit condition placed the rain patch very neatly on one specific region of the country, with the difference that the no-name condition did not have any names for the regions and the good-fit condition did. In the poor-fit condition, named regions were also present but the patch covered only a small portion of several regions. Participants were split into balanced groups and each group saw the maps in a different order. The rationale behind the no-name condition is to certify that people resort to other FoR when names are not available. Curiously, names were not as dominant in the pilot experiment as they are in the corpus. The FoR used by all participants were names, cardinal direction (north, south, etc.) and some proximity (coast, border, etc.).
In the vast majority of responses (94%), people used multiple FoR to refer to the location of the rain patch, which we believe helped balance the usage of FoR across responses. Names were used in 79% of responses in the good-fit condition (proximity 86% and direction 50%), and in the poor-fit condition names were used in 64% of responses (direction 79% and proximity 57%). Even though names were not dominant, people still used names in most cases, even in a scenario where using a name was not so obvious (the poor-fit condition), speaking in favour of hypothesis 1. After results from the pilot experiment, we could see that most responses use a first-focus frame (of reference) and a second-focus frame. Take the SRE coastal areas of Frogdon for instance. Frogdon (a name) indicates the first-focus area, while coastal areas (proximity) sets a second focus on one particular portion of the first-focus area. We suspect that most first-focus areas are named regions, which leads us to a second hypothesis:

Hypothesis 2: When mixing named entities with other FoR, people use named entities mostly for first-focus areas and other FoR for second focus.

The main experiment

The above results were not formally verified with statistical tests because we believe our sample of participants was too small. In the main experiment, responses were annotated as follows:

name-1st: if both names and other FoR were used, but named entities were used as first focus.

both-1st: if names and other FoR don't compete for first focus, but remain on the same level, so the resulting sub-region is a union of multiple sub-regions. For example: northwestern Fruitport... southwest of Breading... eastern part of Meatcott... not in the far northeast or southeast. Fruitport, Breading and Meatcott are named regions but far north-east and south-east are directions. None is a part of the other, so the named areas and not far northeast and south-east complement each other at the same focus level.

none: if no FoR, but only vague descriptors, were used.

Finally we counted all possible combinations of FoR usage and aligned those with the experimental conditions, as displayed in Table 1. The first intriguing observation is that 5 responses did not use any FoR, according to our annotation. 2 of them used only a quantifier (much, most), 2 only the name of the country (Musicland), and 1 used both (some parts of Musicland). Using only the name of the country does not successfully complete the task, because it does not answer the question "where in the country will it rain?". Quantifiers were also not annotated as other FoR because they are extremely vague. We were aiming at FoR that help a hearer more precisely identify referenced locations. Even more interesting, 2 SREs created named entities in the no-name condition, i.e. where no name was available as per the task. One participant decided to name an unnamed subregion of Musicland as Drum County and referred to it 'by its name'. Although odd, this suggests how strongly people feel the necessity for named entities when describing geographies. This is very similar to another response in the pilot experiment, where the participant described one unnamed subregion as the penultimate state before reaching the coast, and later stated in the comments that names should be on the map. Hypothesis 1 states that people use names with a high frequency in any condition where names are available. If we exclude the no-name condition from the count, this hypothesis is supported with 97% (90/93) of name usage in the good-fit condition and 98% (91/93) in the poor-fit condition.
We did not observe a significant difference in name usage between the good-fit and poor-fit conditions, χ²(1, N = 186) = 0.21, p = .65. Hypothesis 2 was also supported, again excluding the no-name condition. People very often (113/126 or 90%) use names as the first-focus area and other FoR as the second-focus area. After testing the above hypotheses, we observed the same phenomenon as identified by Turner and colleagues (2010): that people resort to other FoR more often when the fit between (rain) patch and region is poorer. In the good-fit condition 54% (50/93) of responses used other FoR, while 87% (81/93) of poor-fit responses contain other FoR. This means that there is a significantly greater need for other FoR when moving from a good-fit to a poor-fit scenario, χ²(1, N = 186) = 26.18, p < .001.

Preliminary conclusions

To date this project has shown evidence that:

• Humans use several FoR when referring to geographical locations.
• Regardless of scenario, named entities are almost always used.
• Named areas mostly function as a first-focus area, wherein a descriptor of a second FoR can still be selected.

Algorithm

We used the knowledge described above to inform an algorithm that selects Frames of Reference. The procedure is basically the ContentSelection algorithm of the RoadSafe project (Turner, 2009), which looks at an event that takes place in a geography and selects one or more frames out of an array of frames. The input to the algorithm, as for many geographic information systems, is a set of points with latitude-longitude coordinates and some other value denoting the status of the point in some event. In Turner's sense, a Frame of Reference is a set of descriptors, and a descriptor is a non-overlapping partition of a geographic region, where each descriptor can be used to refer to a specific partition. The frame contains all points of the dataset, but each descriptor encompasses a particular subset of points. For instance, take the US as our global geography, which contains several thousands of points. The Frame of Reference StateNames contains 50 descriptors, one for each US state, so each descriptor contains a couple of hundred points. Altogether StateNames contains all points that form the US. Another frame could be CoastalProximity, which is composed of only 2 descriptors, Coastal and Inland, where most points belong to the Inland descriptor and the rest to Coastal. Note that in this example, all points that belong to the descriptor Kansas of the frame StateNames also belong to the descriptor Inland of the frame CoastalProximity, but such overlaps are not always true. Of the points that form the descriptor Texas, some belong to Inland and others to Coastal. Following the US example, the high-level goal of the algorithm is to select one or more descriptors that best locate a target subset of all the points in the US. For instance, if our dataset contains a binary variable for "rain" for each point, and we are interested in describing the location of the "raining points" (or simply answering the question "where in the US is it raining?"), the algorithm's task is to return a set of descriptors that encompasses the majority of points with rain=true values. If the result is {Colorado, Coastal}, the NLG system in which the algorithm lives should be able to produce the sentence "it will rain on the coast and in Colorado". Turner describes the ContentSelection algorithm in detail (p. 122), so below we highlight its main steps:
1. Take as input a set of points representing an event, along with meta-data for Frames of Reference.
2. Count the density of target points for each descriptor of each frame.
3. Remove a frame if all its descriptors have non-zero densities.
4. Of the remaining frames, rank them by a predefined preference order.
5. Use the first frame with non-zero densities.
6. Try adding each subsequent frame, if this reduces the number of false positives.
7. Use the descriptors with non-zero densities of the chosen frames.

We take the algorithm and include, first of all, a NamedAreas frame. This, however, is currently done in the same fashion as all other frames in the RoadSafe project. The true conceptual modification to the original algorithm was the threshold of density (step 3). RoadSafe fixes this value at 0, which means that if all descriptors of a Frame of Reference have at least 1 target point, then this frame cannot be chosen. We suspect that humans are more lenient when computing density. We believe that humans can choose frames where all descriptors have non-zero densities, by focussing on descriptors with high densities and ignoring descriptors with low (yet non-zero) densities. Therefore our version of the algorithm selects a descriptor as candidate if it reaches a density threshold, and it ignores a FoR if all its descriptors are candidates.
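A hedged sketch of this modified selection step (not the RoadSafe code; the data structures are illustrative):

def select_frames(frames, event_points, density_threshold=0.4):
    """frames: {frame_name: {descriptor_name: set_of_point_ids}}.
    Returns, per usable frame, its candidate descriptors."""
    chosen = {}
    for frame, descriptors in frames.items():
        candidates = [d for d, pts in descriptors.items()
                      if len(pts & event_points) / len(pts) >= density_threshold]
        # A frame is usable only if some, but not all, of its descriptors
        # qualify; otherwise it cannot discriminate a sub-region.
        if candidates and len(candidates) < len(descriptors):
            chosen[frame] = candidates
    return chosen

On the US example above, such a procedure would reject CoastalProximity for a nationwide rain event (both descriptors qualify) but keep it for rain confined to the seaboard.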
Abbreviations: nam = NamedArea; dir = Directions; cst = CoastalProximity; MO = MetOffice; BL = Baseline; DT = Density Threshold; D = DICE score; * = all descriptors reach the threshold, so no FoR is discriminative enough to be chosen; - = no descriptor reaches the threshold, so no FoR qualifies as a candidate to be chosen.

From this initial evaluation, we could verify that, at its current state, the algorithm performs relatively well in choosing the 'favourite' frame, which is NamedAreas. Another important observation is that, in this relatively small evaluation, the algorithm reached its optimal density threshold at 0.4, as indicated by the DICE value of 0.7, which is higher than the baseline of 0.6. The baseline is simply the most common FoR in the dataset, which is named entities. Naturally, a more substantial evaluation with a larger dataset will be required before we can safely make stronger claims about thresholds and performance.

It is important to highlight how we annotated our corpus texts. Frames were considered chosen if they were the first-focus FoR in the description (see Section 3.1 for a discussion on first- vs. second-focus FoR). For instance, if "in Aberdeen and in the west" was the expression, both name and direction were annotated as first-focus frames; if "in western Aberdeen" was the case, then only name was considered first-focus, with direction annotated as second-focus and therefore outside the comparison with the algorithm. This is necessary because, although we gained valuable knowledge about first and second focus in previous studies, the functionality for focus is not yet present in the algorithm, so we are not yet ready to evaluate it for this mechanism.

An example

Below we provide an example of how the algorithm decides on Frames of Reference and descriptors. We take a dataset used in the evaluation exercise, which contains rain forecast data for the Grampian region, in Scotland. The region has a coastline on the North Sea and is composed of 3 authority areas, namely Aberdeen, Aberdeenshire and Moray. As explained above, the data is provided by the MetOffice, which also provides textual summaries for the data. From an analysis of the summaries we identified 3 Frames of Reference used with a frequency higher than 5% to describe rain events. These frames, with their descriptors and frequencies, include NamedAreas (83%): Aberdeen, Aberdeenshire and Moray. In the Directions frame, we coded only the inter-cardinal directions as descriptors. This is necessary because the algorithm needs to compute each descriptor as a non-overlapping atomic partition: a North descriptor would overlap with an East descriptor, forming exactly the partition North-East. For this reason, a description such as "the North" is achieved if the algorithm selects the descriptors North-West and North-East, but not South-West and South-East. The frequencies become the weights of each frame in the algorithm, and the decision for a descriptor is based on its utility score. Utility is computed by multiplying the event density within a descriptor by the weight of its Frame of Reference. The event density is the percentage of points of a given descriptor that are also within the event. For example, if the descriptor NorthEast has 32 points in total, 18 of which are marked with <rain,true> and 14 with <rain,false>, the rain-event density of NorthEast is 18/32 ≈ 0.56. As discussed above, the algorithm was tested with different density thresholds, which set the minimum density value for a descriptor to be considered as a candidate.
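To make the selection procedure concrete, the following is a minimal sketch of the thresholded selection and utility tie-breaking described above, written in Python. The frame names, weights and point encoding are illustrative stand-ins chosen by us, not the RoadSafe implementation.

```python
# Minimal sketch of the thresholded Frame-of-Reference selection described
# above. Frames, weights and points are illustrative, not RoadSafe's code.

def select_descriptor(points, frames, weights, threshold=0.4):
    """points: list of (labels, is_event) pairs, where labels maps a frame
    name to the descriptor of that point in that frame.
    frames: dict mapping frame name -> list of its descriptors.
    Returns (frame, descriptor) with the highest utility, or None."""
    best = None  # (utility, frame, descriptor)
    for frame, descriptors in frames.items():
        densities = {}
        for d in descriptors:
            members = [ev for labels, ev in points if labels[frame] == d]
            densities[d] = sum(members) / len(members) if members else 0.0
        candidates = {d: rho for d, rho in densities.items() if rho >= threshold}
        # A frame is not discriminative if none or all of its descriptors
        # are candidates, so it is skipped in both cases.
        if len(candidates) == 0 or len(candidates) == len(descriptors):
            continue
        for d, rho in candidates.items():
            utility = rho * weights[frame]  # event density x frame weight
            if best is None or utility > best[0]:
                best = (utility, frame, d)
    return None if best is None else (best[1], best[2])
```

With a threshold of 0.4 and the Grampian densities discussed in the example, this sketch would reject Inland, NorthWest, SouthWest, Aberdeenshire and Moray and return Aberdeen via its utility score, as the worked example below describes.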
Table 3: Event densities of a dataset used in the evaluations.

Following the description of the algorithm (in Section 4), the algorithm receives the set of points that 'are raining' as well as the descriptors that can be assigned to each point. It counts the event density of each descriptor and attempts to reject any descriptor whose density is lower than the threshold. When the density threshold is set to 0, no descriptor is rejected, so no frame can be selected. However, when we set the threshold to 0.4, Inland, NorthWest, SouthWest, Aberdeenshire and Moray get rejected. Because each frame now contains a rejected descriptor, all frames are good candidates as SREs. To break the tie, the algorithm resorts to frame weights and densities (i.e. utility). It computes that the utility score of Aberdeen is higher than that of the other non-rejected descriptors, NorthEast, SouthEast and Coastal, so it selects the descriptor Aberdeen (and the NamedAreas Frame of Reference).

Conclusions and future work

In this paper we described an initial version of an algorithm that is able to select one or more Frames of Reference - and appropriate descriptors thereof - to describe an event taking place at a geographic scene. The current state of the algorithm seems promising insofar as it prefers the frame that humans also prefer: NamedAreas. This preference was best observed when the event-density threshold of the algorithm was set to 0.4. However, this performance has only been verified for first-focus frames, those that are used to reduce the global region to a smaller sub-region. To enable the algorithm to compute second-focus frames, the key aspect will be mereology. A Frame of Reference mix is, at the current state of the algorithm, the geometrical union of two or more descriptors, which in turn share the same global region. Take for instance Texas and North; they belong to different Frames of Reference - StateNames and Directions respectively - but, in isolation, assume the same global area: the US. Although this may be a good mechanism for mixing frames in some cases, our corpora abound with examples where one descriptor assumes another descriptor as its global region. Take the expression "northern Texas" for instance. It is not the case that the expression refers to the union of Texas and the north of the US. While "Texas" has the entire US as its global region, "northern" refers to the sub-area within Texas. In experiment 1 (see Section 3.2) we showed how names are very frequently the first mereological level when frames are mixed mereologically. We believe that a systematic approach to computing mereological Frames of Reference will substantially improve the performance of the algorithm. Based on the evidence found, we also believe that named areas will play a particularly important role in mereological operations.

Related Work

The subtask of generating referring expressions such as the green plastic chair and the tall bearded man has been extensively studied by the NLG research community (Dale and Reiter, 1995; Van Deemter, 2002; Krahmer and Van Deemter, 2012). However, relatively few studies have been reported on SREs. A notable work is that of Turner and colleagues (2010), which implements the notion of FoR to generate approximate descriptions of geographical regions. As such, Turner's algorithm seems too domain specific, as it covers only a subset of the FoR that exist.
The algorithm we propose aims not to be domain specific, but it may be constrained to generating expressions that refer to locations of geographical scale, such as regions of a country. Initially, we are not concerned with describing the position of small-scale scenes such as a cup on a table. Below we describe how these spaces can be significantly different for our task. We also review the backbone concept of the algorithm, that of FoR, and we finally list some existing implementations for generating spatial referring expressions.

Spatial frames of reference

When choosing how to represent space with words, we need to select not only spatial entities but also a spatial relation between them. Choosing a spatial relation depends largely on the perspective with which one looks at (or imagines) a scene. In the cognitive sciences, the term Frames of Reference (FoR) has been used to refer to such perspectives. Levinson (2003) classifies cognitive FoR into 3 types:

Intrinsic: objects have spatial parts such as front or top.
Relative: the position of a third object (typically the viewer) is taken into account.
Absolute: fixed bearings, such as latitude-longitude coordinates.

In this work, we take the same position as Turner et al. (2010), which perceives the absolute FoR as the one employed by humans when surveying geographical spaces.

Generation of spatial referring expressions

The first systems to use an SREG module date back to the 1990s. FOG (Goldberg, 1995) was the first large-scale commercial application of NLG, and it generated weather forecasts in English and French. Similar to FOG, many other systems focus on generating descriptions of weather data (Coch, 1998; Reiter et al., 2005; Bohnet et al., 2007). We can expect the spatial language in the output of such systems to employ the absolute FoR, given the geo-referenced input data. The other type of system normally uses SREG modules to describe a medium-scale (e.g. street) or a small-scale (e.g. room) space (Ebert et al., 1996; Dale et al., 2005; Kelleher and Kruijff, 2006). In such systems, we can expect intrinsic and relative frames. RoadSafe (Turner et al., 2010) is, to the best of our knowledge, the most recent system to implement an SREG module. Its output spatial language employs the absolute FoR, and geo-referenced data is processed using DE-9IM (Clementini et al., 1993). RoadSafe implements the most sophisticated SREG module for describing geographical scenes using non-named FoR. We need to enable NLG systems to generate named spatial references as well.
Inclusive inheritance for residual feed intake in pigs and rabbits

Abstract

Non-genetic information (epigenetic, microbiota, behaviour) that results in different phenotypes in animals can be transmitted from one generation to the next and thus is potentially involved in the inheritance of traits. However, in livestock species, animals are selected on the basis of genetic inheritance only. The objective of the present study was to determine whether non-genetic inherited effects play a role in the inheritance of residual feed intake (RFI) in two species: pigs and rabbits. If so, the path coefficients of the information transmitted from sire and dam to offspring would differ from the expected transmission factor of 0.5 that applies when inherited information is of genetic origin only. Two pig (pig1, pig2) and two rabbit (rabbit1, rabbit2) datasets were used in this study (1,603, 3,901, 5,213 and 4,584 records, respectively). The test of the path coefficients against 0.5 was performed for each dataset using likelihood ratio tests (null model: transmissibility model with both path coefficients equal to 0.5; full model: unconstrained transmissibility model). The path coefficients differed significantly from 0.5 for one of the pig datasets (pig2). Although not significant, we observed, as a general trend, that sire path coefficients of transmission were lower than dam path coefficients in three of the datasets (0.46 vs 0.53 for pig1, 0.39 vs 0.44 for pig2 and 0.38 vs 0.50 for rabbit1). These results suggest that phenomena other than genetic sources of inheritance explain the phenotypic resemblance between relatives for RFI, with a higher transmission from the dam's side than from the sire's side.

Vertical transmission of the microbiota has been described in various species (Sandoval-Motta, Aldana, Martínez-Romero, & Frank, 2017; Sonnenburg et al., 2016), together with evidence of its impact on host physiology (Marchesi et al., 2015; Sommer & Bäckhed, 2013). Behavioural/cultural inheritance is the transmission of information from one individual to another via learning mechanisms. Such non-genetic vertical transmission of behavioural characters has been demonstrated in various animal species, especially rodents (Champagne, 2008). However, the impact of these non-genetic sources of inheritance on the phenotypic variability of traits of economic importance in livestock species has rarely been investigated. This is mainly due to the difficulty of disentangling the different sources of inheritance with only pedigree and phenotype data, and to the lack of an appropriate data structure (a large variety of different categories of relatives with phenotypes) (David & Ricard, 2019). To overcome this problem and still take the different sources of inheritance into account when estimating the transmissible potential of an individual, David and Ricard (2019) proposed the transmissibility model. Like the animal model, the transmissibility model uses pedigree and phenotypic information to estimate variance components and to predict a transmissible potential for an individual that combines all sources of inheritance. It differs from the animal model by estimating the path coefficients of inherited information from parent to offspring instead of using the pedigree-based expected transmission factor of 0.5 for both the sire and the dam (additive genetic relationship matrix).
Because of the relatively high importance of feed-related costs in animal production systems (Calenge et al., 2014; Diaz, Crews, & Enns, 2013; Gilbert et al., 2007), selecting animals for better feed efficiency (FE) is one of the best levers of action to improve farm profitability. In addition, improving FE reduces the environmental impact of livestock farming (Basarab et al., 2013; Saintilan et al., 2013). Residual feed intake (RFI), defined as the difference between the observed feed intake and the expected feed intake based on requirements for maintenance and production, is an interesting parameter. It quantifies FE as an indicator of the efficiency of feed use relative to the animals' requirements for maintenance and production, contrary to the feed conversion ratio, which is the ratio of feed intake to growth rate (Koch, Swiger, Chambers, & Gregory, 1963). It has been reported in the literature that the microbiota, epigenetic phenomena and feeding behaviour have an impact on FE (Ji et al., 2017; Verschuren et al., 2018; Young, Cai, & Dekkers, 2011). However, these findings do not prove that epigenetic, microbiota and behavioural inheritance play a role in the inheritance of FE. The objective of the present study is to determine whether non-genetic sources of inheritance are involved in the inheritance of FE by applying the transmissibility model to RFI in pigs and rabbits, testing whether at least one of the path coefficients of transmission differs from 0.5.

| Material

Two pig and two rabbit datasets were used in this study. The different datasets were collected from separate populations with different histories of selection for FE (Table 1). Rearing conditions are described in Déru et al. (2019) and Gilbert et al. (2007) for pigs and in Garreau, Hurtaud, and Drouilhet (2013) for rabbits. Briefly, the first pig dataset (pig1) includes data from French Large White maternal-line male pigs, raised at the INRA UEPR France Génétique Porc phenotyping station (Le Rheu, France). Piglets from different selection farms were received at the test station at weaning (3 weeks of age), penned in post-weaning facilities until 9 weeks of age and then moved to growing-finishing pens equipped with single-place electronic feeders fitted with a pig scale (Genstar, Acemo Skiold). Pigs remained in the same group of 14 animals from 3 weeks of age until the end of the test. Among a total of 1,663 pigs, 880 pigs were fed a two-phase conventional dietary sequence and 783 pigs were fed a two-phase high-fibre dietary sequence. Pigs had ad libitum access to feed and water. Feed intake and body weight gain were recorded from 30 to 120 kg of live weight. Pigs were then slaughtered, and carcass yield (CY) and lean meat percentage (LMP) were recorded for 1,603 animals. The average daily gain (ADG) was computed as the difference between body weight at the end (BW_end) and the beginning (BW_start) of the test period divided by the number of days elapsed, and the average metabolic body weight (AMBW) was computed as (BW_end^1.75 − BW_start^1.75) / [1.75 × (BW_end − BW_start)] (Noblet, Karege, & Dubois, 1991). The second pig dataset (pig2) includes data from French Large White pigs [a line described in Labroue, Gueblez, and Sellier (1997)]. Over an 18-week period (from ±67 to ±180 days of age), animals were fed ad libitum a pelleted diet of cereals and soybean meal. They had free access to water. Feed intake was recorded each time a pig accessed the feeder. Performances were recorded differently depending on whether the animals were candidates for selection (males) or not (females and castrated males).
For candidates for selection, the test period covered the period during which the animals' body weight was between 35 and 95 kg, while for non-candidates the test period ran from 10 weeks of age (~28 kg live weight) to slaughter (~110 kg live weight). For both groups, the average daily feed intake (ADFI) over the test period was computed as the sum of daily feed intakes divided by the number of days elapsed, and ADG was computed as the difference between BW at the end and the beginning of the test period divided by the number of days elapsed. Ultrasonic backfat thickness (BF) of selection candidates was measured on live animals at 95 kg as the average of six ultrasound measurements, at three locations on both sides of the spine: the neck, the back and the hips. For non-candidates, BF was measured on the carcass at slaughter. The AMBW was computed in the same way as for the pig1 dataset, resulting in a fixed value for the candidates for selection (the test period being between fixed weights: 35-95 kg). To account for the difference in the test period between the two groups, ADFI, ADG, BF and AMBW were standardized and zero-centred within groups, as proposed by Aliakbari, Delpuech, Labrune, Riquet, and Gilbert (2019).

The two rabbit lines were the paternal lines AGP39 and AGP59 of Hypharm, a French breeding company. These two rabbit lines are selected for body weight at 63 or 70 days, CY and resistance to digestive disorders. For both lines, at weaning, four kits of the first litter of each dam were placed in individual pens. They had free access to commercial pelleted feed until 63 days of age for AGP39 and 70 days of age for AGP59. Feed intake was recorded every week as the difference between the weight of feed delivered and refusals, for 5,213 rabbits over a 5-week period for AGP39 and for 4,584 rabbits over a 6-week period for AGP59. Rabbits were weighed after weaning and at the end of each week, and weekly ADG (WADG) was calculated as the difference between body weight at the end and at the beginning of each week divided by the number of days elapsed. Weekly feed intake (WFI) was thus obtained for each week, and weekly metabolic body weight (WMBW) was computed as WBW^0.75, where WBW is the weekly body weight, that is, the average of the weights recorded at the end and at the beginning of the respective week. For the analysis, WFI, WADG and WMBW were standardized per week (i.e., divided by their standard deviation) and were considered as repeated measurements of the same trait (5 and 6 repeated measures for AGP39 and AGP59, respectively).

| Methods

Data were analysed using the transmissibility model with maternal genetic effects, which is, for the different datasets, a submodel of the following global model:

y = Xβ + Zt + Wp + Sm + Rl + e,

where y is the ADFI over the growing period for pig1, the standardized zero-centred ADFI over the test period for pig2, and the standardized WFI for AGP39 and AGP59; β is the vector of fixed effects; t is the vector of transmissible values; p is the vector of permanent environmental effects (included in the models for data with repeated measurements); m is the vector of maternal genetic effects; l is the vector of litter effects (week-by-litter combination for rabbit data); e is the vector of residuals; and X, Z, W, S and R are the corresponding known incidence matrices.
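Since the model above regresses feed intake on production and maintenance covariates, RFI is conceptually the residual part of that regression. A minimal sketch of this residual computation in Python is given below; the covariate names follow the pig1 dataset, the data are synthetic placeholders, and a plain least-squares fit stands in for the mixed model actually used.

```python
import numpy as np

# Illustrative sketch: RFI as the residual of a linear regression of feed
# intake on production/maintenance covariates (here: ADG, AMBW, LMP, CY).
rng = np.random.default_rng(0)
n = 1603
adg, ambw, lmp, cy = rng.normal(size=(4, n))            # standardized covariates
adfi = 0.6 * adg + 0.3 * ambw + rng.normal(0, 0.2, n)   # synthetic feed intake

X = np.column_stack([np.ones(n), adg, ambw, lmp, cy])   # design with intercept
beta, *_ = np.linalg.lstsq(X, adfi, rcond=None)
rfi = adfi - X @ beta                                   # residual feed intake
print(rfi.mean(), rfi.std())                            # mean ~0 by construction
```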
For pig1 and rabbit data, all random effects were distributed as centred normal distributions with variance-covariance matrices equal to Aσ²_m for the maternal genetic effects, Iσ²_p for the permanent environmental effects, Iσ²_l for the litter effect and Iσ²_e for the residual effects, where the I are identity matrices of appropriate size. For pig2 data, to account for potential variance heterogeneity between candidates for selection and non-candidates (i.e., RFI is a different trait in the two categories), random effects were distributed as centred normal distributions with variance-covariance matrices equal to A ⊗ G_m for the maternal genetic effects, where G_m contains σ²_m,CS and σ²_m,NC, the maternal genetic variances for the candidates and non-candidates for selection, respectively, and σ_m,CS−NC, their covariance, with analogous group-structured matrices for the other random effects and the residual effects. The transmissible value t was normally distributed with variance-covariance matrix Mσ²_t (M ⊗ G_t for the pig2 dataset), where M is the unknown transmission relationship matrix. Considering that, for an animal i born from sire s and dam d, t_i = ω_s t_s + ω_d t_d + ε_i, the M matrix is a symmetric matrix with 1s on the diagonal and r_ij as off-diagonal elements. In the case of two animals i, j with n common ancestors (l):

r_ij = Σ_{l=1..n} r_ij,l, with r_ij,l = ω_s^k_ij,l × ω_d^q_ij,l, k_ij,l = k_il + k_jl, q_ij,l = q_il + q_jl,

where k_il and q_il are the numbers of sire and dam transmissions between ancestor l and animal i, respectively, and ω_s and ω_d are the unknown sire and dam path coefficients of transmission, respectively, subject to the constraints 0 ≤ ω_s ≤ 1 and 0 ≤ ω_d ≤ 1 (David & Ricard, 2019). Thus, in this model, the two path coefficients of transmission can take a large range of values that can model the different sources of inheritance: (a) they can both be equal to 0.5 to model a purely genetic transmission; in that case, the M matrix is the known pedigree relationship matrix A and the transmissible value is the direct breeding value, so the transmissibility model is the animal model usually applied in genetic studies (for the sake of simplicity, the "animal model" in the following corresponds to the constrained transmissibility model with ω_s = ω_d = 0.5); (b) they can both be lower than 0.5, in agreement with the vertical transmission of epigenetic marks (Tal, Kisdi, & Jablonka, 2010; Varona et al., 2015); (c) one coefficient can be higher than 0.5, in agreement with single-parent inheritance [microbiota (Bright & Bulgheresi, 2010), culture (Feldman & Cavalli-Sforza, 1975); see David and Ricard (2019) for details], which is of particular interest for the dam side. Since a trait may be transmitted from one generation to the next by different sources of inheritance, the path coefficients of transmission estimated in the transmissibility model combine these different modes of transmission. Thus, testing whether the transmission is not purely additive genetic consists of testing whether at least one of the path coefficients differs from 0.5.

Residual feed intake is obtained from a multiple linear regression of FI on traits accounting for expected production and maintenance requirements (Kennedy, Van Der Werf, & Meuwissen, 1993). Thus, in addition to the covariates that should be included in the model to compute expected production and maintenance requirements when analysing FI, the fixed effects included in the model were selected beforehand by comparing reduced nested mixed models (i.e., models that do not include transmissible and genetic effects) using likelihood ratio tests (LRT) and maximum likelihood estimation.
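Returning to the transmission relationship matrix defined above, a small sketch of how M can be built recursively from a pedigree follows, using the recursion t_i = ω_s t_s + ω_d t_d + ε_i. The pedigree encoding and variable names are our own, for illustration only; with ω_s = ω_d = 0.5 the result reduces to the additive relationship matrix A (ignoring inbreeding on the diagonal).

```python
import numpy as np

# Sketch: build the transmission relationship matrix M from a pedigree via
# the tabular rule r_ij = w_s * r[sire(i), j] + w_d * r[dam(i), j], with 1s
# on the diagonal. Pedigree: animal index -> (sire, dam), parents listed
# before offspring, -1 for an unknown parent.

def transmission_matrix(pedigree, w_s, w_d):
    n = len(pedigree)
    M = np.eye(n)
    for i, (s, d) in enumerate(pedigree):
        for j in range(i):  # fill row i against all older animals j
            r = 0.0
            if s >= 0:
                r += w_s * M[s, j]
            if d >= 0:
                r += w_d * M[d, j]
            M[i, j] = M[j, i] = r
    return M

# Example: two founders, one offspring, and a grand-offspring.
ped = [(-1, -1), (-1, -1), (0, 1), (0, 2)]
print(transmission_matrix(ped, 0.5, 0.5))    # = A for this pedigree
print(transmission_matrix(ped, 0.39, 0.44))  # pig2-like path coefficients
```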
Transmissible values and maternal genetic effects were included in the model for all species, while the other random effects were selected using the transmissibility model with the constraint ω_s = ω_d = 0.5 (i.e., the animal model), which runs much faster than the unconstrained transmissibility model. Selection was performed by comparing step-by-step nested models using LRT (REML estimation). In addition, variance heterogeneity between groups for the different random effects, and correlations (different from 1), were also tested using LRT for the pig2 dataset with the animal model. All LRT for parameters on the boundary of their parameter spaces (tests of a variance equal to zero, a correlation equal to 1, or sire and dam path coefficients equal to 0.5) were performed accounting for the change in the asymptotic distribution of the likelihood ratio statistic under H0 (i.e., the mixture ½χ²_{p−1} + ½χ²_p, where p is the number of parameters tested) (Foulley, Jaffrezic, & Robert-Granie, 2000; Self & Liang, 1987; Stram & Lee, 1994).

To model ADFI in the pig1 dataset, the fixed effects were the type of feeding regime (two classes) and the batch effect (36 levels), and ADG, AMBW, LMP and CY were fitted as covariates. A litter random effect was not included in the models because its variance did not differ significantly from 0. For the pig2 dataset, the fixed effects were sex (three levels), pen within group (32 levels), batch (99 levels), group*herd (four levels), group*pen_size (10 levels), group*ADG (ADG as a covariate), group*BF (BF as a covariate) and AMBW as a covariate for the non-candidates for selection. The selected random effect was the litter effect. The variances of the transmissible values were not different between groups. Transmissible and maternal genetic correlations between groups did not differ significantly from 1. Consequently, a unique vector of transmissible values was considered for the two groups, and the maternal genetic correlation was fixed to 1 in the analysis. For all the other random effects, the variances differed significantly between the candidate and non-candidate groups. For the two rabbit datasets, the fixed effects were the combined effects of week*batch (210 levels for AGP39 and 240 levels for AGP59), week*sex (10 levels for AGP39 and 12 levels for AGP59), week*litter size (35 levels for AGP39 and 42 levels for AGP59), week*WADG (WADG as a covariate) and week*WMBW (WMBW as a covariate). The selected random effects were the week*litter combination and the permanent environmental effects.

The parameters of the transmissibility model (variance components and the sire and dam path coefficients of transmission) were estimated by restricted maximum likelihood (REML) using ASReml and the OWN Fortran program developed by David (2018). Parameter estimates were used to compute the sire transmissibility, ω_s σ²_t / σ²_p, and the dam transmissibility, ω_d σ²_t / σ²_p (where σ²_p is the phenotypic variance), each of which corresponds to half the heritability in the animal model. To test the hypothesis of non-genetic inheritance, H0, "sire and dam coefficients of transmission are equal to 0.5", was tested against the H1 hypothesis, "at least one of the coefficients of transmission (sire or dam) differs from 0.5". The transmissibility and animal models were compared by performing an LRT of size 5% (mixture ½χ²_1 + ½χ²_2). Indeed, the animal model is a special case of the transmissibility model in which the sire and dam coefficients of transmission are fixed to 0.5; it is nested in the transmissibility model, so an LRT can be applied.
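Because the parameters under test lie on the boundary of their parameter space, the p-value of this LRT comes from an equal mixture of two chi-square distributions rather than a single one. A short sketch of that computation (for p = 2, as in the comparison of the animal and transmissibility models) might look as follows; the log-likelihood values are illustrative placeholders, not values from this study.

```python
from scipy.stats import chi2

# Sketch: p-value of a boundary LRT using the mixture 0.5*chi2(p-1) + 0.5*chi2(p).
# Here p = 2 (sire and dam path coefficients tested against 0.5).
def mixture_lrt_pvalue(logL_null, logL_full, p=2):
    lr = 2.0 * (logL_full - logL_null)           # likelihood ratio statistic
    return 0.5 * chi2.sf(lr, p - 1) + 0.5 * chi2.sf(lr, p)

# Illustrative REML log-likelihoods (placeholders):
print(mixture_lrt_pvalue(-5234.1, -5230.6))      # lr = 7.0 -> p ~ 0.02
```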
If the null hypothesis H0 is rejected, it can be concluded that the underlying model is not purely additive genetic, and the values of the sire and dam path coefficients of transmission give information about the other sources of inheritance (for instance, if the dam path coefficient of transmission is higher than the sire one, a single-parent source of inheritance [microbiota] can be suspected). To compare the predictions of the two models, the correlation between the sire and dam transmissible values obtained with the transmissibility model (ω̂_s t̂ for sires, ω̂_d t̂ for dams) and the animal model (0.5 t̂ for sires and dams) was computed. In addition, the percentage of animals in common among the 10% best animals, ranked on their transmissible value, was computed.

| RESULTS

Parameter estimates obtained with the animal and transmissibility models are provided in Table 2. For the animal model, the direct heritability of RFI (twice the sire or dam transmissibility) ranged from 0.10 ± 0.02 to 0.42 ± 0.09. Depending on the dataset, the maternal genetic variance obtained with the animal model represented 11%-52% of the direct genetic variance and was not significantly different from 0 for the pig1 dataset. The LRT that compared the animal and the transmissibility models showed that the null hypothesis "sire and dam path coefficients of transmission are equal to 0.5" (i.e., RFI is transmitted by genetic inheritance only) was rejected for the pig2 dataset only. For this dataset, the sire and dam path coefficients of transmission were both lower than 0.5, and the sire coefficient was lower (although not significantly, given the SE) than the dam coefficient (0.39 vs 0.44).

Table 2: Parameter estimates obtained with the animal and transmissibility models for the different species.

Even if not significant, we observed the same trend for the pig1 and AGP39 datasets: a lower value of the sire path coefficient of transmission compared with the dam path coefficient of transmission (0.46 and 0.38 vs 0.53 and 0.50, respectively). On the contrary, the sire and dam path coefficients estimated in the AGP59 line were both equal to 0.5, indicating that for that rabbit line the transmissibility model was equivalent to the animal model, resulting in similar estimates for the different variance components of the two models. For the three other datasets, the variances of the transmissible value tended to be higher than the variances of the direct genetic effects obtained with the animal model, especially for the pig2 dataset, for which it was nearly two times higher. The sire and dam transmissibility estimates for these three datasets tended to be higher than those obtained with the animal model, indicating a stronger parent-offspring regression than that assumed by the animal model. When the breeding values were compared with the sire and dam transmissible values (Table 3), we found that, as expected given the sire and dam path coefficient estimates, the breeding and transmissible values were the same for the AGP59 dataset. For the other datasets, despite the very high correlation between the breeding values and the sire or dam transmissible values (correlations higher than 0.98), we observed that the two models would probably not have resulted in the selection of the same animals.
Indeed, the percentage of animals in common among the 10% best animals, ranked on their breeding value or their transmissible value, was not very high on the sire's side (87%-93% depending on the species) and still lower on the dam's side (83%-90% depending on the species). It should be noted that the apparently low percentage of animals in common among the 10% best animals obtained on the sire's side in the pig2 dataset, despite a very high correlation between breeding and transmissible values, is due to the low number of animals used for the calculation (15).

Table 3: Correlations between estimated direct breeding values and transmissible values obtained with the animal and transmissibility models, and proportion of animals in common among the best 10% for the different species.

| DISCUSSION

We chose the transmissibility model to detect non-genetic inheritance for RFI in two different species. Non-genetic inheritance is assumed when at least one of the two path coefficients of transmission (sire or dam) estimated by the transmissibility model differs from 0.5. It has been shown that, in contrast to a model that aims at dissociating genetic from non-genetic inherited effects, the parameters of the transmissibility model are practically identifiable in most situations, which is its main advantage (David & Ricard, 2019). This model therefore does not aim at quantifying the proportion of variance explained by different sources of non-genetic inherited effects. That objective can only be achieved by considering additional information in the model, such as measurements of the shared microbiota, methylation patterns reflecting epigenetic transmission, etc. Indeed, disentangling genetic and non-genetic effects is challenging without information additional to pedigree and phenotypes (David & Ricard, 2019), which may explain the relatively low number of reports of significant epigenetic variance in the literature (Varona et al., 2015). It has been shown by simulation that, in simple situations, the LRT comparing the animal and the transmissibility models is conservative (David & Ricard, 2019). However, given that maternal genetic effects can mimic the transmission of non-genetic effects by inducing different covariances between offspring and dam and between offspring and sire (Willham, 1972), we included maternal genetic effects in the transmissibility and animal models, even when not significant, to avoid such confusion. It should be noted that it would have been possible to consider maternal transmissible values instead of maternal genetic effects in the models, that is, to account for non-genetic inheritance in the maternal effects. We did not use this approach because maternal genetic effects are generally not considered in models for RFI in growing animals (Berry & Crowley, 2012; Do, Strathe, Jensen, Mark, & Kadarmideen, 2013; Drouilhet et al., 2013) and were therefore not the focus of this study. We considered a null covariance between the transmissible value and the maternal genetic effects, since this parameter cannot be estimated with the data structure of the different datasets (Gerstmayr, 1992). It should also be noted that any other non-inherited factors that might induce different covariances between offspring and dam or offspring and sire may affect the conservativeness of the LRT that compares the animal and the transmissibility models (i.e., lead to a wrong conclusion of non-genetic inheritance).
When applying the transmissibility model, particular attention must therefore be paid to such sources of confusion with non-genetic inheritance, which must be taken into account in the model if known. For instance, mitochondrial inheritance can cause deviation from the law of transmission assumed in the animal model, leading to a higher covariance between dam and offspring than between sire and offspring.

The heritabilities obtained with the animal model were in line with previous studies. Gilbert et al. (2007) reported heritabilities of 0.14 and 0.24 for candidates and non-candidates for selection, respectively, in a subset of the pig2 dataset (four generations of divergent selection), using a two-step approach to estimate the genetic parameters of RFI. The higher heritability in the non-candidates for selection can be explained by a more accurately predicted feed intake compared with candidates. The higher heritability for the pig1 dataset compared with the pig2 dataset is probably due to different modalities of data quality control (only animals with "good" performances over the entire test period were retained for the analysis). This high heritability is in line with that reported by Do et al. (2013) (0.36-0.40). The moderate heritabilities reported in rabbits are close to the value reported by Drouilhet et al. (2013).

Our results using the transmissibility model to study RFI in different species were not entirely consistent. Indeed, in one dataset (AGP59), sire and dam path coefficients of transmission were equal to 0.5, whereas in the three other datasets, the sire path coefficient of transmission tended to be lower than the dam path coefficient, although significantly different from 0.5 in the pig2 dataset only. Close inspection of the data structure (relationships between phenotyped animals) of both the AGP59 and AGP39 datasets did not provide any insights that might explain this difference. An explanation could be the length of the test period, which was longer for the AGP59 line than for the AGP39 line. The impact of the environment experienced by the animal on non-genetic heritable effects may therefore have been more pronounced in the AGP59 line, resulting in a modification of the non-genetic inherited effects that consequently differed more from those of the parents. The power to detect non-genetic inheritance with the transmissibility model increases with the size of the population, the depth of the population structure (i.e., the number of different family links), the relative importance of the non-genetic inherited variance, the difference between the sire and dam path coefficients of transmission and the magnitude of their difference from 0.5 (David & Ricard, 2019). Our results were in line with these considerations. In the pig1 dataset, the sire and dam path coefficients were not very different and both were close to 0.5 (dam higher, sire lower). Consequently, the relative importance of the non-genetic inherited variance must be much higher than the genetic inherited variance, and/or a huge amount of data is necessary, to detect non-genetic inheritance in such situations. Even if the relative importance of the inherited variance was small for the rabbit AGP39 dataset, the discrepancy between the sire and dam path coefficients of transmission was higher than for the pig1 dataset, and, even if close to each other, the sire and dam path coefficients obtained for the pig2 dataset differed the most from 0.5, which provided a more favourable situation for detecting non-genetic inheritance.
Indeed, we were at the limit of significance for the AGP39 dataset and reached significance for the pig2 dataset. Increasing the size of the datasets will provide the gain in power required to detect non-genetic inheritance. However, that will also mean longer computing times and probably memory issues (to give an idea of the actual computing time, the transmissibility model ran for 9 hr before convergence for the AGP59 dataset on a Linux system with an Intel® Xeon® E5-2698v3 processor). To overcome this difficulty, we shall consider revising our program for estimating the parameters of the transmissibility model, which is currently based on ASReml, and creating stand-alone software dedicated to the transmissibility model. Research is underway on this subject. Finally, it should be noted that the standard errors of the estimates were generally slightly higher in the transmissibility than in the animal model, which is certainly a consequence of the additional parameters to estimate.

When path coefficient estimates differed from 0.5 (all datasets except AGP59), we observed, as a general trend between the animal and transmissibility models, a decrease in the residual variance and the maternal genetic variance, and a higher transmissibility variance compared with the genetic variance. This confirms the confusion that exists between maternal genetic and non-genetic inherited effects. The lower genetic variance compared with the transmissibility variance is explained by the use in the animal model of a set value for the path coefficients of transmission (0.5) that is generally too high. Consequently, to find the best fit for the covariances between the different types of relatives in the population, the genetic variance estimate is smaller than the transmissibility variance. However, it should be noted that for the pig1 dataset, the dam path coefficient estimated in the transmissibility model was higher than 0.50 (0.53), but the transmissibility variance was still slightly higher than the genetic variance, due to the sire path coefficient of transmission being lower than 0.5 (0.46).

Finally, our results suggest that phenomena other than genetic sources of inheritance explain the phenotypic resemblance for RFI observed between relatives, with a higher transmission from the dam's side than from the sire's side. It is likely that one of these non-genetic inherited effects is the gut microbiota. Indeed, it has been reported in pigs and rabbits that the gut microbiota affects RFI. The microbiota might be crucial in improving FE in herbivores, because gut microbes hydrolyse the plant fibres that mammalian digestive enzymes cannot degrade (Dehority, 1991; Van Soest, Robertson, & Lewis, 1991), and these fibres represent up to 70% of their energy intake (Flint, Bayer, Rincon, Lamed, & White, 2008). In rabbits, differences in caecal microbiota composition were reported between lines selected for FE and a control line (Drouilhet et al., 2016). In pigs, extreme individuals for FE showed faecal microbiota differences (Verschuren et al., 2018). There is evidence that the microbiota is transmitted from the dam to her offspring. In livestock, the transmission of all or part of this microbiota from one generation to the next is most likely the result of physical contact between newborns and the dam. Colonization begins at birth, during and after the passage through the birth canal, during suckling and maternal care, and by contact with the immediate environment (Abecia, Fondevila, Balcells, & Mcewan, 2007; Penders et al., 2006).
Transmission of the microbiota by the sire is rarely described in livestock, generally because there is no direct contact between the sire and his offspring. This difference may explain the higher path coefficients of transmission for the dams compared with the sires. Another non-genetic inherited effect that might affect the inheritance of RFI could be epigenetic effects. Indeed, it has been reported that epigenetic effects may impact FE [reviewed for pigs in (Ji et al., 2017) and for cattle in (Liu et al., 2019)]. However, to our knowledge, there is no evidence of the transmission across generations of epigenetic marks that affect FE.

Given that the dam and sire transmissibility estimates obtained with the transmissibility model were equal to (for AGP59) or higher than the transmissibility obtained with the animal model, the expected response to selection on direct effects (breeding or transmissible value, depending on the model) would be higher (or equal for AGP59) with the transmissibility model. However, it is important to note that selection on transmissible values implies selection on a combination of genetic, epigenetic, microbiota and cultural inherited values. If selection is relaxed, part of the benefit on the transmissible value achieved by previous selection will theoretically gradually disappear and only the genetic progress will be maintained (Tal et al., 2010). In this context, selection on breeding values appears more attractive for the long-term benefit of selection. Nonetheless, this would be the case only if the estimated breeding values obtained with the animal model really reflected the true genetic breeding values, other inherited factors excluded. It has in fact been shown using simulations that the breeding values estimated with the animal model capture part of the non-genetic inherited effects when these are present (David & Ricard, 2019). This finding explains the high correlations between transmissible and breeding value estimates reported in the present study. However, even if the correlation is high, selection on transmissible or breeding values will differ, as indicated by the percentage of animals in common among the 10% best animals selected with each model, this percentage being less than 100% when path coefficient estimates differ from 0.5.

The sensitivity of the non-genetic inherited factors to the environment can also be seen as an advantage. Indeed, modifying the rearing environment experienced by the future breeders may promote positive non-genetic inherited effects that will later be transmitted to the next generations. Recent work has reported levers of action at key moments in the lives of future reproducers when non-genetic inherited factors may be positively influenced. These key moments are mainly during foetal and early life. The levers of action are mainly based on animal welfare (through nutrition, housing conditions and human handling) and interactions between animals. For instance, it could be of interest, when possible, to identify dams with good maternal abilities and microbiota, and to perform cross-fostering for the potential future reproducers (given their genetic potential) as a tool to promote the transmission of a "good" microbiota, epigenome and behavioural skills to the next generations. To conclude, this study aimed at detecting non-genetic inheritance for FE in different species.
The results obtained were not entirely consistent across species, but mainly support the existence of non-genetic inheritance for this trait, with a higher path coefficient of transmission for the dam's side than for the sire's side.
Identification of Hybrid Okra Seeds Based on Near-Infrared Hyperspectral Imaging Technology

Near-infrared (874-1734 nm) hyperspectral imaging technology combined with chemometrics was used to identify parental and hybrid okra seeds. A total of 1740 okra seeds of three different varieties, comprising the male parent xiaolusi, the female parent xianzhi, and the hybrid seed penzai, were collected, and all of the samples were randomly divided into a calibration set and a prediction set in a ratio of 2:1. Principal component analysis (PCA) was applied to explore the separability of the different seeds based on their spectral characteristics. Fourteen and 86 characteristic wavelengths were extracted using the successive projections algorithm (SPA) and competitive adaptive reweighted sampling (CARS), respectively. Another 14 characteristic wavelengths were extracted using CARS combined with SPA. Partial least squares discriminant analysis (PLS-DA) and support vector machine (SVM) models were developed based on the characteristic wavelengths and on full-band spectroscopy. The experimental results showed that the SVM discriminant model worked well and that the correct recognition rate was over 93.62% based on full-band spectroscopy. As for the discriminant models based on characteristic wavelengths, the SVM model based on the CARS algorithm was better than the other two models. Combining the CARS+SVM calibration model and image processing technology, a pseudo-color map of the sample predictions was generated, which could intuitively identify the species of okra seeds. The whole process provides a new idea for the rapid screening and identification of hybrid okra seeds in agricultural breeding.

Introduction

Okra (Abelmoschus esculentus (L.) Moench), a versatile vegetable, has been widely cultivated all over the world. It is a powerhouse of various nutrients, such as protein, cellulose, unsaturated fatty acids, and minerals such as iron, calcium, manganese, potassium, zinc, and so on [1]. Additionally, it is low in calories and fat free [2]. It has been discovered that okra seeds are rich in flavonoids and polyphenols, all of which have strong anti-oxidative, anti-fatigue, and anti-cancer abilities [3][4][5][6]. Screening and identification of seeds has always been an important part of the agricultural breeding process. Breeding specialists typically cross-fertilize different pure lineages with the desired traits to produce offspring heterosis. At present, many studies have focused on the breeding of okra, including hybridization breeding [7][8][9]. Hybrid okra seeds exhibit heterosis, which can rapidly increase productivity, improve the quality of okra as a food, and so on [7]. However, the process of obtaining hybrid okra seeds is time-consuming and laborious. Breeding experts often have to grow hybrids to a certain stage in order to screen the seeds [8,9]. They use plant characteristics, such as plant height, leaf width, and fruit length, to select the optimal hybrid offspring. Spectroscopic and spectral imaging techniques provide comprehensive structural information on the components and properties of samples at the molecular level [10]. Nowadays, near-infrared hyperspectral technology has been widely used in food detection and the identification of varieties [11][12][13][14][15].
Near-infrared hyperspectral imaging is a fast, nondestructive detection technology combining machine vision and visible/near-infrared spectroscopy. With the help of near-infrared hyperspectral imaging, the spatial and spectral information of the samples can be obtained simultaneously. Hundreds of contiguous wavebands for each spatial position of the sample make up the near-infrared hyperspectral images [16]. The spatial and spectral information, which represents the external and internal information of the sample, is provided in hypercube form, and it is possible to obtain multiple sample spectra in a single scan [17,18]. This makes near-infrared hyperspectral imaging very effective for detecting the seeds of hybrid okra. Because different kinds of seeds contain different material information, seeds can be classified by near-infrared hyperspectral imaging combined with chemometrics. Yiying Zhao et al. (2017) identified the varieties of maize seeds using hyperspectral imaging and chemometrics [19]. They also studied the influence of calibration sample size on classification accuracy and obtained satisfactory results using a radial basis function neural network (RBFNN) model, with a calibration accuracy of 93.85% and a prediction accuracy of 91.00%. Apart from pure seed classification, some studies have focused on the spectral changes of seeds under different treatments. Xuping Feng et al. (2017) used near-infrared hyperspectral imaging technology and multidimensional data processing and analysis methods to distinguish transgenic maize seeds, and they achieved a classification accuracy of up to 99.43% with partial least squares discriminant analysis [20]. Min Huang et al. (2016) used near-infrared hyperspectral imaging to distinguish corn seeds of different years. They applied model updating to the least squares support vector machines (LSSVM) model, and the classification accuracy reached 94.40%, which was 10.30% higher than that of the non-updated models [13]. Santosh Shrestha et al. (2016) used near-infrared hyperspectral imaging of single tomato seeds combined with multidimensional data processing methods to analyze tomato quality [21]. Junfeng Gao et al. (2013) used near-infrared hyperspectral imaging to distinguish jatropha seeds from different geographical environments, with an identification rate of 93.75% [22]. To our knowledge, there is no research on the classification of hybrid okra seeds with the help of near-infrared hyperspectral imaging. Because of the small size of okra seeds, a large amount of seed information can be obtained at the same time, which is convenient for analysis and processing. In this study, a total of 1740 okra seed samples of three different related varieties were collected.

The purpose of this study was to investigate four goals: (1) to examine the feasibility of using near-infrared (NIR) hyperspectral imaging techniques to identify related hybrid okra seeds; (2) to select optimal characteristic wavelengths that capture the differences among hybrid okra seeds and their parents; (3) to build optimal discrimination models based on characteristic wavelengths, thus simplifying the prediction model and speeding up computation; and (4) to visualize the classification results of okra seeds in the form of a pseudo-color image by developing image processing algorithms.
Okra Seed Samples Preparation

The hybrid okra seeds used in this study and their parents were provided by the Zhejiang Academy of Agricultural Sciences, Zhejiang, China. All seeds were planted in the same block, line by line; planting conditions were strictly consistent, and all okra seeds were harvested at the same time in 2017. The okra seeds were then put into plastic bags and sealed in a plastic box to prevent moisture absorption during storage. The impact of environmental factors on the seeds was eliminated as much as possible. The 1740 okra seeds included three different varieties: xiaolusi representing the father, xianzhi representing the mother, and penzai representing the hybrid progeny. Each variety had 580 seeds. All of the seeds were of normal quality with no apparent damage in appearance.

Seed varieties were coded as 1, 2, and 3 for data processing. All of the samples were randomly divided into calibration and prediction sets in a ratio of 2:1. Therefore, 1160 okra seeds were used as the calibration set and 580 okra seeds were used as the prediction set. Okra seeds were evenly placed on a black plastic sheet.

Near-Infrared Hyperspectral Imaging

A laboratory-built hyperspectral imaging system was used to acquire hyperspectral images of the okra seeds. The whole system includes the following equipment: an imaging spectrograph (ImSpector N17E; Spectral Imaging Ltd., Oulu, Finland); a high-performance CCD camera (C8484-05; Hamamatsu, Hamamatsu City, Japan) coupled with a camera lens (OLES22; Specim, Spectral Imaging Ltd., Oulu, Finland); two 150 W tungsten halogen lamps (Fiber-Lite DC950 Illuminator; Dolan Jenner Industries Inc., Boxborough, MA, USA); a mobile platform controlled by a stepper motor (Isuzu Optics Corp., Taiwan, China); and a computer equipped with the data acquisition software (Xenics N17E, Isuzu Optics Corp., Taiwan, China), which controls the motor speed, exposure time, and so on. A non-deformable and clear image should be obtained by the system. The spectral range of the hyperspectral imaging system, whose spectral resolution is 5 nm, is 874-1734 nm. The camera has 320 × 256 (spatial × spectral) pixels. In order to obtain clear and usable spectral images, the relevant parameters of the test system need to be set before spectral collection. The height of the objective lens was set to 15 cm, the exposure time was set to 3 ms, and the moving speed of the platform was set to 15 mm/s. Before spectral data extraction and image processing, the raw hyperspectral images should be corrected. The white reference image was acquired using a white Teflon tile with nearly 100% reflectance. The black reference image was acquired by covering the lens completely with its opaque cap with all lights turned off. The calibrated image was calculated using the following equation:

I_C = (I_raw − I_dark) / (I_white − I_dark),

where I_raw is the raw hyperspectral image, I_dark is the dark reference image, I_white is the white reference image, and I_C is the calibrated hyperspectral image.
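As a concrete illustration of this flat-field correction, a minimal sketch in Python (using numpy) is given below; the array shapes and values are assumptions for illustration, not outputs of the acquisition software.

```python
import numpy as np

# Sketch of the correction I_C = (I_raw - I_dark) / (I_white - I_dark),
# applied element-wise to a hypercube of shape (rows, cols, bands).
def calibrate(raw, white, dark, eps=1e-6):
    return (raw - dark) / np.maximum(white - dark, eps)  # avoid division by zero

rows, cols, bands = 256, 320, 200
raw = np.random.uniform(0.1, 0.9, (rows, cols, bands))   # placeholder raw image
white = np.full((rows, cols, bands), 0.95)   # near-100% reflectance Teflon tile
dark = np.full((rows, cols, bands), 0.05)    # lens capped, lights off
reflectance = calibrate(raw, white, dark)    # values roughly in [0, 1]
```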
Spectral Collection

After near-infrared hyperspectral image acquisition, the spectral information of the whole images was collected. The spectral data of the okra seeds were collected in the wavelength range of 874-1734 nm. However, due to the influence of the surrounding environment and the optical equipment, the noise at the front and back ends of the spectrum was obvious. Therefore, the noisy front and rear bands were removed, and the spectral data between 975.01 and 1645.82 nm were selected to obtain the average spectra of the three kinds of okra seeds. To obtain the relevant information of the okra seeds, the background and the region of interest had to be segmented. The average spectrum of the okra seeds was calculated using the pixel spectra of the region of interest. Firstly, smoothed spectra were obtained by applying a wavelet transform (WT), using Daubechies 8 with decomposition scale 3, to the raw spectra [20]. Then, image segmentation was performed based on the different reflectance values of the background and the seeds. Finally, the averaged spectrum of each okra seed was collected for further analysis; all of these processes were conducted in MATLAB (2013b).

Multivariate Data Analysis

Three different methods were used in the present study: principal component analysis (PCA), partial least squares discriminant analysis (PLS-DA), and support vector machine (SVM). Seed morphology was first used to explore the feasibility of classifying hybrid okra seeds. PCA was carried out to visually show the differences between the different kinds of seeds based on their average spectral characteristics. Since the full-band spectrum contains a lot of redundant information, two methods of extracting characteristic wavelengths were adopted in this study: the successive projections algorithm (SPA) and competitive adaptive reweighted sampling (CARS). In addition to the 14 characteristic wavelengths extracted by SPA, CARS extracted 86 characteristic wavelengths, almost half of the total 200 bands. Thus, in order to reduce the number of characteristic wavelengths and keep the models simple, SPA was applied again to the wavelengths selected by CARS. Next, PLS-DA and SVM models based on the full spectrum and on the characteristic wavelengths were built. Finally, CARS-SPA-SVM combined with an image processing algorithm was used to draw the predicted pseudo-color map to show the classification results more intuitively.

PCA is a commonly used and effective data reduction and compression algorithm, and it has been used in NIR spectroscopy identification [23]. Its basic principle is to convert multiple correlated variables in the original data into new comprehensive variables (principal components) through a linear transformation.

The first few principal components with the highest contributions cover the main information of the original data. Therefore, this article preserves the first three principal components and compares the three types of okra seeds through the spatial distribution of the samples on these three principal components.
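A minimal sketch of this PCA step might look as follows; Python with scikit-learn is assumed here purely for illustration (the paper's analysis was done in MATLAB and The Unscrambler), and the spectra are synthetic stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch: PCA on mean seed spectra, keeping the first 3 principal components
# for a 3D score plot. Data are placeholders for the 200-band spectra.
n_seeds, n_bands = 1740, 200
spectra = np.random.rand(n_seeds, n_bands)   # rows = seeds, columns = wavelengths

pca = PCA(n_components=3)
scores = pca.fit_transform(spectra)          # (1740, 3) PC scores
print(pca.explained_variance_ratio_)         # contribution of PC1-PC3
```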
PLS-DA is a pattern recognition method widely used for classification from spectral data [24,25]. In this paper, the spectral data of the samples were used as the independent variable X and the class number as the dependent variable Y; the PLS-DA model was established in Unscrambler X 10.1 using leave-one-out cross-validation, and the prediction set was then predicted with this classification model. The discriminant accuracy of the modeling and prediction sets was calculated from the absolute difference between each sample's class number and the model's predicted value; since the predicted value is a real number, a threshold of 0.5 was set in the actual calculation. The parameters of the model were determined by the predicted residual sum of squares.

SVM is a machine learning algorithm based on the Vapnik-Chervonenkis (VC) dimension theory of statistical learning and the structural risk minimization principle [26,27]. It can be used for both qualitative and quantitative analysis of data. SVM maps the input space into a high-dimensional space through a kernel function, constructs the optimal classification plane to separate the classes accurately and correctly, and introduces the penalty coefficient and the kernel parameter (c, g) for correction. This ensures that the classification margin between the classes is the largest, and thus that the structural risk is minimized. SVM is widely used in data classification and analysis. In this paper, the modeling and prediction sets were input into MATLAB 2013b, where the SVM program for identifying okra seed varieties was run, with a radial basis function (RBF) as the kernel of the SVM model. The optimal (c, g) parameter combination was determined by a grid search within the optimization range of 2^-8 to 2^8, and the accuracy of the resulting model was reported.

The quality of these classification models was evaluated by the classification recognition rate: if the value predicted by a model matched the coded value of the sample, the identification was considered correct, and the classification recognition rate was calculated as the ratio of the number of correctly identified okra seeds to the total number of okra seeds.

Software Tools
The Evince version 4.6 hyperspectral image analysis software package (ITT Visual Information Solutions, Boulder, CO, USA) was used to analyze the hyperspectral images, and MATLAB version R2013b (The MathWorks, Natick, MA, USA) was used to conduct the multivariate data analysis. In addition, all of the graphs were produced with OriginPro 9.0 (OriginLab Corporation, Northampton, MA, USA). Model performance was evaluated by the classification accuracy of the calibration set and the prediction set.
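A minimal scikit-learn analogue of the PLS-DA and SVM classifiers described above is sketched below: PLS-DA is emulated by regressing the class codes 1-3 with PLS and applying the 0.5 threshold to the prediction error, and the SVM uses an RBF kernel with a grid search over (c, g) in 2^-8 to 2^8 as in the text. The synthetic data, the number of PLS components, and the 5-fold cross-validation are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# stand-in calibration/prediction spectra with class codes 1, 2, 3
rng = np.random.default_rng(0)
X_cal = rng.normal(size=(60, 200)) + np.repeat([0, 1, 2], 20)[:, None]
y_cal = np.repeat([1, 2, 3], 20)
X_pred, y_pred = X_cal, y_cal   # use held-out spectra in practice

def plsda_accuracy(X_tr, y_tr, X_te, y_te, n_components=10):
    """PLS regression on class codes; correct when |prediction - code| < 0.5."""
    pls = PLSRegression(n_components=n_components).fit(X_tr, y_tr)
    return float(np.mean(np.abs(pls.predict(X_te).ravel() - y_te) < 0.5))

def svm_grid(X_tr, y_tr):
    """RBF-kernel SVM with a (C, gamma) grid search over 2^-8 ... 2^8."""
    grid = {"C": 2.0 ** np.arange(-8, 9), "gamma": 2.0 ** np.arange(-8, 9)}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5).fit(X_tr, y_tr)
    return search.best_estimator_, search.best_params_

svm, params = svm_grid(X_cal, y_cal)
print("PLS-DA accuracy:", plsda_accuracy(X_cal, y_cal, X_pred, y_pred))
print("best (c, g):", params, "SVM accuracy:", svm.score(X_pred, y_pred))
```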
Spectroscopic Analysis
The spectral curves of the okra seeds extracted from the selected region are shown in Figure 1a; the trends of these lines, which represent the components of the okra seeds, are similar. Figure 1b shows the average spectra of all the samples of the three varieties, in which both obvious and slight differences can be observed. At the beginning and end of the band, the similarity between the male parent xiaolusi and the hybrid offspring penzai is relatively high, while the seeds of the female parent xianzhi can be clearly distinguished from the other two varieties. In the middle band, the three are similar at the troughs of the average spectrum, but their reflectance differs, which can be used to separate the three kinds of seeds. These differences may be due to differences in the chemical composition and molecular structure of the progeny caused by the genetic effects of the parents, and they provide the basis for the subsequent chemometric analysis [7,28]. It is therefore necessary to use NIR spectroscopy combined with chemometrics to establish discriminant models for the classification of seeds.

Principal Component Analysis of Spectral Data
In order to explore the separability of the different okra seeds, PCA, which minimizes the interference of redundant information, was applied to extract the critical components from the spectral data [10,29,30]. The three-dimensional (3D) principal component (PC) score plot of all the samples is illustrated in Figure 2.
All spectral data in the range of 975.01 to 1645.82 nm were analyzed. The explained variance of the first three principal components was 99.36%, of which the first principal component (PC1) contributed 81.41%, the second (PC2) 16.59%, and the third (PC3) 1.36%; these contributions cover the vast majority of the spectral variance. The three varieties were distributed separately, but their borders were unclear and overlapping, so distinguishing all three varieties of okra seeds by PCA alone was not easy. Conventional chemometric methods such as PCA might not be suitable for analyzing the spectral data of okra seeds [16,31]. Therefore, further modeling analyses are essential to identify the different kinds of okra seeds.

Classification Results and Analysis by the Discrimination Models Based on the Full Spectrum
Discrimination models for classifying the hybrid okra seeds were first built on the full spectrum. PLS-DA and SVM were used to establish the discrimination models, and classification accuracy was used as the evaluation index of model performance. As shown in Table 1, the classification ability of the SVM model, whose accuracy reached 99.31% for the modeling set and 93.62% for the prediction set, was clearly higher than that of the PLS-DA model, whose corresponding recognition rates were 83.36% and 82.59%, still usable for classification. Zhengjun Qiu et al. (2018) built SVM models to identify the variety of single rice seeds [18]; their accuracies of 86.9% for the training set and 84.0% for the test set were not as good as our results. Xiaoling Yang et al.
(2015) compared the classification results of waxy corn seeds using the SVM and PLS-DA models [15]. They also found that the performance of SVM was better than that of PLS-DA on most types of selected input datasets. The difference between the two discriminative methods may be due to the fact that the SVM model uses a radial basis function (RBF) kernel and performs a grid search within the optimization range to obtain a globally optimal parameter combination [26,27]. PLS-DA establishes a linear discriminant model, while the SVM algorithm establishes a non-linear model that can fully utilize the spectral information between the different classes [24]. Therefore, its classification performance is significantly better than that of PLS-DA.
Selection of Effective Wavelengths
The full-band spectral data contain redundant information; in order to increase the processing speed of the models and reduce the modeling time, two algorithms for extracting characteristic wavelengths were applied in this study, and a combination of the two methods was used to obtain the optimal number of characteristic wavelengths. Figure 3a shows the characteristic wavelengths extracted by the successive projections algorithm (SPA). SPA is a forward feature-variable selection method that selects the combination of variables with minimal redundant information and minimal collinearity, and it therefore has a wide range of applications in spectral feature wavelength selection [32-35]. Fourteen characteristic wavelengths were acquired. The band near 1065 nm is related to the O-H stretching vibration [36]. The spectral regions of 1041-1143 nm, 1211-1225 nm, 1360-1390 nm, and 1621-1654 nm are related to the C-H stretching vibration [36], and the band around 1472 nm is related to the N-H stretching vibration [36]. These groups exist in amino acids and other substances found in okra seeds, such as leucine, lysine, valine, and phenylalanine [37], indicating that the selected characteristic wavelengths are representative and can be used to establish an effective and reliable discriminant model.

CARS, a characteristic wavelength selection method based on Monte Carlo sampling and PLS regression coefficients, was also used to choose optimal wavelengths. Initially, 86 characteristic wavelengths were extracted from the 200 full-band wavelengths; although this deletes part of the whole band, it makes the data more concise. Figure 3b shows the distribution of the characteristic wavelengths selected by CARS. In order to further reduce the number of characteristic wavelengths, the CARS selection was further screened with SPA, and 14 characteristic wavelengths were finally selected; the resulting distribution of optimal wavelengths is shown in Figure 3c. All of the bands selected by CARS + SPA are related to the stretching vibrations of functional groups, including the N-H, C-H, and C=O groups [36]. According to some studies, the unsaturated fatty acids, proteins, and hydrocarbons of okra seeds also contain the corresponding functional groups [38].

Classification Results and Analysis by the Discrimination Models Based on the Characteristic Band Spectrum
Discrimination models were built on the characteristic wavelengths to reduce complexity and increase operating speed. For practical breeding applications, a detection device for hybrid okra seeds
requires faster processing speed and a more reliable model. SPA, CARS, and CARS + SPA were used to select the optimal wavelengths. The classification results of the SVM models were superior to those of PLS-DA, as shown in Table 2, and the CARS algorithm gave better discrimination results than SPA and CARS + SPA. This may be because CARS extracts many more characteristic wavelengths than the other two methods, so the retained spectrum contains sufficient informative components. Many studies have shown that there are differences in internal composition between hybrid seeds and their parents, and near-infrared hyperspectral imaging can capture this internal information of the okra seeds [1,2,20,22,24]. The classification accuracy of the prediction set of the SVM model based on CARS (94.83%) was even higher than that of the SVM model based on the whole spectrum. However, the accuracy of the discrimination models based on the characteristic wavelengths was generally lower than that of the models based on the full spectrum; this was particularly evident in the classification results of the PLS-DA model. For the two classification models based on SPA and CARS + SPA, the recognition rate was over 79%. After the characteristic wavelengths were extracted by CARS + SPA, an SVM model was established whose classification accuracy reached 97.41% for the modeling set and 92.24% for the prediction set, providing a new reference method for the breeding identification of hybrid okra seeds (see the sketch after this paragraph). Yiying Zhao et al. (2018) used SVM models to classify maize seed varieties [19]; based on optimal wavelengths, they achieved 93.85% calibration accuracy and 91.00% prediction accuracy, worse than our model. Wenwen Kong et al. (2013) established classification models for rice seed cultivars [39]; their SVM models based on optimal wavelengths achieved classification accuracies of 97.30% and 89.47% for the modeling and prediction sets, respectively, almost as good as our models. The good performance of CARS indicates that most of the spectral bands are informative for the classification of okra seeds; excessive deletion of spectral data is likely to discard much classification information, which explains why the other two methods were less satisfactory. The results show that near-infrared hyperspectral technology combined with chemometric methods can identify different kinds of okra seeds quickly and effectively, and that the SVM model has a good classification effect. Since okra seeds and their offspring were used as research subjects, genetic information is transmitted between parents and offspring; the resulting overlap in composition between the hybrid seeds and their parents forms a barrier to spectral classification.
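In code, modeling on selected wavelengths amounts to retraining the classifier on a column subset of the spectral matrix. The sketch below illustrates this with a hypothetical index list standing in for the CARS + SPA output (the actual retained bands are those plotted in Figure 3, which are not listed numerically in the text), and the SVM parameters are placeholders:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_cal = rng.normal(size=(60, 200)) + np.repeat([0, 1, 2], 20)[:, None]
y_cal = np.repeat([1, 2, 3], 20)

# hypothetical CARS + SPA output: indices of 14 retained bands (illustrative)
selected = np.array([5, 21, 40, 58, 77, 92, 110, 128, 141, 155, 168, 180, 191, 198])

svm_full = SVC(kernel="rbf", C=4.0, gamma=0.01).fit(X_cal, y_cal)
svm_sub = SVC(kernel="rbf", C=4.0, gamma=0.01).fit(X_cal[:, selected], y_cal)

# the reduced model sees 14 of 200 bands, cutting acquisition and processing cost
print(svm_full.score(X_cal, y_cal), svm_sub.score(X_cal[:, selected], y_cal))
```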
Visual Prediction of Okra Seeds
Hyperspectral images contain the spectral and spatial information of the samples simultaneously, and there is a correspondence between the two that can be exploited by image processing. In order to verify the performance of the classification model, okra seed prediction maps were plotted based on the average spectrum of each seed in the hyperspectral image. Combining the model with image processing can generate a pseudo-color map predicting the type of each sample, distinguishing different sample types with different colors and visualizing the classification results intuitively. Because of the large amount of data in the full spectrum, the computational load is high, which is not conducive to rapid prediction; therefore, the SVM model based on the optimal wavelengths extracted by the CARS algorithm was selected as the classification model. The average spectrum of each okra seed in the hyperspectral image was taken as input, and 686 seeds of the three varieties were selected in total. The original seed map and the prediction map are shown in Figure 4, where blue refers to the female parent (xianzhi), yellow to the male parent (xiaolusi), and red to the hybrid seed (penzai). Comparing Figure 4a,b, the three kinds of okra seeds can hardly be distinguished by the naked eye in the original map. There were some misjudgments in the classification images of the three okra seeds, but the overall correct discrimination rate was 91.41%. Affected by factors such as the hyperspectral image segmentation algorithm and the image resolution, the okra seeds in the visualized pseudo-color map show some deformation, but most remain intact and this does not affect identification and analysis. This method can be used to make rough preliminary judgments on the variety of hybrid okra seeds, and it provides a new approach for the rapid and accurate screening of seeds in cross breeding.
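One way to realize such a pseudo-color prediction map, assuming the segmentation step has already produced an integer label image with one label per seed, is sketched below; the color coding mirrors Figure 4 (blue = xianzhi, yellow = xiaolusi, red = penzai), while the function and array names are assumptions:

```python
import numpy as np

CLASS_RGB = {1: (255, 255, 0),   # code 1, xiaolusi (male parent): yellow
             2: (0, 0, 255),     # code 2, xianzhi (female parent): blue
             3: (255, 0, 0)}     # code 3, penzai (hybrid): red

def pseudo_color_map(labels, cube, predict):
    """labels: (H, W) int image, 0 = background, k > 0 = seed k.
    cube: (H, W, bands) calibrated hyperspectral image.
    predict: callable mapping a mean spectrum to a class code 1/2/3."""
    out = np.zeros(labels.shape + (3,), dtype=np.uint8)
    for k in range(1, int(labels.max()) + 1):
        mask = labels == k
        if mask.any():
            mean_spectrum = cube[mask].mean(axis=0)  # average spectrum of seed k
            out[mask] = CLASS_RGB[int(predict(mean_spectrum))]
    return out
```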
Conclusions
In this study, three types of okra seeds were identified using near-infrared hyperspectral imaging technology. A total of 1740 okra seeds were selected as samples, of which 1160 were used as the modeling set and 580 as the prediction set. The PCA method was first applied to the spectral data to observe the initial separability of the three types of okra seeds. Fourteen characteristic wavelengths were selected using the SPA algorithm, and 86 characteristic wavelengths were extracted using the CARS algorithm; 14 characteristic wavelengths were then further extracted by applying SPA to the wavelengths selected by CARS, in order to simplify the models. PLS-DA and SVM discriminant models based on the full spectrum and on the optimal wavelengths were established. Compared with PLS-DA, the SVM algorithm was more effective at classifying the hybrid okra seeds: the recognition rates of the modeling and prediction sets of the full-band discrimination model reached 99.31% and 93.62%, respectively. The characteristic wavelengths extracted by CARS gave an even better modeling effect, with recognition rates of 98.71% and 94.83% for the modeling and prediction sets, respectively. Using the CARS + SVM model combined with image processing techniques, a pseudo-color map of the classification was generated to identify the different kinds of okra seeds. The results show that near-infrared hyperspectral imaging combined with chemometrics can identify the parents and hybrid offspring of okra, providing methods and ideas for later rapid detection in okra hybrid breeding. Future experiments will focus on expanding the number of okra seed varieties to form a spectral database of okra seeds, improving the reliability and stability of the classification and identification model so as to classify hybrid okra seeds more quickly and efficiently.
Figure 1. Spectral image of the okra seeds extracted from the region of interest (ROI) using near-infrared hyperspectral technology: (a) raw spectral image of all okra seeds; (b) average spectral image of the three varieties of okra seeds.
Figure 2. The three-dimensional (3D) principal component (PC) score plot of the three varieties of okra seeds.
Figure 3. The distribution of characteristic wavelengths selected by different methods: (a) wavelengths selected by the successive projections algorithm (SPA); (b) wavelengths selected by competitive adaptive reweighted sampling (CARS); (c) wavelengths selected by CARS + SPA.
Figure 4. Images of the three strains of okra seeds: (a) the raw hyperspectral image; (b) the pseudo-color image (from left to right: xianzhi, xiaolusi, penzai).
Table 1. Comparison of discrimination results obtained by partial least squares discriminant analysis (PLS-DA) and support vector machine (SVM) models with the complete spectral data. Note: the PLS-DA model's parameter is the optimal number of latent variables (LVs); the SVM model's parameters are the penalty parameter (c) and kernel function parameter (g), shown as (c, g).
Table 2. Discrimination results of the PLS-DA and SVM models based on the characteristic wavelengths. SPA: successive projections algorithm; CARS: competitive adaptive reweighted sampling.
2018-12-01T12:12:19.045Z
2018-10-01T00:00:00.000
{ "year": 2018, "sha1": "e428eef726d98f9d592f3b7093976cc1a771f3b5", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/8/10/1793/pdf?version=1538387691", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "e428eef726d98f9d592f3b7093976cc1a771f3b5", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Engineering" ] }
56532183
pes2o/s2orc
v3-fos-license
An Overview of Renal Diseases in Children in Pokhara
Objective: To determine the current pattern and prevalence of renal diseases in childhood in this region of Nepal.
Material and Methods: A retrospective study of renal diseases in children attending the Paediatric OPD and those hospitalised in Manipal Teaching Hospital, Pokhara, was done over a period of 6 years (September 2000 to September 2006). A detailed clinical and laboratory evaluation was performed at baseline. The children were managed according to the disease diagnosed. These cases are under follow-up and some have undergone surgical treatment.
Results: 228 children (123 boys and 105 girls) were diagnosed to have renal disease. Among them, 39.5% had urinary tract infection (UTI), 30.7% were suffering from acute glomerulonephritis (AGN), 17.5% were cases of nephrotic syndrome (NS) and 12% had some other problems: for example, 6.14% had genetic defects, 2.63% had renal stone, 2.2% had pre-renal acute renal failure, and 1.3% had unexplained recurrent hematuria. All cases of UTI underwent thorough investigation and were treated accordingly. All cases of AGN are planned for follow-up for 1½ years, and among them 3 have required biopsy to date. All cases of NS are under regular follow-up and 2 have undergone biopsy. Renal stones were operated on successfully. All cases of acute and chronic renal failure required dialysis. Out of 5 (2.5%) chronic renal failures, 2 with end-stage renal disease expired after repeated hemodialysis and three still require dialysis. Among the obstructive uropathies, 43% had renal stone, 36% had posterior urethral valve and 21% had VUR.
Conclusion: It can be concluded that renal disease is not uncommon in children. It can be completely cured with proper and adequate treatment, though it sometimes carries a bad prognosis when it reaches end-stage renal disease. Early recognition, timely treatment and regular follow-up are mandatory in the management of children with renal diseases.

Introduction
Childhood renal diseases (CRD) are commonly associated with few or no specific symptoms [1]. This fraction of patients may not be managed adequately; hence the diagnosis of these cases is important. This study therefore evaluates the current pattern and prevalence of renal diseases in childhood. It also points out the relative public health burden of renal diseases in childhood.

Materials and Methods
This was a retrospective study carried out over a period of 6 years, from September 2000 to September 2006. All children attending the Paediatric Outpatient Department and those admitted in the paediatrics ward of Manipal Teaching Hospital were included in this study. All the children had come with symptoms related to the renal system. Every case was subjected to a detailed clinical examination, followed by relevant investigations. The children were divided into four age groups: 0-1 yr, >1-5 yr, >5-10 yr and over 10 years. The renal problems were classified as urinary tract infections (UTI), acute glomerulonephritis (AGN), nephrotic syndrome (NS), and others. The investigations included routine urine examination, renal function tests, ultrasonogram (USG) of the abdomen, micturating cystourethrogram (MCU), intravenous pyelography (IVP) and other scans as required.
Discussion
Children with renal disease are brought to the hospital with a variety of symptoms, whether or not obviously related to kidney disease [2]. In our study, out of all the children who attended the paediatric OPD over a period of 6 years, 250 were found to have significant findings that warranted the full battery of investigations related to kidney disease, including urine routine examination, urine culture and sensitivity, renal function tests, ultrasonogram, etc. Of these, 228 were finally labelled as having renal problems. In the general population, about 30 people in every 100,000 develop kidney failure each year; in the paediatric population aged 19 and under, the annual rate is only 1 or 2 new cases in every 100,000 children [3]. Urinary tract infections (UTIs) were the most frequent diagnosis, accounting for 39.5% of cases, followed by acute glomerulonephritis (AGN) with 30.7% and nephrotic syndrome (NS) with 17.5%. Throughout childhood, the risk of a UTI is two percent for boys and eight percent for girls [4].
The incidence and prevalence of nephritis in the paediatric population is not known [5]. However, in a study published in Zhongguo Dang Dai Er Ke Za Zhi, 29.09% were diagnosed as nephrotic syndrome, 22.00% as acute nephritis syndrome, 17.21% as isolated hematuria, 15.87% as purpura nephritis, and 7.30% as hepatitis B virus-associated nephritis [6]. This pattern is different from the one seen in our study. In the developed countries, acute post-infectious (most often post-streptococcal) GN has almost been wiped out, but in Asia it still accounts for a large number of cases [7]. The overall prevalence of NS in childhood is approximately 2-5 cases per 100,000 children, with a cumulative prevalence rate of approximately 15.5/100,000 [8]. Symptoms of significance include edema, oliguria, hematuria, anuria and even evidence of renal failure; in older children, the presence of fever and hypertension are among the modes of presentation [9]. These features were noted in our study also. Hematuria is one of the most common urinary findings that bring children to the attention of paediatric nephrologists [10]; thirty-eight percent of our cases also presented with hematuria. Acute renal failure is a serious condition in critically ill patients, but less literature is available on acute renal failure in critically ill children [11]. The incidence rate of acute renal failure in the PICU was 4.5% [11]; in our study, 3.9% of the patients presented with ARF. The most common causes in our cases were prerenal conditions such as post-diarrhoeal states and septicemia, followed by post-streptococcal glomerulonephritis, hepato-renal failure and posterior urethral valve, while in another study the main causes of ARF were hemolytic uraemic syndrome in 18.2%, oncologic pathologies in 18.2% and cardiac surgery in 11.4% [11].
In a study by Kari JA, sixty-six children had chronic renal failure (CRF) over a period of 4 years, whereas we had only 8 cases over 6 years. Congenital abnormalities of the renal system were the major cause of CRF (50%), followed by neurogenic bladder (19.6%), either idiopathic (6%) or associated with neural tube defects (13.6%); hereditary conditions were the cause in 12% and glomerular disease in 13.6% [12]. In our study, glomerulonephritis, obstructive uropathy and nephrotic syndrome were the causes of CRF.
Coming to the management part, the cost of investigation and treatment of these children is high, and many patients in the developing world cannot afford it. Children with renal disease may require simple treatment in the outpatient department, or may be so seriously ill that they require treatment in a paediatric renal or intensive care unit (PICU), or continuous renal replacement therapies such as haemofiltration (HF). The special procedure carried out was micturating cystourethrogram (MCU) in 5.3% of cases, which helped us to diagnose obstructive uropathy. Renal biopsy was done in only 5 (2.2%) cases, probably due to its expensive cost and the lack of good histopathological support; other specialized procedures are not available in this part of the country. Many of our cases were managed in the outpatient department and the inpatient ward of the paediatric department. Some required PICU care, and 12 (5.3%) cases required dialysis. In a study by Kari JA, 10.6% received hemodialysis and 21.2% of cases received peritoneal dialysis [12]. In yet another study, peritoneal dialysis (PD) was done in 23%, hemodialysis (HD) in 15%, and HF in 28% [13]. With recent advances, stone management has changed from an open surgical approach to less invasive procedures such as extracorporeal shock-wave lithotripsy and endoscopic techniques; in our study, all cases of renal stone underwent successful surgery. Finally, 4 (1.8%) cases were referred to higher centres for management in a nephrology unit. Five patients (2.2%) died; mortality reported by others ranged from 18% to 20% [12,13].

Conclusion
The early detection of renal diseases in childhood leads to better treatment and a reduction in mortality and morbidity. Our study, the first from the western region in Pokhara, attempts to show the incidence and prevalence of renal disease in children. The difficulty of determining these relates to frequent under-diagnosis. Many tests are available, but in developing countries most cannot be done as they are unavailable or too expensive. This constitutes a big public health problem, and as facilities for treatment are expensive or unavailable, many children die before getting optimal treatment.

Table 1: Age and Sex Distribution at Presentation
Table 2: Pattern of Renal Diseases. Note: All cases of AGN are planned for follow-up for 1½ years, and among them three patients have required biopsy to date. All cases of NS are under regular follow-up, of which two have undergone biopsy. Renal stones were operated on successfully. All cases of acute and chronic renal failure required dialysis. Out of 3 (25%) chronic renal failures, 2 with end-stage renal disease expired after repeated hemodialysis and one still requires dialysis.
Table 3: Causes of Renal Failure
Table 4: Signs and Symptoms at Presentation
Table 5: Special Procedures and Management Required
2018-12-11T01:30:40.512Z
2009-01-30T00:00:00.000
{ "year": 2009, "sha1": "dce5dcbaf071129bee2e39d60c6ee066cfb17705", "oa_license": "CCBY", "oa_url": "https://www.nepjol.info/index.php/JNPS/article/download/1414/1389", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "dce5dcbaf071129bee2e39d60c6ee066cfb17705", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
79398424
pes2o/s2orc
v3-fos-license
Role of Doppler Navigation in Minimally Invasive Procedures under Ultrasound Guidance
Aim: To highlight the possibilities of Doppler methods to optimize navigation and control of percutaneous echo-controlled minimally invasive interventions, performed with ultrasound scanners (Dornier AI 5200, Philips HDI 5000, Toshiba Aplio 500) and linear (7.5 MHz) and convex (3.5-5.0 MHz) probes. Paramount was the safety of the minimally invasive procedure through visualization of the instrument used.
Results: It was established that the power Doppler mode was optimal for navigation in percutaneous interventions: it helped prevent hemorrhagic complications, select a safe acoustic window, improve the localization of fluid motion in the hollow needle, and enhance visualization of the biopsy instrument based on initiation of the twinkling artifact. The use of the developed methods to improve ultrasound visualization in minimally invasive treatment of liver abscesses significantly reduced pain (by 12.9%), avoided haemorrhage and leakage of pus into the peritoneal cavity, reduced the number of inadequate drainages (by 36.6%), and reduced the duration of inpatient treatment 2.2-fold.
Conclusion: It was established that use of the power Doppler mode enhances the effectiveness and safety of percutaneous ultrasound-guided interventions. Thus, the studies confirmed the clinical value of these visualization optimization techniques in minimally invasive procedures under ultrasound guidance.

INTRODUCTION
Minimally invasive procedures under ultrasound guidance are widely used in various branches of clinical medicine. The spectrum of possibilities in interventional ultrasound is extremely wide: diagnostic biopsy, aspiration and drainage of abscesses and cysts, local destruction of tumors, etc. [1,2,3,4]. Often, however, the use of ultrasound is limited due to the risk of post-biopsy complications, especially damage to blood vessels and bleeding along the trajectory of the puncture channel [4,5,6]. Complications may also result from trauma caused by difficulties in visualizing the distal end of the biopsy instrument (needle, trocar, drain, etc.). It is therefore necessary to develop and utilize methods that optimize ultrasound visualization of the biopsy instrument in order to prevent complications in minimally invasive interventions. The development of Doppler techniques, in particular color Doppler (CD) and power Doppler (PD) imaging, which display motion in visualized structures in color, expands the possibilities of ultrasound navigation.

Aim
This study was aimed at highlighting the possibilities of Doppler methods to optimize navigation and control of percutaneous echo-controlled minimally invasive interventions.

MATERIALS AND METHODS
Ten years of experience in carrying out percutaneous minimally invasive interventions under ultrasound guidance using Doppler modes is highlighted. We reviewed the results and after-effects of 25,543 diagnostic and therapeutic minimally invasive interventions (fine needle aspiration and core needle biopsy, aspiration, drainage, laser and ethanol destruction of lesions, local hyperthermia) on the abdominal organs (liver, biliary system, pancreas, spleen, non-organic lesions), chest (mediastinum, lungs), retroperitoneal space, thyroid and mammary glands, musculoskeletal system, ear, nose and throat organs, skin and subcutaneous tissue, and lymph nodes of different localizations (Table 1). Minimally invasive interventions were performed under continuous ultrasound guidance with ultrasound scanners (Dornier AI 5200, Philips HDI 5000, Toshiba Aplio 500), with probes selected according to the depth of the intervention area (linear and convex).
The instrument was introduced at one end of the probe so that it was visualized in a longitudinal position, using the free-hand technique. A prerequisite was visualization of the distal end of the instrument throughout the procedure. When difficulties arose in visualization, we used original Doppler-based methods to optimize it. To assess the effectiveness of these visualization optimization methods, we analyzed the results of their use in minimally invasive treatment of liver abscesses (LA) under US control in 86 patients of the main group (MG) (12 aspirations, 74 drainages). The comparison group (CG) included 159 patients who had similar interventions (17 aspirations, 142 drainages). The groups did not differ significantly in age and sex composition, liver abscess volume, or the severity of the patients' clinical condition. Severity of pain was assessed on a 10-point visual analog scale; scores of 5 points and above were considered significantly severe. The frequency of hemorrhagic complications and the frequency and reasons for inadequate drainage were recorded, and the duration of inpatient treatment was compared. The results were analyzed using conventional parametric and non-parametric statistical methods (Student's t-test and chi-square, χ²).

RESULTS AND DISCUSSION
Analysis of the possibilities of Doppler use in the navigation of puncture interventions made it possible to highlight several aspects. Of major significance in color Doppler and power Doppler is the choice of a safe acoustic window, through color visualization of blood flow in vessels of medium and small calibers that may not be seen in B-mode [7]. In particular, during interventions on the liver, especially in the presence of biliary hypertension, differentiation of blood vessels from bile ducts is relevant, and this may be done by visualizing blood flow. Color visualization of blood vessels of medium and small calibers makes it possible to avoid traumatizing them during minimally invasive procedures. Color Doppler and power Doppler modes may be used to control and avoid post-procedural complications (Fig. 1). It was established that in echolocation in the color Doppler mode, the overall quality of the ultrasound image is reduced by redistributing the signal post-processing between the color and gray-scale image, which makes echographic control of the procedure difficult. A vessel with blood in it is visualized as a color structure with blurred outlines and dimensions slightly larger than the actual ones (Fig. 1a). With echolocation of the same zone in the power Doppler mode (Fig. 1b), the images are clearer, and blood vessels are visualized as color structures corresponding to their real anatomical dimensions and boundaries. The direction of flow is of no fundamental importance for navigation during the procedure. Therefore, power Doppler is, in our view, the preferred mode for controlling minimally invasive procedures, owing to its more accurate and rapid visualization of blood vessels. The power Doppler mode may also be used in situations where visualization of the puncture needle or drain in B-mode is difficult. We have proposed several techniques to optimize the ultrasound visualization of the instrument. A simple and reliable method is to visualize fluid motion in the hollow of the needle or drain in power Doppler mode, which is reflected in color on the screen (Fig. 2).
The moving fluid may be physiologic saline, an anesthetic agent, or content aspirated from a fluid collection or cavity. Besides fluid motion, color Doppler may directly reflect the puncture instrument. We propose a further method of visualizing a catheter drain by manual initiation of low-amplitude vibration, which can be visualized in power Doppler mode (Fig. 3). The twinkling artifact is a phenomenon in which a color structure appears directly behind a stationary object, creating the appearance of movement. A feature of the twinkling artifact is its appearance at the boundary separating media of different densities. The appearance of the twinkling artifact is explained by the collision of ultrasonic beams with multiple scattered reflectors constituting a heterogeneous surface; the resulting increase in pulse duration is perceived by the ultrasound scanner as Doppler frequency shifts. The twinkling artifact can manifest in different modes: spectral, power Doppler, color Doppler, and B-mode [8,9,10,11,12]. In the power Doppler mode, the twinkling artifact appears as a monochrome color; its intensity can vary from a single unstable color signal to highly stable colored structures with higher density and posterior acoustic shadow [13,14]. The density of the object and the condition of its surface affect the frequency and intensity of twinkling artifact formation. We have developed a method for improving the visualization of instruments based on artificial initiation of the twinkling artifact. The method uses a new informative parameter, the appearance of an artificially initiated twinkling artifact in power Doppler mode, to indicate the presence of an object denser than the surrounding tissue: the puncture or biopsy instrument. The study was carried out as follows. B-mode was used to reveal structures having echogenic characteristics similar to those of biopsy instruments; thus, the drain in B-mode was visualized as two parallel linear hyperechogenic structures. The probe was positioned so that the scanning area covered the intended zone of the structures of interest. The power Doppler mode was then turned on, positioning the structures within the power Doppler scanning sector, and the minimum emitted power that increased the appearance of artifacts (noise) was selected. The ultrasonic probe was pressed against the body surface and moved back and forth; the frequency and amplitude of these movements were selected empirically to cause vibration of the underlying tissue. At the interface of different densities (the instrument and its surrounding tissues), the twinkling artifact appeared. It was visualized in power Doppler mode as a bright color locus on the surface of the hyperechogenic structure, displaced as the scanning angle changed and remaining close to the source of the ultrasound rays (Fig. 4). In some cases it is possible to use several techniques together to improve visualization of the puncture or biopsy instrument during minimally invasive interventions. Thus, Fig. 5 shows an example of visualization in power Doppler mode of fluid motion in a drain placed in a liver abscess cavity, in longitudinal and transverse sections, showing the distal end of the "pig-tail" drain. At the tortuous zone, initiation of vibrations produces a twinkling artifact from the drain walls. The combination of the twinkling artifact with the posterior acoustic shadowing effect confirms the distal location of a foreign body (drain).
Thus, the use of Doppler modes in conjunction with the original techniques improved visualization in all cases of differentiating the puncture instrument, which increased the safety of minimally invasive procedures under ultrasound guidance. The developed techniques and methods are universal and can be used in echo-controlled minimally invasive procedures on any organs and tissues. Comparative analysis of the number and nature of complications in the study groups showed that the improvement in instrument visualization significantly lowered the frequency of pain requiring relief (6 or more points on the visual analogue scale; p = .05): 6 cases (7.0%) in the main group (MG) versus 30 (18.9%) in the comparison group (CG). This reduction is, in our opinion, associated with improved visualization of the distal end of the instrument (Fig. 4 shows the initiated twinkling artifacts from the walls of a drain placed in a liver abscess cavity), more careful control of its displacement and, consequently, less trauma to the intercostal nerve and the liver itself in the process of puncture. Application of the developed innovations made it possible to completely avoid, in the main group, complications such as haemorrhage and subcapsular haematoma, as well as leakage of pus into the peritoneal cavity, which were observed in the comparison group in 3 (1.9%), 1 (0.6%) and 4 (2.5%) cases, respectively. These results are explained by confident visualization of the distal part of the instrument and its safe direction into the liver abscess cavity. Improving the quality of ultrasound imaging significantly reduced the number of cases of inadequate drainage by 36.6%, from 45.8±4.2% in the CG to 18.9±4.6% in the MG (p < 0.001), including fallout of the drain by 20.8% (p = .05), migration of the drain by 7.2% (p = .05), and delayed evacuation of content due to inflection by 6.1% (p = .05) (Table 2). This reduction was achieved, in our opinion, through careful control of the location of the drain in the liver abscess cavity by means of ultrasonic visualization using the original optimization methods. The duration of hospital treatment of patients in the study groups was also analyzed. In the comparison group, the duration of hospitalization varied from 5 to 24 days, averaging 18.2±9.9 days; in the main group it ranged from 5 to 10 days, averaging 8.3±4.4 days, which was significantly lower (p < 0.001). These results indicate a statistically significant reduction in the duration of inpatient treatment with the use of the proposed measures to improve ultrasound visualization. The reduction was achieved mainly through fewer cases of inadequate drainage requiring replacement of the drain and prolonging the healing process. Thus, the studies confirmed the clinical value of the visualization optimization techniques in minimally invasive procedures under ultrasound guidance.

CONCLUSION
The use of the power Doppler mode for navigation during percutaneous minimally invasive procedures improves visualization. It made it possible to visualize blood vessels of small and medium caliber, select a safe acoustic window, and prevent possible haemorrhagic complications in minimally invasive procedures. The power Doppler mode can be used in percutaneous minimally invasive procedures to improve visualization of puncture or biopsy instruments, including by artificial initiation of the twinkling artifact.
The use of the developed methods to improve ultrasound visualization in minimally invasive treatment of liver abscesses significantly reduces the frequency of painful complications by 12.9%, avoids haemorrhagic complications and leakage of pus into the abdominal cavity, reduces the number of cases of inadequate drainage by 36.6%, and reduces the duration of hospital treatment 2.2-fold.

CONSENT
Each procedure was carried out after obtaining the patient's consent by signing the consent form.

ETHICAL APPROVAL
The Bioethical Committee of M. Gorky Donetsk National Medical University approved the conduct of this research with approval No. 122/16 dated November 20, 2012.
2019-03-16T13:11:57.448Z
2016-01-10T00:00:00.000
{ "year": 2016, "sha1": "15c89f929a8d0333c49343d0c6763d2da14faeed", "oa_license": "CCBY", "oa_url": "https://doi.org/10.9734/bjmmr/2016/23076", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "59f431cd33b4888f33d453d8236a89a104dcf23e", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
264344996
pes2o/s2orc
v3-fos-license
Toxigenic Clostridium difficile-mediated diarrhoea in hematopoietic stem cell transplantation in-patients: rapid diagnosis and efficient treatment
Allogeneic hematopoietic stem cell transplant (HSCT) recipients are susceptible to all kinds of infectious agents, including Clostridium difficile. We studied 86 allogeneic-HSCT patients who developed diarrhoea while receiving antibiotics. DNA from stool samples was examined for the presence of C. difficile toxin genes (tcdA; tcdB) by multiplex real-time PCR. The results showed nine toxigenic C. difficile, amongst which seven were positive for both toxins and two were positive for tcdB only. Six of the toxigenic C. difficile organisms harbouring both toxin genes were also recovered by toxigenic culture. Clostridium difficile infection was controlled successfully with oral Metronidazole and Vancomycin in the confirmed infected patients.

Introduction
Leukemic patients undergoing hematopoietic stem cell transplantation (HSCT) carry significant risk factors for graft-versus-host disease, including immunosuppressant drugs, chemotherapy and broad-spectrum antibiotics (given to avoid infectious complications). Empirical antibiotic therapy in HSCT patients increases the risk of developing Clostridium difficile infection (CDI) during hospitalisation [1]. Considerable CDI rates of 4-27% in HSCT recipients make early detection and control of CDI in bone marrow transplanted patients imperative [2]. Accurate and quick detection of toxin genes in samples can be applied to patients with antibiotic-associated diarrhoea, especially those who undergo transplantation or chemotherapy. Conventionally, detection of C. difficile is based on two-staged culture and PCR algorithms that confirm the C. difficile-specific gene gluD (glutamate dehydrogenase, GDH); diagnosis may be completed by detecting toxins with ELISA or by tracing toxin genes with PCR-based methods. This time-consuming procedure has been suggested in guidelines as the gold standard for approaching CDI diagnosis [3]. Clostridium difficile pathogenesis depends on the production of toxins A and/or B (TcdA and TcdB, respectively), which are attractive targets for the detection of pathogenic types. The main objective of the present study was to evaluate CDI rates in leukaemic patients receiving stem cell transplantation by targeting tcdA and tcdB with a rapid multiplex real-time PCR method. This procedure may offer a short-cut for considerably faster and more accurate detection of toxigenic C. difficile.
Amongst the 86 diarrhoeic patients, 42 had acute lymphoblastic leukaemia (ALL; 48.8%) as the underlying disease, followed by 26 with chronic myelogenous leukaemia (CML; 30.2%) and 18 with acute myeloid leukaemia (AML; 20.9%). Because of the development of febrile periods, mostly due to neutropenia, pulmonary and other relevant infections, the patients empirically received Imipenem (36%), Meropenem (24%), Ciprofloxacin (72%) and Cefixime (31%) as the predominant antibacterial agents. The history of antibiotic therapy started from one day to one month before the onset of diarrhoea.

Multiplex real-time PCR
Stool samples were spiked with the internal control of the RealStar® C. difficile PCR Kit (altona, Germany) before extraction of total DNA with the QIAamp® DNA Stool Mini Kit (Qiagen, Germany). DNA from stool samples was investigated for the presence of tcdA and tcdB with the RealStar® C. difficile PCR Kit, using both LightCycler® 96 (Roche, Germany) and Rotor-Gene Q (Qiagen) machines simultaneously to ensure the accuracy of the real-time PCR results.
Multiplex real-time PCR assessment of the stool samples showed toxigenic CDI in nine patients (10.46%), amongst whom seven (8.1%) were positive for both tcdA and tcdB and two (2.32%) were positive only for tcdB.

Toxigenic culture
Stool samples were collected; one spike (∼1 g) of each specimen was treated by alcohol shock for 1 h, and another was transferred into Clostridium difficile Brucella broth for 1 h under strict anaerobic conditions [4]. Treated specimens were inoculated onto Cycloserine-Cefoxitin Fructose Agar enriched with vitamin K1 (1 μg/ml) and hemin (5 μg/ml) and incubated anaerobically for 2-5 days at 37°C. Recovered colonies were confirmed by microbiological characteristics and by the presence of the C. difficile species-specific GDH gene (gluD) on PCR [5]. The presence of the tcdA and tcdB genes was explored with the RealStar® C. difficile PCR Kit as described above. Toxigenic culture yielded 10 isolates, which were confirmed as C. difficile by positive PCR results for gluD. Real-time PCR assessment of the C. difficile isolates for tcdA and tcdB showed six (6.97%) harbouring both tcdA and tcdB; the four other isolates were non-toxigenic. Overall, the culture method missed three toxigenic C. difficile, giving a calculated sensitivity of 76.92% (95% CI 46.19-94.96%) and specificity of 100.00% (95% CI 95.26-100.00%) for the culture method. Five of the patients diagnosed with toxigenic C. difficile had ALL (55.5%), two had CML (22.2%) and the other two had AML (22.2%). Pairwise two-tailed regression showed no significant correlation of either the antibiotics used or the leukaemia type with CDI in the subject patients.

Treatment
After the development of diarrhoea in patients receiving antibiotics, a single dose of Vancomycin and Metronidazole was administered orally (following stool sample collection) [6]. Oral treatment with Metronidazole and Vancomycin was continued for at least 7 days in individuals with real-time PCR confirmation of toxigenic CDI. CDI was successfully controlled, and no mortality due to toxigenic C. difficile was recorded during this study.
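As a side note, the reported sensitivity and its exact (Clopper-Pearson) 95% confidence interval can be reproduced with a few lines of Python. The counts 10/13 are inferred from the quoted 76.92% and are an assumption rather than figures stated explicitly in the text:

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact binomial confidence interval for x successes out of n trials."""
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

x, n = 10, 13   # culture-positive / assumed truly toxigenic (inferred from 76.92%)
print(f"sensitivity {x / n:.2%}, 95% CI {clopper_pearson(x, n)}")
# -> sensitivity 76.92%, 95% CI approximately (0.462, 0.950)
```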
difficle cause the same clinical signs of illnesses ranging from mild diarrhoea to life-threatening inflammation of colon, toxic megacolon and deadly infectious colitis [8]. Detecting tcdA and tcdB directly in fesses leads to the diagnosis of pathogenic C. difficile rather than targeting gluD gene that only confirms the presence of C. difficile without distinguishing between pathogen and non-pathogen types. Timing of toxigenic culture and the isolation of C. difficile is unfavourable when the urgency of HSCT patient cases comes into account. However, in the current study, toxigenic culture results confirm the sensitivity and specificity of the multiplex-real-time PCR detection of the pathogen. In addition, microbiological isolation and characterisation of the toxigenic C. difficile may fuel further investigations such as epidemiological tracing of the pathogen transmission and the infection control measurements. Antibiotic susceptibility testing of the C. difficile isolates also reported to be considered [9]. Oral Metronidazole and Vancomycin were administered after collecting stool samples from HSCT patients with diarrhoea [6]. This blind treatment continued if the results approve the infection with toxigenic C. difficile. Multiplex real-time PCR results were obtained on the same day (fewer than four hours) and helped the precise decision on the proper use of antibiotics. Although there are widely used serological methods for screening of the C. difficile toxins in stool samples, real-time PCR detection of the pathogen is significantly more sensitive, specific and rapid. It remarkably improves the proper management of critically infection-susceptible HSCT patients [10]. We assume that using real-time PCR for detection of the toxigenic C. difficile in leukaemic/HSCT patients with antibioticassociated diarrhoea significantly increases the accuracy of diagnosis in a short time. This leads to efficient management and control of the disease and reduces mortality due to CDI.
2021-08-11T06:17:32.289Z
2021-08-10T00:00:00.000
{ "year": 2021, "sha1": "f3287dcc76f227ca7ac96faf590ecf1839d150fc", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/56B256E185D7BCAAFFC5BC1AA95B8672/S095026882100162Xa.pdf/div-class-title-toxigenic-span-class-italic-clostridium-difficile-span-mediated-diarrhoea-in-hematopoietic-stem-cell-transplantation-in-patients-rapid-diagnosis-and-efficient-treatment-div.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f801a684009996e8d7cae6d81c5acb3b08597d23", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
3558190
pes2o/s2orc
v3-fos-license
Deep learning in radiology: an overview of the concepts and a survey of the state of the art Deep learning is a branch of artificial intelligence where networks of simple interconnected units are used to extract patterns from data in order to solve complex problems. Deep learning algorithms have shown groundbreaking performance in a variety of sophisticated tasks, especially those related to images. They have often matched or exceeded human performance. Since the medical field of radiology mostly relies on extracting useful information from images, it is a very natural application area for deep learning, and research in this area has rapidly grown in recent years. In this article, we review the clinical reality of radiology and discuss the opportunities for application of deep learning algorithms. We also introduce basic concepts of deep learning including convolutional neural networks. Then, we present a survey of the research in deep learning applied to radiology. We organize the studies by the types of specific tasks that they attempt to solve and review the broad range of utilized deep learning algorithms. Finally, we briefly discuss opportunities and challenges for incorporating deep learning in the radiology practice of the future. Introduction The field of deep learning encompasses a group of artificial intelligence methods which employ a large number of simple interconnected units to perform complicated tasks. Deep learning algorithms, rather than using a set of pre-programmed instructions, are capable of learning from large amounts of data. The tasks solved by these algorithms include localizing and classifying objects in images, understanding language, playing games, and many others. While the flagship of deep learning, convolutional neural networks, were first introduced decades ago, it is only the last 5 years that showed an astonishing success of these algorithms elevating their status from an interesting but mostly impractical idea to the go-to algorithm in artificial intelligence. In recent years, not only have deep learning algorithms been able to surpass performance of other methods in artificial intelligence [1] but in some tasks, they have shown performance superior to humans [2,3,4]. Arguably, the most well-known achievement of deep learning to date is its performance in the Im-ageNet competition. ImageNet is a database of more than 14,000,000 annotated natural images containing real world objects such as cars, animals, and buildings (image-net.org). One of the goals of the competition, started in 2009, is to assign each image to one of 1 000 predefined categories. When a deep learning-based algorithm first appeared in the competition in 2012, it dramatically improved the error rate from 0.258 in the previous year (image-net.org/challenges/LSVRC/2011/results) to 0.153 (image-net.org/challenges/LSVRC/2012/results). The performance of deep learning algorithms for image classification has been improving since then and is now considered to be comparable to or better than human performance. Other areas relevant to the topic of this article, where deep learning algorithms have seen impressive results, is in the automatic generation of sophisticated captions for images that consist of full sentences [5] as well as localization and outlining of objects in images [6,7]. To illustrate the capacity of current detection network, Figure 1 shows the result of a deep neural network approach applied to detect objects in an image. 
There are likely three reasons for the recent success of deep learning algorithms: availability of data, increased processing power, and rapid development of algorithms. These are highly connected: availability of large datasets of images and computing power made it possible to demonstrate the strength of the basic concepts of deep learning and, motivated the development of further datasets and algorithms. Increasing ease in applying algorithms and affordable graphical processing units have allowed for larger scientific and technical communities to get involved, and develop even more powerful algorithms which further advanced the field. As the primary strength of deep learning has been in image analysis, the potential applications in radiology have become very quickly apparent. The development of algorithms for radiology has shown some inertia due to the time needed for acquisition of the appropriate expertise in the medical imaging community as well as limited availability of large medical imaging datasets. However, the last 2-3 years have seen remarkable productivity in the field. It is now well recognized by both researchers and clinicians that deep learning will play a significant role in radiology. In this paper, we begin with a general overview of radiology as the application domain and consider where deep learning could have the most significant impact. Then, we introduce the general concepts of deep learning. This is followed by an overview of the recent work in the field. The article closes with remarks regarding the future of deep learning in radiology. The practice of radiology Radiology is a branch of medicine that focuses on using medical images for detection, diagnosis, and characterization of disease (diagnostic radiology) as well as guiding procedural interventions (interventional radiology). In the United States, a typical radiologist undergoes 14-15 years of education after high school including 4 years of college, 4 years of medical school, 1 year of internship, 4 years of radiology residency, and generally 1-2 years of fellowship training. While medical image interpretation work is centered in radiology practices and academic departments, some of it is also performed within other branches of medicine including cardiology, orthopedics, and surgery. In diagnostic radiology, personal interaction between a radiologist with patients and other physicians is often limited. The primary duty of a radiologist (particularly outside of an academic institution) is to view an image delivered to his/her reading station and generate a written report of findings. This well-structured and isolated nature of the radiologist's work makes is a particularly attractive application of artificial intelligence algorithms. Deep learning techniques (and artificial intelligence algorithms in general) have a tremendous potential to influence the practice of radiology. Unlike most other facets of medicine, all or nearly all of the primary data utilized in imaging is digital, lending itself to analysis by artificial intelligence algorithms. In this section, we describe some of the primary challenges that a radiologist faces in his/her daily diagnostic radiology practice and briefly point to opportunities for deep learning to address them. While this is not intended to be an exhaustive description of every tasks that radiologists perform in their practice, it reflects majority of diagnostic radiology practice. 
We conclude this section with a description of some medical image interpretation tasks that are currently not performed by radiologists, but could be incorporated in radiology practice using deep learning. Disease Detection One of the most challenging tasks in the interpretation of imaging is the rapid differentiation of abnormalities from normal background anatomy. For example, in the interpretation of mammography, each radiograph contains thousands of individual focal densities, regional densities, and geometric points and lines that must be interpreted to detect a small number of suspicious or abnormal findings. In most cases, the entire mammogram should be interpreted as normal or negative, adding further complexity to the interpretive task. In order to be useful, a computer algorithm does not have to detect all objects of interest (e.g., abnormalities) and be perfectly specific (i.e., not mark any normal locations). For example, in screening mammography, approximately 80% of screening mammograms should be read as negative according to the ACR BI-RADS guideline, and of the 20% of examinations that trigger additional evaluation, many will ultimately be categorized as negative or benign [8]. An algorithm that could successfully categorize even half of screening mammograms as definitely negative would dramatically reduce the effort required to interpret a large batch of examinations. Disease Diagnosis and Management Once an abnormality has been detected, the often-complex task of determining a diagnosis and the disease management implications is undertaken. For focal masses generically, a large number of features must be integrated in order to decide how to appropriately manage the finding. These features can include size, location, attenuation or signal intensity, borders, heterogeneity, change over time, and others. In some cases, simple criteria have been established and validated for the management of focal findings. For example, most focal lesions in the kidney can be characterized as either simple or minimally complex cysts, which are almost uniformly benign. On the other hand, most lesions in the kidney that are solid are considered to have high malignant potential. Finally, a minority of focal kidney lesions is considered indeterminate and can be managed accordingly. While for some types of abnormalities making the diagnostic and disease management decision follows straightforward guidelines, for other types of abnormalities, management algorithms are much more complex. In the BI-RADS guideline for assessing focal lesions in the breast, a mass is categorized according to its shape (oval, round, or irregular), margin (circumscribed, obscured, microlobulated, indistinct, or spiculated), and its density (higher, equal to, or lower density than the glandular tissue, or fat-containing) [8]. Based on the constellation of features, the radiologist must then decide whether a mass is likely benign or requires follow-up or biopsy. In the LI-RADS criteria for assessing focal liver lesions in patients at risk for developing hepatocellular carcinoma, five major features, and up to 21 ancillary features, are assessed to risk-stratify lesions and determine their management [9]. Deep learning algorithms have the potential to assess a large number of features, even those previously not considered by radiologists, and arrive at a repeatable conclusion in a fraction of the time required for a human interpreter. 
Perhaps most promisingly, these algorithms could be used to categorize large amounts of existing imaging data and correlate features with downstream health outcomes, a process that is currently extremely laborious and time-consuming when human interpretation is required. Workflow While detection, diagnosis, and characterization of disease receive the primary attention among algorithm developers, another important area where artificial intelligence could contribute is in facilitating the workflow of the radiologists while interpreting images. With the widespread conversion from printed films to centralized Picture Archiving and Viewing Systems (PACS) as well as the availability of multi-planar, multi-contrast, and multi-phase imaging, radiologists have seen exponential growth in the size and complexity of image data to be analyzed. Additionally, interpretations must often be rendered in the context of a multitude of prior examinations. The simple task of finding and presenting these data is complex, and artificial intelligence systems may be well-suited for this role. An example of a highly complex workflow is that for many cancer patients. Such patients are not uncommonly afflicted with more than one primary tumor, metastatic disease to numerous sites, and may have undergone a variety of biopsies, locoregional therapies, and systemic therapies with varying results. In the simplest scenario, interpretation of a follow-up imaging examination requires colocalization of all relevant sites of disease between the current and prior examinations. Measurements of size are performed, and in some cases functional features, such as tumor perfusion or diffusion restriction, are assessed either subjectively or objectively. Most radiology practices utilize imaging equipment of different types, generations, and often different vendors, thus simply identifying the appropriate image sets in prior examinations can be very challenging. After the appropriate images have been identified, the radiologist must colocalize disease sites and attempt to obtain precise repeated measurements in order to ensure that the values obtained from the current and prior examinations can be compared. Each of the above tasks is time-consuming and does not necessarily require the full skill of a radiologist. However, standard PACS systems are not able to reliably present all of the above data for a variety of reasons, including the variability in labeling the types and components of imaging examinations, the variability in patient positioning and anatomy between examinations, the variability in modalities used to image the same portion of the anatomy, as well as other factors. In principle, an artificial intelligence algorithm could assess a patient's prior imaging, bring forward examinations that include the relevant body part(s), detect the image modality and contrast type, and determine the location of the area of interest within the relevant anatomy to reduce the radiologist's effort in performing these relatively mundane tasks. Image interpretation tasks that radiologists do not perform but deep learning could In addition to performing tasks that are a part of the current radiological practice, computer algorithms could perform medical image interpretation tasks that radiologists do not perform on a regular basis. The research toward this goal has been underway for some time, mostly using traditional machine learning and image processing algorithms. 
One example is radiogenomics [10], which aims to find relationships between imaging features of tumors and their genomic characteristics. Examples can be found in breast cancer [11], glioblastoma [12], low grade glioma [13], and kidney cancer [14]. Radiogenomics is not a part of the typical clinical practice of a radiologist. Another example is prediction of outcomes of cancer patients with applications in glioblastoma [12,15], lower grade glioma, [13], and breast cancer [16]. While imaging features have a potential to be informative of patient outcomes, very few are currently used to guide oncological treatment. Deep learning could facilitate the process of incorporating more of the information available from imaging into the oncology practice. 3 An introduction to deep learning Terminology To understand deep learning, it is helpful to first understand the related concepts of artificial intelligence and machine learning. Artificial intelligence is a set of computer algorithms that are able to perform complicated tasks or tasks that require intelligence when conducted by humans. Machine learning is a subset of artificial intelligence algorithms which, to perform these complicated tasks, are able to learn from provided data and do not require pre-defined rules of reasoning. The field of machine learning is very diverse and has already had notable applications in medical imaging [17]. Deep learning is a sub-discipline of machine learning that relies on networks of simple interconnected units. In deep learning models, these units are connected to form multiple layers that are capable of generating increasingly high level representations of the provided input (e.g., images). Below, in order to explain the architecture of deep learning models, we introduce the artificial neural network in general and one specific type: the convolutional neural network. Then, we detail the "learning" process of these networks, which is the process of incorporating the patterns extracted from data into deep neural networks. Artificial Neural Networks Artificial neural networks (ANNs) are machine learning models based on basic concepts dating as far as 1940s, significant development in 1970s and 1980s and a period notable popularity in 1990s and 2000s followed by a period of being overshadowed by other machine learning algorithms. ANNs consist of a multitude of interconnected processing units, called neurons, usually organized in layers. A traditional ANN typically used in the practice of machine learning contains 2 to 3 layers of neurons. Each neuron performs a very simple operation. While many neuron models were proposed, a typical neuron simply multiplies each input by a certain weight, then adds all the products for all the inputs and applies a simple nondecreasing function at the end. Even though each neuron performs a very rudimentary calculation, the interconnected nature of the network allows for the performance of very sophisticated calculations and implementation of very complicated functions. Convolutional Neural Networks Deep neural networks are a special type of an ANN. The most common type of a deep neural network is a deep convolutional neural network (CNN). A deep convolutional neural network, while inheriting the properties of a generic ANN, has also its own specific features. First, it is deep. A typical number of layers is 10-30 but in extreme cases it could exceed 1 000. Second, the neurons are connected such that multiple neurons share weights. 
This effectively allows the network to perform convolutions (or template matching) of the input image with the filters (defined by the weights) within the CNN. Other special feature of CNNs is that between some layers, they perform pooling which makes the network invariant to small shifts of the images. Finally, CNNs typically use a different activation function of the neurons as compared to traditional ANNs. Figure 2 shows an example of a small architecture for a typical CNN. One can see that the first layers are the convolutional ones which serve the role of generating useful features for classification. Those layers can be thought of as implementing image filters, ranging from simple filters that match edges to those that eventually match much more complicated shapes such as eyes, or tumors. Further from the network input are so called fully connected layers (similar to traditional ANNs) which utilize the features extracted by the convolutional layers to generate a decision (e.g., assign a label). A variety of deep learning architectures have been proposed, often driven by characteristics of the task at hand (e.g., fully convolutional neural networks for image segmentation). Some of these are described in more detail in the section of this paper that reviews the current state of the art. The learning process in convolutional neural networks Above, we described general characteristics of traditional neural networks and deep learning's flagship: the convolutional neural network. Next, we will explore how to make those networks perform useful tasks. This is accomplished in the process referred to as learning or training. The learning process of a convolutional neural network simply consists of changing the weights of the individual neurons in response to the provided data. In the most popular type of learning process, called supervised learning, a training example contains an object of interest (e.g., an image of a tumor) and a label (e.g., the tumor's pathology: benign or malignant). In our example, the image is presented to the network's input, and the calculation is carried out within the network to produce a prediction based on the current weights of the network. Then, the network's prediction is compared to the actual label of the object and an error is calculated. This error is then propagated through the network to change the values of the network's weights such that the next time the network analyzes this example, the error decreases. In practice, the adaptation of the weights is performed after a group of examples (a batch) are presented to the network. This process is called error backpropagation or stochastic gradient descent. Various modifications of stochastic gradient descent algorithm have been developed [18]. In principle, this iterative process consists of calculations of error between the output of the model and the desired output and adjusting the weights in the direction where the error decreases. The most straightforward way of training is to start with a random set of weights and train using available data specific to the problem being solved (training from scratch). However, given the large number of parameters (weights) in a network, often above 10 million, and a limited amount of training data (common in medical imaging), a network may overfit to the available data, resulting in poor performance on test data. Two training methods have been developed to address this issue: transfer learning [19] and off-the-shelf features (a.k.a. deep features) [20]. 
A diagram comparing training from scratch with transfer learning and off-the-shelf deep features is shown in Figure 3. In the transfer learning approach, the network is first trained using a different dataset, for example an ImageNet collection. Then, the network is "fine-tuned" through additional training with data specific to the problem to be addressed. The idea behind this approach is that solving different visual tasks shares a certain level of processing such as recognition of edges or simple shapes. This approach has been shown successful in, for example, prediction of survival time from brain MRI in patients with glioblastoma tumor [21] or in skin lesion classification [22]. Another approach that addresses the issue of limited training data is the deep "off-the-shelf" features approach which uses convolutional neural networks which have been trained on a different dataset to extract features from the images. This is done by extracting outputs of layers prior to the network's final layer. Those layers typically have hundreds or thousands of outputs. Then, these outputs are used as inputs to "traditional" classifiers such as linear discriminant analysis, support vector machines, or decision trees. This is similar to transfer learning (and is sometimes considered a part of transfer learning) with the difference being that the last layers of a CNN are replaced by a traditional classifier and the early layers are not additionally trained. Deep learning vs "traditional" machine learning Increasingly often we hear a distinction between deep learning and "traditional" machine learning (see Figure 4). The difference is very important, particularly in the context of medical imaging. In traditional machine learning, the typical first step is feature extraction. This means that to classify an object, one must decide which characteristics of an object will be important and implement algorithms that are able to capture these characteristics. A number of sophisticated algorithms in the field of computer vision have been proposed for this purpose and a variety of size, shape, texture, and other features were extracted. This process is to a large extent arbitrary since the machine learning researcher or practitioner often must guess which features will be of use for a particular task and runs the risk of including useless and redundant features and, more importantly, not including truly useful features. In deep learning, the process of feature extraction and decision making are merged and trainable, and therefore no choices need to be made regarding which features should be extracted; this is decided by the network in the training process. However, the cost of allowing the neural network to select its own features is a requirement for much larger training data sets. In this section, we give an overview of applications of deep learning in radiology. We organized this section by the tasks that the deep learning algorithms perform. Within each subsection, we describe different methods applied, and when possible, we systematically discuss the evolution of these methods in the recent years. Classification In a classification task, an object is assigned to one of the predefined classes. 
A number of different classification tasks can be found in the domain of radiology such as: classification of an image or an examination to determine the presence or an absence of an abnormality; classification of abnormalities as benign or malignant; classification of cancerous lesions according to their histopathological and genomic features; prognostication; and classification for the purpose of organization radiological data. Deep learning is becoming the methodology of choice for classifying radiological data. The majority of the available deep learning classifiers use convolutional neural networks with a varying number of convolutional layers followed by fully connected layers. The availability of radiological data is limited as compared to the natural image datasets which drove the development of deep learning techniques in the last 5 years. Therefore, many applications of deep learning in medical image classification have resorted to techniques meant to alleviate this issue: off-the-shelf features and transfer learning [23] discussed in the previous section of this article. Off-the-shelf features have performed well in a variety of domains [20], and this technique has been successfully applied to medical imaging [24,25]. In [24], the authors combined the deep off-the shelf features extracted from a pre-trained VGG19 network with hand-crafted features for determining malignancy of breast lesions in mammography, ultrasound, and MRI. In [25], long-term and short term survival was predicted for patients with lung carcinoma. The transfer learning strategy, which involves fine tuning of a network pre-trained on a different dataset, has been applied to a variety of tasks such as classification of prostate MR images to distinguish patients with prostate cancer from patients with benign prostate conditions [26], identification of CT images with pulmonary tuberculosis [27], and classification of radiographs to identify hip osteoarthritis [28]. Most of the studies which apply the transfer learning strategy replace and retrain the deepest layer of a network, whereas shallow layers are fixed after the initial training. A variation of the transfer learning strategy combines fine-tuning and deep features approach. It fine-tunes a pre-trained network on a new dataset to obtain more task-specific deep feature representations. An example of this is the study [29], which performed ultrasound imaging-based thyroid nodule classification using features extracted from a fine-tuned pre-trained GoogLeNet. An ensemble of fine-tuned CNN classifiers was shown to predict radiological image modality in the study [30]. A comparison of approaches using deep features and transfer learning with fine tuning was shown in the study [31] identifying radiogenomic relationships in breast cancer MR images and in [32] for predicting the upstaging of ductal carcinoma in situ to invasive breast cancer from breast cancer MR images. In both of these problems, deep features performed better than transfer learning with the fine tuning approach. However, both of these studies faced the issue of a small size of the training set. When sufficient data are available, an entire deep neural network can be trained from a random initialization (training from scratch). The size of the network to be trained depends on task and dataset characteristics. However, the commonly used architecture in medical imaging is based on AlexNet [1] and VGG [33] with modifications that have fewer layers and weights. 
Examples of training from scratch can also be found in various studies such as: assessing for the presence of Alzheimer's disease based on brain MRI using deep learning [34,35], glioma grading in MRI [36], and disease staging and prognosis in chest CT of smokers [37]. Recent advances in the design of CNN architectures has made networks easier to train and more efficient. They have more layers and perform better while having fewer trainable parameters [38] which reduces the likelihood of overtraining. The most notable examples include Residual Networks (ResNets) [39] and the Inception architecture [40,41]. A shift to these more powerful networks has also taken place in applications of deep learning to radiology both for transfer learning and training from scratch. Three different ResNets were used to predict methylation of the O6-methylguanine methyltransferase gene status from pre-surgical brain tumor MRI [42]. In [43], the InceptionV3 network was fine-tuned and served as a feature extractor instead of previously used GoogLeNet. In another work using chest X-ray images [4], the authors fine-tuned a DenseNet with 121 layers for the classification of miscellaneous pathologies, achieving radiologist-level classification performance for identifying pneumonia. In another approach, auto-encoder (AE) [44] or stacked auto-encoder (SAE) [45,46] networks have been trained from scratch, layer by layer in unsupervised way. A stacked denoising autoencoder with backpropagation was used in [47] to determine the presence of Alzheimer's disease. AEs and SAEs can also be used to extract feature representations (similarly to the deep features approach) from hidden layers for further classification. Such feature representation has been used in the classification of lung nodules into benign and malignant classes in CT [48], and in the identification of multiple sclerosis lesions in MRI [49]. Apart from the classification of radiological images, analysis of radiological text reports plays a significant role [50]. The most prominent approach in this type of classification is deep learningbased natural language processing (NLP) [51], which is based on the seminal work for obtaining vector representation of phrases using an unsupervised neural model [52]. An example of application of this architecture can be found in [53] where the authors classified CT radiology reports as representing presence or absence of pulmonary embolism (PE), as well as type (chronic or acute) and location (central or subsegmental) of PE when present. They showed an improvement as compared to a non-deep learning algorithm. The same architecture was used in [54] for classifying head CT reports of ICU patients with altered mental status as having different degrees of severity according to each of five criteria: severity of study, acute intracranial bleed, acute mass effect, acute stroke, acute hydrocephalus. Radiology reports using the International Coding of Diseases (ICD) were auto-encoded in [55] using a publicly available dataset. A third application of the same architecture can be found in [55] where radiology reports were classified according to the International Coding of Diseases9 (ICD9) using a publicly available dataset. Segmentation In an image segmentation task, an image is divided into different regions in order to separate distinct parts or objects. In radiology, the common applications are segmentation of organs, substructures, or lesions, often as a preprocessing step for feature extraction and classification [34,56]. 
Below, we discuss different types of deep learning approaches used in segmentation tasks in a variety of radiological images. The most straightforward and still widely used method for image segmentation is classification of individual pixels based on small image patches (both 2-dimensional and 3-dimensional) extracted around the classified pixel. This approach has found usage in different types of segmentation tasks in MRI, for example brain tumor segmentation in [57,58,59], white matter segmentation in multiple sclerosis patients [60], segmentation of 25 different structures in brain [61], and for rectal cancer segmentation in pelvis MRI [62]. It allows for using the same network architectures and solutions that are known to work well for classification, however, there are some shortcomings of this method. The primary issue is that it is computationally inefficient, since it processes overlapping parts of images multiple times. Another drawback is that each pixel is segmented based on a limited-size context window and ignores the wider context. In some cases, a piece of global information, e.g. pixel location or relative position to other image parts, may be needed to correctly assign its label. One approach that addresses the shortcomings of the pixel-based segmentation is a fully convolutional neural network (fCNN) [63]. Networks of this type process the entire image (or large portions of it) at the same time and output a 2-dimensional map of labels (i.e., a segmentation map) instead of a label for a single pixel. Example architectures that were successfully used in both natural images and radiology applications are encoder-decoder architectures such as U-Net [64,65,66] or Fully Convolutional DenseNet [67,68,69]. Various adjustments to these types of architectures have been developed that mainly focus on connections between the encoder and decoder parts of the networks, called skip connections. Applications of fCNNs in radiology include prostate gland segmentation in MRI [70], segmentation of multiple sclerosis lesions and gliomas in MRI [71], and ultrasound-based nerve segmentation [72]. Moreover, loss functions have been explored that account for class imbalance (differences in the number of examples in each class), which is typical in medical datasets, e.g. weighted cross entropy was used in [73] for brain structure segmentation in MRI or Dice coefficient-based loss for brain tumor segmentation in MRI [74]. In order to segment 3-dimensional data, it is common to process data as 2-dimensional slices and then combine the 2-dimensional segmentation maps into a 3-dimensiaonal map since 3D fC-NNs are significantly larger in terms of trainable parameters and as a result require significantly larger amounts of data. Nevertheless, these obstacles can be overcome, and there are successful applications of 3D fCNNs in radiology, e.g. V-Net for prostate segmentation from MRI [75] and 3D U-Net [76] for segmentation of the proximal femur in MRI [77] and tumor segmentation in multimodal brain MRI [78]. Finally, a deep learning approach that has found some application in medical imaging segmentation is recurrent neural networks (RNNs). In [79], the authors used a Boundary Completion RNN for prostate segmentation in ultrasound images. Another notable application is in [80], where the authors applied a recurrent fully convolutional neural network for left-ventricle segmentation in multi-slice cardiac MRI to leverage inter-slice spatial dependencies. 
Similarly, [81] used Long Short-Term Memory (LSTM) [82] type of RNN trained end-to-end together with fCNN to take advantage of 3D contextual information for pancreas segmentation in CT and MR images. In addition, they proposed a novel loss function that directly optimizes a widely used segmentation metric, the Jaccard Index [83]. Detection Detection is a task of localizing and pointing out (e.g., using a rectangular box) an object in an image. In radiology, detection is often an important step in the diagnostic process which identifies an abnormality (such as a mass or a nodule), an organ, an anatomical structure, or a region of interest for further classification or segmentation [84,85,86]. Here, we discuss the common architectures used for various detection tasks in radiology along with example specific applications. The most common approach to detection for 2-dimensional data is a 2-phase process that requires training of 2 models. The first phase identifies all suspicious regions that may contain the object of interest. The requirement for this phase is high sensitivity [87] and therefore it usually produces many false positives. A typical deep learning approach for this phase is a regression network for bounding box coordinates based on architectures used for classification [88,89]. The second phase is simply classification of sub-images extracted in the previous step. In some ap-plications, only one of the two steps uses deep learning. The classification step, when utilizing deep learning, is usually performed using transfer learning. The models are often pre-trained using natural images, for example for thoraco-abdominal lymph node detection in [90] and pulmonary embolism detection in CT pulmonary angiogram images [23]. In other applications, the models are pre-trained using other medical imaging dataset to detect masses in digital breast tomosynthesis images [91]. The same network architectures can be used for the second phase as in a regular classification task (e.g., VGG [33], GoogLeNet [92], Inception [40], ResNet [39]) depending on the needs of a particular application. While in the 2-phase detection process the models are trained separately for each phase, in the end-to-end approach one model encompassing both phases is trained. An end-to-end architecture that has proved to be successful in object detection in natural images, and was recently applied to medical imaging, is the Faster Region-based Convolutional Neural Network (R-CNN) [7]. It uses a CNN to obtain a feature map which is shared between region proposal network that outputs bounding box candidates, and a classification network which predicts the category of each candidate. It was recently applied for intervertebral disc detection in X-ray images [93] and detection of colitis on CT images [94]. A domain specific modification that uses additional preprocessing before the region proposal step was used by [95] for detection of architectural distortions in mammograms. Another approach to detection is a single-phase detector that eliminates the first phase of region proposals. Examples of popular methods that were first developed for detection in natural images and rely on this approach are You Only Look Once (YOLO) [96], Single Shot MultiBox Detector (SSD) [97] and RetinaNet [6]. In the context of radiology, a YOLO-based network called BC-DROID has been developed by [98] for region of interest detection in breast mammograms. 
SSD has been employed, for example in [99], for breast tumor detection in ultrasound images, outperforming other evaluated deep learning methods that were available at the time. The authors of [100] applied the same network for detection of pulmonary lung nodules in CT images. In the examples above, 2-dimensional data was used. For 3-dimensional imaging volumes, common in medical imaging, results obtained from 2-dimensional processing are combined to produce the ultimate 3-dimensional bounding box. As an example, in [101] the authors performed detection of 3D anatomy in chest CT images by processing data slice by slice in one direction. Combining output from different planes was performed in several studies. Most of the them [102,103,104] used orthogonal planes of MRI and CT images performing detection in each direction separately. The results can then be combined in different ways, e.g. by an algorithm based on output probabilities [101] or using another machine learning method like random forest [100]. An alternative method for 3D detection has been proposed for automatic detection of lymph nodes using CT images by concatenating coronal, sagittal and axial views as a single 3-channel image in [87]. Other Tasks in Radiology While the majority of the applications of deep learning in radiology have been in classification, segmentation, and detection, other medical imaging-related problems have found some solutions in deep learning. Due the variety of those problems, there is no unifying methodological framework for these solutions. Therefore below, we organize the examples according to the problem that they attempt to address. Image Registration: In this task two or more images (often 3D volumes), typically of different types (e.g., T1-weighted and T2-weighted MRIs) must be spatially aligned such that the same location in each image represents the same physical location in the depicted organ. Several approaches can be taken to address the problem. In one approach, it is necessary to calculate similarity mea-sures between image patches taken from the images of interest to register them. The authors of [105] used deep learning to learn a similarity measure from T1-T2 MRI image pairs of adult brain and tested it to register T1-T2 MRI interpatient images of the neonatal brain. This similarity measure performed better than the standard measure, called mutual information, which is widely used in registration [106]. In another deep learning-based approach to image registration, the deformation parameters between image pairs are directly learned using misaligned image pairs. In [107], the authors trained a CNN-based model to learn the sequence of movements that resulted in the misalignment of the image pairs of CT and cone-beam CT examinations of the abdominal spine and heart. In another study [108], chest CT follow-up examinations were registered by training a CNN to predict three-dimensional displacement vector fields between the fixed and moving image pairs. A CNN-based network was trained to correct respiratory motion in 3D abdominal MR images by predicting spatial transforms [109]. All of these techniques are supervised regression techniques as they were trained using ground truth deformation information. In another approach [110], which was unsupervised, a CNN was trained end-to-end to generate a spatial transformation which minimized dissimilarity between misaligned image pairs. 
Image generation: Acquisition parameters of a radiological image strongly affect the visual quality and detail of the images obtained using the same modality. First, we discuss the applications that synthesize images generated using different acquisition parameters within the same modality. In [111], 7T like images were generated from 3T MR images by training a CNN with patches centered around voxels in the 3T MR images. Undersampled (in k-space) cardiac MRIs were reconstructed using a deep cascade of CNNs in [112]. A real-time method to reconstruct compressed sensed MRI using GAN was proposed by [113]. In another approach [114] in order to synthesize brain MRI images based on other MRI sequences in the same patient, convolutional encoders were built to generate a latent representation of images. Then, based on that representation a sequence of interest was generated. Reconstruction of "normal-dose" CT images from low-dose CT images (which are degraded in comparison to normal-dose images) has been performed using patch-by-patch mapping of low-dose images to high-dose images using a shallow CNN [115]. In contrast, a deep CNN has been trained with low-dose abdominal CT images for reconstruction of normal-dose CT [116]. In another study, CT images were reconstructed from a lower number of views using an U-Net inspired architecture [117]. Deep learning has also been applied to synthesizing images of different modalities. For example, CT images have been generated using MRIs by adopting an FCN to learn an end-to-end non-linear mapping between pelvic CTs and MRIs [118]. Synthetic CT images of brain were generated from one T1-weighted MRI sequence in [119]. In another application to aid a classification framework for Alzheimer's disease diagnosis with missing PET scans, PET patterns were predicted from MRI using CNN [120]. Image enhancement: Image enhancement aims to improve different characteristics of the image such as resolution, signal-to-noise-ratio, and necessary anatomical structures (by suppressing unnecessary information) through various approaches such as super-resolution and denoising. Super-resolution of images is important specifically in cardiac and lung imaging. Three dimensional near-isotropic cardiac and lung images often require long scan times in comparison to the time the subject can hold his or her breath. Thus, multiple 2D slices are acquired instead and the super-resolution methodology is applied to improve the resolution of the images. An example of using deep learning in super-resolution in cardiac MRI can be found in [121], where the authors developed different models for single image super-resolution and for generating high resolution three-dimensional image volumes from two-dimensional image stacks. In another study using CT, a single image super-resolution approach based on CNN was applied in a publicly available chest CT image dataset to generate high-resolution CT images, which are preferred for interstitial lung disease detection [122]. In this study, upscaled bicubic-interpolated images were first passed through one convolutional layer to generate low-resolution features. Then, a non-linear transformation of those features was mapped to generate high resolution image features for the reconstruction. An example of an application of deep learning in denoising can be found in [123] where the authors performed denoising of DCE-MRI images of a brain (for stroke and brain tumors) by training an ensemble of deep auto-encoders using synthesized data. 
Removal of Rician noise in MR images using a deep convolutional neural network aided with residual learning was performed in [124]. In an attempt to enhance the visual details of lung structure in chest radiographs, the effect of bone structures (ribs and clavicles) were suppressed. Bone structure has been estimated by conditional random field based fusion of the outputs of a cascaded architecture of CNNs at multiple scales [125]. Metal artifacts (caused by prosthetics, dental procedures etc.) have also been suppressed by using a trained CNN model to generate metal-free images using CT [126]. Content-based image retrieval: In the most typical version of this task, the algorithm, given a query image, finds the most similar images in a given database. To accomplish this task, in [127], a deep CNN was first trained to distinguish between different organs. Then, features from the three fully connected layers in the network were extracted for the images in the set from which the images were retrieved (evaluation dataset). The same features were then extracted from the query image and compared with those of the evaluation dataset to retrieve the image. In another study, a method was developed to retrieve, arrange, and learn the relationships between lesions in CT images [128]. Objective image quality assessment: Objective quality assessment measures of medical images aim to classify an image to be of satisfactory or unsatisfactory quality for the subsequent tasks. Objective quality measures of medical images are important to improve diagnosis and aid in better treatment [129]. Image quality of fetal ultrasound was predicted using CNN in a recent study [130]. Another study [131], attempted to reduce the data acquisition variability in echocardiograms using a CNN trained on the quality scores assigned by an expert radiologist. Using a simple CNN architecture, T2-weighted liver MR images were classified as diagnostic or non-diagnostic quality by CNN in [132]. Future of deep learning in radiology There is a general agreement that deep learning will play a role in the future practice of radiology. Some predict that it will conduct mundane tasks leaving radiologists with more time to focus on intellectually demanding challenges. Some believe that radiologists and deep learning algorithms will work hand-in-hand to deliver performance superior to either of them alone. Finally, some predict that deep learning algorithms will replace radiologists (at least in their image interpretation capacity) altogether. Incorporation of deep learning in radiology will be associated with multiple challenges. First, and currently foremost, is the technological challenge. While deep learning has shown extraordinary promise in other image-related tasks, the results in radiology are far from showing that deep learning algorithms will replace a radiologist in the entire scope of their diagnostic work. Some recent studies [133,134,135,136,137,138,139,4] indicate performance of these algorithms comparable to expert humans, but these results are only applicable to a very small minority of the tasks that radiologists perform. This is likely to change in upcoming years given the rapid progress in implementing the deep learning algorithms in the realm of radiology. Implementation of deep learning in radiology practice also poses legal and ethical challenges. Primarily: who will be responsible for the mistakes that a computer will make? 
While this is a difficult question, similar questions have been posed and resolved when other technologies were introduced including elevators and cars. Since artificial intelligence penetrates various areas of human activity, questions of this type will likely be studied and answers proposed in the coming years. Other challenges will include patient acceptance or non-acceptance of a radiologist's not being involved in the process of interpreting their images (regardless of the performance) as well as regulatory issues. Finally, an important practical issue is how to incorporate deep learning algorithms into the radiology workflow in order to improve, rather than disrupt the radiology practice. Conclusion In summary, in this paper we have discussed the principles of deep learning as well as the current practice of radiology to elucidate how these new algorithms can be incorporated into radiology workflow. We have discussed the progress and state of art in the field. Finally, we have discussed some challenges and questions related to implementation of deep learning in the current practice of medicine. All signs show that deep learning will play a significant role in radiology. The next 5 years will be a very exciting time in the field that may see many questions stated in this article answered through a collaboration of machine learning scientists and radiologists.
2018-02-10T04:00:55.000Z
2018-02-10T00:00:00.000
{ "year": 2018, "sha1": "f38b9ca303fc7b2c81470ce0ca2963d8b50474de", "oa_license": null, "oa_url": "https://europepmc.org/articles/pmc6483404?pdf=render", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "420b79c23b82ae54e36df1c4f422f6e60216afa6", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics", "Medicine" ] }
264146735
pes2o/s2orc
v3-fos-license
Optimized nanodevice fabrication using clean transfer of graphene by polymer mixture: Experiments and Neural Network based simulations In this study, we investigate both experimentally and computationally the molecular interactions of two distinct polymers with graphene. Our experimental findings indicate that the use of a polymer mixture reduces the transfer induced doping and strain in fabricated graphene devices as compared to conventional single polymer wet transfer. We found that such reduction is related to the decreased affinity of mixture of polymethyl methacrylate and angelica lactone polymer for graphene. We investigated changes in binding energy (BE) of polymer mixture and graphene by considering energy decomposition analysis using a pre-trained potential neural network. It was found that numerical simulations accurately predicted two-fold reduction of BE and order of magnitude reduction of electrostatic interaction between polymers. Introduction Among 2D materials, graphene has been the most extensively studied due to remarkable properties [1][2][3][4] and promising applications.The chemical vapor deposition (CVD) method remains the most reliable to produce high quality and large area graphene on Cu [5,6], Ni [7] or Pt [8,9] surfaces.Even though the CVD-grown graphene usually consists of a single or sometimes multiple layers of graphene, [10][11][12] it is unusable on the growth metallic surface, and thus clean transfer on to target substrate such as Si/SiO2 [13], glass [14], polyethylene terephthalate [15] or paper [16] is crucially important for various applications ranging from biomedical [17] to nanoelectronics and quantum computing [18].Several transfer methods are in use [19][20][21][22][23][24][25], but the polymer support method is a promising one because it could be easily scale up for industrial implementations [26,27].The polymer of interest includes polycarbonate [28], polydimethylsiloxane, [7] but the most popular one is polymethylmethacrylate (PMMA) [29,30].However, inconsistency in the quality of graphene transferred with PMMA has limited its application in device fabrication.This inconsistency is attributed to the presence of carbonyl functional groups (C=O) [31] and long chain structures [32] which are contributing to high binding energy of PMMA to graphene and cause incomplete removal from 2D surface after transfer to device substrate (here we are not discussing graphene imperfections such as defects, grain boundaries, edges, etc. [27]) Additional aggressive solvent treatment (either hot or fuming acetone) [24,33] or thermal annealing [34] did not significantly improve PMMA removal.Other cleaning methods based on either UV/ozone treatment [35] or argon beam bombardment [36] have been employed but cause graphene quality reduction.There are reports of other less aggressive alternative methods requiring complicated equipment setups and involving the use of two layers of PMMA [34,37] which further cause appearance of additional wrinkles and cracks in graphene during transfer [35].The efficiency of graphene transfer can be improved by blending PMMA with a polymer having a low binding energy to graphene [36,37].In this work we demonstrated large area, clean graphene transfer using PMMA and an additive, the polyfuranone chain products produced from biomass-derived angelica lactone via C-C coupling reaction, which we will call ALP for simplicity (Fig. 
1 c, d) [38].Understanding the physical mechanisms behind binding polymer molecules on graphene is a challenging computational problem.Indeed, the binding cannot be described as a single global minimum of a potential energy since polymer molecules are not covalently attached to graphene surface.To address this challenge, we used a potential neural network-based approach to calculate minimal energy configurations of graphene and polymer mixture by considering multiple initial conditions for positions of polymer atoms (high throughput cycle as shown in Fig. 2).This method was chosen to circumvent time related deficiency of electron configuration calculations typical for DFT based simulations. Experimental Approach and Results of Experiments Figure S1 shows the procedure of the proposed process for transferring CVD graphene.In the conventional transfer method, PMMA is typically spin-coated on the graphene-on-growth substrate.In this work we mixed the solutions of PMMA and ALP at different weight concentration ratios of ALP:PMMA, as [1:1], [1:2], [1:4], [1:6], [1:0] and then spin-coated on CVD graphene grown on Cu foil.It has to be noted that ALP stays as a jelly like substance after all solvent removal, even after cooling down polymer to 4 0 C, therefore we could not use ALP alone as a sacrificial layer in transfer procedure.The rotation speed was adjusted to get thickness of polymer film approximately 1 μm.After spin-coating, samples were dried at room temperature for 24 h and then soft-baked at temperature 95°C for 5 min to evaporate solvent.Cu foil was delaminated by applying the "bubbling procedure" which is basically a water electrolysis process, in details described previously [39,40].We observed that the polymer graphene stack was detached from the Cu foil very effectively and fast (3-5 seconds) for ALP:PMMA concentration of [1:4], leaving behind clean grown substrate.After cleaning with de-ionized water, the floating polymer-graphene "sandwich" was deposited on Si/SiO2 substrate and dried gradually at 90 -135°C for 30 min.Finally, the sacrificial layer made of polymer mixture was removed by acetone in Soxhlet extractor to prevent any contamination from solvent side [20].We applied multiple characterization techniques to compare quality of transferred material.Scanning electron microscopy (SEM) images of graphene transferred using ALP:PMMA [1:4] showed fewer defects and polymer residues (Fig. 3 a, b) in comparison to graphene transferred with PMMA only.Results for other concentrations of ALP:PMMA could be found in SI.The concentration of polymer mixture ALP:PMMA [1:4] will be used for all future considerations and will be compared to the PMMA-only transferred graphene.To analyze polymer residues on graphene surface, we used atomic force microscopy (AFM).With great consistency to previous reports [41][42][43] the PMMA transferred sample showed the presence of a few intermittent cracks in graphene and moderate polymer residues (RMS=1.98 ± 0.47 nm).The higher quality of the ALP:PMMA transferred sample (Fig. 3 c, see line profile in insert) with RMS= 0.96 ± 0.43 nm could be attributed to (1) softening of polymer blend by adding jelly-like ALP, allowing to evenly distribute introduced by polymer strain and so minimizing appearance of graphene cracks; (2) decreasing of adhesion of polymer blend vs. 
Of course, we cannot prevent the attachment of polymer residues at defect sites of the graphene, which results in the formation of strong covalent bonds between graphene and polymer molecules, as shown by Leong et al. [31]. The proposed ALP:PMMA transfer method significantly reduces graphene damage, so only noncovalent interactions between graphene and the polymer sacrificial film are expected; the numerical calculations below provide more detail on these. Adsorbed polymer residues can significantly reduce the charge-carrier mobility of graphene [43-45]; the transport properties of graphene can therefore be improved by minimizing the polymer residues [41-43,47-49]. Kelvin probe force microscopy (KPFM) characterization reveals a homogeneous surface-potential distribution over a large area of the graphene transferred with ALP:PMMA (Fig. 3 e and insert), whereas roughly twice larger variations of the surface-potential distribution were observed in the PMMA-transferred sample (Fig. 3 f and insert), reflecting parasitic doping of the graphene (a Fermi-level shift). Kim et al. reported that the origin of this inhomogeneity is directly related to PMMA residues [50].

Hyperspectral Raman characterization is a powerful tool for examining quality and layer number, as well as for quantifying local doping and strain variations over large areas of graphene [51-53]. We performed Raman mapping for the samples transferred with the polymer blend and with PMMA. The spectra were fit using least-squares minimization of Lorentzian peaks, and the position, broadening and shift of the characteristic Raman peaks of graphene (D, G and 2D) were analyzed. From the correlation plots in Figure 4 e, f, we clearly see that the PMMA-transferred sample has a broader 2D peak and significant shifts of both the G and 2D peaks compared with the ALP:PMMA-transferred sample, consistent with stronger doping and strain inhomogeneity. In accordance with [51], strain and doping induce shifts of the characteristic graphene peaks (2D and G): the G peak is particularly responsive to doping, while the 2D peak is influenced by strain. Plotting the 2D peak position against the G peak position provides a visual representation of the level of strain and doping; the point at the intersection of the reference curves indicates zero strain and doping. As strain is applied, peak positions shift along the red curve (Fig. 4 f), with both G and 2D peaks moving to lower wavenumbers for tensile strain and higher wavenumbers for compressive strain, while increased p-doping shifts the peaks along the magenta curve. It is important to note that this procedure is applicable only to monolayer graphene; hence a few-layer graphene area in the ALP:PMMA-transferred sample was excluded as an outlier from the scatter plots. The corresponding doping maps in Figure 4 a, c align strongly with the KPFM results, showing a substantial reduction of parasitic p-doping in the ALP:PMMA-transferred graphene compared with the PMMA-transferred graphene. In Figure 4 a, c, green hexagons denote multilayer graphene areas. Additionally, in the strain maps of Figure 4 b, d, we observe both high- and low-strain areas in both samples, with the ALP:PMMA-transferred sample exhibiting a more uniformly distributed strain.
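A minimal sketch of the strain/doping decomposition described above. This is not the authors' analysis code: it assumes reference peak positions and linear G/2D sensitivities to strain and doping (the numbers below are illustrative placeholders in the spirit of the vector-decomposition method of Lee et al., and would have to be calibrated against a suspended-graphene reference):

```python
import numpy as np

# Assumed zero-strain/zero-doping references (cm^-1) and linear sensitivities.
W_G0, W_2D0 = 1581.6, 2676.9
# rows: [G shift, 2D shift]; cols: [per % strain, per doping unit] -- placeholders
S = np.array([[-23.5,  4.0],
              [-57.3, -0.5]])

def strain_doping(w_g, w_2d):
    """Invert measured (G, 2D) peak shifts into (strain, doping) amplitudes."""
    shifts = np.array([w_g - W_G0, w_2d - W_2D0])
    return np.linalg.solve(S, shifts)   # [strain in %, doping in arb. units]

# Example: one pixel of a Raman map
eps, n = strain_doping(1585.0, 2680.0)
print(f"strain ~ {eps:.3f} %, doping ~ {n:.2f} (arb.)")
```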
Introduction to numerical methodology

As mentioned above, we developed a new approach to perform high-throughput atomistic simulations that quantify nonlocal interactions at the 2D interface between graphene and non-covalently bonded molecules. To create a matrix of atomic parameters (positions, masses, energies and forces) we used the Atomic Simulation Environment (ASE) [54] (Fig. 5 a). At each matrix update, the atomic-level forces were determined using the potential neural network (PNN) ANI-1ccx [55]. The positions were then updated to determine the dynamic parameters of the nuclei using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) predictor-corrector optimization (Fig. 5 b) [56]. The atomic charges q_i of each molecule were calculated separately, with the respective atomic parameters and relaxed geometries [54], using the full Geometry-dependent Atomic Charges (GDAC) method [57] (Fig. 5 c). We used the Multiwfn software package [58] to perform energy decomposition analysis using a classical force field (EDA-FF), similar to that described in [59]. The binding energy (BE) of each system (Fig. 1) was normalized per unit area of van der Waals overlap. The van der Waals area of overlap is the region where the binding energy can be approximated by a Lennard-Jones potential, i.e. where the polymer atoms are in close proximity to the 2D interface. The van der Waals radii used for C, H and O atoms are 1.77, 1.2 and 1.52 Å, respectively [58].

High throughput cycle description

The matrix of atomic parameters (called the matrix from here on) is initialized before going through the PNN (Fig. 5). The initial positions are generated by a series of rotations relative to a single input configuration (Fig. 2 a). The rotations are performed as Euler rotations, where phi, theta and psi are the Euler angles and the molecules' centers of mass are the centers of rotation. We generated 6 sets of conformations per molecule (one for each face of a 6-sided cube), each set consisting of 4 conformations (90° rotations about the face normal), i.e. 24 orientations per molecule; combining the 24 orientations of each of the two molecules yields the 24 × 24 = 576 configurations of equal-proportion displacements (Fig. 2).

Atomic charge calculations

To perform EDA-FF calculations one needs the charge of each atom. The PNN does not provide these charges, so we implemented a geometry-dependent atomic charge (GDAC) method [60].
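The rotation-generation and relaxation cycle described above can be sketched in a few lines of Python. This is a minimal illustration, not the production code of this work: it assumes the torchani package, which exposes the pre-trained ANI-1ccx model through an ASE calculator interface, and that graphene and polymer structures are available as ASE Atoms objects:

```python
from ase.optimize import BFGS
import torchani  # provides the pre-trained ANI-1ccx potential

calc = torchani.models.ANI1ccx().ase()  # PNN wrapped as an ASE calculator

def cube_orientations(atoms):
    """Yield the 24 proper cube rotations of a molecule about its center of mass."""
    faces = [('x', 0), ('x', 90), ('x', 180), ('x', 270), ('y', 90), ('y', 270)]
    for axis, tilt in faces:                    # 6 cube faces
        for spin in (0, 90, 180, 270):          # 4 in-plane rotations per face
            a = atoms.copy()
            com = a.get_center_of_mass()
            a.rotate(tilt, axis, center=com)    # orient the chosen face "up"
            a.rotate(spin, 'z', center=com)     # spin about the face normal
            yield a

def relax(system, fmax=0.01):
    """BFGS relaxation of one graphene + polymer configuration; returns energy."""
    system.calc = calc
    BFGS(system, logfile=None).run(fmax=fmax)
    return system.get_potential_energy()

# Looping cube_orientations() over both polymer molecules and stacking each pair
# on the fixed graphene slab gives the 24 x 24 = 576 starting configurations.
```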
Energy Decomposition Analysis using Force Fields (EDA-FF)

The converged matrix positions and the charges from the cycle (Fig. 5 c) are used to calculate the binding energy (BE) through EDA-FF. By calculating the energy of the interacting polymer-graphene system, E_{A1 A2 ... AN}, and the energies of the fragments after dissociation, E_{A1}, E_{A2}, ..., E_{AN}, a numerical comparison of the adhesion of the polymer(s) and graphene is obtained. The binding energy between N molecular fragments is

∆E_bind = E_{A1 A2 ... AN} − Σ_{i=1..N} E_{Ai}.    (1)

Stronger adhesion corresponds to larger negative values of the BE. EDA-FF is an attractive method because it requires only optimized structures and atomic charges as inputs, which makes it computationally efficient, requiring negligible resources (< 5 seconds per calculation on a single CPU). EDA-FF decomposes the BE into three separate terms — electrostatic (∆E_elec), short-range (exchange) repulsion (∆E_rep) and long-range dispersion (∆E_disp):

∆E_bind = ∆E_elec + ∆E_rep + ∆E_disp,    (2)

where the electrostatic energy (Coulomb potential) between atoms A and B is

E_elec(A,B) = q_A q_B / (4π ε0 r_AB),    (3)

and the van der Waals interaction energy (Lennard-Jones potential) between atoms A and B is the sum of the repulsive interaction due to Pauli repulsion,

E_rep(A,B) = ε_AB (r0_AB / r_AB)^12,    (4)

and the attractive dispersive interaction (dispersion),

E_disp(A,B) = −2 ε_AB (r0_AB / r_AB)^6.    (5)

Here ε_AB is the well depth of the interatomic van der Waals interaction potential, r0_AB is the van der Waals radius, and r_AB is the distance between atom A and atom B. The interatomic parameters ε_AB and r0_AB are provided by the trained force fields, with values defined for each atom type through the combining rules

ε_AB = (ε_A ε_B)^(1/2),   r0_AB = R*_A + R*_B,    (6)

where ε and R* are parameters defined by the AMBER atom types, available in [46].

Van der Waals sphere half area of overlap

Using the van der Waals radii and the relative positions of each atom in a system, we can calculate the overlapping van der Waals area between the polymer(s) and the graphene surface. First, we listed the graphene atoms and grouped them with the PyVista [60] spheres that represent them. Next, we checked for overlapping spheres between the graphene and polymer atoms by comparing their distances and van der Waals radii; the indices of any overlapping atoms were recorded. We then merged all overlapping graphene atoms into one mesh object and all overlapping polymer atoms into another, took the Boolean intersection of these two meshes, and calculated the area of that intersection. This avoids overcounting the area in cases where two or more van der Waals spheres overlap a single atom. Finally, a factor of 1/2 is applied to avoid double counting the area.

Constraints over configurational space calculations

To obtain meaningful results when exploring a large range of configurations, constraints are necessary. We excluded configurations that did not meet our selection criteria: a van der Waals sphere half area of overlap greater than 2 Å² and a negative binding energy. Only systems satisfying both criteria were considered for analysis, and the minimum energies are reported in Table 1. We chose these criteria because a positive binding energy has no physical meaning here, while a half area of overlap of less than 2 Å² implies that the molecules are too far apart to interact nonlocally, resulting in an unphysical system.

Comparison of polymer-polymer and polymer(s)-graphene interactions

The interaction at the polymer-graphene interface is quantified by the binding energy per unit area of van der Waals overlap. We employ a Gaussian fit over all energies of the relaxed geometries within the constraints (Fig. 6). The labeling convention for each model in the computational results uses A, P and G for ALP, PMMA and graphene, respectively; subscripts indicate which fragment pair of a given model enters the calculation of eq. 1.
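A minimal numerical instance of eqs. (2)-(6). This sketch is not the Multiwfn implementation: it sums the pairwise inter-fragment Coulomb and Lennard-Jones terms given atomic coordinates (Å), charges (e) and per-atom AMBER-style parameters; dividing the total by the van der Waals half area of overlap would then give the per-area values of Table 1:

```python
import numpy as np

KE = 14.3996  # Coulomb constant in eV*Angstrom/e^2

def eda_ff(pos1, q1, eps1, rstar1, pos2, q2, eps2, rstar2):
    """Pairwise inter-fragment EDA-FF terms (eV): electrostatic,
    Pauli repulsion and dispersion, cf. eqs. (2)-(6)."""
    d = np.linalg.norm(pos1[:, None, :] - pos2[None, :, :], axis=-1)  # r_AB
    eps = np.sqrt(eps1[:, None] * eps2[None, :])                      # eq. (6)
    r0 = rstar1[:, None] + rstar2[None, :]                            # eq. (6)
    e_elec = KE * np.sum(q1[:, None] * q2[None, :] / d)               # eq. (3)
    e_rep = np.sum(eps * (r0 / d) ** 12)                              # eq. (4)
    e_disp = -2.0 * np.sum(eps * (r0 / d) ** 6)                       # eq. (5)
    return e_elec, e_rep, e_disp, e_elec + e_rep + e_disp             # eq. (2)
```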
Table 1 shows the results for the different models and their corresponding contributions to the binding-energy values per unit area (∆E). The models include GAP_GP (i.e. the graphene-PMMA interaction within the GAP model), GP (the graphene-PMMA interaction in the GP model), AP, PP and AA. All the total energies are negative, indicating that the systems are stable. The largest negative total energy per unit area is −2.5 meV/particle/Å² for the GP model, while the smallest is −1.2 meV/particle/Å² for the graphene-PMMA interaction when ALP is also present (GAP_GP). This significant reduction of the binding energy per unit area indicates that, in the polymer mixture, PMMA binds much more weakly to graphene than PMMA alone does.

In terms of the energy components, the electrostatic and dispersion energies are always negative, while the repulsion energies are positive, as expected for stable systems. The repulsion and dispersion contributions also differ between the models, as summarized in Table 1.

Overall, the table provides a useful summary of the energy values for the different models and can guide further analysis and understanding of the systems being studied. PMMA shows a significant reduction in BE (1.3 meV/particle/Å²) when both polymers are on graphene (GAP_GP vs. GP). This is consistent with the experimental observation of a lower residue concentration when the polymer blend is used for graphene transfer, and it provides a more detailed picture of the successful transfer, the smaller amount of residue and the more uniform properties observed. In the simulations of the graphene surface, a periodic array of 5 × 7 graphene unit cells was used, with all carbon atoms of the graphene constrained in all directions. We simulated PMMA as a fragment with n = 2 and ALP as a dimer of two lactone rings.

Conclusions

We demonstrated clean, large-area graphene transfer using a polymer blend with an optimized ratio of two polymers. In addition, we designed a numerical algorithm whose results follow the experimental trend and which provides a novel and effective approach for quantifying nonlocal interactions in multimolecular systems. Normalizing by the van der Waals sphere half area of overlap gives a better representation of the underlying physical interactions of molecular systems on the graphene surface, and it allows energies to be reported in units of energy per area, consistent with the trend in the experimental results.

Instrumentation

The samples' morphologies were obtained with a scanning electron microscope (Zeiss Auriga FIB/FESEM, Jena, Germany) at an accelerating voltage of 5 kV. The surface roughness was measured with an Oxford Research AFM (MFP-3D Infinity, Santa Barbara, CA, USA) in tapping mode at ambient conditions; Si tips coated with Al (TAP300AL-G probes, Budget Sensors) were used for the topographic probing. The amplitude-modulation mode of Kelvin probe force microscopy (AM-KPFM) was employed to measure the contact potential difference (CPD) of the transferred graphene, using a conductive Pt/Ir-coated tip (EFM, Nanoworld) with silver paint serving as the ground. Raman spectra were measured with a Horiba XploRA Raman confocal system (Kyoto, Japan) with an excitation wavelength of 532 nm and a 1200 lines/mm diffraction grating. Mapping of the total graphene coverage (4 µm × 4 µm), resulting in 2000 data points, was collected with a 100x objective in the x-y-z directions.

Raman spectrum.
Raman spectra of graphene transferred by polymer blend (blue) and PMMA (red).

Optimizing the appropriate ratio of ALP and PMMA

Polymer preparation and characterization of polyfuranone chain products: The polyfuranone chain products (PCP) were derived from the reaction of AL at 80 °C for 5 min. Under K2CO3 catalysis, the reaction was spontaneous, achieving complete conversion to the corresponding polyfuranones: dimers (64%), trimers (34%) and a trace quantity of tetramers (2%) (Xin et al., 2014; Ayodele et al., 2017).

Blending of polymers is a common practice in the polymer industry, but it is rarely used in graphene transfer processes; in most cases it is used to confer a particular physical or chemical property on a polymer. In graphene transfer, PMMA is commonly used, but because of its strong interaction with graphene [1], clean transfer is rarely achieved [2]. In this work, we moderated the strong interaction between graphene and PMMA with a low-molecular-weight polymer, ALP, and the blending ratio that yields clean graphene was investigated with AFM and SEM.

As seen in Table S1, the mean roughness (RMS) calculated from the AFM characterization of the PMMA-transferred graphene was higher than that of the polymer-blend-transferred samples. The ALP:PMMA ratios of 1:4 and 1:6 gave the lowest RMS values of 0.96 and 0.748 nm, respectively. The SEM images in Fig. S5 confirm that the 1:4-1:6 ratios show the fewest defective features. Factors contributing to the improved surface roughness and morphology may include the enhanced support strength of the blended PMMA, which prevents wrinkling or tearing of the graphene during the transfer process. In order to understand the chemistry of the polymer adducts, we studied the glass transition temperature (Tg) of the material (ALP:PMMA = 1:4) with respect to standalone ALP and PMMA (Supplementary Fig. S6). Despite the high miscibility of the two polymers, physical blending at room temperature (RT) does not result in the formation of a new type of polymer, as further evidenced by the non-appearance of new functional groups (Fig. S7). However, the shift in the absorption band at line a (C=O stretching mode) and the disappearance of line b in ALP (-OH stretching due to H2O physisorption) confirm a change in polymer geometry. Based on the calorimetric measurements, the Tg of the polymer blend lies within the Tg of PMMA irrespective of the mixing ratio, indicating that the two polymers do not bind strongly together.
Surface energy and surface tension calculation: To estimate the binding energy experimentally, we performed contact-angle measurements, from which surface-energy values were calculated for each polymer, the polymer mixture and graphene on Si/SiO2 (the partial-transparency theory of graphene implies that its surface energy depends on the supporting substrate, Si/SiO2 in our case). To measure the free surface energy of graphene, we used equation (1), which follows from the Girifalco-Good-Fowkes-Young equation [3,4]:

cos θ = −1 + 2 (γ_s^d γ_l^d)^(1/2) / γ_l,    (1)

where γ_l is the surface tension of the water drop, γ_s^d is the dispersive free surface energy of graphene (the solid surface), γ_l^d describes the water dispersive interactions, and θ is the contact angle between the liquid-vapor interface and the solid surface. The relation between the interfacial tension of the solid surface and the solid-liquid interface determines whether the contact angle θ is less than or greater than 90°, which is an interpretation of the wettability of the surface. If 0 < θ < 90°, the liquid partially wets the solid and the surface is said to be hydrophilic; hydrophobicity increases with the contact angle of the droplets, so hydrophobic surfaces have contact angles larger than 90°. The surface tension of a polymer can be estimated from the molar parachor as

γ = (P_s ρ / M)^4 = (P_s / V)^4,    (2)

where γ is the surface tension of the polymer, P_s is the molecular parachor, V is the molar volume, M is the molecular weight and ρ is the density. Knowing the surface energy of graphene and the surface tension of the support polymers, one can calculate the interfacial energy between graphene and the support polymers, and subsequently the adhesion energy, using the relation proposed by Girifalco, Good, and Fowkes [6,7]:

γ12 = γ1 + γ2 − 2 (γ1 γ2)^(1/2),   E_A = γ1 + γ2 − γ12,    (3)

where γ1, γ2, γ12 and E_A are the surface free energy of graphene (phase 1), the surface free energy of the polymer (phase 2), the interfacial tension between graphene and polymer, and the adhesion energy, respectively (Table S2).

Figure 1: The gas-phase relaxed molecules: PMMA (a) and ALP polymer (c). Structural formulas of PMMA (b) and ALP (d). A 5 × 7 unit cell of graphene was used as the graphene model, top view (e), with periodic boundary conditions shown by the dashed line.

Figure 2: Schematic representation of the molecular rotation algorithm used to generate conformations. The orange cube represents one polymer molecule while the blue one represents the second polymer molecule.

Figure 3: SEM images of graphene transferred with (a) ALP:PMMA and (b) PMMA; the scan area is 135 µm²; the dark spots observed in both images are atmospheric water molecules adsorbed on the graphene surfaces. AFM maps of graphene transferred with (c) ALP:PMMA (RMS = 0.96 ± 0.43 nm) and (d) PMMA (RMS = 1.98 ± 0.47 nm); the inserts show the line profiles and surface roughness. KPFM maps of graphene transferred with (e) ALP:PMMA and (f) PMMA, with line profiles of the surface contact potential. Scale bars are 1 µm.

Figure 4: Analysis of graphene quality using Raman spectroscopy. Doping (a) and strain (b) maps of graphene transferred by the polymer-blend method; the blue dotted region is few-layer graphene. Doping (c) and strain (d) maps of graphene transferred by the PMMA method. Correlation plots of 2D band center vs. width (e) and 2D band center vs. G band center (f). The scale for (a)-(d) is 0.5 µm per pixel.

Figure 5: Schematic representation of the algorithm used. PNN/BFGS iterates until an energy convergence of 0.05 meV. Atomic positions (x, y, z), charges (q) and the van der Waals area are computed from the optimized structures.
Figure 6: Gaussian fit of the binding energies [meV/particle/Å²] for the models GP and GAP_GP (labeled as GAP here).

The product was purified by DI water treatment. The functional groups were determined by Fourier-transform infrared spectroscopy (Agilent 670 FTIR spectrometer with ATR, Santa Clara, CA, USA) (SI). The glass transition of the PCPs was obtained by differential scanning calorimetry (Perkin Elmer DSC6000, Waltham, MA, USA); the measurement was performed by heating the sample from 0 to 150 °C (heating rate 10 °C/min) in an inert atmosphere (SI).

Figure S6: Contact angle schematic. The surface tension of a polymer can be estimated from the molar parachor, introduced by Sugden (1924), who defined a list of atom-group contributions [5].

Table S1: Comparative analysis of the surface roughness of graphene obtained using the blended polymers.
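As a worked instance of equations (1)-(3) above — a sketch with made-up inputs, not the measured values of Table S2 — the following computes a dispersive surface energy from a contact angle and the resulting polymer-graphene adhesion energy:

```python
import numpy as np

GAMMA_L = 72.8    # surface tension of water, mJ/m^2
GAMMA_L_D = 21.8  # dispersive component of water, mJ/m^2

def surface_energy_from_angle(theta_deg):
    """Dispersive solid surface energy via the Girifalco-Good-Fowkes-Young
    relation, eq. (1), inverted for gamma_s^d."""
    c = np.cos(np.radians(theta_deg))
    return (GAMMA_L * (1.0 + c)) ** 2 / (4.0 * GAMMA_L_D)

def adhesion_energy(g1, g2):
    """Interfacial tension and adhesion energy, eq. (3)."""
    g12 = g1 + g2 - 2.0 * np.sqrt(g1 * g2)
    return g12, g1 + g2 - g12   # E_A = 2*sqrt(g1*g2)

g_graphene = surface_energy_from_angle(90.0)   # hypothetical contact angle
g12, e_a = adhesion_energy(g_graphene, 41.0)   # 41 mJ/m^2: placeholder polymer
print(f"gamma_s ~ {g_graphene:.1f}, gamma_12 ~ {g12:.1f}, E_A ~ {e_a:.1f} mJ/m^2")
```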
$\gamma$-ray Angular Distributions in Single Nucleon Transfer Reactions with Exotic Strontium Isotopes

$\gamma$-ray angular distributions help assign spin and parity to excited energy levels in nuclei. The spectroscopy of $^{94,96}$Sr, studied through single-neutron transfer reactions with a $^{95}$Sr beam in inverse kinematics [$^{95}$Sr(d,p)$^{96}$Sr; $^{95}$Sr(d,t)$^{94}$Sr], revealed a rich nuclear structure with many excited states. While spin-parities were assigned to low-lying states through techniques such as particle angular distributions and the transferred angular momentum, the assignments for higher-lying states remained ambiguous. The goal of this project is to develop γ-ray angular distribution and correlation techniques to assign spin-parities to these states. In this work, the γ-ray angular distribution of the 815 keV transition from the first excited state to the ground state $(2^{+} \rightarrow 0^{+})$ in $^{96}$Sr is measured. The alignment of this state is extracted from these measurements and compared with the theoretically calculated alignment, determined by coupling the spins of the reactants with the transferred orbital angular momentum. The values agree within experimental limits, validating the technique. It is then applied to other transitions and important results are discussed. Angular correlations could not be performed with this data set, since statistics are the limiting factor at the crystal level.

Motivation

The nuclear shell model is by far the most successful model for describing the atomic nucleus. It accurately predicts the magic numbers, the ground-state spin-parities of even as well as odd mass-number nuclei, and more. Moreover, the shell structure of nuclei plays an important role in the propensity of nuclei to deform: even a small number of valence nucleons outside a closed shell can drive the whole nucleus into a deformed shape. This is evident from Figure 1.1, which shows the predicted ground-state quadrupole deformation across the nuclear chart (Moller et al. 1995). The region around Z ≈ 40 and N ≈ 60 has protons as well as neutrons outside closed shells and hence large deformation. Another interesting feature of this region of the nuclear chart is the shape-coexisting 0+ states. These states are quantum superpositions of spherical and deformed 0+ configurations, which can interact strongly with each other. An experimental signature of this phenomenon is enhanced monopole (E0) transition strengths, which can proceed only through the internal-conversion-electron mechanism. The shape-coexisting states in these nuclei can be analyzed through a very simple yet accurate two-level mixing model. It assumes that the ground states of the spherical and deformed configurations initially exist with a separation ∆E_u. An interaction V acts between them in a way that pushes the two levels further apart, to a final separation

∆E_p = [(∆E_u)² + 4V²]^(1/2),

and their wavefunctions mix with mixing amplitude a (and b = (1 − a²)^(1/2)) such that

|0+_I⟩ = a|sph⟩ + b|def⟩,   |0+_II⟩ = −b|sph⟩ + a|def⟩.

A large number of theoretical calculations exist for this region, with various models predicting different results. However, this part of the nuclear chart has remained experimentally unexplored. FIGURE 1.1: Finite-Range Droplet Model (FRDM) calculations of the ground-state deformation across the nuclear chart by Moller et al. (1995). With the advancement of rare-isotope beam-production capabilities, experimental results from the spectroscopy of this region of the nuclear chart are indispensable for validating the theoretical models.
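A minimal numerical illustration of the two-level mixing model just described — with arbitrary example values for ∆E_u and V, not fitted to any Sr data:

```python
import numpy as np

def two_level_mixing(dE_u, V):
    """Diagonalize the 2x2 mixing Hamiltonian for unperturbed splitting dE_u
    and interaction V; returns the perturbed splitting and mixing amplitude a."""
    H = np.array([[0.0, V],
                  [V, dE_u]])
    vals, vecs = np.linalg.eigh(H)
    dE_p = vals[1] - vals[0]     # equals sqrt(dE_u**2 + 4*V**2)
    a = abs(vecs[0, 0])          # spherical amplitude in the lower state
    return dE_p, a

dE_p, a = two_level_mixing(dE_u=500.0, V=300.0)   # keV, illustrative only
print(f"perturbed splitting = {dE_p:.1f} keV, mixing amplitude a = {a:.3f}")
```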
Transfer Reactions

Transfer reactions are a well-established tool for studying the single-particle structure of nuclei. In these reactions, a small number of valence nucleons are transferred either from the target to the projectile or vice versa, leading to two types of transfer reactions: stripping and pickup. In stripping reactions the projectile loses one or more nucleons to the target, whereas in pickup reactions the projectile picks up nucleons from the target nucleus. Transfer reactions are direct reactions, meaning that they proceed in a single step. The timescales involved are of the order of 10⁻²²-10⁻²³ seconds, roughly the time required for a projectile travelling at speeds close to the speed of light to traverse a distance comparable to the radius of the target. This is in contrast to compound nuclear reactions, in which a compound nucleus is formed in an intermediate state and the reaction proceeds with no memory of how that state was formed; the timescales for such reactions are also significantly longer than for transfer reactions. Only the outer valence nucleons take part in transfer reactions while the rest of the core remains undisturbed, which makes them a useful tool for studying single-particle states. A transfer reaction can be depicted schematically as A + a → B + b, where the transferred nucleon(s) move between projectile and target while the remaining nucleons act as an inert core. A + a is referred to as the entrance channel of the reaction and B + b as the exit channel. The kinematics of the reaction can be fully described by two-body kinematics, since both the initial and the final state contain only two bodies. The most commonly used transfer reactions involve single-nucleon transfer, which is particularly useful to avoid the complications arising from multi-nucleon transfer configurations. Moreover, the cross-sections for single-nucleon transfer are much higher, an important consideration when performing reactions with radioactive ion beams. The transfer of a single neutron is more probable than that of a proton because of Coulomb forces. The (d,p) and (d,t) reactions carried out in this work involve the stripping and pickup of a single neutron, respectively.

Inverse Kinematics

The (d,p) and (d,t) reactions have long been used to study the single-particle structure of stable and long-lived isotopes with deuteron beams. Reactions carried out in this manner are said to be in normal kinematics, since the target is much heavier than the beam. In recent times, however, when more exotic isotopes must be studied, it is impossible to fabricate targets from them, since they are extremely short-lived. We then have to resort to inverse kinematics, where the deuteron is used as the target and the beam consists of the isotope to be studied. Several problems arise, of which two prominent ones are discussed below.

1. Inverse-kinematics experiments have a large centre-of-mass motion. This leads to a forward drift in the laboratory frame and hence less favourable angular coverage for the ejectiles (see the kinematics sketch after this list).

2. The beam currents for the rare-isotope beams used in inverse kinematics are significantly lower than those for stable beams. This enforces the use of thick targets to obtain measurable reaction yields, which causes problems such as large energy losses for the reaction products and poor energy resolution, since the reaction can take place anywhere within the target.
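To make the forward drift described in point 1 concrete, here is a short non-relativistic two-body kinematics sketch for ⁹⁵Sr(d,p); the Q-value below is a placeholder, not the value used in the actual analysis:

```python
import numpy as np

AMU = 931.494  # MeV/c^2

def dp_lab_angle(theta_cm_deg, e_beam_per_u=5.5, q_value=2.0):
    """Lab angle (deg) of the proton from 95Sr(d,p)96Sr for a given CM angle.
    Non-relativistic sketch with integer masses; q_value (MeV) is assumed."""
    m_sr95, m_d, m_p, m_sr96 = 95*AMU, 2*AMU, 1*AMU, 96*AMU
    e_beam = 95 * e_beam_per_u                      # total beam energy, MeV
    v_beam = np.sqrt(2 * e_beam / m_sr95)           # beam speed (units of c)
    v_cm = v_beam * m_sr95 / (m_sr95 + m_d)         # CM velocity in the lab
    e_rel = 0.5 * m_sr95 * m_d / (m_sr95 + m_d) * v_beam**2  # relative KE
    e_out = e_rel + q_value                         # KE shared in exit channel
    # proton speed in the CM frame (momentum balance with the 96Sr recoil)
    v_p = np.sqrt(2 * e_out * m_sr96 / (m_p * (m_p + m_sr96)))
    th = np.radians(theta_cm_deg)
    return np.degrees(np.arctan2(v_p*np.sin(th), v_p*np.cos(th) + v_cm))

for th_cm in (10, 60, 120, 170):
    print(th_cm, "deg (CM) ->", round(dp_lab_angle(th_cm), 1), "deg (lab)")
```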
Despite these problems, experiments in inverse kinematics have made the spectroscopy of exotic isotopes possible, and many useful results have been achieved.

Angular Distributions Overview

The aim of angular distribution measurements is to estimate the multipolarity of γ-radiation, from which spin-parity assignments to excited nuclear states can be made (Ferguson 1965). It is simply a measurement of the variation of γ-ray intensity as a function of the angle between the direction of γ-ray emission and another fixed direction in space. This direction can be chosen to be the direction of the incident beam which populates the initial state, or the direction of emission of another particle or even a γ-ray which aligns the decaying state. Angular distribution measurements can be made in the laboratory by using a γ detector of finite solid angle to count the number of photons emitted in a given amount of time. These measurements can be repeated at different orientations of this detector with respect to the beam direction for similar times, and the intensity plotted as a function of θ. This function is represented as

W(θ) = Σ_k B_k P_k(cos θ)   (k even),

for reasons explained in Appendix A. The coefficients B_k depend on the total intensity of the transition as well as on the angular dependence. To remove the total-intensity dependence, the coefficients are normalized by dividing throughout by B_0, so that the final expression reads

W(θ) = 1 + a_2 P_2(cos θ) + a_4 P_4(cos θ),   a_k = B_k/B_0.

The final goal of these measurements is then to find the coefficients a_k, which truly quantify the angular dependence.

Outline of Thesis

In summary, the goal of this work is to develop the technique for measuring angular distributions with our set-up. This will help to assign and confirm the spins of several states in current experiments, as well as in those to be carried out in the future as we constantly push the limits of creating new exotic beams for nuclear experiments. In this thesis we start, in Chapter 2, with a description of the experiments carried out for this work [95Sr(d,p)96Sr; 95Sr(d,t)94Sr]; the whole set-up, including beam, target and detectors, is described in detail. In the next chapter, we move to the theoretical model for calculating the expected angular distributions and discuss the effect of the alignment of the nuclei on these measurements. In Chapter 4, the techniques for the analysis of the data and the results obtained are discussed; methods used, such as Doppler corrections, addback and the headlight effect, are described. Finally, we conclude in Chapter 5 with a summary of this work and prospects for further applications.

General Overview

A detailed spectroscopy of states in 94Sr and 96Sr was carried out using data from single-nucleon transfer reactions in inverse kinematics. A 5.5 MeV/u 95Sr beam impinged on a CD2 target to study single-neutron removal via the (d,t) reaction and single-neutron addition via the (d,p) reaction. Both reactions were measured simultaneously using γ-particle coincidence techniques, made possible by a combination of position-sensitive particle detection (SHARC) and γ detection (TIGRESS). A schematic diagram of the experiment is shown in Figure 2.1. These experiments were carried out at TRIUMF, Canada's national laboratory for nuclear and particle physics research, in June 2014 at the Isotope Separator and Accelerator (ISAC-II) facility.
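Anticipating the analysis of Chapter 4, the expansion W(θ) = 1 + a₂P₂ + a₄P₄ introduced above can be extracted from efficiency-corrected ring counts by a weighted least-squares fit. This sketch uses synthetic counts and illustrative ring angles, not the experimental values:

```python
import numpy as np
from numpy.polynomial import legendre as L

def fit_ak(cos_theta, counts, errors):
    """Weighted least-squares fit of W = B0 + B2*P2 + B4*P4; returns a2, a4."""
    # design matrix with only the even Legendre terms P0, P2, P4
    A = np.stack([L.legval(cos_theta, [1]),
                  L.legval(cos_theta, [0, 0, 1]),
                  L.legval(cos_theta, [0, 0, 0, 0, 1])], axis=1)
    w = 1.0 / np.asarray(errors)
    B, *_ = np.linalg.lstsq(A * w[:, None], counts * w, rcond=None)
    return B[1] / B[0], B[2] / B[0]        # a_k = B_k / B_0

# four constant-angle rings (cosines are illustrative placeholders)
cosq = np.array([-0.82, -0.45, 0.45, 0.82])
counts = np.array([1210., 980., 1010., 1185.])   # synthetic, efficiency-corrected
a2, a4 = fit_ak(cosq, counts, errors=np.sqrt(counts))
print(f"a2 = {a2:.3f}, a4 = {a4:.3f}")
```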
This was one of the very first experiments exploring the high-mass region (A > 30) with re-accelerated secondary beams, and it marks an important step in the laboratory's ability to produce heavy-ion exotic beams for nuclear reactions. The various components of the experimental setup are described briefly throughout this chapter. Starting with beam delivery, the two main techniques for producing exotic beams are described, followed by a description of the beam delivery for this experiment. Subsequently, the deuteron target for this experiment is outlined. Finally, the detector systems for particle and γ measurements are described.

Beam Delivery

The two main ways of producing radioactive ion beams (RIBs), or rare-isotope beams as they are called, are in-flight fragmentation and Isotope Separation On-Line (ISOL). The two are complementary, and each technique has certain pros and cons; they often access different areas of the nuclear chart, and both types of facility are required to exhaustively produce new exotic isotopes. Facilities such as the NSCL in the US, RIKEN in Japan and GSI in Germany are in-flight fragmentation facilities. An in-flight fragmentation facility uses a high-energy (> 50 MeV/u) heavy-ion beam impinging upon a thin target, which results in very little energy loss in the production target. The heavy ion undergoes fragmentation as a result of the interaction with the target and produces a fast radioactive cocktail beam. The main advantage of this method is that extremely short-lived isotopes, with half-lives down to the order of nanoseconds, can be studied, since the initial heavy-ion beam energy is carried forward by the RIB. However, this requires highly efficient mass separation; the energy distribution of the beam is generally quite broad, and the cross-sections tend to be low, resulting in poor beam quality. In contrast, an ISOL facility uses a light-ion beam, usually protons or α particles, impinging upon a thick target. The primary beam loses all of its energy in the production target, which maximizes the production yield. However, it is challenging to extract the exotic ions produced in the target efficiently; for this reason the production targets are operated at temperatures of several thousand kelvin to diffuse the exotic ions to the surface. The secondary beams then have to be re-accelerated to perform nuclear experiments, which introduces additional inefficiencies, and isotopes with half-lives of less than about 10 ms cannot currently be produced with this approach. On the other hand, ISOL provides good-quality beams with precise final energies. For these experiments, the TRIUMF 500 MeV cyclotron was used to produce a high-intensity proton beam with a current of up to 10 µA. The beam was sent to the ISAC facility, where it impinged on a thick uranium carbide (UCx) target. Proton-induced uranium fission and spallation within the target produced a yield consisting of a wide variety of nuclei. These isotopes were extracted from the UCx target.

Target

Deuterated polyethylene was used as the deuteron target in these experiments. The target, however, was not fully deuterated. Elastic-scattering measurements with deuterons and hydrogen, (d,d) and (p,p), were used to determine the target deuteration, which was found to be 92(1)%. The nominal thickness was chosen to be 5.0 µm during manufacturing as a trade-off between total reaction cross-section and energy-broadening effects. Thicker targets produce higher yields, but the reaction can take place anywhere in the target.
Thus there is substantial energy loss for some nuclei and none for others, giving broad energy distributions. The target thickness was measured using a triple-α source containing 239Pu, 241Am and 244Cm. Knowing the deuteration factor, the energy loss of the α particles was measured, and the thickness was determined from the difference between their ranges in the target material before and after the target. The experimentally measured value was 4.4(4) µm.

Detector Systems

The following section describes the various detectors used in the experiments described in this thesis, along with their operational principles.

Auxiliary Detectors

In addition to the detector systems for the reaction products, two auxiliary detectors are used to monitor the beam: the TBragg, located upstream, before the beam interacts with the target, and the Trifoil, a counter located downstream of the target. Both are discussed briefly in this section.

TBragg

The TBragg is a gas-filled ionization chamber used for identifying the heavy ions present in the radioactive cocktail beam. As the beam enters the gas chamber, it ionizes the gas, producing free ions and electrons. The gas pressure is chosen so that the beam is completely stopped in the chamber and full-energy measurements can be made. The electrons drift towards the anode owing to the longitudinal electric field, whose strength determines how quickly and efficiently the electrons are collected. A pulse-shape analysis of the electrical signal is used to extract information about the incident ion: the slope of the signal is related to the stopping power of the ion, whereas the total amplitude is related to its incident energy. Figure 2.3 shows the components of the mass-95 beam in the TBragg spectrometer.

Trifoil

The Trifoil scintillator is an auxiliary system consisting of BC400 foils coupled to a set of three photomultipliers. Owing to the very fast scintillation signals from the BC400 foil, it can be used as a RIB counter. It is set up downstream of the target chamber, as shown in Figure 2.4. During the reaction, the mass-95 radioactive beam can undergo fusion with the target and quickly evaporate several light particles into SHARC; these can interfere with the reaction products of interest and complicate the analysis. The problem is solved by placing an aluminium degrader in front of the Trifoil, which blocks the heavy fusion products; only the reaction products then reach the Trifoil, and its signal can be used in coincidence with those from the main detector systems. From the energy-loss (Bethe) formula, the product ∆E × E is approximately equal to k z² M (assuming that the bracketed factors are constant in the non-relativistic case), where ze is the charge of the particle, M is its mass and k is a constant depending on the absorber material. Plotting the energy loss in the thin counter, ∆E, against the total energy E therefore gives a family of mass hyperbolas corresponding to different values of z²M, which in turn provides particle identification (PID). Figure 2.6 shows one such measurement. An alternative is the time-of-flight method, which is similar but uses a time-to-amplitude converter (TAC) to measure the time taken by particles to fly between two counters before plotting the mass hyperbolas.

TIGRESS

TIGRESS stands for the TRIUMF-ISAC Gamma-Ray Escape Suppressed Spectrometer, an array of high-purity germanium (HPGe) clover detectors.
The main advantage of these detectors is their excellent energy resolution, which is of prime importance in γ-ray spectroscopy since nuclei can have closely spaced energy levels. TIGRESS is purpose-built for reaction studies in which photons are emitted from recoiling nuclei; excellent angular resolution is therefore an essential feature for precise Doppler correction. Each TIGRESS clover detector is made up of four closed-ended n-type coaxial HPGe crystals. Each crystal has an eight-fold segmentation — four quadrants and a lateral divide — producing a 32-fold segmentation in a single clover. This gives very good angular resolution, so that precise Doppler corrections can be made; good angular resolution also helps to improve the quality of the data through the addback algorithm, described in the next chapter. Figure 2.7 shows the segmentation within a single clover. The clovers are arranged in constant-θ rings, with 4 clovers each at 45° and 135° and 8 clovers at 90° with respect to the beam axis; there is close to full φ coverage in each of these rings. In this experiment the 45° ring was excluded, since the additional space was required for the SHARC pre-amplifiers, and the remaining 12 clovers were used together with their BGO Compton-suppressor shields, which serve as a veto for Compton-scattered γ-rays: whenever there is a hit in the BGO shields, that event is discarded, since the full energy was not deposited in the crystal. This greatly reduces the background and makes the data clean. Figure 2.8 shows a photograph of the TIGRESS array as it was used in this work. There are two operational modes for the array: optimized peak-to-total and high-efficiency. In the optimized peak-to-total mode, the BGO shields are brought forward so that they are flush with the front face of the detector, inducing the maximum reduction of Compton-scattered events. In the high-efficiency mode, the BGO shields are pulled back and the crystals form a continuous face, greatly increasing the solid-angle coverage and hence the overall efficiency of the detector. Each TIGRESS clover contains a built-in pre-amplifier and high-voltage supply for each segment and for the core; these are all connected to a shared cryostat maintained at 77 K by a liquid-nitrogen (LN2) reservoir.

Measurements

Angular distribution measurements demand position-sensitive detection of γ-rays. Since TIGRESS provides good solid-angle coverage with excellent angular resolution, it is a very good system for these measurements. Each segment can be thought of as a stand-alone detector at a different angle, and measuring the intensities of particular transitions can give us the angular distributions. However, the statistics at the segment level are too poor to identify a peak and measure its intensity; as a result, the analysis is performed at the crystal level, where statistics are much better. The crystals do not all correspond to different angles: each clover has four crystals forming two pairs, each pair at a constant angle with respect to the beam axis. Thus the crystals of a particular clover ring group into two constant-angle rings of detectors, one on each side of the clover ring. The two clover rings used in this experiment give a total of four constant-angle rings of crystals, as shown in Figure 2.9. We can thus perform angular distribution measurements with four data points and much better statistics than at the crystal level.
Theory

This chapter presents the theoretical formulation of γ-ray angular distributions. A brief overview of γ-ray transitions and the selection rules is given, followed by the conditions for anisotropic angular distributions. Thereafter, the alignment conditions and their effect on the a_2 and a_4 coefficients are described. Finally, the theory for the mixing of multipolarities is described, together with a technique for extracting the experimental mixing ratio.

γ-Transitions

γ-radiation is one of the most abundant forms of radioactive decay. It often follows other decay modes which leave the daughter nucleus in an excited state; the energy difference between the two levels is emitted as a γ-ray photon of the corresponding energy. These photons carry a total angular momentum, which defines the multipolarity of the transition. Each nuclear state is also characterized by its total angular momentum, and the law of conservation of angular momentum leads to the first selection rule for γ-ray emission, while the law of conservation of parity gives the second. Let the total angular momentum and parity of the initial and final states be J_i, π_i and J_f, π_f respectively, and let l be the multipolarity of the emitted γ-ray. The parity of the operator is (−1)^l for electric transitions and (−1)^(l+1) for magnetic transitions. The selection rules can then be stated as (Krane and Halliday 1988):

1. |J_i − J_f| ≤ l ≤ J_i + J_f, so that the three angular momenta satisfy the triangle inequality and the total angular momentum difference between the two levels is carried away by the γ photon.

2. π_i π_f = −1 for odd electric and even magnetic multipoles; π_i π_f = +1 for even electric and odd magnetic multipoles.

When either J_i or J_f is 0, the γ-transition has a unique multipolarity and there cannot be any mixing whatsoever. However, when both J_i and J_f are 0, the selection rules would demand an l = 0 transition, which is not possible since the photon is an elementary particle with an intrinsic spin of 1 and cannot carry a total angular momentum of less than 1. Such transitions therefore proceed through internal-conversion electrons, often accompanied by the emission of X-rays as atomic electrons cascade down to the vacancy left by the conversion electron. A stretched transition is one in which the photon carries angular momentum equal to the algebraic difference between the angular momenta of the initial and final states, i.e. all three vectors are collinear. Usually the lowest permitted multipole is the most preferred mode of emission, since the expected intensity falls by a factor of approximately 10⁻⁵ with each increasing order of multipole. If the sub-state m = 0 is preferentially populated, the dipole radiation pattern is proportional to sin²(θ); similarly, one obtains a cos²(θ)sin²(θ) dependence for a stretched quadrupole transition (2⁺ → 0⁺) when only the m = 0 sub-state is populated.

Angular Distributions Theory

Let us define a population parameter w(m_i) such that Σ_i w(m_i) = 1. The case of equal sub-state population is then w(m_i) = 1/(2J+1) for all i. There are two departures from this case that give anisotropic angular distributions: polarization and alignment. Polarization is when w(m_i) ≠ w(−m_i), whereas alignment is when w(m_i) = w(−m_i) but w(m_i) ≠ 1/(2J+1) for some values of i. The following three conditions are then quite apparent:

1. Transitions from a spin-0 state will always be isotropic, since such a state can be neither aligned nor polarized.
2. A spin-1/2 state cannot be aligned, and transitions from it will only be anisotropic if the state is polarized.

3. States with higher spin can be either aligned or polarized.

The next section defines a more rigorous alignment parameter, the statistical alignment tensor, and gives the theoretical formalism for obtaining the expected values of the coefficients a_k given the alignment, the spins of the initial and final states, and the multipolarity of the transition.

Theoretical Formalism

The angular distribution coefficients for a pure transition can be represented in terms of a statistical alignment tensor, which, as the name suggests, depends on the alignment of the decaying state, and a coupling parameter which depends on the angular momenta of the participating states and the multipolarity of the γ-transition. The statistical tensor is defined as

ρ_k(J_i) = (2J_i + 1)^(1/2) Σ_m (−1)^(J_i − m) ⟨J_i m J_i −m | k 0⟩ P_m(J_i),

where the sub-state population P_m(J) is approximated by a Gaussian distribution about the m = 0 state,

P_m(J) = exp(−m²/2σ²) / Σ_{m'} exp(−m'²/2σ²).

Thus σ is the only free parameter describing the alignment of the decaying state. In practice this is quite an accurate representation, since the m = 0 state is the most likely to be populated. The coupling constant F_k(J_f L L' J_i) can be written as

F_k(J_f L L' J_i) = (−1)^(J_f − J_i − 1) [(2L+1)(2L'+1)(2J_i+1)]^(1/2) ⟨L 1 L' −1 | k 0⟩ W(J_i J_i L L'; k J_f),

where ⟨....|..⟩ is the Clebsch-Gordan coefficient and W(....; ..) is the Racah coefficient. The values of F_k for different combinations of J_i, J_f and L were read from the angular distribution tables of Yamazaki (1966). The angular distribution for an unmixed transition of multipolarity L can then be represented as

a_k = ρ_k(J_i) F_k(J_f L L J_i),    (3.4)

so that W(θ) = Σ_k a_k P_k(cos θ). When two multipolarities L₁ and L₂ mix with mixing ratio δ, the angular distribution coefficients become

a_k = ρ_k(J_i) [F_k(J_f L₁ L₁ J_i) + 2δ F_k(J_f L₁ L₂ J_i) + δ² F_k(J_f L₂ L₂ J_i)] / (1 + δ²).    (3.5)

The statistical tensor is still as defined earlier, and the mixed coupling constants follow the same F_k expression with L ≠ L'. The definition of the statistical tensor above is invalid for full alignment, i.e. when only the m = 0 state is populated, since then σ = 0. For full alignment it is instead defined as

ρ_k^max(J_i) = (2J_i + 1)^(1/2) (−1)^(J_i) ⟨J_i 0 J_i 0 | k 0⟩,

giving, for this ideal case, the maximum coefficients a_k^max = ρ_k^max F_k. In actual cases, when the alignment is partial,

a_k = α_k a_k^max,   with   α_k = ρ_k(σ)/ρ_k^max

describing the attenuation due to partial alignment. The variation of the coefficients a_2 and a_4 with increasing alignment for a J = 2 state is shown in Figure 3.1 (FIGURE 3.1: attenuation factors at varying alignment parameter for a J = 2 state). In reality, the coefficients a_k are further attenuated by the fact that the detectors used for counting the γ-rays have finite solid-angle coverage, so that not all points on the detector face correspond to the same θ. An easy way to account for this effect, following Krane (1972), is to divide the whole detector area into infinitesimal patches, calculate the coefficients for each patch at its corresponding angle θ, integrate the results, and compare them with the coefficients calculated assuming a constant θ. The calculations are done using the FORTRAN code described in Krane (1972). The final result is a set of additional coefficient attenuation factors β_k, such that the above equation becomes

a_k = β_k α_k a_k^max.    (3.10)

The inputs to the computer code include the geometrical parameters of the detector system and the γ-ray absorption coefficient in the detector material. Plugging in the values for our system yields the β_k used in this analysis; it should be noted that higher-order coefficients attenuate faster than lower-order ones. In the case of angular correlation measurements, when a γ-ray is emitted following another γ in a cascade, the statistical alignment parameters of the initial states of the two transitions are related.
If the state J_f is formed only through the preceding transition J_i → J_f, the statistical tensor of the state J_f can be expressed in terms of that of the state J_i through a deorientation coefficient U_k:

ρ_k(J_f) = U_k(J_i J_f L) ρ_k(J_i).

This sums up the theoretical formulation used for calculating the expected angular distribution coefficients in this work.

Experimental Mixing Ratio

As can be seen from Equation 3.5, the angular distribution coefficients depend on the alignment σ and the mixing ratio δ. σ can be measured experimentally from the experimental angular distribution coefficients of unmixed transitions using Equation 3.4. Once the value of σ is known for a state — it is constant for a given reaction channel — one can plot χ² against δ, where

χ²(δ) = Σ_{k=2,4} [ (a_k(exp) − a_k(theo, δ)) / ∆a_k ]²,

and ∆a_{2,4} are the uncertainties in the experimentally observed angular distribution coefficients. The a_{2,4}(theo) are calculated using Equation 3.5 with δ kept as a free parameter, so that χ² is a function of δ. Minimizing this χ² gives the value of the experimental mixing ratio for that transition. This is known as the χ²-minimization procedure, given by Singh (1992). The uncertainty in this quantity can be found by taking the values of δ at χ²_min ± 1. In practice, the graph is plotted against tan⁻¹δ rather than δ, so that the curve is cyclic and it is easy to trace the minimum value.

Alignment in Transfer Reactions

The alignment in reactions involving a beam impinging on a target can be approximated as Gaussian in nature: there is a large probability for the m = 0 substate to be populated, followed by decreasing probabilities for the occupancy of higher m-substates. The exact occupation probabilities can, however, be calculated by a rather simple analysis involving Clebsch-Gordan coefficients, and σ can be estimated from those values. More information on the Clebsch-Gordan coefficients and their physical interpretation is given in Appendix B; they represent the probability amplitude of finding a particular configuration of spins in a total spin state, so their squares represent occupation probabilities. In this section, this type of analysis is presented for the directly populated 2⁺₁ state in ⁹⁶Sr, and results of similar analyses for other states are quoted.

Consider the reaction ⁹⁵Sr(d,p)⁹⁶Sr with reference to Figure 3.2. Among the reactants, the deuteron has spin 1 and ⁹⁵Sr has a ground-state spin-parity of 1/2⁺. We are interested in finding the relative occupation probabilities of the different m substates. Next, we take into account the orbital angular momentum imparted to the system by the beam. Semiclassical calculations using L = r × p give a theoretical maximum transferable angular momentum of 14 units; the radius of ⁹⁵Sr is calculated from R = R₀A^(1/3) and the momentum from the beam energy. However, the states encountered in our analysis have a maximum total angular momentum of 4; hence we can safely assume that all required L values are present and consider only those which lead to the spin value of the state under consideration. The orbital angular momentum imparted by the beam lies in the xy plane, as depicted in Figure 3.3, and therefore has no z-component, so the occupation values of the m-substates remain unchanged. At this stage of the analysis we have J = 1/2, 3/2, 5/2, ..., with a maximum z-component of m = 3/2. In the last step, a free proton (or triton) is emitted with a ground-state spin of 1/2.
This has to be included in the m-diagram to calculate the final occupation probabilities of the required states. In doing so, the results of the previous stage have to be taken into consideration, where the m = ±1/2 states are twice as likely to be populated as the m = ±3/2 states. For the J = 2 state, the analysis of the possible m-substates is as follows:

• For m = −2: occupation probability ∝ ⟨5/2, −3/2; 1/2, −1/2 | 2, −2⟩² + ⟨3/2, −3/2; 1/2, −1/2 | 2, −2⟩² = 7/6.

FIGURE 3.3: Orbital angular momentum transferred by the beam to the system, having no z-component.

• For m = −1: occupation probability ∝ ⟨5/2, −3/2; 1/2, 1/2 | 2, −1⟩² + ⟨3/2, −3/2; 1/2, 1/2 | 2, −1⟩² + 2 × [⟨5/2, −1/2; 1/2, −1/2 | 2, −1⟩² + ⟨3/2, −1/2; 1/2, −1/2 | 2, −1⟩²] = 37/12.

Chapter 4: Analysis and Results

In this chapter, the analysis procedure is outlined. The analysis was carried out using the GRSISort software package (Bender, Bildstein, and Dunlop 2012), based on the ROOT framework. Details of the specific programs used and developed for this work are outlined in Appendix C.

Calibrations

A ¹⁵²Eu γ-ray source was used to calibrate the energies and efficiencies of the TIGRESS array, using the energies and intensities of the strongest transitions as tabulated by the National Nuclear Data Center (NNDC). The main advantage of using this source is its vast range of γ-ray energies, from around 100 keV up to 1400 keV, which largely overlaps with the region of interest for this work. Figure 4.1 shows the corresponding energy spectrum in a single crystal. The energy calibrations were done using a linear fit, which was sufficiently accurate for our purposes. The relative efficiencies at each energy were calculated by dividing the total number of counts in each peak by the relative intensity of the corresponding transition. In principle these efficiencies differ for each crystal or, for the purposes of this analysis, for each constant-angle array; calibration spectra at the crystal level were summed to obtain those of each constant-angle array, and relative efficiencies were calculated for each such array at each energy, along with the corresponding errors.

Doppler Corrections

The reaction products recoil due to the incident beam energy, and γ-rays from these recoiling nuclei are Doppler shifted by the motion of their source. This effect depends on the energy of the emitted γ photon, the speed of the recoiling nucleus and the angle θ between the motion of the source and the direction of γ-ray emission. The measured energy is related to the rest-frame energy by

E_measured = E₀ / [γ(1 − β cos θ)],

where β and γ are the usual Lorentz factors encountered in relativity. The velocity of the recoiling nuclei was calculated on an event-by-event basis from the reaction kinematics, and hence the Lorentz factors were calculated. To calculate θ, the following two assumptions had to be made:

1. The γ-rays are emitted as soon as the recoiling nuclei are generated, since the excited states of these nuclei often have half-lives of the order of picoseconds, in which time the recoiling nucleus can leave the target but cannot travel far.

2. The motion of the recoiling nuclei is almost parallel to the beam axis, as they are quite heavy and scatter by less than 1°.

This leads to the conclusion that the γ-ray is emitted at the central position and that θ in the above equation equals θ_TIG, i.e. the angle between the interaction point in the TIGRESS detector and the beam axis. This means that the Doppler correction within a given constant-angle array is the same.
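A minimal sketch of this event-by-event Doppler correction (plain Python, not the GRSISort implementation; β here stands for the per-event recoil velocity obtained from the reaction kinematics):

```python
import numpy as np

def doppler_correct(e_measured, beta, theta_tig):
    """Return the rest-frame gamma energy E0 from the measured (lab) energy,
    assuming the recoil moves along the beam axis:
    E_measured = E0 / [gamma * (1 - beta*cos(theta))]."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return e_measured * gamma * (1.0 - beta * np.cos(theta_tig))

# example: 790 keV measured at 135 deg for a recoil with beta ~ 0.10
print(doppler_correct(790.0, 0.10, np.radians(135.0)))  # ~ 850 keV
```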
Addback

A γ-ray will often scatter multiple times before being fully absorbed by the detector material, recording a series of simultaneous energy signals across detector channels. If these are recorded as separate events, they produce a large background and degrade the photo-peak efficiency. If, however, these hits are summed into a single reconstructed γ-ray, the full energy is recovered, simultaneously adding to the photo-peak counts and thereby increasing the photo-peak efficiency. The improvement is larger for higher energies, where the probability of interaction by the photoelectric effect — which deposits the full energy in a single event — is low and that of scattering is high. The performance of this method depends on the actual algorithm used, which draws on parameters such as the segmentation and detector geometry of the system as well as the time difference between the signals. The hits were ordered by energy (highest to lowest) and this ordering was taken as the γ-ray track, since the energy deposited by the γ-ray during scattering is proportional to the incident energy; the position of the hit with the highest energy was therefore assumed to be the interaction point of the incoming photon. This is not strictly true, but it is statistically the most probable choice. The algorithm was benchmarked using calibration-source data, and it was found that the photo-peak efficiency of TIGRESS could be improved by up to 40% with addback.

SHARC

Similarly to TIGRESS, an analysis of SHARC was also performed, including calibrations, gain-matching and particle identification. It was done by Cruz (2017) and is not discussed here for the sake of brevity; it was, however, necessary for the particle-γ coincidences used to tag detected γ-rays to a particular recoiling nucleus.

Angular Distribution Analysis

The γ-ray angular distribution analysis was performed by integrating the counts in the peak of a transition and normalizing them with the efficiencies calculated for each constant-angle array at that energy. Plotting the efficiency-corrected counts versus the cosine of the emission angle gives a distribution; fitting this distribution with Equation 1.3 gives the values of the coefficients a_2 and a_4 which quantify the angular distribution. However, due to the inverse kinematics, the system has a large centre-of-mass motion, which is almost relativistic. The angle values and the counts therefore have to be corrected by transforming the values measured in the laboratory frame into the centre-of-mass frame (Celik 2014). This is done as explained in the next subsections.

Angle Corrections

Consider a light source moving with velocity v in a frame S. In the frame S′, which moves with velocity v away from S, the light source is at rest. Consider a photon travelling at angle θ′ as measured in S′; the x-component of its velocity is u′_x = c cos θ′, while in S the x-component is u_x = c cos θ. From the velocity transformations,

cos θ = (cos θ′ + β) / (1 + β cos θ′),    (4.1)

where θ is the laboratory-frame angle and θ′ is the centre-of-mass angle. Thus, the four angles of the constant-angle arrays in the laboratory frame were converted into the centre-of-mass frame using Equation 4.1. The value of β was sharply peaked around 10%, with a variation of as little as 0.5%; hence it was assumed to be constant in our analysis.
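A small sketch of Equation 4.1, inverted to move ring angles from the laboratory frame into the centre-of-mass frame, with the nominal β = 0.10 (the ring angles below are illustrative, not the TIGRESS values):

```python
import numpy as np

BETA = 0.10  # recoil velocity / c, approximately constant in this experiment

def lab_to_cm(theta_lab_deg, beta=BETA):
    """Invert Eq. 4.1: cos(theta') = (cos(theta) - beta) / (1 - beta*cos(theta))."""
    c = np.cos(np.radians(theta_lab_deg))
    return np.degrees(np.arccos((c - beta) / (1.0 - beta * c)))

for th in (55.0, 81.0, 99.0, 125.0):   # illustrative constant-angle rings
    print(f"lab {th:6.1f} deg -> cm {lab_to_cm(th):6.1f} deg")
```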
Solid Angle Corrections

Due to the relativistic motion of the recoiling nuclei, a given constant-angle array subtends a different solid angle in the centre-of-mass frame than in the laboratory frame, and hence registers different counts in each frame. However, integrated over the whole solid angle, the total counts registered in the two frames must be the same, since it is the same physical process viewed from two different frames of reference. We can write this as

$$W_{\mathrm{lab}}(\theta)\, d\Omega_{\mathrm{lab}} = W_{\mathrm{cm}}(\theta')\, d\Omega_{\mathrm{cm}},$$

so the counts in each constant-angle array were scaled by the corresponding solid-angle ratio.

Transitions

The first step in developing this technique was to reproduce the expected angular distributions for states with already assigned spin values. In choosing a transition for this purpose, the following points had to be considered:

1. The peak of the transition is clearly identified in the spectra and is free of any other contaminants.
2. It should contain sufficient counts to minimise the error bars; hence, some of the strongest transitions were chosen.

Of the many transitions analysed under these conditions, two of the most important are detailed in the following subsections.

815 keV in 96Sr

This transition from the first excited state to the ground state in 96Sr is the most intense transition. Being a 2+_1 → 0+_1 transition, it is expected to be a pure E2 quadrupole from the selection rules. As can be seen from the level scheme of 96Sr, the initial state of this transition is fed by almost all higher excited states. For the angular distribution measurements, only directly populated nuclei were considered, and hence strict cuts on the excitation energy were made. Using the analysis techniques described above, the angular distribution for this transition is shown in Figure 4.4. The distribution has a quadrupole nature, but the coefficients are strongly attenuated. One reason for this can be poor alignment of the nuclei. The alignment parameter σ/J was back-calculated from the experimental values of a_2 and a_4 using Equation 3.10, giving an average value of σ/J = 0.64 ± 0.15. This agrees with the theoretical value of σ/J calculated in Section 3.3, thus validating our technique.

1577 keV in 94Sr

This 3− → 2+ transition in 94Sr is interesting because the initial state is a negative-parity state, which is unusual in our experiments. The transition was detected in-beam, as confirmed by the Doppler-uncorrected spectrum for this transition in Figure 4.5. The parity assignment for this state is somewhat tentative, and our analysis suggests otherwise. According to the selection rules for γ-ray transitions, it is expected to be a mixed E1 + M2 transition. Applying strict excitation-energy cuts and following our analysis procedure, the angular distribution for this transition is shown in Figure 4.6. Its quadrupole nature is highly surprising. The alignment parameter σ/J for the J = 3 state was theoretically calculated to be 0.4358 in Section 3.3. Using this, the experimental mixing ratio was calculated for this transition as outlined in Section 3.2.2; the χ² versus δ plot is shown in Figure 4.7. χ² is minimal at δ = 0.91, indicating a large M2 admixture that produces the quadrupole nature. However, according to Weisskopf's estimates, such strong mixing is far more probable for an M1 + E2 transition, which strongly suggests that the initial state has positive parity, consistent with our measurements. Further measurements with lower uncertainties are required to confirm this. A schematic of the χ² scan used here is sketched below.
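The mixing-ratio extraction can be summarised as a one-dimensional χ² scan; the functions a2_theory and a4_theory below stand in for the attenuated theoretical coefficients of Section 3.2.2 and are assumptions for illustration, not reproductions of them.

```python
import numpy as np

def chi2_scan(a2_exp, a2_err, a4_exp, a4_err, a2_theory, a4_theory,
              deltas=np.linspace(-10, 10, 2001)):
    """Scan chi^2 over the multipole mixing ratio delta.

    a2_theory(delta) and a4_theory(delta) are assumed callables giving the
    theoretical coefficients for a given mixing ratio (including the
    attenuation from the alignment sigma/J).
    """
    chi2 = ((a2_exp - a2_theory(deltas)) / a2_err) ** 2 \
         + ((a4_exp - a4_theory(deltas)) / a4_err) ** 2
    best = deltas[np.argmin(chi2)]
    return best, chi2

# Usage, with stand-in models purely for illustration:
best, chi2 = chi2_scan(0.10, 0.05, -0.02, 0.05,
                       a2_theory=lambda d: 0.3 * d / (1 + d**2),
                       a4_theory=lambda d: -0.1 * d**2 / (1 + d**2))
print(f"chi^2 minimum at delta = {best:.2f}")
```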
Conclusions and future prospects

The technique for γ-ray angular distribution measurements was developed in this work. It performs well, within experimental uncertainties, for states with already assigned spins. This gives confidence to apply the technique to other transitions of interest. Further transitions that can be analysed include the 2084 keV transition in 96Sr and the 1868 keV transition in 94Sr. The 2084 keV transition in 96Sr proceeds from a higher excited state to the ground state (0+). The spin of the higher excited state is unassigned, but the transition is a pure multipole, since J_f = 0; we can therefore easily assign J_i based only on the nature of the angular distribution. The limiting factor for this work was the low statistics, which led to large errors in the experimentally determined values. Higher statistics are especially important for angular distribution measurements, as compared to spectroscopy, since the analysis is performed at the crystal level. Another source of the large errors is that the coefficients B_k are fitted to the angular distribution plots, from which the coefficients a_k are calculated as a_k = B_k / B_0; this leads to propagation of errors in our calculations. Higher statistics will also be required in the future to perform angular correlation measurements, which were not possible in the current analysis. Another important consideration for future experiments is to choose low-spin beam and target nuclei wherever possible. As seen in our analysis, the high spin of the deuteron target is, to an extent, responsible for the poor alignment of the states in the recoil nuclei, since states with large m-values are populated; this, in turn, gives poor angular distributions. The TIGRESS calibrations in this work were done only with 152Eu. They should also be done with appropriate sources for high-energy transitions, instead of extrapolating the efficiency curves from simulations, since our measurements are extremely sensitive to the efficiency calibrations.
Mechanism of NLRP3 inflammasome intervention for synovitis in knee osteoarthritis: A review of TCM intervention

Objective: This paper briefly reviews the structure and function of the NLRP3 inflammasome, its signaling pathway, its relationship with synovitis in knee osteoarthritis (KOA), and the intervention of traditional Chinese medicine (TCM) on the NLRP3 inflammasome, as a means to improve its therapeutic potential and clinical application.

Methods: The literature on NLRP3 inflammasomes and synovitis in KOA was reviewed, analyzed, and discussed.

Results: The NLRP3 inflammasome can activate NF-κB-mediated signal transduction, which in turn drives the expression of proinflammatory cytokines, initiates the innate immune response, and triggers synovitis in KOA. TCM monomers/active ingredients, decoctions, external ointments, and acupuncture that regulate the NLRP3 inflammasome can help alleviate synovitis in KOA.

Conclusion: The NLRP3 inflammasome plays a significant role in the pathogenesis of synovitis in KOA; TCM interventions targeting the NLRP3 inflammasome offer a novel approach and therapeutic direction for the treatment of synovitis in KOA.

Introduction

KOA is a degenerative joint condition brought on by a number of factors and frequently coexists with synovitis (Mathiessen and Conaghan, 2017). Although the pathophysiology of synovitis in KOA is not entirely clear, related research has revealed that the innate immune response plays a crucial part in its pathogenesis. The NLRP3 inflammasome, an essential pattern recognition receptor (PRR) of the innate immune system, can activate the NF-κB signaling pathway by identifying pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs), inducing an innate immune response, activating or accelerating the transmission of downstream signaling molecules, and leading to synovitis. In order to provide a theoretical foundation and point of reference for the diagnosis and treatment of synovitis in KOA, this article reviews and analyzes the published data on the role of the NLRP3 inflammasome in synovitis in KOA, as well as the state of research on TCM interventions targeting the NLRP3 inflammasome.

The structure and function of the NLRP3 inflammasome

The inflammasome is a multi-protein complex in the cytoplasm of cells, first proposed by Martinon et al. (2002). It is formed mainly during the activation of caspase-1 by nucleotide-binding oligomerization domain (NOD)-like receptors among the PRRs. NOD-like receptors play an important role in innate immunity, and among them the NLRP3 inflammasome is the most deeply studied (Zhang et al., 2021a). NLRP3 consists of an amino-terminal pyrin domain (PYD), a central NACHT domain, and a carboxyl-terminal leucine-rich repeat (LRR) (Gaul et al., 2021). Studies have shown that the NACHT domain has ATP-binding activity that promotes the oligomerization of NLRP3, that the LRR and NACHT domains mutually inhibit each other, and that the PYD domain allows NLRP3 to interact with other inflammasome proteins (Hafner-Bratkovič et al., 2018). NLRP3 resides in the cytoplasm and participates in innate immunity as a PRR; it is activated by recognizing PAMPs and DAMPs (Zhao et al., 2022a). The NLRP3 inflammasome consists of NLRP3 (nucleotide-binding domain leucine-rich repeat (NLR) and pyrin domain containing receptor 3), ASC (apoptosis-associated speck-like protein containing a caspase recruitment domain), and pro-caspase-1 (Zahid et al., 2019). NLRP3 is considered the sensor of activation signals.
ASC is the adaptor protein of the NLRP3 inflammasome, connecting NLRP3 and pro-caspase-1; phosphorylation of ASC promotes activation of the inflammasome. Pro-caspase-1 has no catalytic activity, but cleavage converts it into caspase-1, the effector protein of the NLRP3 inflammasome. Caspase-1 converts inactive pro-IL-1β and pro-IL-18 into mature IL-1β and IL-18. NLRP3 has been found to be readily activated in dendritic cells, macrophages, and neutrophils (Zhao et al., 2022b). The NLRP3 inflammasome pathway belongs to the classical (caspase-1-mediated) inflammasome pyroptosis pathway; in addition, there are non-classical inflammasome pyroptosis pathways (mediated by caspase-4, caspase-5, and caspase-11) and a pyroptosis pathway mediated by the apoptotic protein caspase-3 (Moretti et al., 2022; Fu et al., 2021; Zhang et al., 2021b). The role of the NLRP3 inflammasome in synovitis in KOA has been a hot topic in recent years; many studies indicate that the NLRP3 inflammasome is a potential mechanism of synovitis in KOA, but this needs to be studied further.

NLRP3 inflammasome signaling pathway

The NLRP3 inflammasome mainly senses stimulation signals within cells and can be activated by a variety of internal and external factors, including PAMPs and DAMPs such as lipopolysaccharide (LPS), amyloid β, cholesterol crystals, monosodium urate crystals (MSU), adenosine triphosphate (ATP), fatty acids, and hyaluronic acid. Some bacteria and fungi can also activate NLRP3 as PAMPs. In addition, crystalline or granular materials such as silica, asbestos, and alum can activate NLRP3 and amplify the inflammatory cascade (Kelley et al., 2019; Swanson et al., 2019; McGettrick et al., 2020). Two signals are required for NLRP3 inflammasome activation (Figure 1). The first step is initiated at the transcriptional level: Toll-like receptors recognize PAMPs or DAMPs and activate the NF-κB-mediated signaling pathway, which increases the production of pro-IL-1β, pro-IL-18, and NLRP3 proteins. The second step is the activation signal, which initiates NLRP3 oligomerization and causes NLRP3, ASC, and pro-caspase-1 to assemble into the inflammasome. Pro-caspase-1 then undergoes self-cleavage into the active caspase-1 p10 and caspase-1 p20 subunits. Activated caspase-1 cleaves pro-IL-1β and pro-IL-18 into mature IL-1β and IL-18 (Mu et al., 2022; Zhang et al., 2022; Zhang et al., 2018; Pei et al., 2022), which are then released from the cell and recruit further inflammatory mediators (HMGB1, leukotrienes, prostaglandins, etc.), leading to an inflammatory cascade. The molecular mechanisms of NLRP3 inflammasome activation mainly involve potassium efflux, calcium signaling, lysosomal destruction, mitochondrial dysfunction, and the Golgi apparatus. Potassium efflux lowers intracellular potassium levels under stimulation by ATP, pore-forming toxins, crystals, or particles; NLRP3 is then directly bound and activated with the help of NIMA-related kinase 7 (NEK7) (Sun et al., 2022). Plant-derived dietary lectins are internalized, escape from the lysosome, and are transported to the endoplasmic reticulum, where they trigger calcium release and mitochondrial damage. Blocking calcium flux has been found to inhibit NLRP3 inflammasome assembly and activation, whereas promoting calcium release aggravates mitochondrial damage, and calcium-mediated mitochondrial damage can activate the NLRP3 inflammasome (Murakami et al., 2012).
Lysosomal damage releases cathepsin B, which directly binds the NLRP3 inflammasome and promotes its activation. The release of mitochondrial ROS (mtROS) and mitochondrial DNA (mtDNA) caused by mitochondrial dysfunction is another important trigger of NLRP3 inflammasome activation; for example, after NLRP3 agonists increase ROS, redox stress mediated by thioredoxin-interacting protein (TXNIP) can activate the NLRP3 inflammasome. The Golgi apparatus has been found to participate in NLRP3 inflammasome activation through protein kinase D signaling on mitochondria-associated endoplasmic reticulum membranes (Zhang et al., 2017). In addition, some infectious microorganisms have been shown to activate the NLRP3 inflammasome (Giraud et al., 2019). In conclusion, the NLRP3 inflammasome is a key host immune defense mechanism against PAMPs and DAMPs; with deepening research, it will provide new ideas for the treatment of many diseases.

The role of the NLRP3 inflammasome in synovitis in KOA

The expression of the NLRP3 inflammasome in KOA synovium

Synovitis is one of the important causes of cartilage degeneration (Oka et al., 2021); the IL-1β involved in cartilage degradation may be produced by synovial cells rather than chondrocytes. Synovitis has been studied more extensively in rheumatoid arthritis (RA), where the NLRP3 inflammasome is highly activated in the synovium of RA patients and of collagen-induced arthritis mice, mainly in infiltrating monocytes/macrophages in the synovium; the NLRP3 inhibitor MCC950 can significantly inhibit this activation and reduce the production of IL-1β (Guo et al., 2018). Clavijo-Cornejo et al. found that NLRP3 protein expression in the synovium of KOA patients was increased 5.4-fold relative to normal controls (Clavijo-Cornejo et al., 2016). Sakalyte et al. found that the NLRP3 inflammasome is present and highly expressed in synovial fibroblasts of KOA patients (Sakalyte et al., 2022). Activation of the NLRP3 inflammasome promotes synovitis, which participates in the whole course of KOA and accelerates its progression.

The NLRP3 inflammasome mediates synovitis in KOA

The course of synovitis in KOA often involves immune cells, and innate immunity is an important barrier against the invasion of pathogens. PRRs can recognize and sense DAMPs or PAMPs and combine with them to form ligand polymers, which trigger and promote synovitis in KOA after activating the innate immune response (Leung et al., 2015). The NLRP3 inflammasome, as a PRR, can activate the NF-κB signaling pathway after binding the DAMPs and PAMPs expressed or secreted in the synovium, driving the expression of proinflammatory cytokines and inflammatory mediators and thereby leading to synovitis, promoting synovial cell proliferation, and aggravating inflammation (Zhang et al., 2019a). In KOA synovial macrophages, NLRP3 inflammasome products are induced by different DAMPs and released into the synovial fluid and surrounding tissues.
These increase the expression levels of IL-1β and IL-18 in a series of inflammatory reactions involving synovial macrophages and chondrocytes, eventually leading to synovitis and cartilage degeneration. Chen et al. found that Nrf2/HO-1 signaling in the synovium of KOA patients and model rats may be an important route for activating the NLRP3 inflammasome, and that ROS-induced oxidative stress may be the main reason for NLRP3 inflammasome activation and the subsequent release of downstream pro-inflammatory factors during the development of KOA (Chen et al., 2019).

[Figure 1: Mechanism of NLRP3 activation, which requires two signals. The first, priming signal is provided through the interaction of PAMPs/DAMPs with TLRs; this initiates NF-κB signaling, which upregulates the production of pro-IL-1β, pro-IL-18, and inactive NLRP3 protein. The second, activation signal causes NLRP3, ASC, and pro-caspase-1 to come together; pro-caspase-1 is then converted into active caspase-1, which together with NLRP3 and ASC forms the NLRP3 inflammasome complex. Active caspase-1 cleaves pro-IL-1β and pro-IL-18 into IL-1β and IL-18, which are subsequently released extracellularly. The molecular mechanisms of NLRP3 inflammasome activation mainly involve potassium efflux, calcium signaling, lysosomal destruction, mitochondrial dysfunction, and the Golgi apparatus.]

Activation of the NLRP3 inflammasome induces the secretion of the proinflammatory cytokines IL-1β and IL-18, aggravating the downstream inflammatory response and accelerating the occurrence of synovitis in KOA. In addition to ROS, ectopic deposition of hydroxyapatite (HA) crystals in joints is related to the pathogenesis of synovitis in KOA: HA crystals induce macrophages to secrete IL-1 and IL-18 in an NLRP3 inflammasome-dependent manner, and calcium crystals from the synovial fluid of KOA patients show NLRP3 inflammasome-stimulating activity in vitro (Jin et al., 2011). Uric acid levels were found to correlate positively with IL-18 and IL-1β expression in the synovial fluid of KOA patients, and uric acid can activate the NLRP3 inflammasome and increase IL-18 and IL-1β expression, aggravating synovitis; this indicates a close relationship between NLRP3, uric acid, and proinflammatory cytokines (Aibibula et al., 2016). HA crystals, MSU crystals, calcium pyrophosphate, and calcium phosphate are also inflammasome activators (Busso and So, 2012). Zhao et al. found that the NLRP3 inflammasome in the synovium of KOA patients is involved in synovial fibroblast inflammation and pyroptosis, and that inhibiting the NLRP3 inflammasome can significantly reduce the expression of apoptosis-related cytokines. Xiao et al. found that NLRP3 inflammasome-mediated synovial fibroblast pyroptosis enhances the secretion of high-mobility group box 1 protein (HMGB1), which is proinflammatory and aggravates synovitis. Zhang et al. found that hypoxia in the synovium of KOA model rats increases hypoxia-inducible factor 1α (HIF-1α), which raises the expression of NLRP3, caspase-1, and GSDMD, thereby aggravating synovitis and fibrosis in KOA (Zhang et al., 2019b).

TCM interventions on the NLRP3 inflammasome for synovitis in KOA

TCM interventions on the NLRP3 inflammasome for synovitis in KOA include TCM monomers/active ingredients, decoctions, external ointments, and acupuncture (Table 1).
TCM monomer/active ingredient

Casticin

Casticin is a compound purified from the TCM Viticis Fructus; it has been studied in a rat KOA model induced by monoiodoacetic acid (MIA) and in inflammation of stimulated primary fibroblast-like synoviocytes (FLS).

Agnuside

Agnuside is a non-toxic natural small molecule isolated from the extract of Vitex negundo. In an MIA-induced rat KOA model and an LPS-induced FLS inflammation model, Agnuside effectively alleviated local hypoxia in the synovium; reduced the mRNA and protein levels of HIF-1α, caspase-1, ASC, and NLRP3; and downregulated the NLRP3 inflammasome downstream factors IL-1β and IL-18 as well as the fibrosis markers TGF-β, TIMP1, and VEGF. This indicates that Agnuside reduces synovitis and fibrosis in experimental KOA by inhibiting activation of the HIF-1α/NLRP3 inflammasome (Zhang et al., 2021c).

Chrysin

Chrysin is a natural flavonoid found in Scutellaria baicalensis Georgi. In the MIA-induced rat KOA model, chrysin not only reduced synovitis but also reduced the secretion of pain-related factors and increased the mechanical and cold pain thresholds of the rats. Chrysin alleviates synovitis by inhibiting NLRP3 inflammasome activation and IL-1β expression, suggesting that its ability to reduce synovitis in KOA and improve pain behavior in rats may be related to inhibition of NLRP3 inflammasome activation (Liao et al., 2020).

Vanillic acid

Vanillic acid is a monomer from Chinese herbal medicine. In the rat KOA model, vanillic acid decreased the expression of caspase-1, ASC, and NLRP3 both in vivo and in vitro; reduced the levels of IL-1β and IL-18; attenuated synovial fibrosis; and alleviated pain-related behaviors. The expression of the pain mediators CGRP, NGF, and TrkA in FLS was also downregulated. This shows that vanillic acid reduces synovitis and pain-related behaviors in the rat KOA model (Ma et al., 2021).

Nodakenin

Nodakenin is the main coumarin active ingredient of Angelicae Pubescentis Radix. In a mouse KOA model, Nodakenin increased the trabecular bone score in subchondral bone, reduced serum inflammatory factor levels, and alleviated synovitis. In vitro, Nodakenin inhibited the phosphorylation of dynamin-related protein 1 (Drp1) and ROS production in LPS-stimulated chondrocytes through Drp1-dependent mitochondrial fission, and it inhibited the mRNA levels of inflammatory factors (COX-2, IL-1β, and TNF-α), the NLRP3 inflammasome, and MMP13 in activated chondrocytes. This indicates that Nodakenin alleviates cartilage degradation and synovitis in KOA by regulating the mitochondrial Drp1/ROS/NLRP3 axis (Yi et al., 2022).

Isochlorogenic acid A

Isochlorogenic acid A is a natural product formed from quinic acid and caffeic acid by esterification and condensation, and is found mostly in plants such as Lonicera japonica and Celastrus angulatus. It significantly reduced the expression of NLRP3, caspase-1, NF-κB p65, p-NF-κB p65, p-IκB, and RANKL in the synovium of collagen-induced arthritis rats; downregulated plasma IL-1β, IL-6, TNF-α, CRP, IFN-γ, and IL-18; and reduced toe swelling in the rats. Isochlorogenic acid A has a good anti-inflammatory effect in collagen-induced arthritis, and its anti-inflammatory activity may be related to decreased NLRP3 inflammasome activation and NF-κB phosphorylation (Liu et al., 2019).
Xanthotoxol

Xanthotoxol is a coumarin compound extracted from the common cnidium fruit used in Chinese herbal medicine. In a rat KOA model established with papain, xanthotoxol significantly reduced joint swelling, synovial hyperemia, and synoviocyte proliferation, and reduced inflammatory cell infiltration and vascular proliferation in the synovium. It significantly lowered the levels of IL-6, IL-1β, and TNF-α in the synovial fluid and reduced the amounts of NLRP3 protein and phosphorylated NF-κB protein in the synovium. Xanthotoxol suppresses the infiltration of inflammatory factors and downregulates NF-κB signaling by inhibiting NLRP3 inflammasome activation, thereby inhibiting the expression of inflammatory factors, relieving synovitis in KOA, and exerting a protective effect in osteoarthritis.

Andrographolide

Andrographolide is the main active ingredient of the natural plant Andrographis paniculata. It reduced inflammatory cell infiltration in the synovium and inhibited the inflammatory response in a mouse KOA model established by anterior cruciate ligament transection (ACLT), and it inhibited the proliferation, apoptosis, and inflammation of LPS-stimulated chondrocytes. Andrographolide inhibits the progression of osteoarthritis by regulating the circ_Rapgef1/miR-383-3p/NLRP3 signaling axis.

Decoction of TCM

Xibining

Xibining (patent number CN201010514325) is a TCM compound developed by Professor Peimin Wang for the clinical treatment of KOA, based on the therapeutic principle of warming the channels and activating blood circulation. Its composition and dosage are: Radix Aconiti Carmichaeli 15 g, processed Cibotium barometz 15 g, human placenta 10 g, Cornus officinalis 15 g, Wilson cinnamon bark 15 g, Morinda officinalis 10 g, Job's tears seed 10 g, tuber fleeceflower root 10 g, medicinal cyathula root 10 g, and Radix Glycyrrhizae 5 g. In a rat KOA model established with sodium iodoacetate, Xibining treatment decreased inflammatory cell infiltration in the synovium and reduced the mRNA and protein expression of HIF-1α, NLRP3, ASC, GSDMD, and caspase-1, as well as the levels of IL-1β and IL-18 in the synovium. Xibining can thus effectively improve hypoxia in the KOA synovium, reduce HIF-1α expression, reduce NLRP3 inflammasome activation, and alleviate synovitis in KOA.

Du Huo Ji Sheng Tang

Du Huo Ji Sheng Tang (DHJST) is a classic TCM formula for the treatment of KOA. Serum levels of IL-1β, IL-6, IL-10, TNF-α, NLRP3, ASC, caspase-1, p-NF-κB p65, and p-IκBα decreased in KOA patients after DHJST treatment. In a rat KOA model established with papain, DHJST significantly reduced the swelling of the right hind foot; downregulated IL-1β, IL-6, and TNF-α in the knee synovial fluid; decreased the expression of NLRP3, ASC, caspase-1, p-NF-κB p65, and p-IκBα in the knee synovium; and alleviated pathological changes such as synovitis and cartilage degeneration. DHJST thus alleviates KOA by suppressing NLRP3/NF-κB inflammatory signaling in rats (Chen et al., 2020).
External ointment of TCM

Layers Adjusting External Application

Layers Adjusting External Application (patent no. ZL200820185241.8) is a TCM ointment for external use, composed of Chinese medicines that warm the meridians and activate blood circulation. It improved the Krenn synovitis score in a rat KOA model; downregulated serum IL-1β and TNF-α; downregulated the protein and mRNA expression of NLRP3, ASC, and caspase-1 in the synovium; and lowered the levels of MMP-1 and MMP-13 in cartilage. Layers Adjusting External Application may inhibit synovitis in KOA by downregulating the expression of NLRP3 and caspase-1 and reducing cartilage MMP levels, thereby protecting cartilage (Li et al., 2020b).

"Sanse Powder"

"Sanse Powder" is the core component of Layers Adjusting External Application (patent no. ZL200820185241.8). It is a hospital preparation of the Department of Orthopedics and Traumatology of the Affiliated Hospital of Nanjing University of Traditional Chinese Medicine and one of the representative prescriptions for warming the meridians and activating blood circulation. In a rat model of synovitis in KOA and in LPS-stimulated FLS, "Sanse Powder" Essential Oils Nanoemulsion inhibited the ERS/TXNIP/NLRP3 signaling axis and thereby regulated the excessive production of IL-1β and IL-18. In an LPS-established KOA inflammatory cell model, "Sanse Powder" Volatile Oil downregulated the protein and mRNA expression of NLRP3, caspase-1, and ASC, and reduced the levels of IL-1β and IL-18 in the cell supernatant. It may improve synovitis in KOA by inhibiting NLRP3 inflammasome activation in FLS and reducing the downstream inflammatory cascade.

Acupuncture

In a SD rat KOA model established with papain, electroacupuncture at "Neixiyan" (EX-LE4) and "Dubi" (ST35) decreased the pathological score of the synovium; the serum levels of IL-1β and IL-18; the mRNA and protein expression of NLRP3, ASC, caspase-1, IL-1β, and IL-18 in the synovium; and the expression of GSDMD mRNA and GSDMD-N protein. Electroacupuncture can reduce the inflammatory response of the knee synovium in rats, possibly by inhibiting the NLRP3 inflammasome signaling pathway and reducing pyroptosis. In a guinea pig KOA model, electroacupuncture treatment improved the mechanical withdrawal threshold of the guinea pigs, improved articular cartilage structure, and reduced fibrosis on the cartilage surface; it inhibited NLRP3 inflammasome activation and the protein expression of caspase-1 and IL-1β in cartilage tissue. Electroacupuncture thus alleviates KOA pain by suppressing NLRP3 inflammasome activation. Wang et al. observed the effect of moxibustion combined with ultrashort wave therapy in elderly patients with KOA: the total effective rate in the observation group was 90.48%; after treatment, the VAS and WOMAC scores of the observation group decreased while the Lysholm knee scores increased, and serum IL-1β, TNF-α, SOD, MDA, miR-155, and NLRP3 were all lower than before treatment. Moxibustion combined with ultrashort wave therapy can therefore effectively improve knee pain and function in elderly KOA patients and reduce the oxidative stress response, with a potential mechanism of suppressing the NLRP3 inflammasome signaling pathway (Wang et al., 2022b).
Conclusion

The NLRP3 inflammasome plays a significant role in the pathogenesis of synovitis in KOA, and innate immunity is activated during the pathogenesis of this condition. Activation of the NLRP3 inflammasome can trigger the NF-κB signaling pathway, pro-inflammatory factor production, inflammatory mediator secretion, synovial cell proliferation, and synovitis in KOA. Analysis of the role of the NLRP3 inflammasome deepens our understanding of the pathophysiology of synovitis in KOA, and targeting the NLRP3 inflammasome can be a novel approach and therapeutic direction for the treatment of synovitis in KOA. However, the research conclusions are drawn mostly from animal or in vitro experiments; the effectiveness and safety of clinical applications are not yet fully established, and further in-depth research is needed.
Learning image quality assessment by reinforcing task amenable data selection

In this paper, we consider a type of image quality assessment as a task-specific measurement, which can be used to select images that are more amenable to a given target task, such as image classification or segmentation. We propose to train simultaneously two neural networks for image selection and a target task using reinforcement learning. A controller network learns an image selection policy by maximising an accumulated reward based on the target task performance on the controller-selected validation set, whilst the target task predictor is optimised using the training set. The trained controller is therefore able to reject those images that lead to poor accuracy in the target task. In this work, we show that the controller-predicted image quality can be significantly different from the task-specific image quality labels that are manually defined by humans. Furthermore, we demonstrate that it is possible to learn effective image quality assessment without using a ``clean'' validation set, thereby avoiding the requirement for human labelling of images with respect to their amenability for the task. Using $6712$ labelled and segmented clinical ultrasound images from $259$ patients, experimental results on holdout data show that the proposed image quality assessment achieved a mean classification accuracy of $0.94\pm0.01$ and a mean segmentation Dice of $0.89\pm0.02$, by discarding $5\%$ and $15\%$ of the acquired images, respectively. The significantly improved performance was observed for both tested tasks, compared with the respective $0.90\pm0.01$ and $0.82\pm0.02$ from networks without considering task amenability. This enables image quality feedback during real-time ultrasound acquisition, among many other medical imaging applications.

Introduction

Image quality assessment (IQA) has been developed in the fields of medical image computing and image-guided intervention because it is important to ensure that the intended diagnostic, therapeutic or navigational tasks can be performed reliably. It is intuitive that low-quality images can result in inaccurate diagnoses or measurements obtained from medical images [1,2], but there has been little evidence that the link between the completion of a specific clinical application and any single general-purpose IQA methodology can be quantified. Chow and Paramesran [3] also pointed out that measures of image quality may not indicate diagnostic accuracy. We further argue that a general-purpose approach to medical image quality assessment is both challenging and potentially counterproductive. For example, various artefacts, such as reflections and shadows, may not be present near regions of clinical interest, yet a "good quality" image might still have an inadequate field-of-view for the clinical task. In this work, we investigate the type of image quality that indicates how well a specific downstream target task performs, and refer to this quality as task amenability. Current IQA approaches in clinical practice rely on subjective human interpretation of a set of ad hoc criteria [3]. Automated IQA methods, for example those computing dissimilarity to empirical references [3], can typically provide an objective and repeatable measurement, but require robust mathematical models approximating the underlying statistical and physical principles of the good-quality image generation process, or known mechanisms that reduce image quality (e.g. [4,5]).
Recent deep-learning-based IQA approaches provide fast inference, using expert labels of image quality for training [2,6,7,8]. However, besides the potentially high variability in these human-defined labels, to what extent they reflect task amenability, i.e. their usefulness for a specific task, is still an open question. In particular, a growing number of these target tasks have been modelled and automated by, for example, neural networks, which may result in different or unknown task amenability. In this work, we focus on a specific use scenario of the task-specific IQA, in which images are selected according to the measured task-specific image quality, such that the selected subset of high-quality images leads to improved target classification or segmentation accuracy. This image selection by task amenability has many clinical applications, such as meeting a clinically defined accuracy requirement by removing images with poor task amenability, or maximising task performance given a predefined tolerance on how many poorly amenable images may be rejected and discarded. The rejected images may be re-acquired immediately in applications such as the real-time ultrasound imaging investigated in this work. The IQA feedback during scanning also provides an indirect measure of user skill, though skill assessment is not discussed further in this paper. Furthermore, we propose to train a controller network and a task predictor network together, for selecting task-amenable images and for completing the target task, respectively. We highlight that optimising the controller depends on the task predictor being optimised; this may therefore be considered a meta-learning problem that maximises the target task performance with respect to the controller-selected images. Reinforcement learning (RL) has increasingly been used for meta-learning problems, such as augmentation policy search [9,10], automated loss function search [11] and training data valuation [12]. Common to these approaches, a target task is optimised with a controller which modifies parameters associated with this target task. The parameter modification action is followed by a reward signal computed from the target task performance, which is subsequently used to optimise the controller. This allows the controller to learn the parameter setting that results in a better-performing target task. The target application can be image classification, regression or segmentation, while the task-associated parameter modification actions include transforming training data for data augmentation [9,10], selecting convolution filters and activation functions for network architecture search [13] and sampling training data for data valuation [12]. Among these recent developments, the data valuation approach [12] shares some interesting similarities with our proposed IQA method, but with several important differences: the reward formulation by weighting/sampling the validation set, the availability of "clean" high-quality image data, the choice of RL algorithm, and other methodological details described in Sec. 2. For medical imaging applications, RL-based meta-learning has also been proposed, for instance, to search for the optimal weighting between different ultrasound modalities for downstream breast cancer detection [14] and to optimise hyper-parameters for a subsequent 3D medical image segmentation [15], using the REINFORCE algorithm [16] and the proximal policy optimisation algorithm [17], respectively.
In this work, we propose using RL to train the controller and the task predictor for assessing medical image quality with respect to two common medical image analysis tasks. Using medical ultrasound data acquired from prostate cancer patients, the two tasks are a) classifying 2D ultrasound images that contain prostate glands from those that do not, and b) segmenting the prostate gland. These two tasks are not only the basis of several computational applications, such as 3D volume reconstruction, image registration and tumour detection, but are also directly useful for navigating ultrasound image acquisition during surgical procedures, such as ultrasound-guided biopsy and therapies. Our experiments were designed to investigate the following research questions:

- Can the task performance be improved on holdout test data selected by the trained controller network, compared with the same task predictor network based on supervised training and non-selective test data?
- Does the trained controller network provide a better or different measure of task amenability, compared with human labels of image quality that are intended to indicate amenability to the same tasks?
- What is the trade-off between the quantity of rejected images and the improvement in task performance?

The contributions are summarised as follows. We 1) propose to formulate task-specific IQA to learn task amenable data selection; 2) propose a novel RL-based approach to quantify the task amenability, using different reward formulations with and without the need for human labels of task amenability; and 3) present experiments to demonstrate the efficacy of the proposed IQA approach using real medical ultrasound images in two different downstream target tasks.

Image quality assessment by task amenability

The proposed IQA consists of two parametric functions, a task predictor and a controller, illustrated in Fig. 1. The task predictor $f(\cdot;w): \mathcal{X}\to\mathcal{Y}$, with parameters $w$, outputs a prediction $y\in\mathcal{Y}$ for a given image sample $x\in\mathcal{X}$. The controller $h(\cdot;\theta): \mathcal{X}\to[0,1]$, with parameters $\theta$, generates an image quality score for a sample $x$, measuring the task amenability of the sample. $\mathcal{X}$ and $\mathcal{Y}$ denote the image and label domains specific to a certain task, respectively. Let $P_X$ and $P_{XY}$ be the image distribution and the joint image-label distribution, with probability density functions $p(x)$ and $p(x,y)$, respectively. The task predictor's objective is to minimise a weighted loss function $L_f:\mathcal{Y}\times\mathcal{Y}\to\mathbb{R}_{\geq 0}$:

$$\min_w\ \mathbb{E}_{(x,y)\sim P_{XY}}\big[L_f(f(x;w), y)\, h(x;\theta)\big], \qquad (1)$$

where $L_f$ measures how well the task is performed by the predictor $f(x;w)$, given label $y$. It is weighted by the controller-measured task amenability of the same image $x$, as mistakes (high loss) on images with lower task amenability ought to be weighted less, with a view to rejecting them, and vice versa. The controller's objective is to minimise a weighted metric function $L_h:\mathcal{Y}\times\mathcal{Y}\to\mathbb{R}_{\geq 0}$:

$$\min_\theta\ \mathbb{E}_{(x,y)\sim P_{XY}}\big[L_h(f(x;w), y)\, h(x;\theta)\big], \qquad (2)$$
$$\text{s.t.}\quad \mathbb{E}_{x\sim P_X}\big[h(x;\theta)\big] > 0, \qquad (3)$$

such that the controller is encouraged to predict lower quality scores for images with higher metric values (lower task performance), as the weighted sum is minimised. The intuition is that making correct predictions on low-quality images tends to be more difficult. The constraint prevents the trivial solution $h\equiv 0$. Thus, the overall objective of the proposed task-specific IQA can be assembled as the following minimisation problem:

$$\min_\theta\ \mathbb{E}_{(x,y)\sim P_{XY}}\big[L_h(f(x;w^*), y)\, h(x;\theta)\big],\quad \text{s.t.}\ w^* = \arg\min_w \mathbb{E}_{(x,y)\sim P_{XY}}\big[L_f(f(x;w), y)\, h(x;\theta)\big],\ \ \mathbb{E}_{x\sim P_X}\big[h(x;\theta)\big] > 0. \qquad (4)$$
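A minimal sketch of one predictor update under this weighted objective is given below; the network and batch tensors are placeholders rather than the architectures used in the paper.

```python
import torch

def predictor_step(predictor, controller, optimiser, loss_fn, x, y):
    """One gradient step on the controller-weighted task loss (Eq. 1).

    Per-sample losses are weighted by the controller's task-amenability
    scores h(x; theta) in [0, 1], so low-amenability samples contribute less.
    """
    with torch.no_grad():                  # controller is held fixed here
        weights = controller(x).squeeze(-1)
    losses = loss_fn(predictor(x), y)      # loss_fn must use reduction='none'
    weighted = (weights * losses).mean()
    optimiser.zero_grad()
    weighted.backward()
    optimiser.step()
    return weighted.item()
```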
To facilitate a sampling or selection action (see Sec. 2.3) by controller-predicted task amenability scores, Eq. (4) is re-written as:

$$\min_\theta\ \mathbb{E}_{(x,y)\sim P^h_{XY}}\big[L_h(f(x;w^*), y)\big],\quad \text{s.t.}\ w^* = \arg\min_w \mathbb{E}_{(x,y)\sim P^h_{XY}}\big[L_f(f(x;w), y)\big], \qquad (5)$$

where the data $x$ and $(x,y)$ are sampled from the controller-selected or -sampled distributions $P^h_X$ and $P^h_{XY}$, with probability density functions $p^h(x) \propto h(x;\theta)\,p(x)$ and $p^h(x,y) \propto h(x;\theta)\,p(x,y)$, respectively.

The reinforcement learning algorithm

In this work, an RL agent interacting with an environment is considered as a finite-horizon Markov decision process $(\mathcal{S}, \mathcal{A}, p, r, \pi, \gamma)$. $\mathcal{S}$ is the state space and $\mathcal{A}$ is a continuous action space. $p: \mathcal{S}\times\mathcal{S}\times\mathcal{A}\to[0,1]$ is the state transition distribution conditioned on state-action pairs; e.g. $p(s_{t+1}\mid s_t, a_t)$ denotes the probability of the next state $s_{t+1}\in\mathcal{S}$ given the current state $s_t\in\mathcal{S}$ and action $a_t\in\mathcal{A}$. $r:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ is the reward function, and $R_t = r(s_t, a_t)$ denotes the reward given $s_t$ and $a_t$. $\pi(a_t\mid s_t):\mathcal{S}\times\mathcal{A}\to[0,1]$ is the policy, representing the probability of performing action $a_t$ given $s_t$. The constant $\gamma\in[0,1]$ discounts the accumulated reward starting from time step $t$: $Q^\pi(s_t,a_t) = \sum_{k=0}^{T}\gamma^k R_{t+k}$. A sequence $(s_1,a_1,R_1,s_2,a_2,R_2,\ldots,s_T,a_T,R_T)$ is thereby created during RL agent training, with the objective of learning a parameterised policy $\pi_\theta$ which maximises the expected return.

Two different algorithms were considered in our experiments, REINFORCE [16] and Deep Deterministic Policy Gradient (DDPG) [18]. As initial results indicated little difference in performance between the two, all the results presented in this paper are based on DDPG, with which noticeably more efficient and stable training was observed. Further investigation into the choice of RL algorithm remains interesting future work. While REINFORCE computes the policy gradient to update the controller parameters directly, DDPG is an actor-critic algorithm, with an off-policy critic $Q(s_t,a_t;\theta^Q):\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ and a deterministic actor $\mu(s_t;\theta^\mu):\mathcal{S}\to\mathcal{A}$. To maximise the performance function $J(\theta^\mu) = \mathbb{E}_\mu[Q^\pi(s_t,\mu(s_t;\theta^\mu))]$, a variance-reduced policy gradient is used to update the controller:

$$\nabla_{\theta^\mu} J \approx \mathbb{E}\Big[\nabla_a Q(s_t,a;\theta^Q)\big|_{a=\mu(s_t;\theta^\mu)}\, \nabla_{\theta^\mu}\mu(s_t;\theta^\mu)\Big],$$

which can be approximated by sampling the behaviour policy $\beta(s_t) = \mu(s_t;\theta^\mu)$, where the critic $Q(s_t,a_t;\theta^Q)$ is updated with respect to minimising:

$$L(\theta^Q) = \mathbb{E}\Big[\big(Q(s_t,a_t;\theta^Q) - (R_t + \gamma\, Q'(s_{t+1}, \mu'(s_{t+1};\theta^{\mu'});\theta^{Q'}))\big)^2\Big].$$

In our implementation, copies of the critic $Q'(s_t,a_t;\theta^{Q'})$ and the actor $\mu'(s_t;\theta^{\mu'})$ are used for computing moving averages during parameter updates, $\theta^{Q'} \leftarrow \tau\theta^Q + (1-\tau)\theta^{Q'}$ and $\theta^{\mu'} \leftarrow \tau\theta^\mu + (1-\tau)\theta^{\mu'}$, respectively. Additionally, random noise $\mathcal{N}$ is added to $\mu(s_t;\theta^\mu)$ for exploration. Here, $\tau = 0.001$ and $\mathcal{N}$ is the Ornstein-Uhlenbeck process [19] with the scale and mean-reversion-rate parameters set to 0.2 and 0.15, respectively.

Image quality assessment with reinforcement learning

In this section, the IQA problem in Eq. (5) is formulated as an RL problem and solved by the algorithm described in Sec. 2.2; pseudo-code is provided in Algorithm 1. A finite dataset together with the task predictor is considered the environment. At time step $t$, the observed state from the environment $s_t = (f(\cdot;w_t), B_t)$ consists of the predictor $f(\cdot;w_t)$ and a mini-batch of samples $B_t = \{(x_i, y_i)\}$. The agent is the controller $h(\cdot;\theta)$, which outputs sampling probabilities $\{h(x_i;\theta)\}$ for training the predictor; the action $a_t = \{a_i\}$, with $a_i\in\{0,1\}$, selects samples according to these probabilities, so the policy $\pi_\theta(a_t\mid s_t)$ is thereby defined as:

$$\pi_\theta(a_t\mid s_t) = \prod_i h(x_i;\theta)^{a_i}\,\big(1 - h(x_i;\theta)\big)^{1-a_i}.$$

The unclipped reward $\tilde{R}_t$ is calculated based on the predictor's performance on validation samples; for the selective formulation, the $s_{\mathrm{rej}}\times 100\%$ of samples with the lowest controller scores $h_j$ are removed before computing the performance. A short sketch of this computation is given below.
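The following is a sketch of the selective reward computation under an assumed reading of the selective formulation (discarding the lowest-scored validation samples); the metric values, rejection fraction, and the clipping baseline constant follow the text, while everything else is illustrative.

```python
import numpy as np

def selective_reward(metrics, scores, s_rej=0.05):
    """Unclipped selective reward: mean validation performance after
    discarding the s_rej fraction of samples with the lowest controller
    scores h(x_j; theta).

    metrics: per-sample task performance on the validation set
    scores:  controller scores for the same samples
    """
    order = np.argsort(scores)              # ascending: lowest scores first
    keep = order[int(s_rej * len(order)):]  # drop the lowest-scored fraction
    return float(np.mean(np.asarray(metrics)[keep]))

def clipped_reward(r_tilde, baseline, alpha_r=0.9):
    """Clipped reward R_t = R~_t - Rbar_t with the moving-average baseline
    Rbar_t = alpha_R * Rbar_{t-1} + (1 - alpha_R) * R~_t."""
    baseline = alpha_r * baseline + (1 - alpha_r) * r_tilde
    return r_tilde - baseline, baseline
```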
It is important to note that, for the first reward definition $\tilde{R}_{\mathrm{avg},t}$, which is computed on a validation set neither weighted nor selected by the controller, the validation set requires pre-selected "high-amenability" data. In this work, additional human labels of task amenability were used to generate such a clean, fixed validation set (details in Sec. 3). During training, the clipped reward $R_t = \tilde{R}_t - \bar{R}_t$ is used, with a moving average $\bar{R}_t = \alpha_R\bar{R}_{t-1} + (1-\alpha_R)\tilde{R}_t$, where $\alpha_R$ is a hyper-parameter set to 0.9.

Algorithm 1: Image quality assessment by task amenability. Data: training dataset $\mathcal{D}_{\mathrm{train}}$ and validation dataset $\mathcal{D}_{\mathrm{val}}$.

Experiment

Transrectal ultrasound images were acquired from 259 patients, at the beginning stages of ultrasound-guided biopsy procedures, as part of the SmartTarget: THERAPY and SmartTarget: BIOPSY clinical trials (clinicaltrials.gov identifiers NCT02290561 and NCT02341677, respectively). For each subject, 50-120 2D frames were acquired with the side-firing transducer of a bi-plane transperineal ultrasound probe (C41L47RP, HI-VISION Preirus, Hitachi Medical Systems Europe), during manual positioning of a digital transperineal stepper (D&K Technologies GmbH, Barum, Germany) or rotation of the stepper with recorded relative angles, for navigating the ultrasound view and scanning the entire gland, respectively. To make manual labelling feasible, the ultrasound images were further sampled at approximately every 4 degrees, resulting in 6712 images in total. Prostate glands were segmented by three trained biomedical engineering researchers in all images in which the prostate gland is visible.

[Fig. 2: Examples of ultrasound images in this study. Top-left (green): task-amenable images that contain the prostate gland (shaded in red). Bottom-left (red): images with poor task amenability, in which it is difficult to recognise prostate glands (for classification) and their boundaries (for segmentation). Top-right (yellow): images that are likely to contain prostate glands (blue arrows) but in which identifying the complete gland boundaries for segmentation is challenging. Bottom-right (blue): images that contain visible noise and artefacts (orange arrows) but may be amenable to both classification and segmentation tasks.]

Two sets of task labels were curated for individual images: classification labels (a binary scalar indicating the presence of the prostate) and segmentation labels (a binary mask of the gland). In this work, a single label for each of the classification and segmentation tasks was obtained by consensus over all three observers, based on majority voting at image level and pixel level, respectively. As discussed in Sec. 1, the task-specific image quality of interest can differ between the classification task and the segmentation task. Therefore, two additional binary labels were assigned to each image to represent the human labels of task amenability, based on the observers' assessment of whether the image quality adversely affects the completion of each task (see examples in Fig. 2). The labelled images were randomly split, at the patient level, into train, validation and holdout sets with 4689, 1023 and 1000 images from 178, 43 and 38 subjects, respectively. The proposed RL framework was evaluated on both tasks. The three reward definitions proposed in Sec. 2.3 were compared together with two non-selective baseline networks for classification and segmentation, trained on all training data.
For comparison purposes, these baselines share the same network architectures and training strategies as the task predictors in the RL algorithms. For the classification task, AlexNet [20,21] was trained with a cross-entropy loss and a reward based on classification accuracy (Acc.), i.e. the classification correction rate. For the segmentation task, U-Net [22] was trained with a pixel-wise cross-entropy loss, with a mean binary Dice score forming the reward. For the purpose of this work, the reported experimental results are based on empirically configured networks and RL hyperparameters that were left unchanged, unless specified, from the default values in the original AlexNet, U-Net and DDPG algorithms. It is perhaps noteworthy that, based on our initial experiments, changing these configurations seems unlikely to alter the conclusions summarised in Sec. 4, although future research may be required to confirm this and to further optimise performance. On the holdout set, a mean Acc. and a mean binary Dice were computed to evaluate the trained task predictor networks in the classification and segmentation tasks, respectively, with different percentages of the holdout set removed according to the trained controller networks. Selection is not applicable to the baseline networks. The standard deviation (St.D.) is also reported as a measure of inter-patient variance. Paired two-sample t-test results at a significance level of α = 0.05 are reported for all comparisons.

Results

To evaluate the trained controllers, the 2×2 contingency tables in Fig. 3 compare subjective task amenability labels with controller predictions. For the purpose of comparison, 5% and 15% of images were removed from the holdout set by the trained controller, for the classification and segmentation tasks, respectively. The results of the selective reward $\tilde{R}_{\mathrm{sel},t}$ with $s_{\mathrm{rej}} = 5\%$ and $s_{\mathrm{rej}} = 15\%$ are used as examples for the two respective tasks. Agreement and disagreement are thereby quantified between images assessed by the proposed IQA and the same images assessed by the subjective human labels of task amenability, denoted as predicted low/high and subjective low/high, respectively. In classifying prostate presence, the rewards based on the fixed, weighted and selective validation sets resulted in 75%, 70% and 43% agreed low-task-amenability samples, with Cohen's kappa values of 0.75, 0.51 and 0.30, respectively. In the segmentation task, the three rewards gave 65%, 58% and 49% agreed low-task-amenability samples, with Cohen's kappa values of 0.63, 0.48 and 0.37, respectively. The task performances on the trained-controller-selected holdout set, in terms of Acc. and Dice, are summarised in Table 1. The average training time was approximately 12 hours on a single Nvidia Quadro P5000 GPU. In both tasks, all three proposed RL-based IQA algorithms provide statistically significant improvements compared with their non-selective baseline counterparts, with all p-values < 0.001. For both tasks, the reward definition based on the selective validation set led to somewhat inferior performance compared with the other two reward definitions, with statistical significance (p-values < 0.001). Interestingly, no statistically significant difference was found between the reward definitions based on the fixed and weighted validation sets, for either the classification (p-value = 0.06) or segmentation (p-value = 0.49) task, despite the disagreement summarised in Fig. 3.
Figs. 4a and 4b plot mean performance against (holdout) rejection ratio for the three reward computation strategies. The peak classification Acc. values are 0.935, 0.932 and 0.913 at rejection ratios of 5%, 10% and 5%, for the fixed, weighted and selective reward formulations, respectively, while the peak segmentation Dice values are 0.891, 0.893 and 0.866 at rejection ratios of 20%, 15% and 20%, respectively.

Discussion and Conclusion

An interesting observation from Fig. 4 is that, in both tasks and for most tested methods, the task performance peaked before decreasing as more samples were discarded. This seems counter-intuitive, as the controller was trained to select task-amenable data. While it remains an open question, we consider the following potential contributing factors: the variance of the predictions; possible over-fitting of the RL algorithms; a potentially non-monotonic relation between the optimal predictions conditioned on different values of $s_{\mathrm{rej}}$; and limitations of the dataset, which may be considered of above-average quality (and therefore of higher amenability, limiting the potential performance improvement). Importantly, the significant improvement over the non-selective baseline networks demonstrates the efficacy of the proposed IQA approach. The proposed weighted and selective reward formulations learned effective IQA without human labels of task amenability, which can be subjective and costly to obtain. Although the selective strategy performed only moderately in this experiment, this may not generalise to different datasets or applications, and the strategy potentially provides a means to specify a desired rejection rate. In summary, this paper has formulated IQA as a measure of task amenability, which can be learned by the proposed RL algorithm with or without human labels. The proposed IQA has been demonstrated and analysed in experiments based on clinical ultrasound images from prostate cancer patients.
Dynamical Phase Transitions in TASEP with Two Types of Particles under Periodically Driven Boundary Conditions

Driven diffusive systems have provided simple models for non-equilibrium systems with non-trivial structures. The steady-state behaviour of these systems under constant boundary conditions has been studied extensively; comparatively little work has been carried out on their responses to time-dependent parameters. We report the modifications to the probability density function of a two-particle exclusion model in response to a periodically changing perturbation to its boundary conditions. The changes in the shape of the distribution as a function of the perturbation frequency contain considerable structure. A dynamical phase transition, in which the system response changes abruptly as a function of perturbation frequency, was observed. We interpret this structure as a consequence of the existence of a typical time-scale associated with the dynamics of density shock profiles within the system.

I. INTRODUCTION

Driven diffusive systems have been of interest to a wide community of researchers since the first models were introduced [1,2]. Motivated largely by their ability to demonstrate the curious phenomena of non-equilibrium systems despite their theoretical simplicity, various such systems have been proposed [3-9]. Asymmetric simple exclusion processes (ASEP), among the simplest of these, with particles interacting exclusively, hopping in both directions or in and out of the system with certain probability rates on a one-dimensional lattice, can demonstrate interesting theoretical phenomena such as phase separation [10], spontaneous symmetry breaking [11], phase coexistence [11], and shock formation [12]. Furthermore, they can be utilised to model various real-life problems such as the transport of intracellular motor proteins [3], traffic jams [7], and surface growth [13]. One-dimensional ASEP systems with open boundaries are coupled to particle baths of constant density at both ends, which can be modelled with constant boundary-crossing rates for particles entering or leaving the system. Although a considerable amount of work exists on the time-independent steady-state properties, only a few studies have examined the effect of applying time-dependent or oscillatory boundary rates to these systems. For instance, Popkov et al.
apply an on-and-off boundary condition to the single-species, semi-infinite ASEP, such that the oscillating exit probability rates can be thought of as red and green traffic lights [14]. A result of theirs which is relevant to our work is the observation that density fluctuations propagate with a typical velocity into the lattice from the boundary. The fluctuation, although weakening as it propagates, preserves its characteristic shape within the bulk. They observe that the density response of the system has a sawtooth-like characteristic, with periodic pileups related to the red-green light periods of the system, independent of the initial conditions. They also showed the same behaviour exists in the hydrodynamic limit. In another work, Basu et al. applied a sinusoidal drive to the boundaries of single-species simple exclusion process (SEP) and ASEP models, in which particles are allowed to move in both directions with symmetric and asymmetric rates, respectively. They performed a Fourier analysis of the response of both systems. They found that the structure functions have a bimodality which, they claim, indicates the modes of transport in diffusive systems [15]. In the present work, we carry out a Monte Carlo study of a two-species totally asymmetric simple exclusion process (TASEP) such that the boundary conditions (BC) oscillate abruptly with relatively small amplitudes around a phase transition point. We show that the system responds in qualitatively different forms, depending on the frequency of the perturbation. In particular, there appears to be a dynamical transition where the response of the system changes abruptly from that of a symmetric state to that of a near-symmetric one, as a function of the frequency of the perturbation. This transition is independent of the size of the perturbation. The phases in the phase diagram of the time-independent model were first reported by Evans et al. [16]. Through a mean-field analysis, they identified four different phases of the order-parameter density for symmetric parameters of the two species. One of the phases surprisingly displays broken symmetry. Between the symmetric and asymmetric phases they report a tiny regime in which particle densities are low but not symmetric. (We will label three of the phases of interest to us as LL [symmetric low density-low density], HL [the broken-symmetry high density-low density], and TR [tiny regime].) We will give a precise definition of the model in the next section. For our general discussion at this point, we demonstrate how the joint density function p(n_1, n_2) behaves near TR as a function of the boundary exit rates β_1 and β_2 for the two types of particles. (Arndt et al. discuss the structure of these phases in detail in [19].) TR was shown to be a finite-size effect by Erickson et al. [17]. Through a Monte Carlo analysis they showed that the size of this phase decays exponentially with respect to lattice size. Detailed analysis of the joint density distributions of the two types of particles for this regime reveals that the density is a superposition of "shock profiles" along the length of the system [14,18,19]. Each profile, which corresponds to a particular number of type I particles in the system, has an error-function-like shape, whose midpoint carries out a random walk across the lattice [18].
The random walk is constrained when the shock approaches a boundary: if it gets too near the particle entry (exit) boundary, the increase (decrease) in the density near the boundary has a compensating effect on the position of the shock, pushing it away from the boundary. The entry and exit rates as a function of the position of the shock may then be interpreted as a "force" on the shocks, with a corresponding "potential", in which the random walk is carried out [19]. Fig. 2 shows these profiles corresponding to several values of occupation of the lattice at the time-independent steady state. The plots show the average density of the first type of particles as a function of position, when the lattice contains a total of n_1 such particles with n_1 > n_2. This last constraint limits the averaging to one leg of the boomerang-shaped probability density. The discussion above points out two different features of the motion of the shock profile. The first corresponds to diffusive, damped motion in an effective potential. The second is the mechanism of application of an effective force through the manipulation of the boundary conditions, which will have a retarded effect dependent on the position in the lattice. We have looked into the possibility of producing interesting effects through the interplay of these two features. We investigate whether it is possible to force the shocks in the system by simply oscillating the boundary conditions. We observe significant frequency dependence, which is unusual for a diffusive system. We have also observed hysteresis in the density function of the system. Such behaviour was observed earlier in similar systems by Rakos et al. [20]. Hysteresis in our model appears abruptly as the perturbation frequency is decreased, associated with a typical velocity in the system. In the following sections we first introduce the model we are studying, and how we apply the oscillatory boundary conditions. We then move on to the discussion of the response of the system to the boundary conditions.

II. TASEP UNDER PERIODICALLY DRIVEN BC

The system which is studied in this paper is the TASEP on a finite, one-dimensional lattice with open boundaries and two species. Particles of type 1 (2) are allowed to enter the system from the left (right) with probability rate α_1 (α_2), move only forward with probability rate γ_1 (γ_2) if the following site is empty, and leave the system from the opposite end with probability rate β_1 (β_2). Different types of particles are allowed to switch places with rate δ when they come face to face. In our simulations all probability rates except the exit rates were taken to be equal to 1. (These unitless quantities define a unitless time scale for the problem.) In comparison to the on-and-off exit rates of Popkov et al. [14], relatively small oscillations of the exit rates were applied to the system. We let the exit rates oscillate around the TR phase boundary point β_o = 0.275 with an amplitude ∆β:

    β_{1,2}(t) = β_o ± s(t) ∆β,    (1)

where s(t) = sgn(sin(2πt/τ)) for time t within a period of oscillation τ. To study the system we use Kinetic Monte Carlo simulation [21], with Poissonian time dynamics. We maintain a list of all possible events e (possible particle jumps within the system and motion through the boundaries) and the rates ω_e associated with them. The total rate for any one of these events happening is then given by Ω = Σ_e ω_e. A random variable ∆t which corresponds to the time increment for the next event is then given by ∆t = −log(r)/Ω, with r a random number uniformly distributed between 0 and 1. If ∆t implies a time increase past the next BC change time t_t given by Eqn. (1), no changes are made to the system and the time is set to t_t. Otherwise we select the particular type of change e that takes place at that time randomly, with probability ω_e/Ω. The procedure described in this paragraph is then repeated, and statistical averages evaluated, weighing the influence of each state with ∆t. To produce such random numbers we use the Mersenne Twister pseudorandom number generator [22,23].
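To make the simulation procedure above concrete, the following minimal Python sketch implements one such kinetic Monte Carlo update for the two-species TASEP with the square-wave exit rates of Eq. (1). All names are ours, and the event list is rebuilt from scratch at every step, so this is a readable illustration of the algorithm rather than an efficient reimplementation of the simulations reported here.

import math, random

def exit_rates(t, tau, beta0=0.275, dbeta=0.05):
    # Square-wave drive of Eq. (1): s(t) = sgn(sin(2*pi*t/tau)).
    s = 1.0 if math.sin(2.0 * math.pi * t / tau) >= 0.0 else -1.0
    return beta0 + s * dbeta, beta0 - s * dbeta   # beta_1(t), beta_2(t)

def kmc_step(lattice, t, tau, alpha=1.0, gamma=1.0, delta=1.0):
    # One kinetic Monte Carlo update; sites hold 0 (empty), 1 or 2.
    N = len(lattice)
    b1, b2 = exit_rates(t, tau)
    events, rates = [], []
    if lattice[0] == 0:  events.append(("in1",));  rates.append(alpha)
    if lattice[-1] == 0: events.append(("in2",));  rates.append(alpha)
    if lattice[-1] == 1: events.append(("out1",)); rates.append(b1)
    if lattice[0] == 2:  events.append(("out2",)); rates.append(b2)
    for i in range(N - 1):
        if lattice[i] == 1 and lattice[i + 1] == 0:
            events.append(("hop1", i)); rates.append(gamma)      # type 1 moves right
        if lattice[i + 1] == 2 and lattice[i] == 0:
            events.append(("hop2", i + 1)); rates.append(gamma)  # type 2 moves left
        if lattice[i] == 1 and lattice[i + 1] == 2:
            events.append(("swap", i)); rates.append(delta)      # face-to-face exchange
    Omega = sum(rates)
    dt = -math.log(1.0 - random.random()) / Omega
    # If dt would step past the next switch of s(t), advance only to that time.
    t_switch = (math.floor(2.0 * t / tau) + 1) * tau / 2.0
    if t + dt > t_switch:
        return t_switch
    x, acc = random.random() * Omega, 0.0   # pick an event with probability rate/Omega
    for ev, r in zip(events, rates):
        acc += r
        if x <= acc:
            kind = ev[0]
            if kind == "in1":    lattice[0] = 1
            elif kind == "in2":  lattice[-1] = 2
            elif kind == "out1": lattice[-1] = 0
            elif kind == "out2": lattice[0] = 0
            elif kind == "hop1": lattice[ev[1]], lattice[ev[1] + 1] = 0, 1
            elif kind == "hop2": lattice[ev[1]], lattice[ev[1] - 1] = 0, 2
            elif kind == "swap": lattice[ev[1]], lattice[ev[1] + 1] = 2, 1
            break
    return t + dt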
In the simulations a Monte Carlo step (MCS) was taken to be N × N time increments. This definition of MCS may be associated with the maximum lattice transit time of the particles in the system. (A particle has to make N jumps to transit the system. For a "typical" distribution of particles, each jump will take O(1) time units, and ∆t is O(1/N).) Note that we also have a continuous time variable t associated with the time increments ∆t, but a large number of MCS is used to ensure good statistics. In each simulation, averages are calculated over 10^5 MCS. Period-dependent averages are calculated by obtaining time-dependent averages within each period and averaging over the periods.

III. VARIATIONS IN THE CHARACTER OF FREQUENCY DEPENDENCE

We change the boundary conditions in a way that breaks the symmetry between the two types of particles. We choose TR as the unperturbed state, which is associated with the presence of shock fronts in the widest region. Note that at very high frequencies one obtains the unperturbed state, while at very low frequencies the system moves from one constant asymmetric BC state to the other. To discuss these varying responses we focus on the joint density distributions for some values of the oscillation period. Depending on the frequency of oscillation, we observe very different types of responses. Note that the boomerang-shaped profile (similar to those in Fig. 1) disappears and re-appears as a function of frequency. At high frequencies of oscillation (low values of τ) the density distribution preserves its boomerang-shaped nature (similar to those in Fig. 1), but the distribution tends to move as a whole in response to the changing boundary condition. We use the terminology "near-symmetric" states in association with such density functions, which, although not preserving perfect symmetry between the two types of particles, maintain a shape which is a perturbation of the symmetric time-independent version. This shape itself varies as the oscillation frequency is changed, resembling the time-independent density distributions for different values of the parameter β in Fig. 1. (All figures display results for system size N = 200, except where N dependence is stated.) However, note that in order to observe a time-independent distribution similar to that for τ = 300 in Fig. 3, one would need to go deeper into the LL phase than the range of parameters used in the oscillating BC (see Fig. 1a). This is an indication of resonance-like behaviour in the system: the drive pushes the density fluctuations much higher than the values one can obtain from static BC in the same parameter range. Although the joint density is confined to a very small range for τ = 300, both smaller and larger values of τ result in densities which are still boomerang-shaped.
To quantify this behaviour we introduce a parameter, which we call the "spread", defined as follows. We calculate the averages below at 100 time values t_i = iτ/100 within each period τ:

    <n_m>(t_i) = Σ_{n_1,n_2} n_m p(n_1, n_2, t_i),    <n_m^2>(t_i) = Σ_{n_1,n_2} n_m^2 p(n_1, n_2, t_i).

The average spread is then

    ∆̄ = (1/100) Σ_i Σ_{m=1,2} [<n_m^2>(t_i) − <n_m>(t_i)^2]^(1/2).

This is then an average of the fluctuation in the number density during a period of oscillation. Fig. 4 is a plot of this parameter as a function of the oscillation period and indicates that the system goes through resonance-like behaviour at various frequencies. Extrema on this plot are identified with letters A-E and correspond to the distributions in Fig. 3. For instance, for τ = 140 (point A in Fig. 4), the density is mainly distributed around the LL region with some tails into symmetry-broken states. When τ = 190 (point B) the joint density closely resembles the equilibrium density. The minimum at C corresponds to the very compact distribution mentioned above. On the other hand, for low frequencies, e.g. when τ = 2900 (point E), the system is in the broken-symmetry state at all times within the period. The appearance of large-scale hysteresis is apparent in this case. We discuss below the abrupt appearance of this hysteresis effect. For even lower frequencies, the system is driven deeper into the symmetry-broken phase at each half cycle, resulting in an even smaller spread, as the inset to Fig. 4 displays. The effect of the amplitude ∆β of the perturbation on the spread parameter is shown in Fig. 5. The structure of the response is preserved, but the magnitude dependence is apparent. A smaller perturbation leads to a smaller variation in spread at higher frequencies. However, the spread diminishes more slowly at longer periods, because it takes a longer time to push the system into the asymmetric phase with a smaller perturbation. Fig. 6 displays the effect of the system size on the response. The existence of a "typical velocity" in the system would lead to an expectation of scaling of all characteristic time constants by N. Fig. 6 indicates that this is indeed the case. However, characteristic times (such as response extrema) are not simply related to one another, indicating that the size of the boundary regions (which should be excluded from N) may be different for the mechanisms which are responsible for the various extrema. We have looked at the hysteresis in the average values <n_1>_t vs <n_2>_t of the joint probability distribution function p(n_1, n_2, t) in some detail. Fig. 7 displays this effect for various values of the oscillation period. We calculate the area of the hysteresis curve; Fig. 8 displays the result. Although some amount of hysteresis (not visible at the scale of Fig. 8) exists at all frequencies, we find that a large-scale hysteresis starts at τ ∼ 5N, independent of ∆β or N. This may be interpreted as the onset of large-scale motion of the probability density associated with the symmetry-broken phase. We then identify the value N/τ ∼ 0.2 as a typical velocity in the system. Hysteresis is not present when changes to the system are faster than that implied by this characteristic velocity. The inset to Fig. 8 shows that there is some structure associated with the break-away point of the hysteresis magnitude. We identify this point as a dynamical phase transition point as a function of frequency. It is interesting to note that the values for which τ/N < 5 correspond to the range in Fig. 6 where ∆̄ displays richer structure.
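For completeness, the spread parameter ∆̄ defined at the beginning of this section can be evaluated directly from sampled joint distributions. The sketch below is our own illustration; it assumes p is a (100, N+1, N+1) array of histogram estimates of p(n_1, n_2, t_i) accumulated at the 100 sampling times, and follows the reconstruction of the formula given above.

import numpy as np

def average_spread(p):
    # p[i, n1, n2] ~ p(n1, n2, t_i); returns the period-averaged spread.
    T, M, _ = p.shape
    n = np.arange(M)
    spread = 0.0
    for i in range(T):
        pi = p[i] / p[i].sum()          # normalize the snapshot
        p1 = pi.sum(axis=1)             # marginal distribution of n_1
        p2 = pi.sum(axis=0)             # marginal distribution of n_2
        for pm in (p1, p2):
            mean = (n * pm).sum()
            mean2 = (n * n * pm).sum()
            spread += np.sqrt(max(mean2 - mean * mean, 0.0))
    return spread / T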
IV. PULSE RESPONSE

To better understand the nature of the frequency dependence of the system, we further study the "pulse response": we have applied a constant perturbation, only to the exit rate of the first type of particles, β_1 = 0.535, for a duration of ∆t = 100 over a period of τ = 10000, with a repetition for 10^7 MCS. When time reaches the end of the period, the system has relaxed to a near time-independent steady state. We have thus obtained the time-dependent shock profiles and average occupation values, which again show surprising oscillatory behaviour. Fig. 9 shows the time evolution of the marginal distribution of the boomerang-shaped probability density,

    P(n_1, t) = A Σ_{n_2} p(n_1, n_2, t),

where A is a normalization constant. Note that the probability is reduced at early times for both smaller and larger values of n_1. Figures 10 and 11 show the shock profiles at various times after the pulse. It can be observed that the profiles for small n_1 show less of a distortion compared to those for larger n_1. The profiles for larger n_1 are distorted due to the exit of particles during the pulse. The change in P(n_1, t) at small times is then due to two different mechanisms: the small-n_1 shocks simply leave the system during the pulse, while the large-n_1 shocks are deformed into smaller-n_1 forms. The recovery of the system from these two effects seems to be qualitatively different. The statistics for large-n_1 shocks recover exponentially, with a dynamics consistent with a diffusive system. The recovery of small-n_1 shock statistics seems to be a combination of multiple effects, resulting in oscillatory damping. We quantify the deviation of P(n_1, t) from its steady-state values as a function of time through a summation over n_1 of the deviations from the steady-state distribution. To separate the two mechanisms discussed above, Fig. 12 shows the contributions to this summation for values of n_1 < N/2 and n_1 > N/2. Note that in both cases the deviation from the steady state increases for a period of time even after the perturbation pulse has ended. However, the smaller-n_1 statistics relax to the steady state with shorter-time-scale oscillations, suggesting that the process may be associated with boundary events rather than the bulk. The oscillatory nature of the relaxation to the steady state is also apparent in Fig. 12. This unusual behaviour forms the basis of the different type of response we report for the sinusoidal drive. One does not expect to find an oscillatory response in a diffusive system. The effect seems to be a superposition of a number of recovery processes with different time scales, dominated by the statistics of states with a smaller number of particles. The time scale of the oscillations is consistent with our report of N/τ ∼ 0.2 for the sinusoidal drive. More work may be necessary to identify the details of the mechanisms involved in this interesting phenomenon.

V. CONCLUSION

We report the response of the TASEP model as a function of the perturbation frequency of the boundary condition. We find that the response is qualitatively different for various ranges of the perturbing frequency. One type of change involves significant modifications in the shape of the joint distribution function, which alternates between compact and extended forms. The variation of this behaviour as a function of frequency contains considerable structure which does not depend on the size of the periodic drive, and scales with the size of the system. This implies that the response is associated with the motion of features through the system, in the form of shock fronts. A second type of change that was observed is the abrupt appearance of hysteresis as the frequency of the perturbation is lowered.
This also indicates a velocity threshold under which the density distribution cycles from one phase (associated with that particular value of the BC) to the other, with significant changes during the cycle. We identify a characteristic velocity value of ∼ 0.2 lattice sites per unit time. Higher frequencies correspond to near-symmetric states where the probability distribution moves more or less rigidly during the cycle, albeit with a probability density profile which changes appreciably as a function of frequency. We have reported the response of the system at a phase transition point, which we thought would be most interesting. Analysis of other special points on the phase diagram could also shed light on the dynamical mechanisms of interest in this system. The authors acknowledge support from the Turkish Academy of Sciences (TUBA).
Political Trust and Public Satisfaction: A Logistic Regression Analysis Based on 1113 Samples

As an indispensable part of government performance evaluation, public satisfaction plays an important role in the legitimacy of the Chinese government. Based on 1113 samples from the "Deliberative Democracy and Election Survey" (2010, 2013 and 2014), this paper studies the relationship between political trust in different levels of government and public satisfaction. Descriptive statistics show that more than half of the public expressed satisfaction with the work of the government, and that the degree of trust rises with the level of government. A logistic regression model shows that political trust and public satisfaction have a significantly positive correlation, and that the strength of this correlation differs across levels of government. In addition, gender and political affiliation are also significantly correlated with public satisfaction.

Introduction

Government performance has always been the basis of the legitimacy of the Chinese government [1]. The public is the most direct object of public policy, and also the most important consumer of public goods and public services; the public therefore has the greatest say in the quality and service level of public policy. Consequently, public satisfaction has become an indispensable part of the government performance evaluation system. On the one hand, the public's subjective feelings provide direct feedback on the quality of the government's public services; on the other hand, they can be regarded as an important means of supervising the government to improve the quality and efficiency of public services [2]. Historical experience shows that during major national transitions, the public's political trust is an important factor affecting the government's ability to implement public policy. The higher the public's satisfaction with the government, the more successful policy implementation is; even when policy mistakes occur, the public will believe the government and patiently wait for the government to rectify them [3]. Therefore, exploring the relationship between political trust and public satisfaction allows the rational basis of Chinese people's political trust to be examined in theory, and provides empirical evidence for the transformation of government in the process of public policy formulation and implementation. This is an important link in the research on political trust.

Satisfaction surveys are an important topic in business and marketing, and gradually extended to government and public policy evaluation after the 1980s. Since Cardozo's study of customer expectation and customer satisfaction [4], many theoretical models have been used to explain the factors influencing customer satisfaction, including micro factors and macro factors. On the micro level, expectation disconfirmation [5], equity [6] and attribution [7] theories are used to explain customer satisfaction. On the macro level, system analysis models have been established using the concepts of customer satisfaction and value [8], quality [9] [10] and so on. However, empirical analysis of public satisfaction, and especially of political trust and public satisfaction, is rare. Research on the relationship between the two therefore has theoretical and practical significance.

The rest of the paper is organized as follows. Section 2 describes the data source and gives a descriptive analysis. In Section 3, we introduce the method we have used. Section 4 presents our empirical findings and Section 5 makes a conclusion.
Data Sources

The data used in this paper come from the "Deliberative Democracy and Election Survey", presided over by Ma Deyong, associate professor at the Zhou Enlai School of Government, Nankai University. In order to understand the evolution of the development of Chinese democracy at the grassroots level, data were collected by questionnaire and interview in townships in Sichuan Province and Zhejiang Province in 2010, 2013 and 2014, with a final total of 1987 questionnaires entered. In this study, a non-probability sampling method was used to select the villages and towns sampled in each region. In each place, considering factors such as the level of local economic development, social security, and the relationship between the government and the people, 2-3 townships were selected from each city (county); in each township, 2-3 villages with relatively concentrated populations were chosen, and questionnaires were then distributed to the villagers. According to the research needs of this paper, we preprocessed the data: samples with responses of "does not know" or "do not want to say" and samples with missing values were deleted, leaving a final sample of 1113.

Descriptive Analysis

The purpose of this study is to examine the relationship between political trust and public satisfaction at different levels of government. We assume that the higher the public's political trust in the government, the higher public satisfaction will be. Taking into account the limitations of the sample survey data, public satisfaction is measured by asking the respondents, "Overall, are you satisfied with the work of the current local township or town government?"; respondents can choose between "satisfied" and "dissatisfied". The answer to this question reflects the respondent's satisfaction with the work of the government, and serves as the dependent variable in the research hypothesis. Public satisfaction is denoted by Y: if the respondent chose "satisfied", then Y = 1; if the respondent chose "dissatisfied", then Y = 0. Because the dependent variable is binary, this study uses a logistic regression model for the analysis. As shown in Table 1, "satisfied" and "dissatisfied" were chosen by 57.4% and 42.6% of respondents respectively; more than half of the respondents are satisfied with the work of the township or town government.
Respondents reported their level of trust in different levels of government, which serves as the measure of political trust used for the independent variables. From high to low, the levels of government are the central government, provincial governments, county governments and local (township) governments. For each level of government, respondents could choose among "very much trust", "trust", "less trust" and "completely distrust". The degree of trust is expressed by X_m (m = 1, 2, 3, 4, indicating the central government, the provincial government, the county government and the local government, respectively). If the respondent chose "very much trust", then X_m = 1; if the respondent chose "trust", then X_m = 2; if the respondent chose "less trust", then X_m = 3; if the respondent chose "completely distrust", then X_m = 4. Table 2 shows the respondents' trust in different levels of government. Overall, respondents trusted the government; across levels, trust decreases as the level of government decreases, with respondents' trust in the central government higher than their trust in local governments. On the one hand, this shows that the central government has a firm basis of legitimacy in the minds of respondents. On the other hand, it also shows that the local government, despite having more contact with respondents, has not earned political trust comparable to the central government's.

Finally, this study also uses demographic variables, such as gender, ethnicity, age, political affiliation and annual family income. These variables are controlled for in the analysis after coding. Among the valid samples, males accounted for 57.1% of respondents and females for 42.9%; the average age of respondents was 42. Han Chinese accounted for 99.2% and ethnic minorities for 0.8%. Members of the Communist Party of China accounted for 15.6%, members of democratic parties for 1.2%, and non-partisans (ordinary people) for 83.2%. For annual family income (unit: yuan), 17.6% of respondents were below 10 thousand, 66.9% were between 10 thousand and 100 thousand, and 15.5% were above 100 thousand. The coding of the control variables is shown in Table 3.

Method

The purpose of this study is to examine whether, and how, respondents' trust in different levels of government, taken as independent variables, affects public satisfaction. The dependent variable has two categories, "satisfied" and "dissatisfied", so linear regression is not applicable (the dependent variable of a linear regression ranges between positive and negative infinity). Econometric models used to interpret discrete dependent variables include the probit model and the logistic model. Because the probit model needs to evaluate a multivariate normal distribution, its application is limited; the logistic model does not require the sample to obey a normal distribution and is therefore more widely applicable than other models.
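As an aside before specifying the model, the variable coding described above can be expressed in a few lines of Python; the file name and column names below (e.g. "satisfaction", "trust_central") are hypothetical, since the paper does not report the raw variable names.

import pandas as pd

# Illustrative recoding of the survey variables described above.
df = pd.read_csv("ddes_survey.csv")   # hypothetical file name

# Drop "does not know" / "do not want to say" responses and missing values.
df = df.replace({"does not know": pd.NA, "do not want to say": pd.NA}).dropna()

# Dependent variable: Y = 1 if satisfied, 0 if dissatisfied.
df["Y"] = (df["satisfaction"] == "satisfied").astype(int)

# Independent variables X_1..X_4: trust coded from 1 ("very much trust")
# to 4 ("completely distrust") for each level of government.
trust_scale = {"very much trust": 1, "trust": 2,
               "less trust": 3, "completely distrust": 4}
for m, level in enumerate(["central", "provincial", "county", "local"], 1):
    df[f"X{m}"] = df[f"trust_{level}"].map(trust_scale)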
The dependent variable is Y: a value of 1 indicates that the respondent is satisfied with the work of the local government, and a value of 0 indicates that the respondent is not satisfied. There are m independent variables affecting Y: X_1, X_2, X_3, X_4 (1 ≤ m ≤ 4). The probability that respondent i is satisfied with the work of the local government is p(Y = 1|X) = p_i; the probability that the respondent is not satisfied is 1 − p_i. Both are nonlinear functions of the independent variable vector X. The ratio of the probability of being satisfied to that of not being satisfied with the work of the local government,

    Odds = p_i / (1 − p_i),    (1)

is called the event occurrence ratio, or odds. The odds must be positive (because 0 < p_i < 1), and there is no upper bound. A logarithmic transformation of the odds gives the linear expression of the logistic regression model:

    ln[p_i / (1 − p_i)] = α + Σ_m β_m X_m.    (2)

In Formula (1) and Formula (2), α is a constant, m is the number of independent variables, and β_m is the coefficient of the independent variable, which reflects the direction and extent of the influence of the independent variables on public satisfaction.

Empirical Result

The primary objective of this study was to examine whether political trust can affect public satisfaction, and how. A Pearson correlation analysis of the control variables and the independent variables gives coefficients whose significance levels (P values) are greater than 0.05, which shows that there is no significant correlation among the independent and control variables. Model 1 regresses the dependent variable on the demographic control variables: gender, ethnicity, age, political affiliation and family income. Models 2, 3, 4 and 5 build on Model 1 by taking trust in, respectively, the central government, the provincial government, the county government and the local government as the independent variable, with public satisfaction as the dependent variable. The results of the regression models are shown in Table 4.

As Table 4 shows, in Model 1 only gender and political affiliation are significantly correlated with the dependent variable. Male respondents report higher satisfaction than female respondents, and non-partisans (ordinary people) report higher satisfaction; the other control variables have no statistically significant correlation. Models 2, 3, 4 and 5 are based on Model 1: controlling for these variables, they regress public satisfaction on political trust at different levels of government. The results of the four models show that there is a significant positive correlation between political trust at each level and public satisfaction: the more respondents trust the government, the more positive their attitude toward the government's work. The difference between the four models lies in the strength of this correlation: B_5 > B_2 > B_4 > B_3. The correlation is strongest for the local government, followed by the central government, then the county government, and weakest for the provincial government.
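To illustrate how such a logistic model is estimated and how the coefficients B translate into odds of satisfaction, a minimal sketch using the statsmodels library is given below. It continues the hypothetical data frame df from the previous sketch and is not the authors' original code.

import numpy as np
import statsmodels.api as sm

# Model 2: demographic controls plus trust in the central government (X1).
# The control columns are assumed to be numerically coded as in Table 3.
controls = ["gender", "ethnicity", "age", "party", "income"]  # hypothetical names
X = sm.add_constant(df[controls + ["X1"]].astype(float))
model = sm.Logit(df["Y"], X).fit()
print(model.summary())

# exp(beta) is the multiplicative change in the odds of satisfaction per
# one-unit change in each regressor, per Formulas (1) and (2).
print(np.exp(model.params))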
Conclusions

Unlike past empirical analyses of the factors influencing satisfaction, this study explores the relationship between political trust and public satisfaction from the perspective of public trust in different levels of government. Through the survey data of 1113 samples, we found that political trust in different levels of government has a significantly positive correlation with public satisfaction: the more respondents trust the government, the more positive their attitude toward the government's work. At the same time, the strength of the correlation between political trust and public satisfaction differs across levels of government: it is strongest for the local government, followed by the central government, then the county government, and weakest for the provincial government. This may be affected by the public's familiarity with each level of government. As the government's basis of legitimacy becomes more stable, the public's evaluation of government performance becomes essential. Since political trust has a significantly positive correlation with public satisfaction, public satisfaction can be improved indirectly by improving the public's political trust. First, the links between the government and the public at all levels should be strengthened, and the level of social services in communities and streets should be raised to help enhance the public's political trust. Second, through sound and complete local laws and regulations, a good social environment should be created, so that the public and the government can conduct social exchanges and cooperation under the rule of law. Finally, the government should take the lead in holding political, cultural and recreational activities in communities and streets, to promote exchanges and communication between the government and the public.

Table 1. Percentage of public satisfaction.
Table 2. Descriptive statistics of political trust at different levels.
Table 3. Description and assignment of control variables.
Table 4. Logistic regression model of political trust and public satisfaction.
Atomic Scale Design and Three-Dimensional Simulation of Ionic Diffusive Nanofluidic Channels

Recent advances in nanotechnology have led to rapid advances in nanofluidics, which has been established as a reliable means for a wide variety of applications, including molecular separation, detection, crystallization and biosynthesis. Although atomic- and molecular-level consideration is a key ingredient in the experimental design and fabrication of nanofluidic systems, atomic and molecular modeling of nanofluidics is rare, and most simulations at the nanoscale are restricted to one or two dimensions in the literature, to the best of our knowledge. The present work introduces atomic scale design and three-dimensional (3D) simulation of ionic diffusive nanofluidic systems. We propose a variational multiscale framework to represent the nanochannel in discrete atomic and/or molecular detail while describing the ionic solution by continuum. Apart from the major electrostatic and entropic effects, the non-electrostatic interactions between the channel and the solution, and among solvent molecules, are accounted for in our modeling. We derive generalized Poisson-Nernst-Planck (PNP) equations for nanofluidic systems. Mathematical algorithms, such as Dirichlet-to-Neumann mapping and the matched interface and boundary (MIB) method, are developed to rigorously solve the aforementioned equations to second-order accuracy in 3D realistic settings. Three ionic diffusive nanofluidic systems, including a negatively charged nanochannel, a bipolar nanochannel and a double-well nanochannel, are designed to investigate the impact of atomic charges on channel current, density distribution and electrostatic potential. Numerical findings, such as gating, ion depletion and inversion, are in good agreement with those from experimental measurements and numerical simulations in the literature.

Introduction

Nanofluidics refers to the study of the transport of ions and/or molecules in confined solutions, as well as fluid flow through or past structures with one or more characteristic nanometer dimensions [26,75]. The dramatic advances in microfluidics in the 1990s and the introduction of nanoscience, nanotechnology and atomic fabrication in recent years have given nanofluidics its own name [32]. Nanofluidic systems have been extensively exploited for molecule separation and detection, nanosensing, elucidation of complex fluid behavior, and the discovery of new physical phenomena that are not observed or are less influential in macrofluidic or microfluidic systems [82]. Some of these phenomena include double-layer overlap, ion permittivity, diffusion, ion-current rectification, surface charge effects and entropic forces [75,107]. One major feature of a nanofluidic system is its structural characteristic. Nanofluidic structures can be classified into nanopores and nanochannels and, in fact, these two terms are exchangeable in many cases [107]. A nanopore has a comparatively short length, formed perpendicularly through various materials, such as a biological pore consisting of proteins, e.g., α-hemolysin, or a solid-state pore [9,107]. An example of a solid-state pore is a set of nanopores in a silicon nitride membrane, which enables the detection of the folding behaviors of a single double-stranded DNA [62]. On the other hand, a nanochannel has relatively larger dimensions of depth and width, is usually fabricated in a planar format, and is often equipped with other sophisticated devices to control or influence the transport inside the channel [107].
For instance, Perry et al. demonstrate the rectifying effect of a funnel-shaped nanochannel based on the different movements of counterions at its tip and base [71]. A nano-scaled channel usually has either a cylindrical or a conical geometry [107]. In a cylindrical channel, the flow direction does not influence the current, but surface charges and an applied external voltage alter the flux of ions with opposite-sign charges. However, the difference in the size of the pores in a conical channel brings different ionic conductance patterns depending on the flow direction. The other major feature of a nanofluidic system is its interactions. It is the interaction at the nanoscale that distinguishes a nanofluidic system from an ordinary fluid system. Certainly, most interactions are directly inherited from the chemical and physical properties of the nanostructure, such as the geometric confinement, steric effect, polarization and charge. Some other interactions are controlled by flow conditions, i.e., ion composition and concentration, and applied external fields. Therefore, the interactions of a nanofluidic system are determined by its structure and flow conditions. The function of a nanofluidic system is in turn determined by all the interactions. Usually, most nanofluidic systems do not involve any chemical reactions. In this case, steric effects, van der Waals interactions and electrostatic interactions are pivotal factors. Therefore, in nanofluidic systems, microscopic interactions dominate the flow behavior, while in macroscopic flows and some microfluidics, continuum fluid mechanics governs and microscopic effects are often negligible. Typically, microscopic and macroscopic behaviors co-exist in a microfluidic system. Characteristic length scales, such as the Reynolds number, Biot number and Nusselt number, are important to macroscopic fluid flows. For most nanofluidic systems, one of the most important characteristic length scales is the Debye length

    λ_D = [ε ε_0 k_B T / (Σ_α C_{α0} q_α^2)]^(1/2),

where ε is the dielectric constant of the solvent, ε_0 is the permittivity of vacuum, k_B is the Boltzmann constant, T is the absolute temperature, and C_{α0} and q_α are, respectively, the bulk ion concentration and the charge of ion species α [26]. The Debye length describes the thickness (or, precisely, the distance over which a 1/e reduction occurs) of the electrical double layer (EDL). Essentially, an ionic fluid behaves like a microscopic flow within the EDL region, while acting as a macroscopic flow far beyond the Debye length. By the Gouy-Chapman-Stern model, the EDL is divided into three parts: the inner Helmholtz plane, the outer Helmholtz plane and the diffuse layer [75]. While the inner Helmholtz plane consists of non-hydrated coions and counterions that are attached to the channel surface, the outer Helmholtz plane contains hydrated or partially hydrated counterions. Moreover, the part between the inner and outer Helmholtz planes is called the Stern layer. Note that the EDL applies not only to the layer near the nanochannel, but also to the layer around a charged biomolecule in the flow. Consequently, many microfluidic devices with quite large channel dimensions exhibit microscopic flow characteristics when the fluid consists of large macromolecules and solvent. The possible deformation, aggregation, folding and unfolding of the macromolecules in the fluidic system make the fluid flow behavior complex and intriguing [40].
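To make this length scale concrete, the Debye length formula above can be evaluated numerically. The short Python sketch below is our own illustration for a symmetric 1:1 electrolyte such as KCl at room temperature.

import math

def debye_length(conc_molar, T=298.15, eps_r=78.5):
    # Debye length (in meters) for a symmetric 1:1 electrolyte.
    e = 1.602176634e-19        # elementary charge, C
    kB = 1.380649e-23          # Boltzmann constant, J/K
    eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
    NA = 6.02214076e23         # Avogadro's number, 1/mol
    # Sum over both ion species: sum_alpha C_{alpha0} q_alpha^2 = 2 * C0 * e^2.
    c0 = conc_molar * 1.0e3 * NA           # ions per m^3 per species
    return math.sqrt(eps_r * eps0 * kB * T / (2.0 * c0 * e * e))

print(debye_length(0.1) * 1e9)    # ~1 nm at 100 mM
print(debye_length(0.001) * 1e9)  # ~10 nm at 1 mM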
Nevertheless, for rigid macromolecules, the effective channel dimensions can be estimated by subtracting the macromolecular dimension from the physical dimension of the channel. The resulting system may be approximated by simple ions for most analyses. Specifically, the charge on the wall surface drives electrostatic interactions and electrokinetic effects when ions in a solution are sufficiently close to the channel wall [83,103]. Since the surface-to-volume ratio is exceptionally high in a nanoscale channel, surface charges induce a unique electrostatic screening region, i.e., the EDL [75]. In fact, it attracts oppositely charged ions (counterions) and repels ions having the same charge (coions) to sustain the electroneutrality of an aqueous solution confined in a channel. Physically, the EDL region only contains bound or mobile counterions and typically covers the nano-sized pore of a channel. Therefore, the oppositely charged ions mainly constitute the electrical current through a micro- or nano-channel [75,88]. The rectification of ionic current, which is one of the distinct transport properties of nanofluidic channels, can further elucidate the flow pattern and formation of the fluid through a nanochannel [56]. This phenomenon usually occurs when the surface charge distribution, applied electric field, bulk concentration and/or channel geometry are properly manipulated along the channel axis [21]. Pu et al. conducted experiments on ion-enrichment and ion-depletion effects in nanochannels, showing that rectification begins with these two effects [73]. In their design, an applied field gave rise to an accumulation of all ions at the cathode end and an absence of all ions at the anode end of the channels. Ion selectivity is another important feature which enables nano-sized channels to work as ionic filters [88]. It is defined as the ratio of the difference between the currents of cations and anions to the total current delivered by both ions. Vlassiouk and his colleagues examined the ion selectivity of single nanometer channels under various conditions, including channel dimension, buffer concentration and applied voltage [88]. Nanofluidics has been extensively studied in chemistry, physics, biology, material science, and many areas of engineering [75]. The primary purpose of most studies is to separate and/or detect biological substances in a complex solution [69]. A variety of nanofluidic devices have been produced using extraordinary transport behaviors caused by steric restriction, polarization and electrokinetic principles [1,57]. For instance, a nanofluidic diode is an outstanding tool to take advantage of the rectifying effect of ionic current through a nanochannel [56]. Nanofluidic diodes have been developed to govern the flow inside the channel by breaking the symmetry in channel geometry, surface charge arrangement and bulk concentration under the influence of an applied voltage [1,21]. Additionally, the design and fabrication of nanofluidics for molecular biology applications is a new interdisciplinary field that makes use of precise control and manipulation of fluids at submicrometer and nanometer scales to study the behavior of molecular and biological systems. Because of the microscopic interactions, fluids confined at the nanometer scale can exhibit physical behaviors which are not observed, or are insignificant, at larger scales.
When the characteristic length scale of the fluid coincides with the length scale of the biomolecule and the scale of the Debye length, nanofluidic devices can be employed for a variety of interesting basic measurements such as molecular diffusion coefficients [54], enzyme reaction rates [31,45], pH values [65,97], and chemical binding affinities [54]. Micro- and nanofluidic techniques have been instrumental for polymerase chain reaction (PCR) amplifications [8], macromolecule accumulators [22,98], electrokinetics [4,50], biomaterial separation [41,58], membrane protein crystallization [63], and micro-scale gas chromatography [92]. Nanofluidic dynamic arrays have also been devised for high-throughput single nucleotide polymorphism genotyping [90]. Nanofluidic devices have also been engineered for electronic circuits [101], local charge inversion [46], and photonic crystal circuits [35]. Microchannels and micropores have been utilized for cell manipulation, cell separation, and cell patterning [52,72]. Efforts have been made to accomplish all steps, including separation, detection and characterization, on a single microchip [75]. Despite the rapid development of nanotechnology, the design and fabrication of nanofluidic systems are essentially empirical at present [81]. Since nanofluidic device prototyping and fabrication are technically challenging and financially expensive, it is desirable to further advance the field by mathematical/theoretical modeling and simulation. The modeling and simulation of nanofluidic systems are of enormous importance and have been a growing field of research in the past decade. When the width of a channel is less than 5 nm, the transport analysis requires accounting for the discreteness of substances; in particular, molecular dynamics (MD) is a useful tool in this respect [26]. Typically, MD determines the motion of each atom in a system using Newton's classical equations of motion [68]. A simplified model is Brownian dynamics (BD), in which the solvent water molecules are treated implicitly; this method therefore costs less computationally than MD and is able to reach the time scale of physical transport [64,68,74]. BD describes the motion of each ion under frictional, stochastic and systematic forces by means of the Langevin equation [64,68]. Further reduction in the computational cost leads to the Poisson-Nernst-Planck (PNP) theory, which is the most renowned model for charge transport [6,12,24,28,33,34,47,59,104,105]. The PNP model describes the solvent water molecules as a dielectric continuum, treats ion species by continuum density distributions and, in principle, retains the discrete atomic detail and/or charge distribution of the channel or pore [6,33,59,104,105]. The performance of the PB model and the PNP model for the streaming current in silica nanofluidic channels has been compared [13]. The Brownian dynamics of ions in the nanopore channel has been combined with the continuum PNP model for regions away from the nanopore channel [2]. The reader is referred to the literature [26,33,68,74,104,105] for a comprehensive discussion of the PNP theory. A further simplified model is the Lippmann-Young equation, which is able to predict the liquid-solid interface contact angle and interface morphology under an external electric field [81]. Most microfluidic systems involve fluid flow.
If the fluid flow through a microfluidic pore or channel is also a concern in the theoretical modeling, coupled PNP and Navier-Stokes (NS) equations can be utilized [16,17,23,25,53,88,89,91,94,96,106,108]. These models are able to provide a more detailed description of the fluid flow away from the microscale pore or channel, i.e., beyond the Debye screening length. Recently, a variety of differential geometry based multiscale models were introduced for charge transport [94][95][96]. The differential geometry theory of surfaces provides a natural means to separate the microscopic domain of biomolecules from the macroscopic domain of the solvent, so that appropriate physical laws are applied to the appropriate domains. Our variational formulation is able to efficiently bridge macro-micro scales and synergistically couple macro-micro domains [94]. One class of our multiscale models is the combination of the Laplace-Beltrami equation and Poisson-Kohn-Sham equations for proton transport [14,15]. Another class of our multiscale models utilizes the Laplace-Beltrami equation and generalized PNP equations for the dynamics and transport of ion channels and transmembrane transporters [94,96]. The other class of our multiscale models alternates the MD and continuum elasticity (CE) descriptions of the solute molecule, as well as a continuum fluid mechanics formulation of the solvent [94][95][96][100]. We have proposed the theory of continuum elasticity with atomic rigidity (CEWAR) [100] to treat the shear modulus as a continuous function of atomic rigidity, so that the dynamical complexity of a macromolecular system is separated from its static complexity. As a consequence, the time-consuming dynamics is approximated by using the continuum elasticity theory, while the less time-consuming static analysis is carried out with an atomic description. Efficient geometric modeling strategies associated with differential geometry based multiscale models have been developed in both Lagrangian-Eulerian [36,37] and Eulerian [99] representations. Nevertheless, in nanofluidic modeling, computation and analysis, there are many outstanding theoretical and technical problems. For example, nanofluidic processes may induce structural modifications and even chemical reactions [55,86], which are not described in present nanofluidic simulations. Additionally, although the PNP model can incorporate atomic charge details in its pore or channel description, which is vital to channel gating and fluid behavior, atomic charge details beyond the coarse description of surface charges are usually neglected in most nanofluidic simulations. Moreover, as discussed earlier, the Stern layer and ion steric effects are significant for the EDL, and are not appropriately described in the conventional PNP model. Furthermore, nanofluidic simulations have hardly been performed in 3D realistic settings with physical parameters. Consequently, results can only be used for qualitative (i.e., phenomenological) comparison and not for quantitative prediction. Finally, the material-interface-induced jump conditions in the Poisson equation are seldom enforced in nanofluidic simulations with realistic geometries. Therefore, it is imperative to address these issues in current nanofluidic modeling and simulation. The objective of the present work is to model and analyze realistic nanofluidic channels with atomic charge details and to introduce second-order convergent numerical methods for nanofluidic problems.
We present a new variational derivation of the governing PNP type of models without utilizing the differential geometry formalism of solvent-solute interfaces. As such, a domain characteristic function is introduced to represent the given solid-fluid interface. Additionally, we investigate the impact of the atomic charge distribution on the fluid behavior of a few 3D nanoscale channels. We demonstrate that atomic charges give rise to specific and efficient control of nanochannel flows. Moreover, we develop a second-order convergent numerical method for solving the PNP equations with complex nanochannel geometry and singular charges. Furthermore, the change in the atomic charge distribution is orchestrated with the variation of the applied external voltage and bulk ion concentration to understand nanofluidic currents. Therefore, we are able to elucidate quantitatively the transport phenomena of three types of nano-scaled channels, including a negatively charged channel, a bipolar channel and a double-well channel. These flow phenomena are analyzed in terms of electrostatic potential profiles, ion concentration distributions and current-voltage characteristics. To ensure computational accuracy and efficiency for nanofluidic systems, we construct a second-order convergent method to solve the Poisson-Nernst-Planck equations with dielectric interfaces and singular charge sources in 3D realistic settings. The rest of this paper is organized as follows. Section 2 is devoted to a new variational derivation of PNP type of models using a domain characteristic function for nanofluidic simulations. In Section 3, we develop a Dirichlet-to-Neumann mapping for dealing with charge singularities and the matched interface and boundary (MIB) method for material interfaces. These methods are employed to compute the PNP equations with 3D irregular channel geometries and singular charges. Section 4 is devoted to validating the present PNP calculation with synthetic nanoscale channels. We first test a cylindrical nanochannel with one charged atom at the middle of the channel and then examine the channel with eight atomic charges that are placed around the channel. Since the PNP equations admit no analytical solution in general, we design analytical solutions for a modified PNP system which has the same mathematical characteristics as the PNP system. In Section 5, we investigate the atomic scale control and regulation of cylindrical nanofluidic systems. Three nanofluidic channels, a negatively charged channel, a bipolar channel and a double-well channel, are studied in terms of electrostatic potential profiles, ion concentration distributions and currents. Finally, this paper ends with concluding remarks.

Theoretical Models

Unlike the charge and material transport in biomolecular systems, the charge and material transport in nanofluidic systems induces a negligible reconstruction of the solid-fluid interface compared to the system scale. Therefore, instead of using our earlier differential geometry based multiscale models [94][95][96], which allow the modification of the solvent-solute interface, we adopt a fixed solid-fluid interface in the present work. To this end, we introduce a domain characteristic function in our variational formulation. Let us consider a total computational domain Ω ⊂ R^3. We denote by Ω_m and Ω_s, respectively, the microscopic channel domain and the solution domain. The interface Γ separates Ω_m and Ω_s so that Ω_m ∪ Γ ∪ Ω_s = Ω.
We introduce a characteristic function χ(r): R^3 → R such that Ω_m = χΩ and Ω_s = (1 − χ)Ω. Obviously, χ and (1 − χ) are the indicators of the channel domain and the solution domain, respectively. Unlike the hypersurface function in our earlier differential geometry based multiscale models, the interface is predetermined in the present model. In the solution domain Ω_s, we seek a continuum description of the solvent and ions. In the channel or pore domain Ω_m, we consider a discrete atomistic description. A basic setting of our model can be found in Fig. 1.

Figure 1: Illustration of the computational geometry. (a) A 3D view of a schematic cylindrical nanochannel whose ends are connected to two reservoirs of KCl solution; (b) a 2D cross-section view, in the xz-plane, of the cylindrical channel, whose diameter is 10 Å and length is 49 Å. Here, Φ_L and Φ_R, respectively, represent the applied potential at the left end and the right end, and C_0 represents the bulk ion concentration of both K+ and Cl−.

Generalized Poisson-Nernst-Planck theory

Although the PNP theory is quite standard [6,16,17,23,25,53,85,88,89,91,94,106,108], it does not include non-electrostatic interactions. Here we present a generalized PNP theory by incorporating non-electrostatic interactions between the solution and the nanoscale channel pore, and between solvent molecules, i.e., waters and ions. We utilize a variational formulation to derive the generalized PNP equations.

2.1.1. Energy functional

Electrostatic energy functional. Electrostatic interactions are ubiquitous at the nanoscale and are the dominant effects in nanofluidic behaviors. The electrostatic interactions are typically modeled by a number of theoretical approaches, such as the Poisson-Boltzmann (PB) theory [29,39,60,77], the polarizable continuum theory [67,84] and the generalized Born approximation [3,30]. Among these methods, the PB theory is the most popular and has a sound origin, i.e., the Maxwell equations [7,48,70]. A variational formulation of the Poisson-Boltzmann theory was originally introduced by Sharp and Honig [76] in 1990 and was extended to an electrostatic force derivation [43] and a multiscale formalism [94,95]. In the present work, we consider the following electrostatic energy functional:

    G_elec[Φ] = ∫_Ω { χ [ −(ε_m/2)|∇Φ|^2 + Φ ρ_m ] + (1 − χ) [ −(ε_s/2)|∇Φ|^2 + Φ Σ_α C_α q_α ] } dr,    (1)

where Φ is the electrostatic potential, ε_s and ε_m are the dielectric constants of the solvent and solute, respectively, and ρ_m represents the fixed charge density of the solute. Specifically, one has ρ_m = Σ_{k=1}^{N_f} Q_k δ(r − r_k), with Q_k denoting the partial charge of the kth atom in the solute and N_f the total number of fixed charges. Here C_α and q_α, respectively, denote the concentration and the charge valence of the αth solvent species, the latter being zero for an uncharged solvent component. Moreover, N_c represents the number of mobile ion species in the solution domain. Note that the domain characteristic function χ in Eq. (1) is different from the hypersurface function S used in our earlier work [94][95][96].

Non-electrostatic interactions. Non-electrostatic interactions refer to van der Waals interactions, dispersion interactions, ion-water dipolar interactions, ion-water cluster formation or dissociation, steric effects, et cetera. Some of these interactions have been studied in terms of size effects in the past [4,5,10,15,44,51,57,61,87]. Size effects in solvation analysis were accounted for with the WCA potential [18][19][20].
Pair-particle interactions in the Boltzmann kinetic theory and their impact on transport equations were formulated by Snider et al. in 1996 [79,80]. To account for solution-channel interactions, as well as ion-water interactions, the non-electrostatic interaction energy functional takes the form

G_non-elect = ∫_Ω (1 − χ) Σ_α C_α U_α dr,

where U collects the solvent-channel and ion-ion non-electrostatic interactions. Let us assume that the aqueous environment has multiple species, labelled by α; their interactions with the solute atoms near the interface contribute

∫_Ω (1 − χ) Σ_α C_α(r) Σ_{k=1}^{N_f} U_{αk}(r) dr,

where C_α(r) is the density of the αth solvent component, which may be either charged or uncharged, and U_{αk} is an interaction potential between the kth atom of the channel molecule and the αth component of the solvent. For water that is free of salt, C_α(r) is the density of the water molecules. Here U_{αβ}(r) is a potential for solvent-solvent non-electrostatic interactions, including possible ion-water interactions, so that the total interaction potential of species α is U_α(r) = Σ_{k=1}^{N_f} U_{αk}(r) + Σ_β U_{αβ}(r). The solvent-solute interactions in solvation analysis have been represented by the Lennard-Jones potential [18][19][20]. The Weeks-Chandler-Andersen (WCA) decomposition of the Lennard-Jones potential [93] was utilized to split the Lennard-Jones potential into attractive and repulsive parts; the attractive branch reads

U_{αk}^{att}(r) = −ε_{αk} for |r − r_k| < σ_k + σ_α,  U_{αk}^{att}(r) = U_{αk}^{LJ}(r) for |r − r_k| ≥ σ_k + σ_α,

where ε_{αk} is the well-depth parameter, σ_k and σ_α are respectively the radii of the kth solute atom and the αth solvent component, r denotes a point in physical space and r_k represents the location of the kth atom in the channel. The solvent-solvent interaction term U_{αβ}(r) in the total interaction potential U_α(r) does not affect the derivation or the form of the other expressions. A more detailed description of U_{αβ}(r) for ion channel transport can be found in our earlier work [15,96].

Chemical potential related free energy. Chemical potential related free energy is essential for the description of mobile charges in the nanofluidic system,

G_chem = ∫_Ω (1 − χ) Σ_α [ (µ⁰_α − µ_{α0}) C_α + k_B T C_α ln(C_α/C_{α0}) − k_B T (C_α − C_{α0}) ] dr,

where µ⁰_α is a reference chemical potential of the αth species, at which the associated ion concentration is C_{α0} given Φ = U_α = µ_{α0} = 0, and µ_{α0} is a relative reference chemical potential which accounts for the difference between the equilibrium concentrations of different solvent species. Here k_B is the Boltzmann constant and T is the temperature. The term k_B T C_α ln(C_α/C_{α0}) is the entropy of mixing, and −k_B T (C_α − C_{α0}) is a relative osmotic term [66]. It is standard to determine the chemical potential of species α by the variation with respect to C_α [96]:

µ_α^chem = δG_chem/δC_α = µ⁰_α − µ_{α0} + k_B T ln(C_α/C_{α0}).

Note that at equilibrium, µ_α^chem ≠ 0 and C_α ≠ C_{α0} in general, because of possible external electrical potentials, solvent-solute interactions, and charged species. Even if the external electrical potential is absent and the system is at equilibrium, the charged solute may induce a concentration response of the ionic species in the solvent so that C_α ≠ C_{α0}.

Total energy functional. The total free energy functional for the nanofluidic system consists of the electrostatic interactions, the non-electrostatic interactions and the chemical potential related energy term,

G_total[Φ, {C_α}] = ∫_Ω { χ [ −(ε_m/2)|∇Φ|² + Φ ρ_m ] + (1 − χ) [ −(ε_s/2)|∇Φ|² + Φ Σ_α q_α C_α ]
  + (1 − χ) Σ_α C_α U_α
  + (1 − χ) Σ_α [ (µ⁰_α − µ_{α0}) C_α + k_B T C_α ln(C_α/C_{α0}) − k_B T (C_α − C_{α0}) + λ_α C_α ] } dr,   (9)

where the first row is the electrostatic free energy functional, the second row is the free energy functional of the non-electrostatic interactions, and the third row is the chemical potential related energy functional. Note that the electrostatic free energy functional is the same as the polar solvation free energy functional [18,19,94]. Here λ_α is a Lagrange multiplier, which is included to ensure appropriate physical properties at equilibrium [38].
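A small sketch of the WCA split just described (standard physics, not code from the paper): writing the Lennard-Jones potential in its well-depth/minimum-location form, with the minimum placed at σ_k + σ_α as in the text, the repulsive and attractive branches reassemble the full potential.

```python
import numpy as np

def lj(d, eps, r_min):
    # LJ potential with minimum -eps located at d = r_min
    s = (r_min / d) ** 6
    return eps * (s * s - 2.0 * s)

def wca_repulsive(d, eps, r_min):
    return np.where(d < r_min, lj(d, eps, r_min) + eps, 0.0)

def wca_attractive(d, eps, r_min):
    return np.where(d < r_min, -eps, lj(d, eps, r_min))

d = np.linspace(2.0, 10.0, 5)
# prints zeros: the two WCA branches sum back to the full LJ potential
print(wca_repulsive(d, 0.5, 3.5) + wca_attractive(d, 0.5, 3.5) - lj(d, 0.5, 3.5))
```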
Although it appears that the characteristic function χ is quite similar to the hypersurface function in our earlier work [94][95][96], χ is just an indicator for a given molecular domain Ω_m with a fixed interface Γ. In contrast, the hypersurface function in our earlier work not only serves as a characteristic function for the molecular domain, but also plays the role of a moving interface whose dynamics is driven by mechanical and electrostatic forces, i.e., by the Laplace-Beltrami equation. However, the fixed interface Γ in the present work is a reasonable approximation for nanochannels. An important feature of the present total free energy functional (9) is that the free energy functional U of the non-electrostatic interactions is employed for nanofluidics. Therefore, the present theory is able to deal with a variety of non-electrostatic interactions. Consequently, the solvent microstructure near the channel can be predicted correctly.

Governing equations

The total free energy functional (9) is a functional of the electrostatic potential Φ and the ion concentrations C_α. Unlike in our earlier formulations [94][95][96], the solvent-channel interface Γ is given in the present work. As in our earlier work, the governing equations for the system are derived by using the variational principle.

The Poisson equation. The variation of the total free energy functional with respect to the electrostatic potential Φ results in the classical Poisson equation,

−∇ · ( ε(χ) ∇Φ ) = χ ρ_m + (1 − χ) Σ_α q_α C_α,   (10)

where ε(χ) = (1 − χ) ε_s + χ ε_m is the dielectric profile, which equals ε_m in the molecular domain and ε_s in the solvent domain. Due to the characteristic function χ, the Poisson equation (10) can be split into two equations,

−∇ · ( ε_m ∇Φ ) = ρ_m, r ∈ Ω_m,
−∇ · ( ε_s ∇Φ ) = Σ_α q_α C_α, r ∈ Ω_s.

However, the electrostatic potential Φ(r) is defined on the whole computational domain (for all r ∈ Ω). Therefore, at the solution-channel interface Γ, the following interface jump conditions are to be implemented to ensure the well-posedness of the generalized Poisson equation [42,49,102]:

[Φ(r)] = 0, r ∈ Γ,   (13)
[ε(χ) ∇Φ(r) · n] = 0, r ∈ Γ,   (14)

where [·] denotes the difference of the quantity "·" across the interface Γ and n is the unit normal of Γ. In Eq. (10), the densities of ions C_α are to be determined by the variational principle as follows. The boundary conditions of Eq. (10) depend on the experimental settings. Typically, mixed boundary conditions are used.

Generalized Nernst-Planck equation. It is also standard to determine the relative generalized potential µ_α^gen by the variation with respect to the ion density C_α. At equilibrium, we require µ_α^gen, rather than µ_α^chem, to vanish, which determines the Lagrange multiplier λ_α. Therefore, the relative generalized potential µ_α^gen is simplified as

µ_α^gen = k_B T ln(C_α/C_{α0}) + q_α Φ + U_α − µ_{α0}.

We derived a similar quantity from a slightly different perspective in our earlier work [105]. Note that the relative generalized potential consists of contributions from the entropy of mixing, the electrostatic potential, the solvent-solute interaction and the relative reference chemical potential. In practice, the nanofluidic system is out of equilibrium due to an applied external field and/or an inhomogeneous concentration across the nanochannel. Therefore, µ_α^gen does not vanish. In general, a major mechanism for establishing equilibrium is diffusion driven by gradients of density, velocity, temperature and electrostatic potential [79,80].
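Before continuing with the non-equilibrium kinetics, note that setting the simplified generalized potential above to zero yields a generalized Boltzmann profile for the equilibrium concentration. The snippet below evaluates it (a sketch in nondimensional units with k_B T = 1; all parameter values are arbitrary).

```python
import numpy as np

def equilibrium_concentration(C0, q, phi, U, mu0_rel, kBT=1.0):
    """Solve mu_alpha^gen = 0 for C_alpha:
    C_alpha = C_alpha0 * exp(-(q*phi + U - mu_alpha0) / kBT)."""
    return C0 * np.exp(-(q * phi + U - mu0_rel) / kBT)

phi = np.linspace(-2.0, 2.0, 5)                 # sample potential values
print(equilibrium_concentration(0.01, +1, phi, 0.0, 0.0))
# cations are depleted where phi > 0 and accumulate where phi < 0
```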
By Fick's first law, the diffusion flux of ions can be given as

J_α = −(D_α C_α / k_B T) ∇µ_α^gen = −D_α [ ∇C_α + (C_α / k_B T) ∇( q_α Φ + U_α ) ],

where D_α is the diffusion coefficient of species α. We therefore have the diffusion equation for the mass conservation of species α, in the absence of a stream velocity,

∂C_α/∂t = −∇ · J_α.

In explicit form, the generalized Nernst-Planck equation is

∂C_α/∂t = ∇ · { D_α [ ∇C_α + (C_α / k_B T) ∇( q_α Φ + U_α ) ] },   (19)

where q_α Φ + U_α can be regarded as the potential of the mean field. In the absence of non-electrostatic interactions, Eq. (19) reduces to the standard Nernst-Planck equation. At the steady state, one has

∇ · { D_α [ ∇C_α + (C_α / k_B T) ∇( q_α Φ + U_α ) ] } = 0.

Note that Eq. (19) does not involve the characteristic function χ because it has already been restricted to the solution domain Ω_s. In contrast, the generalized Poisson equation (10) is defined on the whole domain Ω. The natural boundary condition is assumed in our derivation. However, due to the experimental setup, mixed boundary conditions are typically employed in our simulations.

Mathematical Algorithms

The geometric setting of the ionic diffusive nanofluidic system employed in the present investigation is described first. In this work, we develop a second-order PNP solver for 3D ionic diffusive nanofluidic channels with irregular geometry and material interfaces. To this end, we appropriately modify the computational algorithms developed in our earlier work [104] for nanofluidic systems. To emphasize the primary effects of atomic charges in ion diffusive nanofluidic channel design and the use of interface techniques in 3D nanofluidic systems, we neglect the non-electrostatic interactions, i.e., we assume U = 0, in the rest of this work. However, since non-electrostatic interactions are important for nanofluidic systems [4,5], the situation where U ≠ 0 will be investigated in our future work. To avoid confusion, we note that our generalized PNP model works for all kinds of ion species; however, the model is designed based on realistic transmembrane channels. Since potassium and chloride are among the most important ion species in cellular charge transport, we use the potassium chloride (KCl) system as an example in our model.

A schematic diagram of ionic diffusive nanofluidic channels

In the present work, we construct a cuboid nanofluidic system whose dimensions are 16 × 16 × 56 ų. It contains a cylindrical nanochannel placed at the center of the system, as illustrated in Fig. 1(a). The radius of the channel pore is 5 Å and the length of the channel is 49 Å, as depicted in Fig. 1(b). Each of the channel ends is connected to a reservoir of potassium chloride (KCl) solution. In our simulation, the computational domain Ω is the nanofluidic system, and it is mainly divided into an ion inclusion region Ω_s and an ion exclusion region Ω_m. The ion inclusion region consists of the region inside the channel and the two reservoirs, where ions may penetrate and travel through. The ion exclusion region is the rest of the domain, where there are no mobile ions but there are fixed charged particles. In contrast to our differential geometry based multiscale models [94][95][96], the interface between the two regions Ω_s and Ω_m is denoted by Γ, which is predetermined by the channel structure and does not change during our simulation. A number of appropriately charged atoms, about 1.8 Å apart, are positioned around the channel so that the channel flow can be regulated by electrical charges. In reality, these charged atoms can be realized by appropriate dopants. The z-coordinate of the atoms along the channel length is determined first, and then, at each cross section perpendicular to the channel axis, the atoms are aligned along a concentric circle whose size is sufficiently bigger than that of the channel pore.
The locations of the atoms are equally spaced along the circumference of the circle. Figure 2 shows an example of placing four negative charges around the channel at z = 0 Å. In the cross section in the xy-plane, we divide a concentric circle with radius 6.5 Å into four parts and then locate each anion as described in Fig. 2(a). Managing the number, magnitudes and signs of the charges enables one to generate various types of atomic charge distributions for the cylindrical nanofluidic channel.

Iterations of the Poisson and Nernst-Planck equations

The MIB method is utilized to solve the interface problems, Eqs. (10) and (19). Since these equations are coupled, an iterative procedure is required to obtain convergent results. Here we outline this solution procedure. To ensure that the iteration is convergent, Φ and C_α are updated by a successive over relaxation,

Φ^{new} = w_p Φ* + (1 − w_p) Φ^{old},  C_α^{new} = w_c C_α* + (1 − w_c) C_α^{old},

where Φ* and C_α* denote the newly solved iterates and w_p and w_c are appropriately selected between 0 and 1. This iterative procedure is efficient for ion channel problems [18,104]. The relaxation parameters w_p and w_c should be sufficiently close to 1 to ensure convergence. They have little influence on the computational results as long as the iteration is convergent [18]. Although a change of the applied voltage or the bulk ion concentration may require a number of additional iteration steps, convergence is still maintained [104]. In this computation, the values of both relaxation parameters are fixed at w_p = w_c = 0.9. If the iteration is divergent, we adjust these values to w_p = w_c = 0.8. After the electrostatic potential Φ and the ionic concentrations C_α are iteratively solved, the electric current is computed at each cross section inside the channel along the channel axis [104]. For each ionic species α, its current is calculated from the z-component of the flux integrated over the cross section,

I_α = q_α ∫_S J_α · e_z dS,

where S is the cross section in the xy-plane. The total current is the sum of the two ionic currents. In practice, there is no significant change of the current with the location of the cross section, and hence the current through the center of the channel axis is usually chosen to elucidate the current-voltage (I-V) relation.

Convergence Validation

In this section, we construct analytically solvable systems to validate the proposed numerical methods. The analytical solution of the PNP equations is unknown for realistic geometries. However, it is a standard procedure to design analytical solutions for slightly modified PNP equations which share the same mathematical characteristics with the original PNP equations [104]. Consequently, the numerical convergence of the designed solution algorithms can be validated. We first present the analytical solution to a set of PNP-like equations. Additionally, we consider two simple examples, one with a single atomic charge and the other with eight atomic charges, to verify the second-order convergence of our numerical methods. Finally, both a negatively charged nanofluidic channel and a bipolar nanofluidic channel are utilized to further validate the proposed numerical methods. We set ε_m = 1 and ε_s = 80 in all the numerical tests in this section. Therefore, there is a sharp discontinuity in the dielectric coefficients across the solvent-solute interface.

Analytical solution system

We consider a set of N_f charged atoms at r_k with fixed charges Q_k, where k = 1, 2, · · · , N_f, in Ω_m. The geometry of the analytically solvable system can be arbitrary in principle. However, one can refer to the cylindrical geometry described in Section 3.1 with the cross section of the cylinder illustrated in Fig. 3.
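A toy one-dimensional illustration of the relaxed iteration of Section 3.2 (emphatically not the paper's 3D MIB solver): a Poisson step and steady-state Nernst-Planck steps for two ion species are alternated, each blended with the previous iterate using w_p = w_c = 0.9. All quantities are nondimensionalized, and the boundary data, screening parameter and grid size are arbitrary choices.

```python
import numpy as np

n, L = 101, 1.0
h = L / (n - 1)
phi_L, phi_R, C0, lam = 0.0, 1.0, 1.0, 0.1   # assumed boundary data

def solve_poisson(rho):
    # phi'' = -rho / lam^2 with Dirichlet ends phi(0)=phi_L, phi(L)=phi_R
    A = np.zeros((n, n)); b = -rho / lam**2 * h**2
    A[0, 0] = A[-1, -1] = 1.0; b[0], b[-1] = phi_L, phi_R
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = 1.0; A[i, i] = -2.0
    return np.linalg.solve(A, b)

def solve_np(phi, z):
    # steady state (C' + z*C*phi')' = 0 with C = C0 at both ends
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0; b[0] = b[-1] = C0
    for i in range(1, n - 1):
        dpR, dpL = phi[i + 1] - phi[i], phi[i] - phi[i - 1]
        A[i, i + 1] = 1.0 + 0.5 * z * dpR
        A[i, i - 1] = 1.0 - 0.5 * z * dpL
        A[i, i] = -2.0 + 0.5 * z * (dpR - dpL)
    return np.linalg.solve(A, b)

w_p = w_c = 0.9                                # relaxation weights from the text
phi = np.linspace(phi_L, phi_R, n)
Cp = np.full(n, C0); Cm = np.full(n, C0)
for it in range(200):                          # relaxed Gummel-type iteration
    phi = w_p * solve_poisson(Cp - Cm) + (1 - w_p) * phi
    Cp = w_c * solve_np(phi, +1) + (1 - w_c) * Cp
    Cm = w_c * solve_np(phi, -1) + (1 - w_c) * Cm
print("midpoint potential:", phi[n // 2], " midpoint C+:", Cp[n // 2])
```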
We set the electrostatic potential to a designed closed-form expression, and C_1(r) = C_2(r) = 0 for all r ∈ Ω_m, because ions are able to move only in the solution confined to the region Ω_s. Indeed, this set of solutions satisfies a set of PNP-like equations in which we set q_1 = 1, q_2 = −1 and D_1(r) = D_2(r) = 1, with suitably constructed source terms R(r), R_1(r) and R_2(r) for every r ∈ Ω_s, while R(r) = R_1(r) = R_2(r) = 0 for all r ∈ Ω_m. Additionally, the jump conditions at the interface Γ are specified accordingly from the designed solution, where n denotes the outward unit normal vector. In order to investigate the convergence order, we apply two error measurements,

L_∞ = max_{i,j,k} | F^{num}_{i,j,k} − F^{exact}_{i,j,k} |,  L_2 = [ (1/N) Σ_{i,j,k} ( F^{num}_{i,j,k} − F^{exact}_{i,j,k} )² ]^{1/2},

where F^{num}_{i,j,k} and F^{exact}_{i,j,k}, respectively, represent the numerical and exact values of a function F at (x_i, y_j, z_k), and N is the total number of computational nodes.

A cylindrical nanochannel with a single atomic charge

We first test the cylindrical channel with a single atomic charge, which is placed at (6.5, 0, 0) and whose charge is −0.08 e_c, as shown in Fig. 4(a). The set of analytical solutions of the PNP-like equations introduced in Section 4.1 is used for comparison with the numerical results. Table 1 demonstrates the numerical errors and convergence orders for different numbers of computational nodes. The fixed charge of the atom influences the accuracy of the electrostatic potential Φ, and the negative charge of the atom enhances the errors and reduces the orders of the negative ion concentration C_2. It is interesting to note that the simulation attains a good second-order convergence.

A cylindrical nanochannel with eight atomic charges

Next, we explore this cylindrical channel with eight atomic charges outside the nanochannel. Consider two cross sections perpendicular to the channel length at z = −11 Å and z = 11 Å, as illustrated in Fig. 4(b). In order to put the negative ions the same distance away from the cylinder surface, with the same angular difference between two adjacent atoms, we employ the polar coordinate system. The distance between each atom and the origin is always 6.5 Å, and the angle from the positive x-axis is increased by a right angle. Therefore, the coordinates of these atoms are (6.5 cos(π(i − 1)/2), 6.5 sin(π(i − 1)/2), −11) and (6.5 cos(π(i − 1)/2), 6.5 sin(π(i − 1)/2), 11) for i = 1, 2, 3, 4. At each cross section, we obtain four point charges that are equally spaced. As shown in Table 2, the errors and orders in solving the PNP-like equations with this atomic charge setting differ little from those with a single atomic charge. This validation test also indicates the second-order convergence of our methods.

A negatively charged nanofluidic channel

Now, we perform another numerical test to verify the convergence and accuracy of the proposed PNP calculation on nanofluidic channels. A negatively charged nanochannel, or a unipolar nanochannel, is constructed. The channel length on the z-axis is divided into 27 subdivisions. At each circular cross section, eight charged atoms, lying 1.5 Å away from the cylinder surface, are equally spaced. Each atom has a charge of −0.08 e_c and is about 1.8 Å apart from the other charges. First of all, we examine the colored surface plot and contour plots of the electrostatic potential distribution along the negatively charged channel, obtained by solving the Poisson equation (10). The computational results are demonstrated in Fig. 5(a) and Fig. 6. Figure 5 is useful for understanding the atomic charges of the nano-scaled channel.
Moreover, it is interesting to notice that our proposed PNP solver works very well with nanofluidic channels, as the boundary line of the channel is easy to identify in the contour plots. Figure 5(a) shows the electrostatic potential profile over the surface of the negatively charged channel. Most parts of the channel surface appear in blue. Since the blue colors indicate negative electrostatic potential values, it is obvious that the channel surface possesses negative charge. Additionally, three contour plots are presented in Fig. 6, at z = −10 Å, z = 0 Å and z = 10 Å. In every picture, the region right outside the channel is dark blue, which also implies that the channel surface is negatively charged. We then use the same analytical solutions of Section 4.1 to find the numerical errors and orders. The results are given in Table 3. Through this test, we observe that our proposed PNP numerical schemes achieve second-order accuracy in computing the potential and the ion concentrations for the negatively charged channel.

A bipolar nanofluidic channel

We also consider a bipolar nanofluidic channel, which functions as a nanofluidic diode. It is a nano-sized channel whose atomic coordinates are the same as those of the aforementioned negatively charged nanochannel. However, its charges are switched from positive to negative, or vice versa, at the middle of the channel axis [21,27]. In our bipolar channel, the first half of the cylinder is affected by atoms with charges of 0.08 e_c, and the atomic charges on the other half are −0.08 e_c. The computational results for the electrostatic potential through the bipolar channel are shown in Fig. 5(b) and Fig. 7. From Fig. 5(b), we are able to see that the atomic charges change from positive to negative. Such properties of the bipolar channel are also clearly manifested in the cross-sectional results in Fig. 7. When we compare the contour plots in Fig. 7(a) and Fig. 7(c), it is somewhat difficult to distinguish the colors outside the bipolar channel. However, the channel interior clearly shows different colors, which indicates that the change of atomic charges influences the ion transport through the channel. The validation of our numerical methods for the bipolar channel is given in Table 4. Again, we see a good second-order convergence for this test problem. In the next section, we apply our PNP simulator to study three nanochannels.

Simulation Results

Having validated the numerical convergence of our proposed PNP algorithm, we explore the behavior of charged nanofluidic channels under various physical conditions, including the applied voltage, the atomic charge distribution and the bulk ion concentration. We investigate the ion concentration distributions and electrostatic potential profiles along the channel direction (z-direction); their values are averaged over the xy-cross section at each z. We also illustrate current-voltage (I-V) curves, in which the current at the center of the channel pore is used. In particular, the ion concentration distribution describes the movements of the two ion species through a channel in detail, and the current-voltage characteristic clearly shows the electrical features of a nanochannel. We examine three kinds of nano-scaled channels, namely a negatively charged channel, a bipolar channel and a double-well channel. We have used ε_m = 2 and ε_s = 80 for all computations in this section.
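As a practical note on the cross-section averaging just described, the z-profiles of Φ and C_α can be extracted from the 3D solution by averaging over solvent nodes in each xy-plane. A minimal sketch (hypothetical array names, not the authors' post-processing code):

```python
import numpy as np

def z_profile(field, solvent_mask):
    """Mean of 'field' over the solvent nodes of each xy-cross section.
    field: 3D array of a solved quantity (e.g. Phi or C_alpha);
    solvent_mask: boolean 3D array flagging nodes in Omega_s."""
    num = np.where(solvent_mask, field, 0.0).sum(axis=(0, 1))
    cnt = solvent_mask.sum(axis=(0, 1))
    return num / np.maximum(cnt, 1)

field = np.random.rand(33, 33, 113)              # placeholder solution
mask = np.ones((33, 33, 113), dtype=bool)        # placeholder solvent mask
print(z_profile(field, mask).shape)              # (113,): one value per z
```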
A negatively charged nanofluidic channel

In order to clarify the role of atomic charges in nanofluidic systems, we first consider the negatively charged channel described in Section 4.4. The atomic charges of the channel are specified in Section 4.4; here, all of the atoms placed around the channel carry charges of negative sign. We vary the voltage at the end of the right reservoir and keep the bulk ion concentrations of both reservoirs unchanged. Therefore, the applied voltage is the driving force that relocates potassium and chloride ions within the nanochannel. The atomic charges determine the ion selectivity of the nanochannel, and the bulk ion concentration affects the current migrating through the channel.

Effect of applied voltage

First, we study the impact of the applied external voltage on the behavior of ions traveling within a negatively charged nanochannel. We set the bulk ion concentrations of K⁺ and Cl⁻ to C_0 = 0.01 M and the charge of each atom placed around the channel surface to Q_k = −0.08 e_c. The voltage applied at the left end of the system, Φ_L, is fixed at 0 V, and the voltage applied at the right end of the system, Φ_R, is increased gradually from 0 V to 1 V. Here, ∆Φ represents the difference between Φ_R and Φ_L, i.e., ∆Φ = Φ_R − Φ_L. Figure 8 illustrates the electrostatic potential and ion concentrations along the z-axis at the center of the channel pore in response to the external voltage difference. In Fig. 8(a), as Φ_R is increased, the electrostatic potential at the right part of the inner channel becomes dramatically higher. As a result, more cations accumulate at the left part of the inner channel, as shown in Fig. 8(b). In fact, the negative atomic charges of the channel electrostatically repel anions and attract cations, which makes the solution within the channel almost unipolar.

Figure 9: Ionic current for each ion species versus the applied potential difference ∆Φ. (a) The transport behavior of the nanochannel without atomic charges; note that we use a different relative reference chemical potential µ_α0 for each ion species, so there is a small current difference between potassium and chloride ions. (b) The transport behavior of a negatively charged channel when Q_k = −0.08 e_c. Here, the bulk ion concentrations at both reservoirs are fixed at C_0 = 0.01 M. When Q_k = 0 e_c, both current-voltage graphs are linear and the positive current is roughly double the negative current; on the other hand, when Q_k = −0.08 e_c, the positive current-voltage graph (squares) is nonlinear and the negative current-voltage graph (triangles) is almost always zero. Moreover, the positive current is much larger than the negative current, and the difference increases as the voltage increases.

By comparing the current-voltage (I-V) curve of the negatively charged channel with that of an uncharged one, we are able to clarify the effect of atomic charges in a nanometer channel. In these two graphs, the current values are obtained from the cross-sectional current integral of Section 3.2, evaluated at the cross section inside the channel lying in the xy-plane at z = 0 Å. Figure 9(a) gives the relationship between current and voltage of each ion species for a nanochannel of the same dimensions without atomic charges (Q_k = 0 e_c). Both of the ionic currents are proportional to the applied voltage, and the K⁺ current is roughly double the Cl⁻ current. In this case, the nanochannel obeys Ohm's law and is non-selective.
However, the negative atomic charges destroy the linearity of the positive current-voltage characteristic and generate a large difference between the two ionic currents, as shown in Fig. 9(b). This nonlinearity, or deviation from Ohm's law, is closely related to the non-proportionality between the potential change within the channel and the applied voltage [28]. Since the negative atomic charges near the channel surface hinder the access of Cl⁻ ions, the negative current is almost zero for every applied voltage. Thus we can conclude that a nanochannel with unipolar atomic charges generates a charged current composed mostly of counterions, which can be increased by applying more external voltage.

Effect of atomic charges near the channel surface

Next, we examine the effect of atomic charges on the ionic flow through the negatively charged channel. Except for the magnitude of Q_k, we fix the bulk ion concentrations of both ion species at C_0 = 0.01 M and the applied voltage difference at ∆Φ = 1 V. A stronger negative atomic charge, i.e., a larger value of |Q_k|, encourages cation accumulation inside the channel and prevents anions from entering the channel. As a result, the K⁺ current is increased and the Cl⁻ current is decreased to near zero. In fact, the positive current is not directly proportional to the magnitude of the atomic charge because of a stronger diffusion induced by the larger concentration gradient [28]. It is interesting to note that the channel current can be controlled by the atomic charge amplitude. When the amplitude reaches a suitable threshold, almost all ions carrying charge of the same sign as the channel's atomic charges are unable to penetrate the inlet of the nanochannel. Therefore, the proposed nanochannel has near perfect ion selectivity.

Figure 10: Effect of atomic charges on a negatively charged nanochannel. The ionic current in response to the change of the charge Q_k of the atoms placed around the channel surface is depicted. Six different charges, Q_k = 0 e_c, −0.02 e_c, −0.04 e_c, −0.06 e_c, −0.08 e_c and −0.1 e_c, are simulated when the bulk concentration is C_0 = 0.01 M and the applied voltage difference is ∆Φ = 1 V. Here, |Q_k| is the magnitude of the charge of the atoms. As the magnitude of the atomic charges is increased, the ionic current of K⁺ is sharply amplified as well. However, the ionic current of Cl⁻ is reduced to near zero.

Effect of bulk ion concentration

Another important aspect necessary for understanding the transport within a nanofluidic channel is the bulk ion concentration. The electrical double layer produces a unique difference between a nanofluidic channel and a microfluidic one. The bulk ion concentration is a crucial factor in determining the thickness of the double layer. In fact, when double layers overlap inside a nanochannel, the aqueous solution confined in the channel expresses a charge opposite to the atomic charges of the nanochannel [26]. Figure 11 shows the total current as a function of the applied voltage difference for five different bulk ion concentrations, C_0 = 0.01 M, 0.05 M, 0.1 M, 0.2 M and 0.4 M, with the atomic charge Q_k set to −0.08 e_c. As the bulk ion concentration is increased, more cations penetrate through the channel and thus the total channel current is dramatically increased. A higher bulk ion concentration thins the double layer, and the channel surface becomes neutralized by the attracted counterions within the layer [28], as the estimate below illustrates.
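A textbook Debye-length estimate (standard physics, not a computation from the paper) makes the double-layer argument quantitative for the concentrations studied here:

```python
import numpy as np

eps0, kB, e, NA = 8.854e-12, 1.381e-23, 1.602e-19, 6.022e23

def debye_length_nm(C0_molar, eps_r=80.0, T=298.0):
    n = 1000.0 * NA * C0_molar                    # ions per m^3 per species
    lam = np.sqrt(eps_r * eps0 * kB * T / (2.0 * e**2 * n))
    return lam * 1e9

for C0 in (0.01, 0.05, 0.1, 0.2, 0.4):
    print(C0, "M ->", round(debye_length_nm(C0), 2), "nm")
# ~3 nm at 0.01 M shrinking to ~0.5 nm at 0.4 M: at high C0 the double
# layer no longer fills the 0.5 nm-radius pore, so the charged channel
# starts to look uncharged
```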
Consequently, the I-V graph becomes nearly linear, i.e., it obeys Ohm's law. It is noted that a charged channel with a high bulk ion concentration behaves like an uncharged one, due to the decrease in the Debye length. This phenomenon is obviously manifested in the positive ion concentration distributions in Fig. 12. Figure 12(a) shows the concentration profiles of the K⁺ ion along the x-direction at four different bulk ion concentrations, C_0 = 0.01 M, 0.05 M, 0.1 M and 0.2 M. Figure 12(b) demonstrates the normalized concentration profiles, which are generated by scaling each ion concentration with its bulk value, C(r)/C_0. As illustrated in the figure, the Debye screening effect can be observed in the concentration distributions, which are higher at the channel boundary and lower at the channel center. Additionally, for the normalized concentration profiles, the lower the bulk concentration, the higher the normalized concentration, which indicates a larger Debye layer.

The normalized current is considered to confirm that a higher bulk ion concentration weakens the role of atomic charges. The normalized current is computed by dividing the current through the negatively charged channel by that through the uncharged one at several bulk ion concentrations. Apart from the atomic charges, both channels have the same voltage difference and the same bulk ion concentration. Figure 13(a) presents the normalized ionic current for each ion species and Fig. 13(b) shows the normalized total channel current. In both figures, the normalized values approach one as the bulk ion concentration is increased, which implies that a charged channel with a high bulk ion concentration does not exhibit much atomic charge effect in the transport phenomena.

Figure 13: The normalized channel current with respect to the increase in bulk ion concentration. The atomic charge Q_k = −0.08 e_c and the applied voltage difference ∆Φ = 1 V are fixed. The normalized current is the quotient of the current of the negatively charged channel and the current of the uncharged channel (Q_k = 0 e_c). As the bulk ion concentration gets larger, the normalized value becomes almost one. The negatively charged channel with a sufficiently large bulk ion concentration behaves like an uncharged channel.

A bipolar nanofluidic channel

In this section, we examine a bipolar channel whose size and atomic charge constitution are described in Section 4.5. In this channel, the first half of the channel is positively charged and the second half is negatively charged. A bipolar channel can behave like a p-n junction. Therefore, it is interesting to explore the transport properties of the bipolar nanochannel.

Effect of applied voltage

We first consider three types of voltage bias across the channel length. One case is called no bias, in which both ends of the system have zero voltage. Another case is named forward bias, for which the voltage applied at the right end of the system is 1 V. The third case, on the contrary, is referred to as reverse bias, for which the voltage at the left end of the system is 1 V. Additionally, we fix the bulk ion concentration of KCl at 0.1 M and set the amplitude of the atomic charge, |Q_k|, to 0.08 e_c. Figure 14 compares the electrostatic potential profiles and ion concentration distributions along the z-direction in each case. As shown in Fig. 14(i-a), when ∆Φ = 0 V, the electrostatic potential is high under the positive atomic charge but low under the negative atomic charge.
Subsequently, the ion concentration is distributed in the opposite way in Fig. 14(i-b). Generating a potential gap ∆Φ between the two ends of the system brings about two peculiar phenomena within the bipolar channel. Under the forward bias with ∆Φ = 1 V, the electrostatic potential is gradually increased (Fig. 14(ii-a)), but under the reverse bias with ∆Φ = −1 V, it is sharply decreased (Fig. 14(iii-a)). These two results are consistent with the ion concentration curves, in the sense that the flux is invariant along the channel axis at steady state and thus the main factor altering the potential is the ion distribution [27]. As plotted in Fig. 14(ii-b), under the forward bias both ion species are attracted to the junction, so the peak value of the ion concentration is greater than that under no bias. However, under the reverse bias both ion species move away from the junction, and each one produces a small pile at the opposite atomic charge, as presented in Fig. 14(iii-b). Accordingly, the forward bias brings about an ion accumulation zone at the channel junction, whereas the reverse bias creates an ion depletion zone there.

Figure 15 shows the current-voltage curves of each ion species in the bipolar channel. While both current graphs are almost zero at every reverse bias, they increase monotonically as the potential difference gets bigger. This result comes from two facts: the ion-depletion zone under reverse bias terminates the flow inside the bipolar channel, whereas the ion-accumulation zone under forward bias encourages more ions to pass through the channel. Moreover, the positive ionic current increases more abruptly. Another remarkable observation from the current-voltage characteristics is that a higher applied voltage reduces the gradient of the curve, which has been analytically discussed in the literature [27]. Therefore, by managing the external voltage, one can turn the current through the bipolar nanochannel on and off.

Figure 16 depicts the channel current in response to the applied voltage difference at |Q_k| = 0.04 e_c, |Q_k| = 0.08 e_c and |Q_k| = 0.12 e_c. Here, we set the bulk ion concentration at the two reservoirs to 0.1 M. In every case, the total current gets larger as ∆Φ becomes larger, while the rate of change of the current with respect to the voltage difference becomes slower. The highest atomic charge amplitude, that is, |Q_k| = 0.12 e_c, yields the greatest amplitude of the current curve. In contrast, the lowest atomic charge amplitude does not fully draw both ion species to either side under the reverse bias, and thus at some negative voltage differences the current is nonzero. It is interesting to note that the channel current within a bipolar nanofluidic channel can be perfectly regulated if the atomic charges reach an appropriate threshold.

Effect of bulk ion concentration

We also test our bipolar channel at two further bulk ion concentrations, namely C_0 = 0.05 M and C_0 = 0.2 M, and compare the total current-voltage graphs with that at C_0 = 0.1 M, as described in Fig. 17. Every current-voltage curve nearly vanishes when ∆Φ is negative but increases significantly when ∆Φ is positive. At a higher bulk ion concentration, more ions are accumulated at the middle of the bipolar channel, so the total current gets bigger. Moreover, the highest concentration yields the maximum amplitude and gradient of the current variation with respect to the voltage difference.
Taken together, these results suggest that a higher bulk ion concentration promotes the total current through the bipolar channel. Our computational outcomes are in good agreement with other numerical studies in the literature [27].

Figure 16: Effects of atomic charges on the current for a bipolar nanofluidic channel. Three sets of atomic charges, i.e., |Q_k| = 0.04 e_c (squares), |Q_k| = 0.08 e_c (triangles) and |Q_k| = 0.12 e_c (diamonds), are studied. Here, the bulk ion concentration C_0 of K⁺ and Cl⁻ is fixed at 0.1 M. All I-V curves increase as ∆Φ varies from −1 V to 1 V. A greater magnitude of the atomic charges results in a higher channel current under forward bias, but an insufficient atomic charge may weaken the depletion zone under reverse bias, so that there is a leakage current.

A double-well nanofluidic channel

Finally, we consider a double-well nanofluidic channel, which is named after the shape of its electrostatic potential curve. The electrostatic potential along the channel axis of a cylindrical channel may exhibit several potential wells, depending on the atomic charge distribution. In fact, one of the most well-known biological channels, the Gramicidin A channel, is a transmembrane ion channel with a double-well potential profile [104]. In this section, we design a cylindrical channel whose electrostatic potential curve has a double-well structure by varying the signs of the atomic charges. As illustrated in Fig. 18, the middle section of the nanochannel is positively charged, while the other parts of the channel are negatively charged.

Figure 19: The electrostatic potential graphs show two potential wells, which bring about two piles of K⁺ ions along the channel axis. Moreover, a higher applied voltage at the left end of the system, Φ_L, breaks the symmetry of the potential wells: the left well becomes weaker and the right well becomes stronger. Consequently, the concentration of K⁺ ions (dashed line) at the left pile becomes lower, but there is little change in the concentration of K⁺ ions at the right pile.

Effect of applied voltage

At first, we alter the applied voltage but fix the bulk ion concentration at C_0 = 0.05 M and the atomic charge distribution as described in Fig. 18. Here, Φ_R is set to 0 V and Φ_L is increased gradually from 0 V to 1 V. Figure 19 presents the electrostatic potential and ionic concentration along the channel length. On the left hand side of the inner channel, the electrostatic potential becomes higher, which moderates the left potential well, as in Fig. 19(a). Subsequently, the positive ion concentration shows a dramatic change on the left hand side. Moreover, the small change in the potential on the right hand side of the channel corresponds to the small change in the concentration profile on the right.

Effect of bulk ion concentration

As shown in Fig. 20, an increase in the bulk ion concentration gives rise to an increase in the total current through the double-well channel. Here, the external voltage difference is kept the same (∆Φ = −1 V). As for the negatively charged channel, the I-V relation becomes linear as the bulk ion concentration grows. These results are consistent with those observed in both numerical simulations [104] and experimental measurements [11] of the Gramicidin A channel. Therefore, the atomic design of 3D nanofluidic channels proposed in the present work can be used to study biological channels, which is particularly valuable when the structure is not available.

Concluding Remarks

Recently, the dynamics and transport of nanofluidic channels have received great attention.
As a result, related experimental techniques and theoretical methods have been substantially advanced in the past two decades [4,8,90]. Nanofluidic channels are utilized for a vast variety of scientific and engineering applications, including the separation, detection, analysis and synthesis of chemicals and biomolecules. Additionally, inorganic nanochannels are manufactured to imitate biological channels, which is of great significance in elucidating ion selectivity and ion current controllability in response to an applied field in membrane channels [68,78]. Molecular and atomic mechanisms are the key ingredients in the design and fabrication of nanofluidic channels. However, atomic details are scarcely considered in nanofluidic modeling and simulation. Moreover, previous simulations of transport in nanofluidic channels have rarely been carried out with three-dimensional (3D) realistic physical geometry. The present work introduces the atomistic design and simulation of 3D realistic ionic diffusive nanofluidic channels. We first propose a variational multiscale paradigm to facilitate the microscopic atomistic description of ionic diffusive nanochannels, including atomic charges, and the macroscopic continuum treatment of the solvent and mobile ions. The interactions between the solution and the nanochannel are modeled by non-electrostatic interactions, which are accounted for by van der Waals type potentials. A total energy functional is utilized to put the macroscopic and microscopic representations on an equal footing. The Euler-Lagrange variation leads to generalized Poisson-Nernst-Planck (PNP) equations. Unlike the hypersurface in our earlier differential geometry based multiscale models [94][95][96], the solid-fluid interface is treated as a given profile. A domain characteristic function is introduced to replace the hypersurface function of our earlier formulation. Efficient and accurate numerical methods have been developed to solve the proposed generalized PNP equations for nanofluidic modeling. Both the Dirichlet-to-Neumann mapping and the matched interface and boundary (MIB) method are employed to solve the PNP system in the presence of 3D material interfaces and charge singularities. Rigorous numerical validations are constructed to confirm the second-order convergence in solving the generalized PNP equations. The proposed mathematical model and numerical methods are employed for 3D realistic simulations of ionic diffusive nanofluidic systems. Three distinct nanofluidic channels, namely a negatively charged nanochannel, a bipolar nanochannel and a double-well nanochannel, are constructed to explore the capability and impact of atomic charges near the channel interface on the channel fluid flow. We design a cylindrical nanofluidic channel of 49 Å in length and 10 Å in diameter. Several charged atoms, about 1.8 Å apart, are equally spaced outside the channel to regulate nanofluidic patterns. For the negatively charged channel, all of the atoms have the negative sign; for the bipolar channel, half of them have the negative sign and the other half have the positive sign. The double-well channel has positively charged atoms in the middle and negatively charged atoms on the remaining parts of the channel. Each end of the channel is connected to a reservoir of KCl solution, and both reservoirs have the same bulk ion concentration. Asymmetry in the applied electrostatic potentials at the ends of the two reservoirs gives rise to a current through these nanochannels.
We perform numerical experiments to explore the electrostatic potential, ion concentration and current through the channels under the influence of applied voltage, atomic charge and bulk ion concentration. The negatively charged channel generates a unipolar current because the negative atomic charges attract counterions but repel coions. The current within the nanochannel increases when the external voltage, the magnitude of the atomic charge and/or the bulk ion concentration are increased. However, the bulk ion concentration has a limitation in its growth because a larger bulk ion concentration shortens the Debye length, and thus the charged channel may behave like an uncharged one obeying Ohm's law. The bipolar channel can create accumulation or depletion of both ion species in response to the current direction. When the right end has a higher voltage, both ion species are stored at the junction at the middle of the channel length. On the contrary, when the left end has a higher voltage, both ion species move away from the junction. The applied voltage, the atomic charge and the bulk ion concentration all affect the amplitude and gradient of the current-voltage characteristic. Finally, the special atomic charge distribution of the double-well channel produces an electrostatic potential profile with two potential wells. Increasing the applied voltage on the left hand side of the system results in an obvious change in the left potential well and in the K⁺ concentration on the left. The present study concludes that the properties and quantity of the current through an ionic diffusive nanochannel can be effectively manipulated by carefully altering the applied voltage, atomic charge and bulk ion concentration. Our results compare well with those of experimental measurements and theoretical analyses in the literature. Since the physical size of the model is close to that of realistic transmembrane channels, the present model can be utilized not only for ionic diffusive nanofluidic design and simulation, but also for the prediction of membrane channel properties when the structure of the channel protein is not available or has changed due to mutation. Non-electrostatic interactions are considered in our theoretical modeling but are omitted in the present numerical simulations in order to focus on the atomistic design and simulation of 3D realistic ion diffusive nanofluidic channels. However, non-electrostatic interactions can be a vital effect in nanofluidic systems. A systematic analysis of non-electrostatic interactions is under our consideration.
2015-03-02T01:11:59.000Z
2015-03-02T00:00:00.000
{ "year": 2015, "sha1": "47fc724e944a6ffd894b39deb749dddd8b608e6a", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1503.00385", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ebd66f1636430970cd9b4dfe0a409081a439dad3", "s2fieldsofstudy": [ "Chemistry", "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Chemistry", "Materials Science", "Mathematics", "Biology", "Physics" ] }
24377529
pes2o/s2orc
v3-fos-license
Prevalence and Antimicrobial Resistance of Microbes Causing Bloodstream Infections in Unguja, Zanzibar

Background: Bloodstream infections (BSI) are frequent and cause high case-fatality rates. Urgent antibiotic treatment can save patients' lives, but antibiotic resistance can render antibiotic therapy futile. This study is the first to collect epidemiological data on BSI from Unguja, Zanzibar. Methods: Clinical data and blood for culturing and susceptibility testing of isolated microbes were obtained from 469 consecutively enrolled neonates, children and adults presenting with signs of systemic infections at Mnazi Mmoja Hospital (MMH), Zanzibar. Results: Pathogenic bacteria were recovered from the blood of 14% of the patients (66/469). The most frequently isolated microbes were Klebsiella pneumoniae, Escherichia coli, Acinetobacter spp. and Staphylococcus aureus. Infections were community-acquired in 56 patients (85%) and hospital-acquired in 8 (12%) (data missing for 2 patients). BSI caused by extended-spectrum beta-lactamase (ESBL) producing Enterobacteriaceae (E. coli, K. pneumoniae) was found in 5 cases, of which 3 were community-acquired and 2 hospital-acquired. Three of these patients died. Six of 7 Salmonella Typhi isolates were multidrug resistant. Streptococcus pneumoniae was found in one patient only. Conclusions: This is the first report of ESBL-producing bacteria causing BSI from the Zanzibar archipelago. Our finding of community-acquired BSI caused by ESBL-producing bacteria is alarming, as it implies that these difficult-to-treat bacteria have already spread in the society. In the local setting these infections are virtually impossible to cure. The findings call for increased awareness of rational antibiotic use, infection control and surveillance to counteract the problem of emerging antimicrobial resistance.

Introduction

Sepsis is a major health problem associated with high mortality rates [1,2]. Data on both the mortality and the incidence of sepsis in Africa are limited. A mean mortality rate of 18.1% is reported in a meta-analysis of community-acquired bloodstream infection (BSI) in Africa [1]. In a study of a pediatric population in Tanzania, a mortality rate of 34.9% was found [3]. A high prevalence of immunosuppression due to malnutrition and other infectious diseases, including human immunodeficiency virus (HIV) infection and measles, may contribute to an increased burden of severe bacterial infections in African countries [3,4]. BSI caused by multidrug-resistant, extended-spectrum beta-lactamase (ESBL) producing Gram-negative bacilli is associated with very high case-fatality rates approaching those of the pre-antibiotic era [3]. Epidemiological data from specific geographic regions are needed to optimize guidelines for empirical treatment. In Zanzibar, data on the etiology of BSI have only been published from Pemba, the less populated of the two main islands comprising Zanzibar [5]. We performed a prospective cohort study of patients suspected of having BSI at Mnazi Mmoja Hospital (MMH) on Unguja, the most populated island of the Zanzibar archipelago, Zanzibar, Tanzania. The aim was to identify the most common bacterial pathogens causing BSI and to determine their antimicrobial susceptibility.

Study site

Mnazi Mmoja Hospital (MMH), Zanzibar, Tanzania, is the main referral and teaching hospital of the Zanzibar archipelago, with a population estimated at about 1.3 million in 2012 (http://www.nbs.go.tz).
The hospital also offers primary and secondary health care for the residents of Zanzibar City, with a population of about 600,000. The hospital has 544 beds.

Study design

Patients in the medical, pediatric and neonatal departments were enrolled in the study if they, either on admission or during their hospital stay, had fever (≥38.3°C in adults, ≥38.5°C in children) or hypothermia (<36.0°C), tachypnoea (>20/min), tachycardia (>90/min), or were otherwise suspected to have systemic bacterial infection as judged by the clinician. Demographic and clinical information was obtained. Infections were defined as community-acquired and hospital-acquired if pathogens were detected in samples taken within 2 days after admission and more than 2 days after admission, respectively.

Methods

Patients were included over a period of 7 months (26th March to 22nd June 2012, 26th October to 21st December 2012, and 4th February to 22nd April 2013). Blood specimens were inoculated in BACTEC Myco/F Lytic blood culturing vials (Becton Dickinson, Franklin Lakes, N.J.), one bottle per episode of febrile illness. The bottles were incubated at 37°C for 7 days and checked for microbial growth daily from Monday to Friday, and once on either Saturday or Sunday, by inspecting the bottom indicator of the bottle for fluorescence [6]. Positive samples were examined by microscopy of Gram-stained preparations and subcultivated for two days on chocolate agar and on human blood agar in 5% CO2, and on MacConkey agar in an aerobic atmosphere. The isolates were identified according to established conventional procedures [7]. Samples with polymicrobial growth were included if at least one of the microbes was considered a pathogen. As most patients had only one blood culture drawn, it was not possible to ascertain the role of bacteria of uncertain clinical relevance, such as coagulase-negative staphylococci, diphtheroids and Bacillus species. Thus, these species were considered contaminants. Gram-negative rods were identified using standard biochemical tests and the API 20E or API 20NE system (bioMérieux, Marcy-l'Étoile, France). Susceptibility testing was performed by the disc diffusion technique as described in the EUCAST guidelines [8]. Reports on the results were sent to the wards. Isolates of pathogenic microbes were sent to Baerum Hospital, Vestre Viken Health Trust, Norway, for quality control and further identification by VITEK 2 and, if necessary, matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry and/or 16S rDNA polymerase chain reaction (PCR) sequencing (performed at Oslo University Hospital and/or the Norwegian Institute of Public Health, Oslo, Norway). In Norway, susceptibility testing was performed by the disc diffusion technique and/or the Etest gradient system (bioMérieux) according to the EUCAST guidelines, and/or by VITEK 2, and interpreted by the S-I-R system [9]. Enterobacteriaceae isolates resistant to cefotaxime or ceftazidime were further assessed for ESBL-type resistance by the ESBL Etest gradient system (bioMérieux, France), an ESBL CTX-M in-house PCR [10] and an AmpC in-house PCR [11].

Statistical analysis and ethical approval

Differences between proportions were compared using Fisher's exact test, with the cutoff for statistical significance at p = 0.05.
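The acquisition rule from the Study design section above translates directly into a one-line classifier; a trivial sketch (a hypothetical helper, not part of the study's analysis):

```python
def classify_acquisition(days_after_admission: float) -> str:
    """Community-acquired if the positive sample was taken within 2 days of
    admission, hospital-acquired if taken later, per the study definition."""
    return "community-acquired" if days_after_admission <= 2 else "hospital-acquired"

print(classify_acquisition(1))   # community-acquired
print(classify_acquisition(5))   # hospital-acquired
```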
The research protocol was approved by the Zanzibar Medical Research and Ethical Committee (ZAMREC), record no. ZAMREC/0004/JAN/012, and by the Regional Committee for Medical Research Ethics Health Region West (REK III), Norway, record no. 201124397/2011/2439/REK vest. Written informed consent was obtained from the patient or, in the case of children, from a parent or a responsible family member.

Results

Among the 66 patients with pathogenic microbes in the blood culture, 56 (85%) had community-acquired infection and 8 (12%) had hospital-acquired infection. The mode of acquisition could not be assessed for 2 patients due to missing information. Eighteen isolates of pathogens could not be retested in Norway, as they either were not stored in the freezer in Zanzibar (n = 12) or did not survive the transport (n = 6).

Antimicrobial resistance (Table 4)

Six isolates (from 5 patients) of Enterobacteriaceae (Klebsiella pneumoniae (5) and E. coli (1)) were suspected of ESBL production, as they displayed resistance to cefotaxime and/or ceftazidime on disc-diffusion testing. The ESBL Etest was positive for 4 isolates (from 3 patients): K. pneumoniae (3) and E. coli (1). For 2 of the K. pneumoniae isolates, including 1 isolate testing intermediate for meropenem, the results of further testing for ESBL production are lacking. The 4 ESBL Etest positive isolates were tested with PCR for the CTX-M genotype. Three of the isolates were CTX-M PCR positive. The fourth isolate (K. pneumoniae) was CTX-M PCR negative but CMY PCR positive (belonging to the AmpC β-lactamases). Among the 5 patients with BSI caused by bacteria with confirmed or probable ESBL production, the infection was classified as hospital-acquired in 2 patients and community-acquired in 3 patients. In 13 patients, ESBL-negative E. coli or K. pneumoniae or both were isolated (two patients had mixed infection with both E. coli and K. pneumoniae), and the majority of these infections were community-acquired (10/13). Three of the 5 patients (60%) with infection caused by confirmed or probable ESBL-positive bacteria died, compared with four of the 11 patients (36%) with infection caused by bacteria without ESBL production (data were missing for 2 patients). This difference in case-fatality rates was not statistically significant (p = 0.6). Only one isolate of S. pneumoniae was recovered; resistance to oxacillin indicated a reduced susceptibility to penicillin G, and further testing was not possible, as the isolate died. The only Enterococcus faecium isolated showed high-level gentamicin resistance but was susceptible to vancomycin. All nine S. aureus strains were susceptible to cefoxitin, clindamycin and erythromycin. No methicillin-resistant S. aureus (MRSA) was found. All 7 S. aureus isolates tested for penicillinase production were positive; 2/9 were resistant to trimethoprim-sulfamethoxazole and 1/9 to tetracycline.

Discussion

While ESBL-producing microbes in clinical samples, including blood cultures, have been reported from other parts of the African continent [13], including mainland Tanzania [14][15][16], this is the first report of ESBL-producing bacteria causing bloodstream infections from the Zanzibar archipelago. In a recent study on bacteremia from the neighboring island of Pemba [5], no ESBL-positive bacteria were found. The finding of ESBL-positive microbes in blood culture is associated with increased mortality [3].
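The case-fatality comparison reported above (p = 0.6 in the Results) can be reproduced with a standard Fisher's exact test; this is our illustration, not the authors' code.

```python
from scipy.stats import fisher_exact

# 2x2 table of [died, survived]: ESBL-positive (3 of 5 died) versus
# ESBL-negative (4 of 11 died) Enterobacteriaceae BSI, as reported above.
table = [[3, 2],
         [4, 7]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(round(p_value, 2))   # ~0.6, matching the reported p = 0.6
```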
We did not find a significantly higher case-fatality rate in patients with bloodstream infections caused by ESBL-producing Enterobacteriaceae (3/5) compared to those caused by ESBL-negative microbes (4/11), but the numbers of patients were small. Differences between the findings of the studies from MMH/Unguja and Pemba, in both the etiology of bloodstream infections and the susceptibility patterns of the isolated microbes, may be explained by the fact that Unguja has a more urban infrastructure and people have easier access to antimicrobials. Furthermore, Unguja has more extensive contact with mainland Tanzania, where a high prevalence of resistant microbes has been documented [15,16], and also more exposure to tourists and international travelers. These differences may imply a higher rate of preadmission antimicrobial treatment, although we have no evidence to support this. While the study from Pemba assessed community-acquired infections at three district hospitals, our study was performed at an urban referral hospital and included nosocomial infections and patients in a neonatal intensive care unit. This may have led to the selection of more severely ill patients, infections with more resistant microbes, and more frequent use of broad-spectrum antimicrobials, which in turn may also have contributed to the higher rate of resistant bacteria in our study. Previously, ESBL-producing bacteria were largely associated with nosocomial infections, but according to more recent studies, infections caused by community-acquired ESBL-producing bacteria are increasing [17]. At MMH, ESBL-positive bacteria were found not only in hospital-acquired but also in community-acquired infections. Our finding of community-acquired bloodstream infections caused by ESBL-producing bacteria is alarming, as it implies that these difficult-to-treat bacteria have already spread in the society. Treatment of infections caused by ESBL-producing bacteria is much more costly, if available at all, and leads to prolonged hospital stays for those who survive [18,19]. ESBL-positive bacteria are resistant to third-generation cephalosporins, which are often used as first-line medication for sepsis at MMH. ESBL resistance is plasmid-mediated, and these plasmids often also carry resistance genes against other groups of antibiotics [20]. Therefore, carbapenems are the cornerstone of treatment of infections caused by ESBL-producing bacteria. However, these antibiotics are expensive and generally not available in resource-constrained settings such as Zanzibar, rendering such infections virtually untreatable in the local setting. Even if carbapenems were available, their use in the absence of accessible microbiological diagnostic services would be problematic. Low treatment success due to a high prevalence of infections caused by resistant bacteria is likely to result in increasing empiric use of broad-spectrum antibiotics, which exerts a strong selection pressure favoring the further emergence of multidrug-resistant bacteria in the hospital and in society. Infections caused by carbapenem-resistant bacteria have already been documented in nearby Dar es Salaam, in mainland Tanzania [21]. Globally, antimicrobial resistance in Gram-negative microbes is rising faster than in Gram-positive bacteria, and there are no new antibiotics effective against Gram-negative bacteria in the immediate pipeline [22].
In countries with limited resources, the rapid emergence of antibiotic-resistant bacteria is further promoted by patient overcrowding, overwhelmed health-care workers, limited hospital infrastructure, poor compliance with hand hygiene, and lack of infection control programs [23]. Improved microbiological diagnosis, antibiotic susceptibility testing, and epidemiological studies may help guide sustainable, rational antibiotic use. Comparison of the etiology of sepsis among different studies in Africa [1] is challenging, as different populations are included (adults, children, neonates, all age groups, community-acquired and/or nosocomial infections). The varying prevalence of other diseases, such as HIV infection and malaria, probably also has an impact on the findings [24], as do the geographical region and the socio-economic structure. Our study population consists of all age groups, with both community-acquired and nosocomial infections, from an area with a low prevalence of malaria and HIV infection [25][26][27]. The prevalence of bacteremia in our study (14%) is in line with the findings of a meta-analysis on the causes of community-acquired bloodstream infections in Africa, which found a prevalence of 13.4% among patients with fever [1]. Salmonella enterica, of which 41% were Salmonella Typhi, followed by Streptococcus pneumoniae, were the most frequently isolated microbes, with S. enterica the most common isolate in adults and S. pneumoniae the most frequent in children. Other common bacteria were S. aureus and E. coli [1]. We found only one isolate of S. pneumoniae, and we suspect, as in the study from Dar es Salaam [3], that frequent prehospital antibiotic use may have precluded the recovery of pneumococci from blood cultures, resulting in an underestimation of the proportion of pneumococcal infections. The only S. pneumoniae isolate in our study was oxacillin resistant, implying decreased susceptibility to penicillin G. In the study from Pemba, S. pneumoniae was the second most common microbe (15%), after S. Typhi (58%), and 25% (3/12) of the pneumococcal isolates were penicillin resistant [5]. We found low rates of resistance among Gram-positive bacteria. No methicillin-resistant S. aureus (MRSA) was isolated. This is in line with the study from Pemba, but contrary to findings from other African countries and mainland Tanzania [3,28,29]. Non-fermentative Gram-negative rods were frequently isolated from neonatal patients and must be regarded as real pathogens [30], as the immune system of neonates is still immature. Acinetobacter has been shown to cause severe disease, particularly in tropical countries [30]. The study from Dar es Salaam also found a high proportion of BSI attributable to non-fermenters (11.6%, 34/294) [3]. Non-fermentative Gram-negative bacteria are generally isolated more frequently in sepsis, especially in patients with underlying diseases [31]. However, contamination must also be considered in these cases. Polymicrobial infection, i.e., growth of 2 or 3 different microbes from the same blood culture, occurred frequently in our study (19% of BSIs), as in the study from Dar es Salaam (12%) [3]. Polymicrobial BSI may have been caused by translocation from a gastrointestinal focus of infection, possibly in very sick or immunocompromised patients. Contamination of the samples is another possible explanation. Better staff training in sampling technique can reduce the risk of contamination.
The main limitation of the study is the low number of patients included. Another limitation is that only one blood-culture sample was taken per patient, due to limited resources. Coagulase-negative staphylococci were therefore counted as contaminants, although they may have had clinical significance in some cases, such as immunocompromised patients, patients with indwelling devices, and neonates. No anaerobic culture was performed. Data were lacking on pretreatment with antibiotics and on outcome. As the study did not cover all seasons, possible seasonal variations may have been missed. Limitations in laboratory facilities and transport caused the loss of some data [32,33].

Conclusions
This is the first study of bloodstream infections from Unguja, Zanzibar, and the first to document the presence of ESBL-producing multidrug-resistant Enterobacteriaceae as a cause of bloodstream infections in the Zanzibar archipelago. These infections are difficult to treat in the local setting and were associated with a high case-fatality rate. The finding of community-acquired infections caused by ESBL-producing bacteria in Zanzibar is particularly worrying, as it indicates a general spread of these resistant bacteria in the society. The study findings call for prudent antibiotic use and a focus on infection control in health-care settings. More data are needed on the etiology and antimicrobial susceptibility of bloodstream infections in Zanzibar, including the prevalence of multidrug-resistant ESBL-producing bacteria; this knowledge can be used to guide the development of new treatment guidelines for MMH and Zanzibar. The education of health workers in the rational use of antimicrobials as well as in infection control should be intensified.
Structural basis of development of multi-epitope vaccine against Middle East respiratory syndrome using in silico approach

Background
Middle East respiratory syndrome (MERS) is caused by MERS coronavirus (MERS-CoV). Thus far, MERS outbreaks have been reported from Saudi Arabia (2013 and 2014) and South Korea (2015). No specific vaccine has yet been reported against MERS.

Purpose
To address the urgent need for a MERS vaccine, in the present study we have designed two multi-epitope vaccines (MEVs) against MERS utilizing several in silico methods and tools.

Methods
Both multi-epitope vaccines (MEVs) are composed of cytotoxic T lymphocyte (CTL) and helper T lymphocyte (HTL) epitopes screened from thirteen different proteins of MERS-CoV. Both MEVs also carry potential B-cell linear epitope regions, B-cell discontinuous epitopes, as well as interferon-γ-inducing epitopes. Human β-defensin-2 and β-defensin-3 were used as adjuvants to enhance the immune response of the MEVs. To design the MEVs, short peptide molecular linkers were utilized to link the most potential screened CTL epitopes, HTL epitopes, and the adjuvants. Tertiary models for both MEVs were generated, refined, and further studied for their molecular interaction with toll-like receptor 3. The cDNAs of both MEVs were generated and analyzed in silico for their expression in a mammalian host cell line (human).

Results
The screened CTL and HTL epitopes were found to have a high propensity for stable molecular interaction with HLA allele molecules. CTL epitopes were also found to have favorable molecular interaction within the cavity of the transporter associated with antigen processing. The selected CTL and HTL epitopes jointly cover up to 94.0% of the worldwide human population. Molecular models of both the CTL and HTL MEVs were shown to have a stable binding and complex formation propensity with toll-like receptor 3. The cDNA analysis of both MEVs showed a high expression tendency in a mammalian host cell line (human).

Conclusion
After multistage in silico analysis, both MEVs are predicted to elicit humoral as well as cell-mediated immune responses. Epitopes of the designed MEVs are predicted to cover a large human population worldwide. Hence, both designed MEVs could be tried in vivo as potential vaccine candidates against MERS.

Introduction
Middle East respiratory syndrome (MERS) is a respiratory disease caused by Middle East respiratory syndrome coronavirus (MERS-CoV). MERS involves high fever, cough, difficulty in breathing, chills, chest pain, body aches, sore throat, diarrhea, nausea/vomiting, running nose, renal failure, and pneumonia. The first case of MERS in humans was reported in Saudi Arabia in 2012. After 2012, within a span of only 4 years, MERS-CoV infection was reported from 27 countries. 1 Three MERS outbreaks have already occurred, in Saudi Arabia (2013 and 2014) and South Korea (2015). [2][3][4] The outbreak reported in South Korea also involved the spread of MERS through hospital-to-hospital transit of patients and had a fatality rate as high as 40%. The high attack rate and easy spread of MERS indicate an epidemic risk. [5][6][7] To date, no specific vaccine is available for MERS. The steep increase in MERS cases and its high mortality rate demand an urgent need for a specific and safe MERS vaccine. Thus far, only limited information is known about the pathogenesis of MERS-CoV.
Hence, an immunoinformatics approach to thoroughly study and screen immunogenic proteins from the available proteome sequence data of MERS-CoV is essential for vaccine design. The proteome of MERS-CoV consists of several important vaccine candidate and drug target proteins. These proteins are involved in the infection and pathogenesis of MERS-CoV in human host cells. Spike (S) glycoprotein, in particular its receptor-binding domain, is involved in virus and host cell interaction. 8 Envelope (E) protein plays an important role in host cell recognition. 9 Nucleocapsid (N) protein is involved in RNA binding during ribonucleocapsid formation by MERS-CoV. 10 Membrane (M) protein has interferon (IFN)-antagonizing properties, thus reducing IFN levels in infected patients. 11 Open reading frame (ORF) proteins also play critical roles in viral infection and pathogenesis. A mutation study of ORFs (ORF3, ORF4a, ORF4b, and ORF5) indicated that ORFs have major implications in viral infection, involving disruption of host cell processes, dysregulated IFN pathway activation, and abrupt inflammation. 12 Proteins ORF1a (4P16) and ORF1ab (4WUR) are papain-like proteases (PL(pro)) involved in viral infection and are potential targets for the development of antiviral drugs. 13 PL(pro) proteins facilitate infection through their proteolytic, deubiquitinating, and deISGylating activities, suppressing the innate immune response of the host cell. Protein ORF1a (4RSP), a protease (3CLpro), facilitates proteolytic activity during viral infection and replication. 14,15 Protein ORF1ab (5WWP) is a helicase protein of MERS-CoV and one of the most conserved proteins among nidoviruses. Protein ORF8b is a highly conserved protein that induces various immune responses during infection. 16 In the present study, we propose two multi-epitope vaccine (MEV) designs for MERS. The proposed MEVs are composed of cytotoxic T lymphocyte (CTL) and helper T lymphocyte (HTL) epitopes. Both CTL and HTL MEVs contain overlapping regions of linear B-cell epitopes. Both MEVs also contain human β-defensin-2 (hBD-2) and human β-defensin-3 (hBD-3) as adjuvants to enhance the immunogenic response. 17,18

Methodology
To design the MEVs, different in silico tools were used to screen potential CTL, HTL, and B-cell epitopes from 13 MERS-CoV proteins that are involved in virus-host interaction and viral proliferation. All three types of epitopes were studied for overlapping consensus regions. CTL and HTL epitopes showing partial or complete overlapping regions with all or any of the three kinds of epitopes, and the epitopes with the highest number of human leukocyte antigen (HLA) allele binders, were selected for detailed analysis. The selected CTL and HTL epitopes were further validated for their molecular interaction with their HLA allele binders. Furthermore, the selected CTL epitopes were validated for their molecular interaction with the cavity of the transporter associated with antigen processing (TAP), to analyze their smooth passage from the cytoplasm to the endoplasmic reticulum (ER) lumen. 19 Three-dimensional (3D) structure models for both MEVs were generated, refined, validated, and analyzed for different physiochemical properties. Both CTL and HTL MEV models were further screened for several B-cell discontinuous epitopes and IFN-γ-inducing epitopes.
Because the immune system broadly recognizes pathogens partially due to the involvement of toll-like receptor 3 (TLR-3) and its signaling cascade, both CTL and HTL MEVs were further studied for their molecular interaction with TLR-3. 20,21 Moreover, the cDNAs of both MEVs were analyzed and were predicted to be highly expressed in a mammalian host cell line (human) (Figure S1).

Protein selection and sequence retrieval
Thirteen MERS-CoV proteins were selected for epitope screening: the structural proteins S, E, N, and M; the accessory proteins ORF3, ORF4a, ORF4b, and ORF5; ORF1a (4P16), a PL(pro); ORF1a (4RSP), a protease (3CLpro); ORF1ab (4WUR), a PL(pro); ORF1ab (5WWP); and ORF8b. For sequence-based epitope screening, full-length protein sequences of the abovementioned MERS-CoV proteins were retrieved from the NCBI database (National Center for Biotechnology Information, https://www.ncbi.nlm.nih.gov/protein). A total of 2,499 amino acid sequences belonging to different proteins of MERS-CoV of different strains and origins were retrieved. For structure-based epitope screening, the available 3D structures of MERS-CoV proteins were retrieved from the Protein Data Bank (PDB, http://www.rcsb.org/pdb/home/home.do). For the proteins with no structure available in the PDB, homology modeling was performed using the Swiss-model (http://swissmodel.expasy.org/) (Table S1). 22

Screening of potential epitopes
T-cell epitope prediction: screening of CTL epitopes
To screen CTL epitopes, the Immune Epitope Database (IEDB) tool "Proteasomal cleavage/TAP transport/MHC class I combined predictor" (http://tools.iedb.org/processing/) was used. [23][24][25] The total score generated by the tool is a combined score of the proteasome, TAP (N-terminal interaction), major histocompatibility complex (MHC), and processing analysis scores. The IC50 (nM) value was also obtained using this IEDB tool. Epitopes with high, intermediate, and low binding affinity to an HLA allele have IC50 values <50 nM, <500 nM, and <5,000 nM, respectively. The immunogenicity of the screened CTL epitopes was also obtained with the "MHC I Immunogenicity" tool of IEDB (http://tools.iedb.org/immunogenicity/), with all parameters set to default (first, second, and C-terminus amino acids). 26 The tool predicts the immunogenicity of a peptide-MHC complex on the basis of the physiochemical properties of the amino acids and their position within the short peptide sequence.

Screening of HTL epitopes
To screen HTL epitopes, the IEDB tool "MHC-II Binding Predictions" (http://tools.iedb.org/mhcii/) was used. The score is generated using a combination of six methods, namely Consensus, NN-align, SMM-align, Combinatorial Library, Sturniolo, and NetMHCIIpan. The percentile rank for each peptide is generated by a combination of three methods (Combinatorial Library, SMM-align, and Sturniolo) by comparing the score of the peptide against the scores of five million other random 15-mer peptides from the SwissProt database. [27][28][29][30] The rank for the Consensus method was generated as the median percentile rank of the three methods.

Population coverage by CTL and HTL epitopes
The "Population Coverage" tool of IEDB (http://tools.iedb.org/population/) was used to analyze the worldwide human population coverage of the shortlisted 28 CTL and 28 HTL epitopes. 31 The use of multi-epitopes involving both CTL and HTL epitopes gives a higher probability of larger human population coverage worldwide.
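As an aside, the IC50 thresholds used in the CTL screening above translate directly into a binning step when post-processing predictor output; a minimal sketch in Python, with placeholder peptides and scores (this is not part of the IEDB tooling):

```python
# Sketch: bin epitopes by predicted HLA binding affinity using the IC50
# thresholds quoted in the text (<50 nM high, <500 nM intermediate,
# <5,000 nM low). Peptide data below are illustrative placeholders.
def affinity_class(ic50_nm: float) -> str:
    """Classify predicted binding affinity from an IC50 value in nM."""
    if ic50_nm < 50:
        return "high"
    elif ic50_nm < 500:
        return "intermediate"
    elif ic50_nm < 5000:
        return "low"
    return "non-binder"

predictions = [("YVDNSSLTI", 32.0), ("LLRARSVSP", 410.0), ("AGSLIVVNN", 7200.0)]
for peptide, ic50 in predictions:
    print(f"{peptide}\t{ic50:>7.1f} nM\t{affinity_class(ic50)}")
```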
B-cell epitope prediction: sequence-based B-cell epitope prediction
Protein sequence-based linear B-cell epitopes were predicted by six different prediction methods available in the "B Cell Epitope Prediction Tools" of the IEDB server (http://tools.iedb.org/bcell/); these tools are based on the propensity scale method and the physicochemical properties of the antigenic sequence. These methods include "BepiPred Linear Epitope Prediction" (a propensity scale method using a hidden Markov model), "Chou & Fasman Beta-Turn Prediction", "Emini Surface Accessibility Prediction", "Karplus & Schulz Flexibility Prediction", "Kolaskar & Tongaonkar Antigenicity", and "Parker Hydrophilicity Prediction". [32][33][34][35][36][37]

Structure-based B-cell epitope prediction
Two structure-based B-cell epitope prediction methods, DiscoTope 2.0 (Structure-based Antibody Prediction tool; http://tools.iedb.org/discotope/) and ElliPro (Antibody Epitope Prediction tool; http://tools.iedb.org/ellipro/), available in IEDB, were used for linear and discontinuous B-cell epitope prediction. 38,39 The ElliPro method is based on the location of a residue in the protein's 3D structure. Residues lying outside the ellipsoid covering 90% of the inner core residues of the protein score the highest protrusion index (PI) of 0.9, and so on. Discontinuous epitopes predicted by ElliPro are clustered on the basis of the distance R (in Å) between the centers of mass of two residues lying outside the largest possible ellipsoid. The larger the value of R, the larger the number of discontinuous epitopes clustered. DiscoTope 2.0 is based on the number of contacts of a residue's Cα carbon atom with other Cα carbon atoms in the 3D structure within a 10 Å distance, and on the propensity of a residue to be part of an epitope.

Epitope toxicity prediction
Toxicity assessment of the CTL, HTL, and B-cell epitopes was performed with ToxinPred (http://crdd.osdd.net/raghava/toxinpred/multi_submit.php), which identifies highly toxic or nontoxic peptides. The analysis was performed by the "support vector machine (SVM) (SwissProt)-based" method with all parameters set to default. 41

Overlapping residue analysis
Multiple Sequence Alignment (MSA) analysis using the Clustal Omega tool (https://www.ebi.ac.uk/Tools/msa/clustalo/) of the European Bioinformatics Institute was performed for all the shortlisted CTL, HTL, and B-cell epitopes from the 13 MERS-CoV proteins. 42 MSA by Clustal Omega virtually aligns any number of protein sequences and delivers accurate alignments.

Epitope selection for molecular interaction studies with HLA alleles and TAP
CTL and HTL epitopes were shortlisted for further in silico analysis on the basis of their overlapping sequence regions among all three types of epitopes (CTL, HTL, and B-cell), complete overlap between any two types of epitopes, or the highest number of HLA allele binders.

Molecular interaction analysis of the selected epitopes with HLA alleles: tertiary structure modeling of HLA alleles and selected T-cell epitopes
Template-based homology modeling for the HLA class I and II allele binders of the shortlisted epitopes was performed with the Swiss-model. 22 Protein sequences of HLA class I and II alleles were retrieved from the Immuno Polymorphism Database (IPD-IMGT/HLA) (https://www.ebi.ac.uk/ipd/imgt/hla/allele.html). 69 Templates with high sequence identity were chosen for modeling, and the generated models were validated for quality by Qualitative Model Energy ANalysis (QMEAN). The QMEAN value is a composite (both global and local [ie, per residue] structure) quality estimate of the generated model. 43 Models with an acceptable QMEAN value, with a cutoff of −4.0, were chosen for further studies (Table S2).
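The overlapping-residue criterion used in this workflow (two or more shared residues between epitope types) reduces to interval intersection once epitopes are mapped onto the same parent protein; a minimal sketch under that simplification, with invented epitope names and positions (a real analysis would use the Clustal Omega MSA described above):

```python
# Sketch: flag CTL/HTL/B-cell epitopes that share two or more residues on
# the parent protein, mirroring the overlap criterion described in the text.
# Epitopes are given as (start, end) positions (1-based, inclusive) on the
# same protein; all values below are illustrative placeholders.
def overlap_len(a, b):
    """Number of residues shared by two (start, end) intervals."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)

ctl = {"CTL-1": (10, 18), "CTL-2": (40, 48)}
htl = {"HTL-1": (12, 26), "HTL-2": (60, 74)}
bcell = {"B-1": (15, 22)}

for c_name, c_pos in ctl.items():
    for h_name, h_pos in htl.items():
        if overlap_len(c_pos, h_pos) >= 2:
            shared_b = [b for b, bp in bcell.items()
                        if overlap_len(c_pos, bp) >= 2 and overlap_len(h_pos, bp) >= 2]
            print(f"{c_name} overlaps {h_name}"
                  + (f" and B-cell {shared_b}" if shared_b else ""))
```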
"Natural Peptides Module for Beginners" module of the PEPstrMOD tool (http://osddlinux.osdd.net/raghava/ pepstrmod/nat_ss.php) was used to generate tertiary structures of the selected epitopes. 44 Prediction was carried out with a simulation time window of 100 ps, and the peptide environment was set to vacuum. Molecular docking and molecular dynamics (MD) simulation study of the selected epitopes with hla alleles AutoDock Tool 4.2 and AutoDock Vina were used for molecular docking study of the selected epitopes and their respective HLA allele binders. 45,46 Further, the docked complexes were subjected to MD simulation study by Gromacs 5.1.4 using Optimized Potentials for Liquid Simulations -all atom (OPLS-AA) force field. 47,48 Molecular interaction analysis (docking) of the selected cTl epitopes with TaP Molecular docking study of the shortlisted CTL epitopes with TAP receptor was performed by AutoDock Vina. 45,46 For more accurate prediction, instead of homology modeling, the cryo-EM structure of TAP (PDB ID: 5u1d) was used. 49 For docking, the antigen from the TAP cavity of the original structure was removed. Design, characterization, and interaction analysis of MeVs with TlR-3 Design of MeVs From the screened and shortlisted CTL and HTL epitopes, two MEVs were designed using EAAAK and GGGGS as short peptide rigid and flexible linkers, respectively ( Figure 1A and B). To enhance the immune response, hBD-2 (PDB ID: 1FD3, sequence: GIGDPVTCLKSGAICHPVFCPRRYKQIGTCG LPGTKCCKKP) and hBD-3 (PDB ID: 1KJ6, sequence: GII NTLQKYYCRVRGGRCAVLSCLPKEEQIGKCSTRGRK CCRRKK) were used as adjuvants for both MEVs at N and C terminals, respectively. 17,18,[50][51][52][53] Upon lung infection, the expression of hBD-2 and hBD-3 was found to be increased. β-Defensins are involved in chemotactic activity for memory T cells, monocytes, and immature dendritic cells as well as in degranulation of mast cells. Thus, hBDs enhance innate and adaptive immunity and therefore were chosen here as adjuvants for the design of MEVs. IFn-γ-inducing epitope prediction IFN-γ epitopes with potential to induce the release of IFN-γ from CD4+ T cells from both MEVs were predicted by "IFNepitope" server (http://crdd.osdd.net/raghava/ ifnepitope/scan.php) using the "Motif and SVM hybrid" (MERCI: Motif-EmeRging and with Classes-Identification, and SVM) approach. The tool generates overlapping sequences from the query protein/antigen and uses it for IFN-γ-inducing epitope prediction. The prediction is based on a dataset of IFN-γ-inducing and IFN-γ-noninducing MHC class II binders. 54 For both MEVs, AlgPred was used for allergenicity prediction (http://crdd.osdd.net/raghava/algpred/submission.html). 56 The AlgPred prediction of allergens is based on similarity of the known epitope with any region of the submitted protein. VaxiJen (http://www.ddg-pharmfac.net/vaxijen/ VaxiJen/VaxiJen.html) was used to analyze the probability of antigenicity of both MEVs. 57 The VaxiJen analysis applies an alignment-free approach that is solely based on the physicochemical properties of the sequences of the submitted proteins. Physicochemical analysis of MeVs To analyze the physiochemical properties of CTL and HTL MEVs, ProtParam (https://web.expasy.org/protparam/) was used. 58 ProtParam analysis performs empirical investigation for a given protein amino acid sequence. 
3D structure modeling and refinement of MEVs
Both the CTL and HTL MEVs were subjected to 3D structure modeling with the RaptorX structure prediction server (http://raptorx.uchicago.edu/StructurePrediction/predict/). 59 The quality of a generated model is indicated by its P-value, the probability of the predicted model being worse than the best. It indicates relative quality in terms of modeling error by combining the global distance test (GDT) and un-normalized GDT, that is, the error at each residue. The smaller the P-value, the higher the quality of the predicted model. The generated MEV models were further refined with ModRefiner and GalaxyRefine. 60,70 GalaxyRefine performs repeated structure perturbation along with subsequent structural relaxation by dynamics simulation to refine a protein structure. To avoid breaks in model structures, GalaxyRefine uses the triaxial loop closure method.

In silico validation of refined MEV models
The refined 3D models of both MEVs were then subjected to RAMPAGE analysis to generate a Ramachandran plot (http://mordred.bioc.cam.ac.uk/~rapper/rampage.php). 61 The Ramachandran plot shows the residues that form energetically allowed and disallowed dihedral angles psi (ψ) and phi (ϕ), calculated on the basis of the van der Waals radii of the amino acid side chains.

Discontinuous B-cell epitope prediction from MEVs
Both the designed CTL and HTL MEVs were subjected to discontinuous B-cell epitope prediction with the ElliPro tool, to analyze the structure-based humoral immunogenic potential of both MEVs. 39

Molecular docking and MD simulation study of MEVs and the immunological receptor TLR-3
To study the molecular interaction, the refined models of the CTL and HTL MEVs were docked with TLR-3 using the PatchDock server (http://bioinfo3d.cs.tau.ac.il/PatchDock/). [62][63][64] The 3D structure of the human TLR-3 ectodomain (ECD) was retrieved from the Protein Data Bank (PDB).

Analysis of cDNAs of both MEVs for cloning and expression
Optimized cDNAs for both MEVs were generated with the Codon Usage Wrangler Tool, with the option of a mammalian host cell line (human) as the expression system (http://www.mrc-lmb.cam.ac.uk/ms/methods/codon.html). The GenScript Rare Codon Analysis Tool (https://www.genscript.com/tools/rare-codon-analysis) was then used to analyze the cDNAs of both MEVs. The tool provides the GC content, codon adaptation index (CAI), and tandem rare codon frequency for a cDNA. [65][66][67]

Results and discussion
Screening of potential epitopes
T-cell epitope prediction: screening of CTL epitopes
On the basis of the "total score" and acceptably low IC50 (nM) values of epitope-HLA class I allele pairs, 75 CD8+ CTL epitopes were chosen. Later, 28 epitopes with high scores and a larger number of HLA class I allele binders were shortlisted for further studies. The immunogenicity of the screened CTL epitopes was also determined; the higher the immunogenicity score, the greater the immunogenicity of the epitope (Tables S3 and S4).

Screening of HTL epitopes
Screening of HTL epitopes was performed on the basis of the "percentile rank". A small percentile rank indicates high affinity of the peptide for its respective HLA allele. Initially, 70 CD4+ T-cell epitopes with the highest percentile ranks were screened from the 13 MERS-CoV proteins, and 28 epitopes with high percentile ranks and the highest numbers of HLA class II allele binders were then shortlisted (Tables S5 and S6).
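Of the cDNA metrics described in the Methods above, the GC-content window is simple to check locally before submitting a construct; a minimal pure-Python sketch with a placeholder sequence (CAI is omitted, since it needs a host codon-usage table):

```python
# Sketch: quick GC-content check of an optimized cDNA against the
# 30%-70% window mentioned in the text. The sequence below is a short
# invented placeholder, not an actual MEV cDNA.
def gc_content(cdna: str) -> float:
    """Return the GC fraction of a DNA sequence as a percentage."""
    cdna = cdna.upper()
    return 100.0 * sum(base in "GC" for base in cdna) / len(cdna)

cdna = "ATGGGCATCGGCGACCCCGTGACCTGCCTGAAGAGCGGC"  # placeholder cDNA
gc = gc_content(cdna)
print(f"GC content: {gc:.2f}%",
      "(within 30-70% window)" if 30 <= gc <= 70 else "(outside window)")
```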
Several CTL and HTL epitopes predicted in the present study show overlapping regions with epitopes shown to induce T-cell responses in previous studies using peripheral blood mononuclear cells from infected patients. 68 Hence, the epitopes screened and reported in the present study can be predicted to originate from highly immunogenic stretches of MERS-CoV protein sequences.

Population coverage by CTL and HTL epitopes
In this study, most geographical regions of the world, and in particular the countries of the Middle East, South Asia, East Asia, and Northeast Asia, were included. The analysis indicates that the combined use of all shortlisted CTL and HTL epitopes would have an average worldwide population coverage of 94.0%, with an SD of 20.19 (Table S7).

B-cell epitope prediction: sequence-based B-cell epitope prediction
Initially, a total of 144 B-cell epitopes with a length of four or more amino acids were screened from the 13 MERS-CoV proteins by the BepiPred Linear Epitope Prediction method. The epitopes predicted by the five other methods, based on different physicochemical properties, showed a significant consensus overlap of amino acid sequences with those of BepiPred Linear Epitope Prediction. From the 144 B-cell epitopes, 12 with lengths of 4-19 amino acids were shortlisted (Table S8 and Figure 2).

Structure-based B-cell epitope prediction
Discontinuous and linear epitopes predicted by the DiscoTope 2.0 and ElliPro methods showed a significant consensus overlap of amino acid sequences with the linear epitopes predicted by the BepiPred linear epitope method (Table S8 and Figure 2). This result confirms that the shortlisted BepiPred linear epitopes are highly immunogenic B-cell epitopes.

Characterization of potential epitopes
Epitope conservation analysis
The conservation analysis of the shortlisted 28 CTL, 28 HTL, and 12 B-cell epitopes shows that the amino acid sequence conservancy of the CTL epitopes varied from 72.7% to 100%, that of the HTL epitopes varied from 68.18% to 100% (with 50% for two epitopes), and that of the B-cell epitopes varied from 85.71% to 99.26% (Tables S3, S5, and S8). This result indicates the highly conserved nature of the shortlisted CTL, HTL, and B-cell epitopes.

Epitope toxicity prediction
All the shortlisted CTL, HTL, and B-cell epitopes analyzed with the ToxinPred tool were predicted to be nontoxic (Tables S3, S5, and S8). The analysis was based on the ToxinPred main dataset of 1,805 toxic peptides.

Overlapping residue analysis
MSA analysis of all the screened CTL, HTL, and B-cell epitopes from the 13 MERS-CoV protein candidates revealed that several CTL, HTL, and B-cell epitopes have overlapping amino acid sequence regions. Epitopes with two or more residues in the overlapping region are shown in Figure 3.

[Figure 3 caption: Overlapping CTL (red), HTL (blue), and B-cell (green) epitopes, sorted by MSA using Clustal Omega at the EBI server. The ringed clusters involve all three types of epitopes, epitopes with full sequence overlap, and the epitopes with the highest number of HLA allele binders. Abbreviations: CTL, cytotoxic T lymphocyte; EBI, European Bioinformatics Institute; HLA, human leukocyte antigen; HTL, helper T lymphocyte; MSA, multiple sequence alignment.]
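Conservancy figures like those above are, at full identity, just the fraction of sequence variants containing the epitope; a minimal sketch with invented sequences (the IEDB conservancy tool additionally supports partial-identity thresholds, which this omits):

```python
# Sketch: compute epitope conservancy as the percentage of sequence
# variants containing the epitope at 100% identity, mirroring the
# conservancy figures quoted in the text. All sequences are placeholders.
def conservancy(epitope: str, variants: list[str]) -> float:
    """Percentage of variant sequences that contain the epitope exactly."""
    hits = sum(epitope in seq for seq in variants)
    return 100.0 * hits / len(variants)

variants = [
    "MKLVDPSYVDNSSLTIKKPNELS",
    "MKLVDPSYVDNSSLTIKKPNDLS",
    "MKLVDPSYVENSSLTIKKPNELS",  # one substitution inside the epitope
]
print(f"conservancy = {conservancy('YVDNSSLTI', variants):.1f}%")  # 66.7%
```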
[Figure 2 caption: Overlapping regions of the BepiPred linear B-cell epitopes and the epitopes predicted by other methods. The analysis shows a strong consensus between the BepiPred linear B-cell epitopes and the epitopes predicted by the other sequence-based and structure-based epitope prediction methods. Different colors are used to highlight overlapping regions of epitopes predicted by the different methods.]

Epitopes selected for molecular interaction study with HLA alleles and TAP
Clusters of overlapping CTL, HTL, and B-cell epitopes, epitopes with complete sequence overlap, and epitopes with the highest number of HLA allele binders were identified and selected for molecular interaction analysis with HLA alleles and TAP (Figure 3).

Molecular docking and MD simulation study of the selected epitopes with HLA alleles
The molecular docking study of all the selected epitopes with their respective HLA allele binders revealed significantly favorable molecular interactions. The docking complexes formed have significantly negative binding energies, and several amino acid residues of the epitopes and HLA alleles are involved in hydrogen bond formation (Figure 4). To analyze the stability of binding, the docking complexes were further subjected to an MD simulation study with an analysis time window of 0.1 ns at reasonably invariant temperature (~300 K) and pressure (~1 bar). The MD simulation results for all the epitope-HLA allele complexes showed a convincing, reasonably invariant root mean square deviation (RMSD) between ~0.1 and 0.2 nm, indicating stable complex formation (Figure 5). Moreover, the reasonably invariant radius of gyration (Rg) of the complexes (Figure S2) and the RMS fluctuation (RMSF) of all atoms in the complexes (Figure S3) indicate that the epitope-HLA complexes remain very stable in their folded form. The B-factor of the epitope-HLA allele complexes is shown in the rainbow color presentation in Figure S4. Most regions of the complexes are stable (blue), with only a very small region fluctuating acceptably (yellow and orange).

Molecular interaction analysis (docking) of the selected CTL epitopes with TAP
The molecular docking results show favorable molecular interactions between the selected CTL epitopes and the TAP cavity, revealing several molecular interaction sites for the selected epitopes within the cavity. Two sites of interaction, one close to the cytoplasm and another close to the ER lumen, are shown in Figure 6. All the interactions showed significantly negative binding energies, with one or more hydrogen bonds formed at both sites of interaction. From this study, we may predict a smooth passage for the selected CTL epitopes from the cytoplasm to the ER lumen through the entire TAP cavity (Figure 6).

Characterization and interaction analysis of designed MEVs with TLR-3
IFN-γ-inducing epitope prediction
IFN-γ is involved in both adaptive and innate immune responses and stimulates macrophages and natural killer cells. IFN-γ-inducing 15-mer peptide epitopes were predicted from both MEVs (Table S9 and Figure 7C and G).
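Stability arguments of the kind made earlier in this section (plateauing RMSD, invariant Rg) come from post-processing the MD trajectory; a hedged sketch with the MDAnalysis package and placeholder file names (the authors used Gromacs tooling, so this is an alternative route, not their exact workflow):

```python
# Sketch: backbone RMSD and radius of gyration along an MD trajectory,
# the quantities used in the text to argue that epitope-HLA complexes
# stay stably folded. File names are placeholders for a real topology
# and trajectory produced by Gromacs.
import MDAnalysis as mda
from MDAnalysis.analysis.rms import RMSD

u = mda.Universe("complex.gro", "complex_md.xtc")

# RMSD of the backbone relative to the first frame.
rmsd_run = RMSD(u, u, select="backbone").run()
for frame, time_ps, rmsd_A in rmsd_run.results.rmsd:
    print(f"t = {time_ps:7.1f} ps  RMSD = {rmsd_A / 10:.3f} nm")  # A -> nm

# Radius of gyration of the protein, frame by frame.
protein = u.select_atoms("protein")
for ts in u.trajectory:
    print(f"t = {ts.time:7.1f} ps  Rg = {protein.radius_of_gyration() / 10:.3f} nm")
```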
The VaxiJen analysis indicated both MEVs to be probable ANTIGENS with prediction score of 0.5302 and 0.5097 for CTL MEVs and HTL MEVs, respectively, while the default threshold value was 0.4 for viral proteins. Hence, both MEVs are predicted to be nonallergic and potentially antigenic in nature. Physicochemical analysis of MeVs ProtParam analysis showed that CTL MEVs have 500 amino acids, 50.4 kDa molecular weight, and 9.72 theoretical pI. The half-life in mammalian reticulocytes, yeast, and Escherichia coli was 30 hours, 20 hours, and 10 hours, respectively; aliphatic index was 61.68, and grand average of hydropathicity (GRAVY) was −0.020, indicating globular and hydrophilic nature of CTL MEVs; the instability index was 50.70, indicating that CTL MEVs are theoretically close to stable in nature. ProtParam study showed HTL MEV has 657 amino acids, 67.583 kDa molecular weight, and 9.29 theoretical pI. The half-life in mammalian reticulocytes, yeast, and E. coli was 30 hours, 20 hours, and 10 hours, respectively; aliphatic index was 94.22, and grand average of hydropathicity (GRAVY) was 0.337, both indicating globular and hydrophilic nature of HTL MEV; the instability index was 37.66, indicating theoretically stable nature of HTL MEV. The structural accuracy of initial model including joining of the gaps was performed by ModRefiner refinment. Galaxy-Refine was used to further refine the 3D models of CTL and HTL MEVs. For both MEVs, refined model 1 was chosen on the basis of best scorings. For CTL MEV refinement, the score of model 1 was as follows: Rama favored was 94.2%, GDT-HA was 0.9500, RMSD was 0.433, MolProbity was 2.211, Clash score was 18.3, and Poor rotamers was 1.2. For HTL MEV refinement, the score of model 1 was as follows: Rama favored was 93.4%, GDT-HA was 0.9502, RMSD was 0.412, MolProbity was 2.262, Clash score was 21.9, and Poor rotamers was 0.6. Here, MolProbity is the log-weighted combination of clash score, percentage Ramachandran not favored, and percentage bad side-chain rotamers. After refinement, all the mentioned parameters improved significantly as compared to those of the initial model (Table S10). In silico validation of refined MEV models The RAMPAGE analysis showed that the refined CTL MEV model has 94% residues in the favored region, 3.8% residues in the allowed region, and only 1.4% residues in the outlier region, while the refined HTL MEV model has 94% of residues in the favored region, 5.2% residues in the allowed region, and only 0.8% residues in the outlier region ( Figure 7D and H). Discontinuous B-cell epitope prediction from MeVs Discontinuous B-cell epitope prediction from both MEVs was performed by the ElliPro tool. The screening revealed CTL MEV to have six discontinuous epitopes and HTL MEV to have five discontinuous epitopes. The PI score for CTL MEV epitopes ranged from 0.549 to 0.906 and that for HTL MEV ranged from 0.56 to 0.729 (Table S11 and Figure 7C and G). The higher the score, the greater is the potential of the B-cell discontinuous epitope. Molecular docking and MD simulation study of MeVs and the immunological receptor TlR-3 The refined models of CTL and HTL MEVs were further analyzed for their interaction with the ECD of human TLR-3 by molecular docking using the PatchDock tool. Docking conformation chosen for the CTL and HTL MEVs showed scores of 18,096 and 23,690, respectively, which were highest among all docked complexes. 
The highest score indicates the best geometric shape complementarity fit of ligand and receptor predicted by the tool. The docking complexes show a fitting conformation of both MEVs within the ECD of TLR-3 (Figure 8A and E). Further, the MD simulation analysis of the docked CTL-TLR-3 and HTL-TLR-3 complexes showed a convincing, reasonably stable RMSD between ~0.4 and 0.5 nm over a time window of 6 ns at reasonably invariant temperature (~300 K) and pressure (~1 bar). These results indicate stable complex formation for both MEVs with TLR-3 (Figure 8B and F). The reasonably invariant Rg of the MEV-TLR-3 complexes (Figure 8C and G) and the RMSF of all atoms in the complexes (Figure 8D and H) indicate that both MEV-TLR-3 complexes are reasonably stable and properly folded. The B-factor of the CTL and HTL MEV complexes with the TLR-3 receptor is shown in the rainbow color presentation in Figure 8A and E.

Analysis of cDNAs of both MEVs for cloning and expression
Optimized cDNAs for both the CTL and HTL MEVs were generated with the Codon Usage Wrangler Tool, with a mammalian host cell line (human) as the choice of expression system. The GenScript Rare Codon Analysis Tool showed that the GC content of the optimized CTL MEV cDNA was 69.74% and its CAI score was 1.0, with 0% tandem rare codons. Likewise, the GC content of the optimized HTL MEV cDNA was 66.63% and its CAI score was 1.0, with 0% tandem rare codons. Ideally, the GC content should be 30%-70%; the CAI score, indicating the possibility of cDNA expression in the chosen expression system, should be between 0.8 and 1.0; and the tandem rare codon frequency, indicating the percentage of low-frequency codons present in the cDNA for the chosen expression system, should be <30%. Tandem rare codons may hinder the expression of cDNA or even interrupt the translational machinery. Hence, the optimized cDNAs of both MEVs are predicted to be highly expressed in the mammalian host cell line (human).

Conclusion
In this study, we propose the design of two MEVs against MERS-CoV consisting of CTL and HTL epitopes. The selected CTL and HTL epitopes were validated by in silico methods for their molecular interaction with HLA alleles and with TAP (for CTL epitopes). The population coverage of the designed MEVs is as high as 94% of the world population. Both MEVs were found to contain IFN-γ-inducing epitopes and B-cell discontinuous epitopes. Moreover, they also showed stable interaction with the immunoreceptor TLR-3. On the basis of the design and in silico validation of both the CTL and HTL MEVs, their joint administration is predicted to induce humoral and cell-mediated immune responses. The codon-optimized cDNAs of both MEVs are predicted to be highly expressed in a mammalian host cell line (human); hence, they could be cloned and expressed at the laboratory level for in vivo trials in humanized HLA-expressing mice for further study.
Hyperspectral Pansharpening Based on Homomorphic Filtering and Weighted Tensor Matrix

Hyperspectral pansharpening is an effective technique to obtain a high spatial resolution hyperspectral (HS) image. In this paper, a new hyperspectral pansharpening algorithm based on homomorphic filtering and weighted tensor matrix (HFWT) is proposed. In the proposed HFWT method, an open-closing morphological operation is utilized to remove the noise of the HS image, and homomorphic filtering is introduced to extract the spatial details of each band in the denoised HS image. More importantly, a weighted root mean squared error-based method is proposed to obtain the total spatial information of the HS image, and an optimized weighted tensor matrix-based strategy is presented to integrate the spatial information of the HS image with that of the panchromatic (PAN) image. With the appropriate injection of the integrated spatial details, the fused HS image is generated by constructing a suitable gain matrix. Experimental results on both simulated and real datasets demonstrate that the proposed HFWT method effectively generates a fused HS image with high spatial resolution while maintaining the spectral information of the original low spatial resolution HS image.

Introduction
Depending on the number of acquired bands, remote sensing imaging technology has developed from collecting panchromatic (PAN) and color images to multispectral (MS) images, and it can now capture hyperspectral (HS) images with dozens to hundreds of bands. A PAN image with very high spatial resolution is a single-band grayscale image acquired in the visible range. It captures the shape features of objects, but cannot distinguish colors. A color image consists of three bands, red, green and blue, and displays the colors of objects; however, it is difficult to distinguish features with similar colors. An MS image obtains not only spatial features but also spectral information in several bands, which makes it more capable of distinguishing categories of different features. However, the coarse spectral resolution of MS images may not meet the requirements of some applications, and fine feature detection is hard to realize [1]. An HS image, with a higher spectral resolution on the order of nanometers, can provide finer classification [2], and has been applied to many fields [3][4][5][6][7] and practical applications, such as vegetation study [8], precision agriculture [8], regional geological mapping [9], mineral exploration [10], and environment monitoring [11]. Due to technical limitations, however, the spatial resolution of an HS image is low. Many hyperspectral pansharpening algorithms have been developed, among which methods using Bayesian inference and matrix factorization have been proposed in recent years. The Bayesian-based approaches include Bayesian naive Gaussian prior [12], Bayesian sparsity promoted Gaussian prior [13], and HySure [14]. These algorithms utilize the posterior distribution and are based on maximum a posteriori estimation to fuse low spatial resolution HS (LRHS) and high spatial resolution PAN (HRPAN) images [15]. The matrix factorization approach generates a fused high spatial resolution HS (HRHS) image by using nonnegative matrix factorization (NMF) under certain constraints to estimate endmember and abundance matrices [16]. The matrix factorization approach is well represented by the nonnegative sparse coding (NNSC) [17] and constrained nonnegative matrix factorization (CNMF) [18] methods.
The main challenge in hyperspectral pansharpening is to effectively improve the spatial resolution while preserving the original spectral information. The Bayesian and matrix factorization approaches are able to achieve good results on this challenge, but have a high computational cost. Component substitution (CS) and multi-resolution analysis (MRA) approaches are two classical hyperspectral pansharpening approaches with simple and fast implementations. For the CS class, the intensity-hue-saturation (IHS) transform [19,20], principal component analysis (PCA) transform [21,22], Gram-Schmidt (GS) [23], and adaptive GS (GSA) [24] are the most representative methods. The CS class extracts the spatial component of the HS image and replaces it with the HRPAN image. Despite its superior spatial performance, the CS class suffers from serious spectral distortion [25]. Typical algorithms of the MRA technique are smoothing filter based intensity modulation (SFIM) [26], the Laplacian pyramid [27], the modulation transfer function generalized Laplacian pyramid (MTF-GLP) [28], and MTF-GLP with high pass modulation (MTF-GLP-HPM) [29]. The MRA methods generally utilize a multi-resolution decomposition to extract spatial details, which are imported into the HS image. Compared with the CS methods, the MRA methods generate less spectral distortion, but usually have a larger computational burden [30]. Recently, several algorithms based on the CS and MRA approaches have been proposed, such as the Sentinel-2A CS and MRA based sharpening algorithm [31], the multiband filter estimation (MBFE) algorithm [32], and the guided filter PCA (GFPCA) algorithm [33]. Moreover, several intelligent processing-based methods have also been proposed, with examples including the deep two-branch convolutional neural network (Two-CNN-Fu) [34], the Bidirectional Pyramid Network [35], and the 3D convolutional neural network (3D-CNN) [36]. The CS and MRA approaches mostly extract the spatial information of the HRPAN image and inject it into the LRHS image without considering the spatial information of the LRHS image. Due to this incomplete spatial information injection, the CS and MRA approaches may result in distortion. To address this problem, we propose a novel hyperspectral pansharpening method that combines homomorphic filtering with a weighted tensor matrix. An optimized weighted tensor matrix-based method that considers the structure information of both the LRHS and HRPAN images is proposed to generate more comprehensive spatial information. In addition, to extract the spatial structure information of the LRHS image, an open-closing morphological operation is first used for noise removal, and homomorphic filtering is then introduced to extract the spatial details of each band. Finally, a weighted root mean squared error based method is proposed to obtain the total spatial component of the LRHS image from the extracted spatial details of each band, and the Laplacian pyramid network super-resolution algorithm is adopted to enhance the spatial resolution of the obtained spatial component. Comparative analysis was used to demonstrate the applicability and superiority of the proposed method in both spectral and spatial quality. As stated above, a new hyperspectral pansharpening method based on homomorphic filtering and weighted tensor matrix is proposed in this paper. The main novelties of the proposed hyperspectral pansharpening method are summarized in the following aspects.
1. A novel HS image spatial component extraction strategy is proposed. An open-closing morphological operation and homomorphic filtering are first introduced to remove the noise and extract the spatial details of each band of the HS image, respectively. Then, a weighted root mean squared error-based method is proposed to obtain the total spatial component of the HS image.
2. An optimized weighted tensor matrix-based method is proposed to integrate the spatial component of the HS image with the spatial component of the PAN image. The weighted structure tensor matrix, which represents the structural information of multiple images, is applied to hyperspectral pansharpening for the first time. The classical methods, which mostly extract the spatial information of the PAN image alone, inject incomplete spatial information and may lead to distortion. Unlike these classical methods, the proposed optimized weighted tensor matrix-based method generates the spatial information not only from the PAN image but also from the HS image, and can reduce the distortion caused by insufficient spatial information.

The remainder of this paper is organized as follows. Section 2 describes the weighted structure tensor matrix and homomorphic filtering. In Section 3, the proposed homomorphic filtering and weighted tensor matrix-based hyperspectral pansharpening algorithm is presented. Experimental results and discussion are provided in Section 4, and conclusions are drawn in Section 5.

Weighted Structure Tensor Matrix
For an image $I$, the structure tensor matrix $M$ can be decomposed as:

$$M = \nabla I \cdot \nabla I^{T} = \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} = v_1 e_1 e_1^{T} + v_2 e_2 e_2^{T} \qquad (1)$$

where $I_x = \partial I/\partial x$ and $I_y = \partial I/\partial y$ are the horizontal and vertical partial derivatives of the image, $\nabla I = [\, I_x \;\; I_y \,]^T$, $(\cdot)^T$ is the transpose operation, and $v_1$, $v_2$ and $e_1$, $e_2$ are the two eigenvalues and the corresponding eigenvectors, respectively. As shown in Equation (1), the tensor matrix $M$, which is a symmetric and positive semi-definite matrix, admits an eigen-decomposition, and it has been exploited in several fields, such as texture synthesis [37], image regularization [38], denoising [39], and recognition systems [40]. The eigenvalues obtained by decomposing the tensor matrix are utilized to describe the local geometric structure of an image. For multiple images, the weighted structure tensor matrix is defined as:

$$M_w = \sum_{l=1}^{L} \omega_l M_l = \sum_{l=1}^{L} \omega_l \, \nabla I_l \cdot \nabla I_l^{T} \qquad (2)$$

where $M_w$ is the weighted structure tensor matrix, $\omega_l$ is the weight of the $l$th image, $M_l$ is the structure tensor matrix of the $l$th image, and $\partial I_l/\partial x$ and $\partial I_l/\partial y$ are the horizontal and vertical partial derivatives of the $l$th image, respectively. The weighted tensor matrix $M_w$ is also a symmetric and positive semi-definite matrix and admits an eigen-decomposition.

Homomorphic Filtering
Homomorphic filtering, a type of frequency domain filtering, can compress the image brightness range and enhance the image contrast. Homomorphic filtering has been applied to several image processing problems [41][42][43] and is based on the following image imaging model:

$$f = f_L \cdot f_H \qquad (3)$$

where $f$ represents an image, $f_H$ represents the high frequency reflectance component, and $f_L$ represents the low frequency illumination component. Homomorphic filtering aims to reduce the low frequency component of an image. A logarithmic transformation is utilized to separate the two components:

$$\ln(f) = \ln(f_L) + \ln(f_H) \qquad (4)$$

After applying the Fourier transform:

$$F = F_L + F_H \qquad (5)$$

where $F$, $F_H$ and $F_L$ denote the Fourier transforms of $\ln(f)$, $\ln(f_H)$ and $\ln(f_L)$, respectively. Then, the high-pass filter $H$ is applied to Equation (5) as:

$$S = H \cdot F \qquad (6)$$

where $S$ is the filtered result.
The final image is obtained by the inverse Fourier transform and the exponential operation:

$$f_{hf} = e^{s} = e^{\mathcal{F}^{-1}(S)} \qquad (7)$$

where $f_{hf}$ denotes the homomorphic filtered image, $s$ denotes the inverse Fourier transform of $S$, and $\mathcal{F}^{-1}$ denotes the inverse Fourier transform.

Hyperspectral Image Preprocessing
The open-closing operation, which belongs to the mathematical morphology operations, is an effective denoising operation [44,45]. The denoising effects of the open operation and the closed operation used alone are usually not very good, since they may cause amplitude deflection. By contrast, the open-closing operation has a better denoising effect. The open operation is first applied to the image, with the selected structure element larger than the noise size, to remove the background noise. Then, the closed operation is utilized to remove the remaining noise of the image obtained in the previous step. Open-closing denoising is suitable for images that have few small details. Since the LRHS image has low spatial resolution, its fine spatial details are few, and the open-closing operation is applicable for removing high-interference noise in the LRHS image. The open-closing morphological operation is applied as:

$$(X^{RNH}_{LR})_k = \big( (X^{HS}_{LR})_k \circ S_1 \big) \bullet S_2 \qquad (8)$$

for $k = 1, 2, \ldots, B$, where $(X^{HS}_{LR})_k$ and $(X^{RNH}_{LR})_k$ denote the $k$th band of the LRHS image and the denoised LRHS image, respectively, and $S_1$ and $S_2$ are the structure elements. Here, $\circ$ represents the opening operation, which first applies the erosion operation and then the dilation operation, and $\bullet$ denotes the closing operation, which does so in reverse. The erosion and dilation operations obtain the local minimum and maximum of the image, respectively. Equation (8) can be expressed in detail as:

$$(X^{HS}_{LR})_k \circ S_1 = \big( (X^{HS}_{LR})_k \ominus S_1 \big) \oplus S_1 \qquad (9)$$

$$(X^{RNH}_{LR})_k = \Big[ \big( (X^{HS}_{LR})_k \circ S_1 \big) \oplus S_2 \Big] \ominus S_2 \qquad (10)$$

for $k = 1, 2, \ldots, B$, where $\ominus$ and $\oplus$ denote the erosion and dilation operations, respectively.

Hyperspectral Image Spatial Information Extraction
Homomorphic filtering is a filtering method that transforms a nonlinear problem into a linear one. It converts the nonlinear multiplicative mixing problem into an additive model by the logarithmic transformation, and then uses linear filtering to process it. Homomorphic filtering suppresses the low frequency illumination component and enhances the high frequency reflectance component. For an HS image, the high frequency component of each band is considered the spatial component of that band. To obtain the spatial information for each band, we apply homomorphic filtering to each band of the denoised LRHS image.
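For illustration, grayscale opening and closing are available directly in SciPy, so the per-band denoising of Equations (8)-(10) can be sketched as follows; the structure-element sizes are illustrative, as the excerpt does not state the sizes of S1 and S2:

```python
# Sketch: open-closing morphological denoising of each band of an LRHS
# cube, as in Equations (8)-(10). Structure-element sizes are illustrative;
# the opening removes bright noise, the closing removes dark noise.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
lrhs = rng.random((8, 64, 64))  # placeholder LRHS cube, B x m x n

denoised = np.empty_like(lrhs)
for k in range(lrhs.shape[0]):
    opened = ndimage.grey_opening(lrhs[k], size=(3, 3))      # erosion then dilation
    denoised[k] = ndimage.grey_closing(opened, size=(5, 5))  # dilation then erosion
```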
Through the use of homomorphic filtering, the low frequency component of each band of the denoised LRHS image is suppressed, and the high frequency component is extracted. Therefore, in this research, homomorphic filtering is applied to each band of the denoised LRHS image to extract the spatial component of each band. The homomorphic filtering processing is based on the following image imaging model:

$$(X^{RNH}_{LR})_k = (X^{RNH}_{LR\_L})_k \cdot (X^{RNH}_{LR\_H})_k \qquad (11)$$

for $k = 1, 2, \ldots, B$, where $X^{RNH}_{LR\_H}$ represents the high frequency component of the denoised LRHS image, $X^{RNH}_{LR\_L}$ represents the low frequency component, and $(X^{RNH}_{LR\_H})_k$ and $(X^{RNH}_{LR\_L})_k$ represent the $k$th band of $X^{RNH}_{LR\_H}$ and $X^{RNH}_{LR\_L}$, respectively. Based on Equations (4)-(6), the logarithmic transformation, Fourier transform, and high-pass filtering operations are applied to Equation (11):

$$(S_{LR})_k = H \cdot \mathcal{F}\big[ \ln\big( (X^{RNH}_{LR})_k \big) \big] \qquad (12)$$

for $k = 1, 2, \ldots, B$, where $S_{LR}$ is the high-pass filtered image, $(S_{LR})_k$ is the $k$th band of $S_{LR}$, $\mathcal{F}$ represents the Fourier transform, and $H$ is the high-pass filter, defined as:

$$H(x, y) = (\beta_H - \beta_L)\Big[ 1 - e^{-D^2(x,y)/D_0^2} \Big] + \beta_L \qquad (13)$$

where $D_0$ is the cut-off frequency, $D$ is the distance between $(x, y)$ and the center, and $\beta_H$ and $\beta_L$ are the high and low frequency gains. Figure 3 shows the 3-D mesh of the high-pass filter. Since homomorphic filtering aims to reduce the low frequency component and extract the high frequency component, $\beta_H$ is greater than 1 and $\beta_L$ is smaller than 1. By adjusting the value of the cut-off frequency $D_0$, the sharpness of the transition between $\beta_L$ and $\beta_H$ can be controlled. In practice, the values of these parameters are generally determined empirically; in this paper, $\beta_H$, $\beta_L$, and $D_0$ are empirically set to 2, 0.25, and 40, respectively. $S_{LR}$ is the high-pass filtered image in which the low frequency component has been weakened. Then, the spatial component of each band is obtained by applying the inverse Fourier transform and the exponential operation to $S_{LR}$:

$$(X^{I}_{LR})_k = e^{\mathcal{F}^{-1}\left[ (S_{LR})_k \right]} \qquad (14)$$

for $k = 1, 2, \ldots, B$, where $X^{I}_{LR}$ denotes the spatial component of each band of the denoised LRHS image, $(X^{I}_{LR})_k$ denotes the $k$th band of $X^{I}_{LR}$, and $\mathcal{F}^{-1}$ denotes the inverse Fourier transform.

After introducing homomorphic filtering to obtain the spatial component of each band of the denoised LRHS image, a weighted root mean squared error (RMSE)-based method is presented to extract the spatial intensity information of the HS image. Let $I_{LR} = \sum_{k=1}^{B} \lambda_k (X^{I}_{LR})_k$ denote the total spatial information of the LRHS image, where $[\lambda_1, \lambda_2, \ldots, \lambda_B]$ is the weight vector. To determine the values of the weight vector, we utilize the RMSE index to measure the deviation between two images; a smaller RMSE indicates a better result, and the optimal value is 0. In the RMSE-based method, the spatial information of the PAN image is considered, and the RMSE value between the total spatial information $I_{LR}$ and the PAN image $X_{PAN}$ is calculated. The smallest value of the RMSE is computed to obtain the optimal values of the weights $[\lambda_1, \lambda_2, \ldots, \lambda_B]$:

$$[\lambda_1, \ldots, \lambda_B] = \arg\min_{\lambda} \sqrt{ \frac{1}{T} \sum_{p=1}^{T} \Big( I_{LR}(p) - (\downarrow X_{PAN})(p) \Big)^2 } \qquad (15)$$

where $T = m \times n$ represents the total number of pixels in one band of the LRHS image, $\downarrow$ represents the down-sampling operation, and $\downarrow X_{PAN}$ represents the PAN image down-sampled to the size of one band of the LRHS image. The Laplacian pyramid network (LapSRN) [46] super-resolution method can effectively improve the spatial resolution of an image and has the advantages of parameter sharing, local skip connections, and multi-scale training, so it is adopted to super-resolve the spatial information $I_{LR}$ of the LRHS image into $I_{HR}$ with super-resolved spatial information.
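A compact NumPy sketch of this pipeline follows: per-band homomorphic filtering with the Gaussian-shaped high-pass of Equation (13), then the band weights of Equation (15) obtained by linear least squares (one standard way to minimize that RMSE); all inputs and shapes are placeholders:

```python
# Sketch: per-band homomorphic spatial-detail extraction (Equations 11-14)
# followed by solving for the band weights of Equation (15) by linear
# least squares. Inputs are random placeholders for a denoised LRHS cube
# and a PAN image already down-sampled to the LRHS grid.
import numpy as np

def homomorphic_band(band, d0=40.0, beta_h=2.0, beta_l=0.25):
    """Extract the spatial (high-frequency) component of one band."""
    m, n = band.shape
    u = np.fft.fftfreq(m)[:, None] * m
    v = np.fft.fftfreq(n)[None, :] * n
    d2 = u**2 + v**2                        # squared distance from DC
    h = (beta_h - beta_l) * (1.0 - np.exp(-d2 / d0**2)) + beta_l
    f = np.fft.fft2(np.log(band + 1e-6))    # log then FFT (Eq. 12)
    return np.exp(np.real(np.fft.ifft2(h * f)))  # inverse FFT then exp (Eq. 14)

rng = np.random.default_rng(0)
lrhs = rng.random((8, 64, 64)) + 0.5        # placeholder denoised LRHS cube
pan_lr = rng.random((64, 64)) + 0.5         # placeholder down-sampled PAN

spatial = np.stack([homomorphic_band(b) for b in lrhs])    # B x m x n
A = spatial.reshape(len(lrhs), -1).T                       # pixels x bands
lam, *_ = np.linalg.lstsq(A, pan_lr.ravel(), rcond=None)   # weights of Eq. 15
i_lr = np.tensordot(lam, spatial, axes=1)                  # total spatial info
```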
Panchromatic Image Preprocessing and Total Spatial Information Acquisition

To make the spatial information of the PAN image clearer, the Laplacian of Gaussian (LOG) [47] image enhancement algorithm is applied to the PAN image; it uses a Gaussian filter to reduce noise followed by a Laplace operator for enhancement. Let $I_{PAN_s}$ represent the enhanced PAN image. The HS and PAN images contain different and complementary information about a scene. To acquire the total spatial information, we should simultaneously consider the spatial structure details of these two images, and we therefore propose an optimized weighted tensor matrix-based method. $I_{HR}$ and $I_{PAN_s}$ include the spatial structure information of the HS and PAN images, respectively. Based on Equation (2), the weighted tensor matrix at pixel $p$ can be eigendecomposed as

$\mathbf{M}^{HP}_{w,p} = v_{w1,p}\, \mathbf{e}_{w1,p} (\mathbf{e}_{w1,p})^T + v_{w2,p}\, \mathbf{e}_{w2,p} (\mathbf{e}_{w2,p})^T,$

where $(\cdot)^T$ is the transpose operation, $v_{w1}$ and $v_{w2}$ are the two eigenvalues, $v_{w1,p}$ and $v_{w2,p}$ are the two eigenvalues at pixel $p$, and $\mathbf{e}_{w1,p} = [e_{w11,p}\ e_{w12,p}]^T$ and $\mathbf{e}_{w2,p} = [e_{w21,p}\ e_{w22,p}]^T$ are the eigenvectors corresponding to the two eigenvalues at pixel $p$, respectively. The two eigenvalues generally consist of a larger value and a smaller value; we assume that $v_{w1}$ is the larger eigenvalue. When $v_{w1} \approx v_{w2} \approx 0$, $v_{w1} > v_{w2} \approx 0$, and $v_{w1} > v_{w2} > 0$, the structure region of the pixel is a flat area, an edge, and a corner, respectively. We tested some images to study the eigenvalues of the weighted tensor matrix. Figure 4 shows the two eigenvalues at each pixel of the weighted tensor matrix for the Salinas scene data. It can be seen that for many pixels, the smaller eigenvalues shown in Figure 4d are approximately $10^{-5}$, i.e., very small. By experimenting on numerous other images, we have also found that the smaller eigenvalues are mostly very small. Thus, the approximation of $\mathbf{M}^{HP}_{w,p}$ is expressed as:

$\widetilde{\mathbf{M}}^{HP}_{w,p} = v_{w1,p}\, \mathbf{e}_{w1,p} (\mathbf{e}_{w1,p})^T,$

where $\widetilde{\mathbf{M}}^{HP}_{w,p}$ is the approximation of $\mathbf{M}^{HP}_{w,p}$. Based on Equation (1), the structure tensor matrix satisfies $\mathbf{M} = \nabla I \cdot \nabla I^T$. The weighted gradient $\mathbf{G}_w$ at pixel $p$ satisfies $\widetilde{\mathbf{M}}^{HP}_{w,p} = \mathbf{G}_{w,p} \cdot (\mathbf{G}_{w,p})^T = v_{w1,p}\, \mathbf{e}_{w1,p} (\mathbf{e}_{w1,p})^T$, so $\mathbf{G}_{w,p}$ is deduced as $\mathbf{G}_{w,p} = \sqrt{v_{w1,p}} \cdot \mathbf{e}_{w1,p}$. Since the direction of the eigenvector corresponding to $v_{w1}$ is not unique, the direction of the weighted gradient $\mathbf{G}_{w,p}$ is also not unique.
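In code, the dominant eigenpair of the 2x2 weighted tensor matrix can be obtained in closed form at every pixel, which is how the weighted gradient magnitude sqrt(v_w1) e_w1 is formed below. The equal weights w1 = w2 = 0.5 are a placeholder assumption; the actual weighting comes from the paper's Equation (2), which is not reproduced here.

```python
# Per-pixel weighted tensor M = w1*g1 g1^T + w2*g2 g2^T and its dominant
# eigenpair; returns the components of G_w = sqrt(v_w1) * e_w1 (sign ambiguous).
import numpy as np

def weighted_gradient(i_hr, i_pan_s, w1=0.5, w2=0.5):
    gy1, gx1 = np.gradient(i_hr)        # derivatives along rows (y) and columns (x)
    gy2, gx2 = np.gradient(i_pan_s)
    a = w1 * gx1**2 + w2 * gx2**2       # tensor entries at every pixel
    b = w1 * gx1 * gy1 + w2 * gx2 * gy2
    c = w1 * gy1**2 + w2 * gy2**2
    tr, det = a + c, a * c - b**2
    v1 = tr / 2 + np.sqrt(np.maximum(tr**2 / 4 - det, 0.0))  # larger eigenvalue
    ex, ey = b, v1 - a                  # eigenvector direction for v1
    norm = np.sqrt(ex**2 + ey**2) + 1e-12
    return np.sqrt(v1) * ex / norm, np.sqrt(v1) * ey / norm
```

The sign ambiguity noted in the text is resolved in the next step, where the gradient direction is fixed against the average gradient of the source images.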
We specify the direction of the weighted gradient $\mathbf{G}_{w,p}$ using the average of the gradients of the individual source images $[I_{HR}, I_{PAN_s}]$:

$\mathbf{G}_{w,p} = \mathrm{sign}\Big(\big\langle \mathbf{e}_{w1,p},\ \tfrac{1}{2}\big(\nabla I_{HR,p} + \nabla I_{PAN_s,p}\big) \big\rangle\Big) \cdot \sqrt{v_{w1,p}} \cdot \mathbf{e}_{w1,p}, \qquad (19)$

where $\nabla I_{HR,p} = [\ \partial(I_{HR,p})/\partial x\ \ \partial(I_{HR,p})/\partial y\ ]$ and $\nabla I_{PAN_s,p} = [\ \partial(I_{PAN_s,p})/\partial x\ \ \partial(I_{PAN_s,p})/\partial y\ ]$, $\langle \cdot, \cdot \rangle$ represents the inner product of two vectors, and $\mathrm{sign}(\cdot)$ represents the sign function. Once the weighted gradient $\mathbf{G}_{w,p}$ is acquired from the multiple images $[I_{HR}, I_{PAN_s}]$, an optimization model is proposed to obtain the total spatial information $I^{HP}_T$:

$I^{HP}_T = \arg\min_{I^{HP}_T} \sum_p \big\| \nabla I^{HP}_{T,p} - \mathbf{G}_{w,p} \big\|^2_2, \qquad (20)$

where $\nabla I^{HP}_T = [\ \partial I^{HP}_T/\partial x\ \ \partial I^{HP}_T/\partial y\ ]$, and $\partial I^{HP}_T/\partial x$ and $\partial I^{HP}_T/\partial y$ denote the $x$ and $y$ partial derivatives of $I^{HP}_T$. Equation (20) is an unconstrained optimization problem, and we solve it by the conjugate gradient method. Equation (20) effectively ensures that the total spatial information $I^{HP}_T$ contains the spatial structure details of both the HS and PAN images.

Fused High Spatial Resolution Hyperspectral Image Generation

The LRHS image $\mathbf{X}^{HS}_{LR}$ is interpolated to the scale of the HRPAN image. By constructing a suitable gain matrix $\mathbf{R}$, the total spatial information $I^{HP}_T$ is injected into the interpolated HS image to generate the fused HRHS image $\mathbf{X}^{HS}_{HR}$. For the gain matrix $\mathbf{R}$, it is beneficial to keep the ratio between each pair of HS bands unchanged so as to reduce the spectral distortion. Thus, $\mathbf{R}$ should satisfy $\mathbf{R}_k \propto (\mathbf{X}^{HS}_{IN})_k$, where $\mathbf{X}^{HS}_{IN}$ is the interpolated HS image, and $(\mathbf{X}^{HS}_{IN})_k$ and $\mathbf{R}_k$ are the $k$th band of $\mathbf{X}^{HS}_{IN}$ and $\mathbf{R}$, respectively. Then, a tradeoff parameter $\varepsilon$ is defined to regulate the amount of injected detail and reduce the spatial distortion. This process can be expressed as:

$(\mathbf{X}^{HS}_{HR})_k = (\mathbf{X}^{HS}_{IN})_k + \varepsilon \cdot \mathbf{R}_k \cdot I^{HP}_T, \quad k = 1, 2, \ldots, B,$

where $\mathbf{X}^{HS}_{HR}$ is the fused HRHS image, and $(\mathbf{X}^{HS}_{HR})_k$ is the $k$th band of $\mathbf{X}^{HS}_{HR}$.
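A minimal sketch of the injection step is given below, under the reconstruction above. The normalization of R by the per-pixel band mean is an assumption about the exact normalization; the text only requires R_k to be proportional to the kth band so that band ratios are preserved.

```python
# Band-wise detail injection into the interpolated HS cube `x_in` (rows, cols, B),
# given the total spatial detail `i_t` (rows, cols) and tradeoff parameter eps.
import numpy as np

def inject_details(x_in, i_t, eps=0.25):
    mean_img = x_in.mean(axis=2) + 1e-12        # per-pixel average over bands
    fused = np.empty_like(x_in)
    for k in range(x_in.shape[2]):
        r_k = x_in[:, :, k] / mean_img          # gain R_k proportional to the band
        fused[:, :, k] = x_in[:, :, k] + eps * r_k * i_t
    return fused
```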
Datasets and Experimental Setup

In order to evaluate the effectiveness of the proposed HFWT hyperspectral pansharpening method (referred to as HFWT), experiments were performed on two simulated hyperspectral datasets, a Washington DC scene and a Salinas scene, and one real hyperspectral dataset, the Hyperion dataset. The Salinas scene was collected by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) [48], and the Washington DC dataset was acquired by the Spectral Information Technology Application Center of Virginia. The real dataset is provided by the EO-1 spacecraft, whose Hyperion instrument provides the real LRHS images and whose Advanced Land Imager (ALI) instrument acquires the HRPAN images [48]. Table 1 lists the characteristics of each dataset.

The proposed HFWT method is compared with several state-of-the-art hyperspectral pansharpening methods: Gram-Schmidt (GS) [23], guided filter principal component analysis (GFPCA) [33], coupled nonnegative matrix factorization (CNMF) [18], Bayesian sparsity promoted Gaussian prior (Bayesian) [13], and HySure [14]. Four typical quantitative evaluation indexes are adopted: cross correlation (CC) [49], spectral angle mapper (SAM) [50], root mean squared error (RMSE), and erreur relative globale adimensionnelle de synthèse (ERGAS) [51]. The CC and SAM measure the spatial and spectral distortion, respectively; a larger CC value and a smaller SAM value indicate a better fusion result. The RMSE and ERGAS are global indexes that measure both the spatial and spectral performance; for both, smaller values are better, with 0 being the optimal value.
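Two of these indices are compact enough to sketch directly. The SAM below is the per-pixel spectral angle averaged over the image (reported in degrees), and the RMSE is the global root mean squared error; both assume fused and reference cubes of shape (rows, cols, B).

```python
# Sketch of the SAM and RMSE quality indices for a fused cube `x` vs. reference `ref`.
import numpy as np

def sam_degrees(x, ref, eps=1e-12):
    dot = np.sum(x * ref, axis=2)
    denom = np.linalg.norm(x, axis=2) * np.linalg.norm(ref, axis=2) + eps
    return float(np.degrees(np.arccos(np.clip(dot / denom, -1.0, 1.0))).mean())

def rmse(x, ref):
    return float(np.sqrt(np.mean((x - ref) ** 2)))
```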
In order to perform the objective fusion evaluation on the simulated hyperspectral datasets, the available HS image is used as the reference HS image. The simulated LRHS and HRPAN images are generated according to Wald's protocol [52,53]: the reference HS image is blurred and down-sampled by a factor of 4 to obtain the simulated LRHS image, and the simulated PAN image is obtained by averaging the visible-light bands of the reference HS image. For the real dataset, the real LRHS and HRPAN images are available, but a reference high resolution HS image is not. In order to test the objective quality on the real hyperspectral images, the real LRHS image serves as the reference image: the available real LRHS and HRPAN images are degraded, the two degraded images are fused, and the resulting fused image is compared to the real LRHS image to evaluate the objective quality.

In the proposed HFWT method, we define a tradeoff parameter ε that regulates the amount of injected detail and ensures spatial performance. In practice, the optimal value is determined empirically: by adjusting ε, the optimal value can be determined from the fusion result. In this paper, the values of the tradeoff parameter ε are set to 0.25, 0.05, and 0.2 for the Washington DC, Salinas scene, and Hyperion datasets, respectively.

Validity Discussion of the Open-Closing Denoising Operation

To verify the effectiveness of the open-closing HS image denoising operation, the proposed HFWT method was run on the Washington DC dataset with different denoising preprocessing. The compared denoising algorithms include average filtering, Gaussian filtering, the open operation, the closed operation, and the open-closing operation. Table 2 shows the fusion performance under each denoising scheme. As outlined in Table 2, the HFWT method without HS image denoising had the worst fusion results, and the proposed method with any of the denoising preprocessing steps achieved better fusion results, which demonstrates that HS image denoising preprocessing is meaningful and effective. Among them, the proposed HFWT method with the open-closing operation achieved the best fusion performance, demonstrating that the open-closing operation is an effective HS image denoising preprocessing step.

Experiments on Simulated Hyperspectral Datasets

Figure 5 shows the fusion experimental results for the Washington DC dataset, where Figure 5(a1) shows the reference HS image and the subjective fused HS images of each method are displayed in Figure 5(b1-g1). Moreover, Figure 5(a2) shows the enlarged subareas of the reference HS image, and the two enlarged subareas of each fused image are shown in Figure 5(b2-g2). The reference error image is shown in Figure 5(a3), and the error images between each fused HS image and the reference HS image are reported in Figure 5(b3-g3). Except for the first column, each column in Figure 5 shows the experimental results of one method. Visually comparing the fused HS images with the reference HS image, the fused result of the GS method suffers from serious spectral distortion; for example, the two enlarged subareas of the GS method are severely distorted. The GFPCA approach generates fuzzy spatial details in some regions, such as the enlarged subareas shown in Figure 5(c2), because the spatial information of the GFPCA approach is injected insufficiently. As depicted in Figure 5(d1,d2), the spatial information of the fused image is well enhanced by the CNMF method, but some slight spectral distortion appears on the roofs of the buildings. A closer inspection reveals that the HySure method seems to generate some distortion in the circular building in the upper left corner. By contrast, the fused HS images obtained by the Bayesian and HFWT methods achieve superior performance in terms of both spectral and spatial aspects. In order to further compare the performance of each fusion method, the third row of Figure 5 shows the error images of the different methods; the error image is the absolute difference of pixel values between a fused HS image and the reference HS image. We can see that the GS, GFPCA and CNMF methods have larger differences, the HySure and Bayesian approaches generate relatively smaller differences, and the proposed HFWT approach shows the smallest differences in most areas, which demonstrates the excellent fusion capacity of the proposed method.
Similar to the previous experiments, the fused results for the Salinas scene dataset are shown in Figure 6. Figure 6(a1-a3) show the reference HS image, the enlarged subareas of the reference HS image, and the reference SAM image, respectively. Figure 6(b1-g1) in the first row show the subjective pansharpened results of each algorithm, and Figure 6(b2-g2) in the second row display each enlarged subarea. The SAM images of each approach are shown in Figure 6(b3-g3). The reference SAM image of the enlarged subarea is shown in Figure 6(a4), and the SAM images of the enlarged subarea obtained by each method are shown in Figure 6(b4-g4). The spectral distortion caused by the GS method is very obvious, and the degree of spatial enhancement is also not acceptable, as depicted in Figure 6(b1,b2). Compared with the GS approach, the GFPCA method performs better in terms of spectral quality; however, the fused HS image obtained by the GFPCA approach shows an indistinct area in the left region of Figure 6(c1). Despite having preeminent spatial quality, the CNMF method generates significant spectral distortion in the triangular region in the lower half of Figure 6(d1). From the visual analysis, the HySure, Bayesian and HFWT methods effectively improve spatial performance while maintaining spectral information, and the HFWT method shows better spectral quality than the HySure and Bayesian methods in some regions, such as the upper area of the enlarged subarea. The SAM images and the SAM images of the enlarged subareas of the different approaches are shown in the third and fourth rows of Figure 6 to further verify the fusion performance of the proposed method. It can be seen that the proposed HFWT method yields the lowest SAM values for most regions. These results demonstrate that the proposed HFWT algorithm performs well in both the spatial and spectral aspects.

In addition to visual inspection, the performance of each algorithm on the Washington DC and Salinas scene datasets is analyzed quantitatively in Table 3, where the best results for each quantitative index are marked in bold. As can be seen from Table 3, the objective quantitative results are roughly consistent with the subjective qualitative effects.
Consistent with the subjective results, the GS and GFPCA algorithms produce worse objective performance than the other algorithms. The HySure approach obtains the best RMSE value for the Washington DC dataset and the optimal ERGAS value for the Salinas scene dataset. Most of the quality indexes produced by the proposed HFWT method are the best: the SAM, CC and ERGAS values are the best for Washington DC, and the RMSE, SAM and CC indexes rank first for the Salinas scene.

Figure 7 shows the pansharpened images of each method for the Hyperion dataset, to confirm the fusion performance of the proposed HFWT method on a real dataset. Figure 7a-c show the real HS, real PAN, and interpolated HS images, respectively. The GS method, shown in Figure 7d, generates obvious spectral distortion, especially in the wharf area. In spite of good spatial improvement, the spatial details in the fused images obtained by the GS and HySure approaches are too sharp. The spectral quality of the GFPCA and Bayesian methods seems to be acceptable, but both perform poorly in the spatial aspect. By contrast, the subjective results of the CNMF and HFWT approaches are the best, and the HFWT method yields better spatial quality than the CNMF method. The objective quality evaluation for the Hyperion dataset is presented in Table 4. As reported in Table 4, the HFWT method provides the best quantitative evaluation results in terms of the RMSE, SAM, CC, and ERGAS indices, which indicates that the HFWT method successfully maintains the spectral information of the original LRHS image while improving the spatial resolution.

Computational Complexity Analysis and Time Comparisons

The proposed HFWT algorithm contains simple sequential statements, several non-nested loop statements, and one two-layer nested loop (the program statement of Equation (19), which is applied at each pixel, is a two-layer loop). A simple sequential statement is naturally O(1) time, a non-nested loop statement is O(n) time, and the two-layer loop statement is O(n²).
According to the summation rule of algorithm complexity, the total complexity is O(n² + n + 1) = O(n²). The proposed HFWT algorithm therefore runs in polynomial time and can be considered a fast algorithm. The computing time (in seconds) of each method on the three datasets is shown in Table 5. The experiments in this paper were all performed in MATLAB R2015b on a PC with an Intel Core i5-7300HQ CPU @ 2.50 GHz and 8 GB of memory. The GS and GFPCA methods are very efficient, but their fusion performance is unsatisfactory. The proposed HFWT method is faster than the CNMF algorithm and takes much less computing time than the HySure and Bayesian algorithms; the time cost of the proposed HFWT is acceptable.

Conclusions

This paper presents a novel hyperspectral pansharpening method based on the combination of homomorphic filtering and a weighted tensor matrix. The proposed HFWT algorithm introduces the open-closing morphological operation and homomorphic filtering to remove noise from, and extract the spatial information of, each band of an HS image, respectively. Moreover, we propose a weighted RMSE-based method to obtain the total spatial information of the HS image. In order to generate adequate spatial information from both the HS image and the corresponding PAN image, an optimized weighted tensor matrix-based method is proposed. Specifically, the weighted tensor matrix and its eigenvalues and eigenvectors are deduced and analyzed to obtain the weighted gradient, and an optimization model is presented to acquire the integrated spatial information. Compared with the state-of-the-art methods, experiments performed on the Washington DC, Salinas scene and Hyperion datasets demonstrate that the proposed method performs superiorly in terms of both subjective and objective assessment.
Cancer incidence in the south Asian population of California, 1988-2000

Background: Although South Asians (SA) form a large part of the Asian population of the U.S., very little is known about cancer in this immigrant population. SAs comprise people having origins mainly in India, Pakistan, Bangladesh and Sri Lanka. We calculated age-adjusted incidence and time trends of cancer in the SA population of California (the state with the largest concentration of SAs) between 1988-2000 and compared these rates to rates in native Asian Indians, as well as to those experienced by the Asian/Pacific Islander (API) and non-Hispanic White (NHW) populations of California.

Methods: Age-adjusted incidence rates observed among the SA population of California during the period 1988-2000 were calculated. To correctly identify the ethnicity of cancer cases, 'Nam Pehchan' (British-developed software) was used to identify numerator cases of SA origin from the population-based cancer registry in California (CCR). Denominators were obtained from the U.S. Census Bureau. Incidence rates in SAs were calculated, and a time trend analysis was also performed. Comparison data on the API and NHW populations of California were also obtained from the CCR, and rates from Globocan 2002 were used to determine rates in India.

Results: Between 1988-2000, 5192 cancers were diagnosed in SAs of California. Compared to rates in native Asian Indians, rates of cancer in SAs in California were higher for all sites except oropharyngeal, oesophageal and cervical cancers. Compared to APIs of California, the SA population experienced more cancers of the oesophagus, gall bladder, prostate, breast, ovary and uterus, as well as more lymphomas, leukemias and multiple myelomas. Compared to the NHW population of California, SAs experienced more cancers of the stomach, liver and bile duct, gall bladder and cervix, and more multiple myelomas. Significantly increasing time trends were observed in colon and breast cancer incidence.

Conclusion: The SA population of California experiences unique patterns of cancer incidence, most likely associated with acculturation, screening and tobacco habits. There is a need for early diagnosis of the leading cancers in SAs. If necessary steps are not taken to curb the growth of breast, colon and lung cancer, rates in SAs will soon approximate those of the NHW population of California.

Background

The south Asian (SA) population of the United States was 1,893,723 in the year 2000 [1], and between 1990 and 2000 this population grew by 106%. Persons with origins in India, Pakistan, Bangladesh, and Sri Lanka are classified as SA, and they are now the third largest Asian subgroup in the United States, comprising 16% of all U.S. Asians. Approximately 21% of SAs in the U.S. reside in California, the state with the largest concentration of SAs. From 1990 to 2000, the number of SAs living in California increased from 168,457 to 343,731 (a 104% increase) [2,3]. 90% of SAs are Asian Indian (people with origins in India). In the year 2000 the SA population of California comprised 1.15% of the total state population, and this proportion is increasing. There are no published studies on the incidence of cancer among SAs in the United States, except for one study that reported breast and colon cancer incidence in Asian Indians based on a very small sample size [4]. Another study, by Divan et al., reviews the currently available literature on this issue and emphasizes the need for more studies of cancer incidence and mortality [5].
The reasons for the lack of cancer studies in this population may be multiple, including controversy regarding which communities are included under the title 'South Asian', the relatively recent growth of this community in the US, and the belief that SAs are part of a 'model minority' and therefore have better health status than other minority groups. In previous studies all Asians have been grouped into one category, which may mask important differences in incidence and survival among the various subgroups. Most of the cancer studies in SAs residing outside of south Asia have been done in the UK or Canada [6]. Many cancer studies have been conducted in the SA population of the UK, mainly because SAs form the largest ethnic minority there. Much attention has been focused on breast and lung cancer epidemiology [7][8][9][10][11]. Studies covering multiple cancer sites are few [11,12], although some attention has been given to childhood cancers, mainly because childhood cancers are increasing with time [13][14][15][16]. Initial studies suggested that English SA rates for all sites combined were lower than the non-SA rates but higher than Indian subcontinent rates (especially for lung cancer in males, breast cancer in females, and lymphomas in both sexes). However, a sub-site analysis revealed that English South Asian rates were significantly higher than the non-SA rates for Hodgkin's disease in males; for oral, esophageal and thyroid cancers and leukemias in females; and for cancers of the pharynx, liver and gall bladder in both sexes [12]. Recent studies in the UK indicate that younger SAs, particularly children, are at increased risk of cancer compared with the non-SA population, and that although cancer rates in general have fallen over the last decade, they are increasing among SAs [11]. Studies of cancer in the SA population of Canada pertain primarily to cancer screening, and no studies of cancer incidence have been reported [17,18]. Studies of cancer incidence in immigrant populations can provide valuable insights into etiology, and changes towards the pattern of disease seen in the host country may indicate environmental factors in etiology [19]. Therefore, in this analysis we have calculated age-adjusted rates for cancer in the SA population of California and compared these rates to those of native Asian Indians (people living in India) as well as the Asian/Pacific Islander (Asian/PI) and non-Hispanic White (NHW) populations of California in the same time period. We also conducted a time trend analysis to study the patterns of cancer incidence in this population for the period 1988-2000. Where appropriate, we have also compared these rates to those reported in Great Britain.

Methods

The California Cancer Registry (CCR), a population-based registry, commenced operation in 1988. The methodology of the CCR has been fully described by Morris et al. [20]. The CCR collects information on all cancers except non-melanoma skin cancers and in situ cancers of the uterine cervix. Information on several demographic variables, diagnostic variables (including stage at diagnosis, tumor size, histology and grade of tumor), and first course of treatment is collected for all cases. Cases are routinely coded with regard to anatomic stage of disease using the general summary stage schema for 1988-1993 [20] and SEER extent of disease for 1994-1997 [21]. Race and ethnicity are categorized into four mutually exclusive groups in the CCR database: White, non-Hispanic; Black, non-Hispanic; Hispanic; and Asian/Pacific Islander.
Under the last category there are further breakdowns for several Asian ethnic groups, including the category 'Asian Indian/Pakistani', which includes people of SA origin. Our analysis included cancer cases diagnosed during the period 1988-2000. Incidence rates were calculated for this population for all major sites and several specific cancer types. Due to small numbers for some of the cancer sites, the rates for individual years were grouped into three-year categories to reduce the instability of the rates. In addition, an age-adjusted trend analysis of the rates was completed for the period 1988-2000 to determine the Annual Percentage Change (APC) (using the non-weighted least squares approach), along with p-values for the APCs.

Ethnic Classification

The SA group is heterogeneous, not only in national origin, sub-ethnicity (and therefore heritable features), and religion, but also in specific details of pertinent lifestyle, including alcohol use, tobacco use, and various levels of vegetarianism. Furthermore, individual hospitals, from which most cancer cases are identified by the CCR, do not have the resources to correctly categorize race/ethnicity, so many SA cancer patients may be classified as "Asian, not otherwise specified" by the hospital. Because of this situation, a British-developed software program called 'Nam Pehchan' [22] (literally, 'name identification' in Hindi) was used in this study to address the issue of misclassification of race/ethnicity. This software is a computer program for the identification of names that originate in the Indian subcontinent and Sri Lanka, which collectively we call here "South Asia". It provides a reasonably accurate way of identifying people belonging to "South Asian" and "Other" ethnic groups, and it also identifies the religious and linguistic origins of the names where possible. Both surnames and forenames can be matched against the program's stored lists. Given the possibility that different elements of a name may meet with varying recognition from the lookup table, the final result is not simply "South Asian" or "not South Asian", but rather a numeric code indicating the outcome of the search and match process. Aware of the limitations of this program [23], we used the software, together with birthplace and a visual case-by-case review, to identify approximately 5,200 cancer cases of SA origin from the 106,653 Asian/Pacific Islander cancer cases in the CCR database, 1988-2000. We thereby identified 30% more SA cases than the CCR did (the CCR identified approximately 4000 SA cases in the same time period).
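To make the coded-outcome idea concrete, the sketch below shows the general shape of such name-list matching. It is purely illustrative: the surname list here is hypothetical, and the real Nam Pehchan program uses far richer lookup tables, name-stem rules, and outcome codes that are not reproduced.

```python
# Illustrative name-list matching with a numeric outcome code (not Nam Pehchan's
# actual tables or rules; the stem list below is hypothetical).
SA_NAME_STEMS = {"patel", "singh", "shah", "kumar", "reddy"}  # hypothetical list

def classify_name(surname, forename=""):
    """Return 2 if the surname matches, 1 if only the forename matches, else 0."""
    if surname.strip().lower() in SA_NAME_STEMS:
        return 2
    if forename.strip().lower() in SA_NAME_STEMS:
        return 1
    return 0
```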
Calculation of incidence rates

Numerators, comprising all newly diagnosed cancer cases, were derived by applying the Nam Pehchan software to all cancer cases classified as Asian/Pacific Islander by the CCR, 1988-2000. The numerators were coupled with age-, gender- and year-specific denominator data for the SA population in California (population counts) obtained from the U.S. Census Bureau. Detailed population counts and demographic characteristics for SA subgroups for both the 1990 and 2000 decennial censuses are available from the US Census Bureau [2,24]. Electronic population data by age and sex for all SA subgroups were identified and obtained. Hard-copy population data for the California 1990 SA subgroups were also identified and key-entered, and are available at the cancer registry. Using these census data sets, interpolation between the two decennial censuses and extrapolation back to the years 1988 and 1989 were completed to create the best estimates of the SA subgroups at risk on an age- and sex-specific basis. The interpolation and extrapolation were done assuming linear growth in the SA population subgroups. Finally, the subgroup estimates were combined on an age- and sex-specific basis for each individual year from 1988-2000 to form one SA population group per year. Using these data, age-specific and age-adjusted cancer incidence rates were calculated for the period 1988-2000. We used the 2000 U.S. population (5-year age groups) as the standard population. For purposes of comparison between cancer rates in native Asian Indians (living in India) and SAs in California, we calculated Age Standardized Rates (ASRs) using the world standard for the California SAs and compared them to ASRs in India obtained from Globocan 2002 [25]. Globocan is a publication of the International Agency for Research on Cancer (IARC), and its rates for India cover the period 1993-1997 and eight regional registries. We used rates from India as our comparison parameter because 90% of SAs in the U.S. are of Asian Indian origin. In addition, we calculated incidence rate ratios (IRRs) by taking the ratio of California SA ASRs to Indian ASRs, calculated Confidence Intervals (CIs), and determined significance [26].

Grouped analysis

Rates for the period 1988-2000 were divided into four time periods by grouping the years of diagnosis into four categories, namely 1988-1991, 1992-1994, 1995-1997 and 1998-2000. Incidence rates were calculated for each of these time periods. We also compared these rates to those of the Asian/PI and NHW populations of California for the same time periods.

Time Trend analysis

We performed a time trend analysis for each of the cancer sites, separately for males and females, using the 'age-adjusted trend analysis' feature of SEER*Stat [27]. For this purpose, we used the annual data rather than the categorized grouped data. We calculated the Annual Percentage Change (APC), which identifies the percent change by computing the slope of the best-fitting regression line around the data points (in this case, the rates for each individual year), along with p-values for the APCs.
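For concreteness, the two core computations of this section can be sketched as follows: direct age standardization against a standard population, and the APC from an unweighted least-squares fit. The log-linear form used for the APC is the conventional definition and is an assumption about the authors' exact implementation.

```python
# Direct age standardization and annual percentage change (APC) sketches.
import numpy as np

def age_adjusted_rate(cases, person_years, std_pop):
    """All inputs are arrays over age groups; returns a rate per 100,000."""
    weights = std_pop / std_pop.sum()
    return float(np.sum(weights * cases / person_years) * 1e5)

def apc(years, rates):
    """APC (%) from the slope of an unweighted least-squares fit to log rates."""
    slope = np.polyfit(np.asarray(years, dtype=float), np.log(rates), 1)[0]
    return 100.0 * (np.exp(slope) - 1.0)
```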
Results

In total, 5192 cases of cancer were diagnosed in the SA population of California between 1988-2000, including 2411 males and 2781 females. The median age at diagnosis of cancer was 63 years in males and 54 years in females. A comparison of overall age-adjusted invasive cancer incidence rates for the three ethnic groups revealed that the SA average annual incidence rate was 307.5/100,000, compared to 325.2/100,000 for Asian/PIs and 489.1/100,000 for NHWs (Figure 1). In recent years the overall invasive cancer rates for California SAs have been higher than those of the Asian/PIs of the state. Table 1 summarizes the cancer counts by major cancer site, and Figures 2 and 3 show the top five leading cancers and their trends in SA males and females, respectively. Leading cancers in SA males include prostate, colorectal, urinary system, lung and bronchus, and lymphomas. The leading cancer in SA females is breast cancer, followed by colorectal, uterine, ovarian and cervical cancer. In this section we have categorized cancers into two groups, namely common cancers (cancers common to males and females) and gender-specific cancers (reproductive organ cancers).

Comparison of cancer incidence between California SAs and native Asian Indians

Age standardized rates for California SAs and for India, as well as Incidence Rate Ratios (with statistical significance), are presented in Table 2. An IRR of more than one indicates that California SAs are at higher risk of developing that particular cancer than native Asian Indians. Overall, California SA males and females are at double the risk of developing cancer compared with native Asian Indians.

Common cancers: California SAs were at lower risk of oropharyngeal and esophageal cancers than the native Asian Indian population, in which these cancers occur very commonly. The California SA population was at higher risk for gastrointestinal cancers (namely colorectal, hepatic, and pancreatic cancers). They were also at higher risk for hematopoietic and lymphoreticular malignancies and endocrine malignancies. The SA population of California also experienced higher risk in other organ systems, such as urinary system and brain & CNS cancers.

Gender specific cancers: SA men experienced a 15-fold risk of prostate cancer compared with the native Asian Indian population. California SA females experienced higher risk of all reproductive organ cancers except cervical cancer.

Comparison of incidence rates between SAs and Asian/PIs of the state of California

Incidence rates and time trends between 1988-2000 for California SAs, as well as for the Asian/PI and NHW populations, are presented in Table 3 and Table 4.

Common cancers: In general, the SA population of California experienced more brain & CNS cancers and more hematopoietic and lymphoreticular cancers than the Asian/PI population of the state. SA females also experienced more oropharyngeal, esophageal and gall bladder cancers than the Asian/PI women of California. With regard to other cancer sites, the SA population of California was at equal or lower risk than the Asian/PIs of the state.

Gender-specific cancers: SA males experienced more prostate cancer than Asian/PI males, and SA females experienced more reproductive organ cancers than Asian/PI women, except for cervical cancer.

Comparison of SA rates with NHWs of the state

Common cancers: The SA population of California experienced more gastrointestinal cancers (mainly hepatic, gall bladder and stomach cancers) and more myelomas than the NHW population of the state. SA females experienced more oropharyngeal and esophageal cancers than NHW women. SA males experienced a recent increase in leukemia incidence compared with NHW males.

Gender specific cancers: As far as the reproductive cancers are concerned, the SA population was at lower risk of these cancers than the NHW population of the state, except for cervical cancer.

Time trends

Common cancers: The SA population of California experienced a significantly decreasing trend of oropharyngeal cancers. On the other hand, they experienced increasing trends of hepatic and renal cancers. In addition, SA males experienced an increasing trend of hematopoietic & lymphoreticular cancers (NHL, multiple myelomas, leukemias) and brain & other CNS cancers. SA females experienced increasing trends of gastrointestinal cancers (esophageal, colon, hepatic, and stomach), lung and thyroid cancers.

Gender-specific cancers: As far as the reproductive organs were concerned, SA females experienced increasing trends of breast and uterine cancers. All other sites showed either a decreasing or a steady trend over time.

Discussion

The present study reveals several unique cancer patterns among SAs in California.
Firstly, the median age at diagnosis of cancer in this population is 58 years, compared to 68 years for all other races [28]. Secondly, the most common cancers in the Indian subcontinent are not the most common cancers in SAs of California. The most common cancers among men in India are oral cavity and pharynx, lung, esophagus, laryngeal and stomach cancers [25]. In India, cervical cancer is most common in women, followed by breast, oral cavity, esophagus and ovarian cancer [25]. In India, about half the cases among men and one-fifth of the cases among women occur at cancer sites affected by tobacco use (tobacco smoking as well as tobacco chewing) [29]; this pattern was not seen in SAs of California.

Common cancers (cancers common to both males and females)

Oropharyngeal cancers: Our findings indicate that California SAs are at lower risk of oral and esophageal cancers than native Asian Indians. This directly reflects the general tendency of SA immigrants to avoid tobacco products in a foreign country, especially chewing 'paan' (tobacco rolled up in betel nut leaves) and smoking 'bidi' (unfiltered cigarettes made of tobacco leaves). Besides, the majority of SA immigrants in California tend to be educated and do not have such habits even in South Asia.

Esophagus cancer: Esophageal cancer is increasing in SA females and is higher than in both NHW and Asian/PI females. No such increasing trend is seen in SA males. This finding also seems contradictory to the general decreasing trend of oropharyngeal cancers, as esophageal and oropharyngeal cancers share similar etiologies. The etiology of esophageal cancer is mainly associated with consumption of tobacco (smoked or smokeless) and alcohol; in addition, Barrett's esophagus, diet and nutrition, and reflux disease play important roles [30,31]. There are no published studies on smoking/tobacco/alcohol use prevalence in the SA population of the U.S. Because of the lack of such data, we cannot correlate our findings with smoking prevalence. The rise of esophageal cancer in California SA females warrants further study, including evaluation of the histological subtypes of this cancer.

Stomach cancer: IRRs suggest that California SA females are at higher risk for stomach cancer than native Asian Indian females, but this is not true for males. The time trend analysis suggests that male stomach cancer is decreasing, while female stomach cancer is on the rise. Infection with Helicobacter pylori and genetic predisposition of the host have been suggested to be the most important causes of stomach cancer in the general population [32,33].

Cancers of the liver and intrahepatic bile duct: These cancers occur commonly in Asians. HBV (hepatitis B virus) infection, with and without aflatoxin exposure, and alcoholic liver cirrhosis are responsible for most cases of hepatocellular cancer in developing countries [34]. There is widespread contamination of foods with aflatoxin and a moderately high prevalence of HBV- and hepatitis C (HCV) virus-related chronic liver disease in India [35]. IRRs suggest that the California SA population is at higher risk (more than two-fold) of hepatic cancers than native Asian Indians. Our findings are similar to past UK studies of migrants of Indian ethnicity, as well as of British ethnicity, to the UK [16,36].
Gall bladder cancer: The major causative factors for gall bladder cancer include gallstones and genetic susceptibility; in Asian countries, liver flukes have also been suggested as causative [37]. In one study done in India, the prevalence of gallstones in the adult population was 6.12% (3.07% in males, 9.6% in females) [38]. All of the above-stated factors could explain our finding of much higher rates in the SA population than in the Asian/PI or NHW populations. Similar findings have been reported in studies of SA immigrants to the UK [12,36,39]. Nevertheless, it is encouraging that there is a significantly decreasing trend of this cancer in California SAs.

Colon and rectal cancer: Both SA males and females of California experienced more than four-fold the risk of developing this cancer compared to the native Asian Indian population. Studies in the general population estimate that 13% of this cancer can be attributed to physical inactivity, 12% to eating a Western-style diet, and 8% to having a first-degree relative with colorectal cancer [40]. The diet of Asian Indians in the United States has changed from one featuring low-fat, high-fiber foods to one characterized by higher-fat animal protein, low fiber, and high levels of saturated fat, and there is an increased tendency among Asian Indians in America to consume fast foods and convenience foods [41]. The significantly rising trend of colon cancer seen in SA females, otherwise a low-risk population, may be related to migration and subsequent acculturation and adoption of a Western diet and lifestyle.

Lung and bronchus cancers: Compared to native Asian Indian rates, the SAs of California are at higher risk for this cancer. The five-fold risk in California SA females compared to native Asian Indian females, together with an increasing trend, is noteworthy. The decreasing trend of lung cancer in SA males is not in concordance with a recent study of the UK SA population, which reports a recent increase in the incidence of lung cancer in both SA men and women [7].

Non-Hodgkin's Lymphomas (NHL): IRRs suggest that California SAs are at a much higher risk (3-6 fold) of developing NHL than native Asian Indians. In addition, an increasing trend of NHL has been observed in the SA population of California. While the incidence of NHL has doubled in the U.S., the etiology of lymphomas remains elusive. Epidemiological studies suggest roles for hereditary factors, immunosuppression, infectious agents (HIV, EBV, HTLV, H. pylori, HHV8, HCV), chemical and agricultural exposures, and other factors in the etiology of NHL [42]. Recent studies have also associated menstrual and reproductive factors with the risk of NHL (higher parity and early menarche offer a protective effect) [43,44]. Lack of immune stimulation/challenge (the 'hygiene hypothesis') [45] and acculturation could explain the higher risk seen in this population.

Leukemias: The three-fold higher risk of developing leukemias in California SAs compared to native Asian Indians, and the rising trend of this cancer over time, are similar to results from UK SA studies [14,16,19]. The types of leukemias and their causes vary widely and are age-dependent. Further investigation of this finding, especially age-specific and leukemia-subtype analyses, is needed.

Multiple myelomas: IRRs suggest that the California SA population experiences a much higher risk (four- to five-fold) of developing myelomas than native Asian Indians, as well as higher rates than the Asian/PI or NHW populations of California.
Risk factors for multiple myelomas include monoclonal gammopathy of unknown significance, chronic immune stimulation (as in infections with tuberculosis, malaria, hepatitis, etc.), autoimmune disorders, and occupational exposures [46]. Every year, approximately two million persons in India develop tuberculosis, and the incidence of malaria is 2-3 million cases per year [47,48]. Exposure to these chronic diseases before migration could explain the high rates of myelomas seen in California SAs. The finding of elevated risk of haematopoietic and lymphoreticular malignancies (lymphomas, leukemias and myelomas) in SAs after migration needs further investigation. Similar results have been reported in SA immigrants to the UK [12,16,36].

Thyroid cancer: IRRs indicate that California SA females are four times more likely to get thyroid cancer than Indian females; this is not true in males. The incidence of congenital hypothyroidism and the prevalence of goiter in India are much higher than the worldwide averages [49], and a large fraction of the Indian population suffers from iodine deficiency disorders [50]. The major etiological factors for thyroid cancers have been iodine deficiency and ionizing radiation [51][52][53]. We cannot explain the higher IRR observed in California SA females.

Brain and other nervous system cancers: California SAs experienced higher IRRs of these malignancies compared to native Asian Indians. SA males experience higher rates of these malignancies than the Asian/PIs, and SA females have recently experienced higher rates than both Asian/PIs and NHWs. This finding is not in concordance with other studies of the UK SA population [14,16]. These cancers are infrequent in India and frequent among U.S. Whites, making the SA population a low-risk population [54,55]. In spite of this, the higher IRRs and rates of these cancers observed in SAs need further investigation.

Gender-specific cancers

Prostate cancer: Prostate cancer is the most common cancer in SA males and increased from 1988-2000. California SA males experienced a fifteen-fold increased risk of this cancer compared to Indian males, and their rates are also higher than those of the Asian/PIs of California. Epidemiological studies suggest that endogenous risk factors such as family history, androgens, race, aging and oxidative stress, together with exogenous factors including diet and environmental agents, are associated with this cancer [56]. Other studies suggest that screening for this cancer has dramatically increased the number of men diagnosed with localized disease [57]. The fifteen-fold risk of prostate cancer in this population compared to native Asian Indians could be explained by early detection (measurement of serum PSA) rather than by true differences in underlying risk. Other factors that could explain this difference are lead-time, case identification, detection and reporting biases.

Breast cancer: Breast cancer is the number one cancer in California SA females, who are 3.5 times more likely to develop this cancer than native Asian Indian females. Our time-trend analysis suggests that, although in situ breast cancer diagnoses have significantly increased, invasive breast cancer diagnoses have increased alarmingly more in SAs than in Asian/PIs and NHWs. In the general population, major risk factors include
late maternal age at first parity (>30 years of age), having one child versus four, use of oral contraceptives (OCs), use of hormone replacement therapy (HRT), obesity and alcohol [58][59][60]. Adoption of the above-mentioned lifestyle practices by SA women, together with inadequate screening, could be related to the increase in breast cancer in this population.

Cervical Cancer: Although HPV has been proposed as the first identified necessary cause of cervical cancer [61,62], we attribute the decreasing trend and very low IRRs of cervical cancer in California SA women to screening success. California SA women are screened at very early stages, and hence treated completely, compared to Indian women (cervical cancer ranks number one in India).

Ovarian and uterine cancers: Risk factors for epithelial ovarian cancer include older age, being White, positive family history, nulliparity, infertility, obesity (high saturated fat and carbohydrate intake), postmenopausal HRT, and use of cosmetic talc. Conversely, protective factors include OC use, vegetable consumption, gravidity, lactation, tubal ligation, and hysterectomy. Genetic influence also plays a role, with women carrying mutations in the BRCA1 or BRCA2 genes having an elevated risk [63][64][65]. Rates of this cancer in SA women are higher than in Asian/PIs and almost approximate those of NHWs. The almost two-fold elevated risk of ovarian cancer in California SA women compared to native Asian Indian women can be explained by the adoption of the above-mentioned Western lifestyle factors. Similarly, uterine/endometrial cancer is a disease of the developed world. Epidemiological studies have shown that the majority of its incidence can be attributed to excess body weight (in turn due to 'unopposed estrogens'), lack of physical activity, exogenous hormones and chronic hyperinsulinemia, along with genetic predisposition [66,67]. California SA women face a five-fold risk of this cancer compared to native Asian Indians. They show much higher rates than the Asian/PIs, and their rates seem to be fast approaching those of the NHWs of the state. Clearly, acculturation can explain these findings.

Limitations

Certain limitations in the methods employed in this study deserve comment. The assumption of linear population growth may not be completely tenable, as various factors such as birth/death rates and immigration/migration could affect patterns of population growth. In incidence studies of sub-ethnic populations, the issue of small numbers of cases is inevitable; this could create instability of rates, especially in analyses of trends over time. To overcome this, the years were grouped and a grouped analysis was performed.

Conclusion

Our findings are in general agreement with studies completed in the UK and suggest a strong role for acculturation, screening and lifestyle factors in explaining the patterns of cancer in SAs in California. Minor disagreements with findings of UK studies are to be expected, as there are minor underlying differences in methodology; for example, some studies have used absolute numbers or a proportionate approach for comparison. Most studies, however, have reported incidence rates based on data available from the cancer registries and census bureaus or corresponding organizations in the UK (with whom we have compared our data). More studies are needed to evaluate gender differences in this population; in particular, the rising trend of gastrointestinal cancers seen in SA females versus males needs more investigation.
Our study also reveals the need for additional screening measures and early diagnosis in this population. Our overall impression is that, if measures are not taken to improve screening and curb smoking in this population, and if current conditions prevail, the rates of colon, lung, and breast cancer in the SA population will approximate those of California NHWs. We have presented a general picture of cancer in the SA population in this paper; it is beyond the scope of this paper to discuss subtypes of each cancer. Hence we conclude that more studies are needed on this issue and that subtype analyses of cancer sites need to be conducted.

Authors' contributions

RVJ conceptualized and designed the study, carried out the data analysis, and prepared the manuscript. PKM was responsible for the study design, acquisition of funding and data, and interpretation of the data. APP helped prepare the manuscript and gave technical advice.
Bromopyrrole Alkaloids from the Sponge Agelas kosrae

Two new sceptrin derivatives (1, 2) and eight structurally related known bromopyrrole-bearing alkaloids were isolated from the tropical sponge Agelas kosrae. By a combination of spectroscopic methods, the new compounds, designated dioxysceptrin (1) and ageleste C (2), were determined to be structural analogs of each other that differ at the imidazole moiety. Dioxysceptrin was also found to exist as a mixture of α-amido epimers. The sceptrin alkaloids exhibited weak cytotoxicity against cancer cells. Compounds 1 and 2 also moderately exhibited anti-angiogenic and isocitrate lyase-inhibitory activities, respectively.

During the course of our search for bioactive metabolites from tropical sponges, we encountered the purple elongated sponge Agelas kosrae from Kosrae Island, the Federated States of Micronesia, whose organic extract exhibited moderate cytotoxicity (IC50 279 µg/mL). Application of diverse chromatographic methods led to the isolation of 10 bromopyrrole-bearing alkaloids of the sceptrin and related structural classes, including two new compounds. We report here the structural determination of dioxysceptrin (1) and ageleste C (2) by combinations of spectroscopic analyses (Figure 1). These sceptrin alkaloids exhibited weak cytotoxicity against six cancer cell lines (K562, A549, HCT116, MDA-MB-231, SNU628, SK-Hep-1). In addition, compounds 1 and 2 moderately exhibited anti-angiogenic and isocitrate lyase (ICL)-inhibitory activities, respectively.

Results and Discussion

The molecular formula of dioxysceptrin (1) was deduced as C22H24Br2N10O4 by HRFABMS analysis (m/z [M + H]+ 651.0432, calcd. 651.0427), aided by isotopic clusters in both positive (m/z 651.0/653.0/655.0) and negative ion modes (m/z 648.9/650.9/652.9) with intensities in a 1:2:1 ratio, indicating a dibrominated compound (Figure S13). However, an interesting phenomenon was found in the NMR spectra of this compound: two sets of highly disproportionate signals existed in both the initial 1H and 13C NMR spectra. During storage, the ratio between the intensities of these sets of peaks gradually reached equilibrium (from 6:1 to 1:1 according to the 1H NMR spectrum). Since several attempts to separate these compounds under various HPLC conditions were not successful, 1 was thought to be a mixture of either epimers or conformational isomers (1a and 1b), and their structures were determined from the mixture.
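The 1:2:1 isotope cluster noted above follows directly from the near-equal natural abundances of the two bromine isotopes, and the arithmetic is easy to check; the abundances used below are the standard values for 79Br and 81Br.

```python
# Why two bromines give a ~1:2:1 M/M+2/M+4 cluster: binomial combination of
# the two Br isotopes (79Br ~50.69%, 81Br ~49.31%).
p79, p81 = 0.5069, 0.4931
cluster = [p79**2, 2 * p79 * p81, p81**2]           # M, M+2, M+4 intensities
print([round(c / cluster[0], 2) for c in cluster])  # -> [1.0, 1.95, 0.95], i.e. ~1:2:1
```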
In the 13 C NMR spectrum of 1a, three carbons at δ C 174.3, 160.2 and 158.7 were thought to be amide carbonyl and/or guanidine carbons (Table 1). This interpretation was supported by the IR absorption bands at 1680 and 1635 cm −1 . Four additional carbons at δ C 126.6 (C), 121.3 (CH), 111.8 (CH), and 95.1 (C) in conjunction with the protons at δ H 6.95 (1H, br s) and 6.84 (1H, br s) in the 1 H NMR data were indicative of a substituted pyrrole moiety. The remaining carbons were the protonated ones in the more shielded region: δ C 60.3 (CH), 41.9 (CH 2 ), 38.2 (CH) and 37.1 (CH). A very similar set of carbon and proton signals was also found for 1b. Given this information, the planar structure of 1a was determined by a combination of 2D NMR experiments. First, all of the carbons were matched to their attached protons by an HSQC experiment. Then, a direct connection was found between an aromatic proton (H-2) and an NH proton (NH-1) at δ H 6.95 and 11.73, respectively, by a COSY experiment. The HMBC correlations of these protons and an additional proton at δ H 6.84 (H-4) with the neighboring carbons readily identified a 2,4-disubstituted pyrrole moiety (1-NH-C-5) (Figure 2). The significant shielding of C-3 at δ C 95.1 confirmed the attachment of bromine at this position. Similarly, although it was not directly found from the HMBC data, the shift of C-5 at δ C 126.6 revealed the presence of a carbon substituent, possibly a carbonyl carbon, at this position. The COSY data revealed a long proton spin system of alkyl protons with NH groups at both termini (7-NH-12-NH), and this assignment was supported by several HMBC correlations among the carbons and protons in this moiety. An amide linkage was found between this group and the previously identified bromopyrrole by an HMBC correlation between 6-NH and C-6 (δ C 126.6). Similarly, a guanidine carbon and a carbonyl carbon were placed at C-13 and C-15, respectively, at the other terminus by a series of HMBC correlations: H-10/C-15, H-11/C-13 and C-15, and 12-NH/C-13 and C-15. Although it was not directly found from the 2D NMR data, the characteristic chemical shifts of the carbons and protons of C-11-C-13 and C-15, as well as an isolated proton signal at δ H 9.16 (2H, br s), were indicative of an aminoimidazolinone moiety (Figure 2). Thus, 1a was found to possess a C 11 bromopyrrole-aminoimidazolinone moiety. The formula identified for 1a based on its NMR spectra accounted for C 11 H 12 BrN 5 O 2 , exactly half of the molecular formula. Furthermore, the methine groups at C-9 and C-10 required the attachment of additional groups at these positions. Overall, the dimerization of the bromopyrrole-imidazoline moiety through a cyclobutane group at C-9 and C-10 could easily account for the substituents missing from these positions. Thus, the planar structure of 1a was identified as a dimeric sceptrin-type alkaloid. By utilizing the same NMR experiments, the planar structure of the other constituent, 1b, was confirmed to be the same as 1a (Table 1). A literature survey showed that oxysceptrin from the sponge Agelas conifera had the same kind of oxidation pattern as was seen in one of the imidazoles of sceptrin [14].
The nature of 1a and 1b, as well as the configurations at the cyclobutane and aminoimidazolinone stereocenters, were determined by 1D selective gradient ROESY experiments. First, conformers and diastereomers could be distinguished by NOE irradiation of paired protons [15]. For these compounds, the irradiations of 7-NH (δ H 8.04) and H-11 (δ H 4.45) of 1a increased the signal intensities of only the protons in this compound, while those in 1b were unaffected. The same phenomenon was also observed for 1b; the irradiations of 7-NH (δ H 8.24) and H-11 (δ H 4.28) only changed the intensities of the signals of the protons in this compound (Figure S14). In addition, variable-temperature NMR experiments showed that the relative intensities of the key protons of 1a and 1b remained constant (Figure S15). Alternatively, the possibility of 1 as a mixture of carbonyl-enol tautomers was eradicated by the 1 H NMR spectrum in MeOH-d 4 in which signals of both H-11 and H-11′ were clearly observed (Figure S15). Thus, 1a and 1b must be epimers at either the cyclobutane or α-amide positions. The relative configuration of the cyclobutane was assigned by ROESY experiments. The NOE cross-peaks of H 2 -8/H-10 and H-9/H-11 (also H 2 -8′/H-10′ and H-9′/H-11′) assigned the 9S*, 10R*, 9′S*, and 10′R* configurations for 1a. The same ROESY cross-peaks for 1b clearly indicated the bis-epimerization at the α-amido C-11 and C-11′ positions between these molecules. However, due to the absence of reliable ROESY correlations, the configurations at these positions remained unassigned. Both the epimerization and the unassigned configuration at the α-amide positions were consistent with what has been reported for oxysceptrin [14].
The absolute configurations of the cyclobutane core were assigned by ECD calculations. Since 1a and 1b, bis-epimers at the C-11 and C-11′ stereocenters, existed as a mixture (1:1) in 1, both the experimental and calculated ECD data were determined in their dimeric form. The comparison of their ECD profiles clearly assigned the 9S, 10R, 9′S, and 10′R absolute configurations, which are consistent with known sceptrins (Figure 3) [9,13]. Thus, the structure of 1, designated dioxysceptrin, was determined to be a mixture of 11,11′-dioxo derivatives of sceptrin alkaloids.
The molecular formula of ageleste C (2) was established to be C 18 H 18 Br 2 N 4 O 6 (m/z [M + H] + 544.9677, calcd. 544.9671) by HRFABMS analysis. The NMR data of this compound showed signals of nine carbons with attached protons, indicating a dimeric nature (Table 1). Comparison of the 1 H and 13 C NMR data with those of 1 revealed that the signals of the imidazoline moiety had been replaced with those of a carboxylic group at δ C 174.0 (C-9), while those of the bromopyrrole and cyclobutane were intact (Table 1). This interpretation was confirmed by a combination of 2D NMR analyses in which the oxidative cleavage of the two imidazole moieties to carboxylic acids was clearly observed (Figure 2). Further supporting evidence was provided by comparing the spectroscopic data with those of congeners 3 and 4, in which the protons and carbons showed virtually identical chemical shifts. After the assignment of the relative configurations by ROESY experiments (Figure 2), the absolute configurations of the cyclobutane moiety were defined to be the same as those of their congeners by a comparison of their CD data (Figure S16).
In addition to 1 and 2, eight known structurally-related bromopyrrole-bearing compounds were isolated and identified by combinations of spectroscopic methods. These compounds were ageleste A (3) [16], ageleste B (4) [16], nakamuric acid (5) [16,17], nakamuric acid methyl ester (6), …
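The calculated curves used in such comparisons are produced by broadening the computed transitions into band shapes (see the ECD Calculations section below). A minimal sketch of that convolution, assuming rotatory strengths are supplied in 10⁻⁴⁰ cgs units and using the Gaussian band shape given there, might look like this; the numeric inputs are toy values, not data from the paper:

```python
import numpy as np

def ecd_spectrum(excitation_energies, rotatory_strengths, sigma=0.10, n_grid=500):
    """Broaden computed transitions (Delta E_i in eV, R_i in 1e-40 cgs) into a
    smooth ECD curve using Gaussians with 1/e half-width sigma (in eV)."""
    E = np.linspace(min(excitation_energies) - 1.0, max(excitation_energies) + 1.0, n_grid)
    d_eps = np.zeros_like(E)
    for dE_i, R_i in zip(excitation_energies, rotatory_strengths):
        d_eps += dE_i * (R_i * 1e-40) * np.exp(-(((E - dE_i) / sigma) ** 2))
    d_eps /= 2.297e-39 * np.sqrt(np.pi) * sigma  # standard cgs prefactor
    return E, d_eps

# Toy usage with two hypothetical transitions:
energies, curve = ecd_spectrum([4.2, 5.6], [12.0, -8.5])
```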
Sceptrins and structurally-related alkaloids are known to exhibit a broad range of bioactivities, such as anticancer, antibacterial, antifungal, anti-inflammatory, and anti-biofilm activities [1,3,4]. In our measurement of cytotoxicity, sceptrins were incubated with cancer cells for 72 h to assess the anti-proliferative activity. Compound 1 exhibited the most potent anti-proliferative effects against six cancer cell lines (Table 2). For anti-angiogenic activity, compounds 1-4 showed no cytotoxicity in HUVEC cells when treated at up to 40 µM for 24 h. Subsequently, in the tube formation assay using the non-cytotoxic concentration range (5-20 µM), only 1 exhibited moderate anti-angiogenic activity comparable to sunitinib, a positive control (Figure 4a-c). In the antimicrobial bioassays, all the compounds were inactive (MIC > 128 µM) against a variety of human pathogenic bacterial and fungal strains. In contrast, in a subsequent bioassay, compound 2 displayed moderate inhibition of Candida albicans-derived isocitrate lyase (ICL), a key enzyme in microbial metabolism.
(Figure 4a-c: tube formation and cell viability of HUVEC cells in the presence of VEGF for 24 h, measured by MTT and compared to the control. Data are presented as the mean fold changes ± SD of three independent experiments; * p < 0.05, ** p < 0.01, *** p < 0.005 by t-test.)
General Experimental Procedures
Optical rotations were measured using a JASCO P-1020 polarimeter (Easton, MD, USA) with a 1 cm cell. CD spectra were obtained using an Applied Photophysics Chirascan Plus spectrometer (Applied Photophysics Ltd., Leatherhead, Surrey, UK). UV spectra were acquired using a Hitachi U-3010 spectrophotometer (Tokyo, Japan). IR spectra were recorded on a JASCO 4200 FT-IR spectrometer (Easton, MD, USA) using a ZnSe cell. NMR spectra were recorded in DMSO-d 6 , with the solvent peaks (δ H 2.50/δ C 39.50) as internal standards, on a Bruker Avance 600 MHz spectrometer (Billerica, MA, USA).
High-resolution FABMS spectrometric data were obtained at the National Center for Inter-university Research Facilities (NCIRF), Seoul National University, and acquired using a JEOL JMS 700 mass spectrometer with 6 keV energy, emission current 5.0 mA, xenon as the inert gas, and meta-nitrobenzyl alcohol (NBA) as the matrix. HPLC separations were performed on a SpectraSYSTEM p2000 equipped with a refractive index detector (SpectraSYSTEM RI-150 (Waltham, MA, USA)) and a UV-Vis detector (Gilson UV-Vis-151 (Middleton, WI, USA)). All solvents used were of spectroscopic grade or were distilled prior to use.
Animal Material
Specimens of the Agelas kosrae sponge (Demospongiae: Agelasida: Agelasidae) were collected by hand using SCUBA offshore of Kosrae Island in the Federated States of Micronesia at a depth of 15 m on 23 October 2013. The sponge had an elongated repent form with several branches and had dimensions of 6 cm wide and up to 20 cm long. The texture was firm and compressible, and the color was purple on the surface and beige in the choanosome. The skeleton was composed of spicule-cored primary fibres, echinated secondary fibres, and very rare echinated tertiary fibres with diameters of 100-200, 30-60, and 10-20 µm, respectively. The spicules, acanthostyles (110-140 × 6-8 µm) and acanthoxeas (150-170 × 6-8 µm), were identical to those in the literature [24]. A voucher specimen (registry No. spo. 80) was deposited at the Natural History Museum, Hannam University, Korea, under the curatorship of C.J.S.
ECD Calculations
All conformational searches were performed using Macromodel (Version 9.9, Schrodinger LLC (New York, NY, USA)) software with "Mixed torsional/Low Mode sampling" in the MMFF force field. The searches were conducted in the gas phase with a 50 kJ/mol energy window limit and a maximum of 10,000 steps to thoroughly examine all low-energy conformers. The Polak-Ribiere conjugate gradient (PRCG) method was utilized for minimization processes with 10,000 maximum iterations and a 0.001 kJ (mol Å) −1 convergence threshold on the RMS gradient. Conformers within 10 kJ/mol of each global minimum for compounds 1a and 1b were used for gauge-independent atomic orbital (GIAO) shielding constant calculations without geometry optimization employing TmoleX Version 4.2.1 (COSMOlogic GmbH & Co. KG (Leverkusen, Germany)) at the B3LYP/6-31G(d,p) level in the gas phase. The ECD spectra were simulated by overlapping a Gaussian band for each transition:
Δε(E) = 1/(2.297 × 10^−39) × 1/(√π σ) × Σ_i ΔE_i R_i exp[−((E − ΔE_i)/σ)^2],
where σ is the width of the band at 1/e height, and ΔE_i and R_i are the excitation energies and rotatory strengths, respectively, for transition i. In the current work, the value of σ was 0.10 eV.
Anti-Proliferative Activity Assay
Anti-proliferative activity was evaluated using the SRB staining assay in various cancer cell lines (K562, A549, HCT116, MDA-MB-231, SNU638, SK-Hep-1). Cells were purchased from the American Type Culture Collection (ATCC, Rockville, MD, USA). They were cultured in media supplemented with 10% fetal bovine serum (FBS) and antibiotics-antimycotics (PSF; 100 units/mL penicillin G sodium, 100 ng/mL streptomycin, and 250 ng/mL amphotericin B). All cells were maintained at 37 °C under a humidified atmosphere containing 5% CO 2 . Briefly, cells were seeded in 96-well plates with various doses of compounds and incubated for 72 h. The cells were stained as previously described [25]. First, cells were fixed with 10% trichloroacetic acid and stained with 0.4% SRB in a 1% acetic acid solution.
After washing and drying, the dyes were dissolved in 10 mM Tris buffer (pH 10.0) and absorbance was measured at 515 nm. The percentage of cell proliferation was determined according to the following formula: cell proliferation (%) = 100 × ((A treated − A zero day)/(A control − A zero day)), where A is the average absorbance. The IC 50 values were calculated through non-linear regression analysis using TableCurve 2D v5.01 (Systat Software Inc., San Jose, CA, USA).
Anti-Angiogenic Activity Assay
Anti-angiogenic activity was evaluated using human umbilical vein endothelial cells (HUVEC) according to previously described experimental methods [26]. HUVEC cells were purchased from the American Type Culture Collection (ATCC, Rockville, MD, USA) and cultured in EGM-2 (Lonza, Walkersville, MD, USA) supplemented with 10% fetal bovine serum (FBS) and antibiotics-antimycotics (PSF; 100 units/mL penicillin G sodium, 100 ng/mL streptomycin, and 250 ng/mL amphotericin B). Briefly, HUVEC cells were mixed with the tested compounds in 0.5% FBS EBM-2 media stimulated with or without VEGF (50 ng/mL) on matrigel-coated 96-well plates for 6 h at 37 °C under a humidified atmosphere containing 5% CO 2 . After the tube formation, cells were photographed using an inverted microscope (Olympus Optical Co. Ltd., Tokyo, Japan), and the images were quantified with Angiogenesis Analyzer using ImageJ software. Tube formation activity was calculated using the following formula: (Total segment # (tested compound) − Total segment # (VEGF−))/(Total segment # (VEGF+) − Total segment # (VEGF−)) × 100. The IC 50 value was calculated through non-linear regression analysis using TableCurve 2D v5.01 (Systat Software Inc., San Jose, CA, USA). Cell viability of HUVEC cells was measured independently. First, HUVEC cells were seeded into a 96-well plate and the culture medium was replaced with a serum-free medium when the cells reached 60% confluency. After overnight starvation, cells were treated with samples and VEGF (50 ng/mL) in 2% FBS EBM-2 medium. Cells were further incubated for 24 h, and the MTT assay was used to measure cell viability. The formazan products were dissolved in dimethyl sulfoxide (DMSO). The absorbance was measured at 570 nm using a VersaMax ELISA microplate reader (Molecular Devices, Sunnyvale, CA, USA).
Isocitrate Lyase (ICL) Activity Assay
A 1 mL aliquot of the reaction mixture contained 20 mM sodium phosphate buffer (pH 7.0), 1.27 mM threo-DL-(+)-isocitrate, 3.75 mM MgCl 2 , 4.1 mM phenylhydrazine, and 2.5 µg/mL of recombinant ICL. The reaction was immediately initiated following the addition of the substrate, with or without a prescribed concentration of the inhibitor dissolved in DMSO (final concentration, 1%). Glyoxylate phenylhydrazone formation was spectrophotometrically assessed at 324 nm after incubation at 37 °C for 30 min. The percent inhibition of ICL enzyme activity for each compound was calculated relative to the inhibitor-free control, and the IC 50 values were calculated using nonlinear regression analysis (percent inhibition versus concentration). 3-Nitropropionic acid was used as a positive control. Protein concentrations were measured using the Bradford method with the Bio-Rad protein assay kit (Bio-Rad) and bovine serum albumin as the standard.
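Where IC 50 values are "calculated through non-linear regression analysis", a typical choice is a four-parameter logistic fit to the dose-response points. The paper used TableCurve 2D; the SciPy version below is our illustrative substitute, with hypothetical data:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical data: percent cell proliferation at each tested concentration (µM).
conc = np.array([0.5, 1, 5, 10, 50, 100])
resp = np.array([98.0, 95.0, 80.0, 55.0, 20.0, 8.0])

params, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 100.0, 10.0, 1.0])
print(f"IC50 ≈ {params[2]:.1f} µM")
```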
Antibacterial Activity Assay
Gram-positive bacteria (Staphylococcus aureus ATCC 25923, Enterococcus faecalis ATCC 19433 and Enterococcus faecium ATCC 19434) and Gram-negative bacteria (Klebsiella pneumoniae ATCC 10031, Salmonella enterica ATCC 14028 and Escherichia coli ATCC 25922) were used for the antibacterial activity tests. Bacteria were grown overnight in Mueller Hinton (MH) broth at 37 °C, harvested by centrifugation, and washed twice with sterile distilled water. Stock solutions of the compound were prepared in DMSO. Each stock solution was diluted with MH broth to give serial two-fold dilutions in the range of 128 to 0.06 µg/mL. The final DMSO concentration was maintained at 1% by adding DMSO to the MH broth. Aliquots (10 µL) of the broth containing approximately 5 × 10 5 colony-forming units (cfu)/mL of the bacteria were added to each well of a 96-well plate. The plates were incubated for 24 h at 37 °C. The minimum inhibitory concentration (MIC) values were determined as the lowest concentration of the test compound that inhibited bacterial growth. Ampicillin and tetracycline were used as reference compounds.
Antifungal Activity Assay
Potato dextrose agar (PDA) was used to cultivate Candida albicans ATCC 10231. After incubation for 48 h at 28 °C, yeast cells were harvested by centrifugation and washed twice with sterile distilled water. Aspergillus fumigatus HIC 6094, Trichophyton rubrum NBRC 9185 and Trichophyton mentagrophytes IFM 40996 were plated on PDA and incubated for 2 weeks at 28 °C. Spores were harvested and washed twice with sterile distilled water. Stock solutions of the compound were prepared in DMSO. Each stock solution was diluted with RPMI 1640 broth (Difco) to give serial two-fold dilutions in the range of 128 to 0.06 µg/mL. The final DMSO concentration was maintained at 1% by adding DMSO to the broth. Aliquots (10 µL) of the RPMI 1640 broth containing approximately 10 4 cells/mL were mixed with the test compound solutions in each well of a 96-well plate. The plates were incubated for 24 h (for C. albicans), 48 h (for A. fumigatus) and 96 h (for T. rubrum and T. mentagrophytes) at 37 °C. A culture with DMSO (1%) was used as a solvent control, and a culture supplemented with amphotericin B was used as a positive control.
Disk Diffusion Assay
Gram-positive bacteria (S. aureus ATCC 25923 and Bacillus subtilis ATCC 6633) were used for the disk diffusion assay. Bacteria were grown overnight in Mueller Hinton (MH) broth at 37 °C, harvested by centrifugation, and washed twice with sterile distilled water. Stock solutions of the compound were prepared in DMSO. Bacterial cells (5 × 10 5 colony-forming units (cfu)/mL) were inoculated and 128 µg/disk of the compound was added to each agar plate. The plates were incubated for 24 h at 37 °C. Ampicillin was used as a reference compound.
Conclusions
Two new sceptrin derivatives (1, 2) and eight structurally-related known bromopyrrole-bearing alkaloids were isolated from the tropical sponge Agelas kosrae. The structures of compounds 1 and 2 were elucidated by combined spectroscopic methods. Dioxysceptrin (1) was also found to exist as a mixture of α-amido epimers. Absolute configurations were determined by comparison of the experimental and calculated ECD data. The sceptrin alkaloids exhibited weak cytotoxicity against cancer cell lines. Compounds 1 and 2 also moderately exhibited anti-angiogenic and isocitrate lyase-inhibitory activities, respectively.
Hardness of Liar's Domination on Unit Disk Graphs
A unit disk graph is the intersection graph of a set of unit diameter disks in the plane. In this paper we consider the liar's domination problem on unit disk graphs, a variant of the dominating set problem. We call this problem the {\it Euclidean liar's domination problem}. In the Euclidean liar's domination problem, a set ${\cal P}=\{p_1,p_2,\ldots,p_n\}$ of $n$ points (disk centers) are given in the Euclidean plane. For $p \in {\cal P}$, $N[p]$ is a subset of ${\cal P}$ such that for any $q \in N[p]$, the Euclidean distance between $p$ and $q$ is less than or equal to 1, i.e., the corresponding unit diameter disks intersect. The objective of the Euclidean liar's domination problem is to find a subset $D\; (\subseteq {\cal P})$ of minimum size having the following properties: (i) $|N[p_i] \cap D| \geq 2$ for $1 \leq i \leq n$, and (ii) $|(N[p_i] \cup N[p_j]) \cap D| \geq 3$ for $i\neq j, 1\leq i,j \leq n$. This article aims to prove the Euclidean liar's domination problem is NP-complete.
Our work
A unit disk graph (UDG) is an intersection graph of a family of unit diameter disks in the plane. Given a set C = {C 1 , C 2 , . . . , C n } of n circular disks in the plane, each having diameter 1, the corresponding UDG G = (V, E) is defined as follows: each vertex v i ∈ V corresponds to a disk C i ∈ C, and there is an edge between two vertices v i and v j if and only if C i and C j intersect. In this paper we consider the geometric version of the liar's domination problem, which we call the Euclidean liar's domination problem. In the Euclidean liar's domination problem we are given a UDG and a set P of n disk centers of the given UDG in the plane. For p ∈ P, N [p] is a subset of P such that for any q ∈ N [p], the Euclidean distance between p and q is less than or equal to 1. We define ∆ = max{|N [p]| : p ∈ P}. The objective of the Euclidean liar's domination problem is to find a minimum size subset D of P such that (i) for every point in P there exist at least two points of D within distance one, and (ii) for every distinct pair of points p i and p j in P, |(N [p i ] ∪ N [p j ]) ∩ D| ≥ 3; in other words, the number of points in D that are within unit distance of points in the closed neighborhood union of p i and p j is at least three.
Complexity
In this section we show that the Euclidean liar's domination problem is NP-complete for UDGs. The decision version of the liar's dominating set problem on a UDG can be defined as follows.
UDG LIAR'S DOMINATING SET (UDG-LR-DOM)
Instance: A unit disk graph G = (V, E) and a positive integer k.
Question: Does there exist a liar's dominating set L of G such that |L| ≤ k?
We prove the NP-completeness of UDG-LR-DOM by reducing the dominating set problem on planar graphs with maximum degree 3 to it, which is known to be NP-complete [2]. The decision version of the dominating set problem on a planar graph with maximum degree 3 can be defined as follows.
PLANAR DOMINATING SET (PLA-DOM)
Instance: A planar graph G = (V, E) with maximum degree 3 and a positive integer k.
Question: Does there exist a dominating set D of G such that |D| ≤ k?
Lemma 1. A planar graph G = (V, E) with maximum degree 4 can be embedded in the plane using O(|V |) area in such a way that its vertices are at integer co-ordinates and its edges are drawn so that they are made up of line segments of the form x = i or y = j, for integers i and j. Algorithms to produce such embeddings are discussed in [3,4].
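To make the two conditions concrete, here is a small brute-force checker (our illustration, not part of the paper) that builds the unit-distance closed neighborhoods and tests whether a candidate set D is a Euclidean liar's dominating set:

```python
from itertools import combinations
from math import dist

def closed_neighborhood(p, points):
    """N[p]: all points within Euclidean distance 1 of p (including p itself)."""
    return {q for q in points if dist(p, q) <= 1.0}

def is_liars_dominating_set(D, points):
    N = {p: closed_neighborhood(p, points) for p in points}
    # Condition (i): every point has at least two dominators in D.
    if any(len(N[p] & D) < 2 for p in points):
        return False
    # Condition (ii): every distinct pair is covered by at least three points of D.
    return all(len((N[p] | N[q]) & D) >= 3 for p, q in combinations(points, 2))

# Toy instance: four points on a short segment; taking D = all points works.
pts = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0), (1.5, 0.0)]
print(is_liars_dominating_set(set(pts), pts))  # True
```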
Many standard graph theoretic problems on UDGs are shown to be NP-complete with the aid of Lemma 1 [1].
Lemma 2. Let G = (V, E) be a planar graph with maximum degree 3 and |E| > 2. G can be embedded in the plane such that its vertices are at (4i, 4j) and its edges are drawn as sequences of consecutive line segments on the lines x = 4i or y = 4j, for some integers i and j.
In summary, we can draw a planar graph G = (V, E) of maximum degree 3 on a grid in the plane, where each grid cell is of size 4 × 4, such that:
1. Each vertex v i of G is represented by a point p i in the plane.
2. The co-ordinates of each point p i (corresponding to a vertex v i ) are (4i, 4j) for some integers i and j (see Figure 2).
3. An edge between two points is represented as a sequence of consecutive line segments drawn on the lines x = 4i or y = 4j for some integers i or j (these consecutive line segments may bend at some positions of the form (4i′, 4j′)).
4. No two lines representing edges of G intersect each other, i.e., two sets of consecutive line segments corresponding to two distinct edges of G cannot have a common point unless the edges are incident at a common vertex in G.
Lemma 3. Given such an embedding of a planar graph G = (V, E) with maximum degree 3, a UDG G′ = (V′, E′) can be constructed from the embedding in polynomial time.
Proof. Let us first embed the graph G in the plane and divide the set of line segments in the embedding into two categories, namely, proper and improper. We call a line segment proper if none of its end points corresponds to a vertex in G. For each edge (p i , p j ) of length 4 units we add four points: two points at distances 1 and 1.5 units from p i , and two points at distances 1 and 1.5 units from p j (see edge (p 4 , p 6 ) in Figure 2(a)). For each edge of length greater than 4 units, we add the following points: for an improper line segment, four points at distances 1 and 1.5 units from its end points. Next, we draw a line segment of length 1.4 units at each point p i (see Figure 2(a)) that corresponds to a vertex v i in G, without coinciding with the line segments that had already been drawn before. Observe that adding this line segment on the lines x = 4i or y = 4j is possible without losing planarity, as the maximum degree of G is 3. Now, add three points (say x i , y i , and z i ) at distances 0.2, 1.2, and 1.4 units respectively from p i . For convenience, we call the added points that (i) correspond to vertices of G node points, (ii) lie on the line segments of length greater than or equal to 4 joint points, and (iii) lie on the line segments of length 1.4 support points. Let us denote these three sets of points by N , J, and S respectively. In Figure 2(a) these sets of points are represented as solid circles, solid squares, and circles, respectively. Let N = {p 1 , p 2 , . . . , p n }, J = {q 1 , q 2 , . . . , q m }, and S = {x i , y i , z i | 1 ≤ i ≤ n}. After defining the above sets, remove all the line segments. Now we construct a UDG G′ = (V′, E′), where V′ = N ∪ J ∪ S and there is an edge between two points in V′ if and only if the Euclidean distance between the points is at most 1 (see Figure 2(b)). Observe that |N | = n, |J| = 4l (= m), where l is the total length of the segments having length greater than or equal to 4, and |S| = 3n. Hence |V′| = 4(n + l), and l is bounded by a polynomial of n. Therefore G′ can be constructed in polynomial time.
Theorem 1. UDG-LR-DOM is NP-complete.
Proof. For any given subset L of V′ and a positive integer k′, it is easy to verify whether the subset L is a liar's dominating set of size at most k′. Hence UDG-LR-DOM belongs to the class NP. We prove the hardness of UDG-LR-DOM by reducing PLA-DOM to it. Let an instance G = (V, E) of PLA-DOM be given. Construct an instance of UDG-LR-DOM, a UDG G′ = (V′, E′), as discussed in Lemma 3.
We now prove the following claim: G has a dominating set of size at most k if and only if G′ has a liar's dominating set of size at most k′ = k + 4l + 3n.
Necessity: Let D ⊆ V be the given dominating set of G with |D| ≤ k. Let L = D ∪ J ∪ S. We prove that L is a liar's dominating set of G′. (i) Every point p i in N is dominated by x i and by some q i in J, and the points of J and S likewise have at least two points of L in their closed neighborhoods, so every point of V′ satisfies the first condition of liar's domination. (ii) Now consider every distinct pair of points in V′. Since every point p i in N is dominated by x i and some q i in J, any pair involving a node point has at least three points of L in its closed neighborhood union. In the same way we can prove that the rest of the pair combinations have at least three points of L in their closed neighborhood union. Thus every distinct pair of points in V′ satisfies the second condition of liar's domination. So L is a liar's dominating set of G′ and |L| = |D| + |J| + |S| ≤ k + 4l + 3n = k′. Thus the necessity follows.
Sufficiency: Let L ⊆ V′ be a liar's dominating set of size at most k′ = k + 4l + 3n. We prove that G has a dominating set of size at most k. Observe that we added the points x i , y i , z i in such a way that p i is adjacent to x i , x i is adjacent to y i , and y i is adjacent to z i , i.e., {(p i , x i ), (x i , y i ), (y i , z i )} ⊂ E′ for each i. Hence z i and y i must be in L due to the first condition of liar's domination. Also, every component of L must contain at least three vertices due to the second condition of liar's domination. Hence x i ∈ L. Therefore, any liar's dominating set of G′ must contain {x i , y i , z i }, 1 ≤ i ≤ n, i.e., S ⊂ L. These account for 3n vertices of L. Let L′ = L \ S. Now we shall show that, by removing or replacing some points in L′, k node points can be chosen such that the corresponding vertices in G form a dominating set of G. Note that L′ is a dominating set of the UDG G″ = (V″, E″), where V″ = V′ \ S, E″ = E′ \ {(p i , x i ), (x i , y i ), (y i , z i ) | 1 ≤ i ≤ n}, and |L′| ≤ k + 4l. In order to ensure the liar's domination, every segment of length greater than or equal to 4 in G′ should have at least two joint points in L′. If there are more than two joint points corresponding to a segment in L′, then we remove and/or replace the joint points so that each segment will have only two joint points while ensuring the domination. Let L″ be the set obtained after updating L′; L″ is also a dominating set of G″ with cardinality at most k + 2l. We obtain the required dominating set D of G from L″ as follows: consider a series of line segments, say I = [p i , p j ], corresponding to an edge (p i , p j ) of G, where |I| = 4l, i.e., I has l segments. If neither p i nor p j is in L″, then replace a point in L″ by p i without losing the domination property (the existence of such a point is guaranteed as L″ is a dominating set). We apply this to all I's. After applying the above process to all I's, if there is an edge (p i , p j ) such that neither p i nor p j is in L″, then there must exist I 1 = [p s , p i ] and I 2 = [p t , p j ] with lengths 4l 1 and 4l 2 , corresponding to some edges in G, such that p s and p t are in L″. From the above preprocessing it is clear that I 1 and I 2 have at least 2l 1 and 2l 2 joint points in L″. From the above argument, there are at least 2l joint points in L″, where l is the total number of line segments used in G′. This means that there are at most |L″| − 2l (= k) node points in L″. Let D = {v i ∈ V | v i corresponds to a node point in L″}. So D is a dominating set of G and |D| ≤ k. Thus the sufficiency follows.
Conclusion
In this article we considered the liar's domination problem on unit disk graphs and proved that the problem is NP-complete.
Big Data Analysis, Use of Facebook Data.
90% of the data available today were created in recent years. The term Big Data has been in use since about 2005, although record-keeping to track growing production dates back to Mesopotamia. The modern era of Big Data began in the 20th century; the earliest machine-processed data date from 1887, when Herman Hollerith created a machine that read holes punched in cards to organize registered data. Today every device is connected through the Internet of Things (IoT), from which we can collect and use data. Collected data can help businesses understand consumer models and behaviors. But Big Data is more than that: it can help scientists face global problems, and businesses make the right decisions. The best example of how Big Data has changed our lives is social media. Big data collected from social media networks helps businesses understand consumer behavior, audience groups, and their engagement with a studied situation. Our research focused on building an informatic analysis model to analyze data collected from Facebook pages.
Introduction
Social platforms like Twitter, Instagram and Facebook are the main environments for the marketing of politics, products, ideas, and notifications. Many websites integrate user social profiles into their recommendation systems. Big Data is not only a technology, but a combination of old and new technologies that helps today's companies gather important and useful information. Big Data is the ability to manage a large volume of different types of data. The characteristics of Big Data are listed below:
1. Velocity: how rapidly the data are processed.
2. Volume: how large the amount of data is.
3. Diversity: different types of data.
4. Correctness: how correct those data are.
5. Value: how valuable those data are.
The main idea of this research was to offer large innovative businesses a fast way to analyze, in real time, a large amount of data collected from social media. The collected data will be used by businesses to answer questions like where and what to do. Figure 1 illustrates the cycle of big data management. After the fulfillment of this phase, data are available for analysis depending on the addressed problem. After that, business management is able to make a decision based on the analysis result.
Figure 1: Cycle of big data management.
The architecture of data management must involve a variety of services that create opportunities for a company to use data efficiently and quickly. Figure 2 illustrates the basic levels of the architecture. The importance of big data lies not in how large the company is but in how the company uses the collected data. A business must collect data from different sources and analyze them for reasons such as:
• Cost reduction: The use of Hadoop brings cost reduction for a business, helping to identify effective methods of doing business. Below is a scheme of the Hadoop architecture and its use.
• Development of new products: By knowing the trends of customer needs and desires, businesses are able to create new, innovative and successful products.
• Understanding trade conditions: By analyzing data, business management can understand trade conditions more deeply. For example, by analyzing customer behavior, a company can identify which product is its best seller in order to produce more of it.
• Checking online reputation: Using big data methods, management is able to monitor the company's online presence and improve it.
One of the greatest goals of today's business is to predict future changes in order to succeed.
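As an illustration of the collection step such a model depends on, the sketch below pulls recent posts from a Facebook page through the Graph API. This is our minimal example, not code from the described application (which was built in ASP.NET Core MVC); the page ID and token are placeholders, and field availability depends on the API version and granted permissions:

```python
import requests
from collections import Counter

GRAPH_URL = "https://graph.facebook.com/v12.0"
PAGE_ID = "YOUR_PAGE_ID"      # placeholder
ACCESS_TOKEN = "YOUR_TOKEN"   # placeholder: a page access token

def fetch_posts(page_id: str, token: str, limit: int = 25) -> list:
    """Fetch recent posts (message and creation time) from a Facebook page."""
    params = {
        "fields": "message,created_time",
        "limit": limit,
        "access_token": token,
    }
    resp = requests.get(f"{GRAPH_URL}/{page_id}/posts", params=params, timeout=10)
    resp.raise_for_status()
    return resp.json().get("data", [])

# Example first analysis step: count posts per day.
posts = fetch_posts(PAGE_ID, ACCESS_TOKEN)
per_day = Counter(p["created_time"][:10] for p in posts if "created_time" in p)
print(per_day.most_common(5))
```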
Companies desire to apply the knowledge gained from big data to increase business revenue. The planning process through big data passes through four phases:
Phase 1: Data Planning. For business managers to make a good decision, it is important to understand the connections between data. In many cases managers do not have enough data when making a decision. If businesses need to enlarge their activities, they must take into consideration data from different sources and directions, in order to make a deeper analysis of what they want. The planning process requires a variety of data in order to test different business assumptions and ideas.
Phase 2: Analysis. After business management has understood the main organizational aim, the data must be analyzed. Data analysis should be translated into business knowledge.
Phase 3: Data control. During the process of data analysis, a control must be made to check whether the analysis has an impact on business needs and whether the collected data are consistent. In this phase companies make sure that the data sources will not send them in the wrong direction.
Phase 4: Decision making. After the analysis process, managers can produce an action plan. Any time this process creates a new business strategy, it is important to use an evaluation cycle for the data. The key to business success is to make decisions based on big data analysis results and to test whether those decisions match a successful business strategy.
Methods
The application is designed in Visual Studio 2017, with the database in SQL Server Management Studio 2012. The technology used is ASP.NET Core MVC. The application starts with a login page; only logged-in users can view information. After a successful log-in, the interfaces are shown as below.
Conclusions
A business can fail if its processes are not well designed and efficient; such a business spends more on work that should be easy to do, and its resources are not well used. Many companies spend 20% to 30% of their revenue inefficiently every year. The benefits of using big data are:
• Correct definition of what to measure.
• Exact measurement of what will happen in any phase of the business process.
• Data analysis and judgment based on the results.
• Use of the analysis for process enhancement.
• Commitment to improve and measure again.
• Use of a notification system for any failed process.
Shifting Power to Improve First Nation Peoples' Access and Outcomes in Kidney Care
In this internationally important systematic review, 1 identified and given voice by a First Nation-led multidisciplinary group in Canada and amplified internationally in this issue of Kidney Medicine, readers are invited to engage with Indigenous peoples' perspectives about Cultural Safety in kidney care. Smith et al 1 posit 2 knowledge systems: the Eurocentric academic process and a process that centers on Indigenous peoples and storytelling. 1 The review identified 2,232 articles, of which 15 relevant articles addressed the research question on Cultural Safety within the context of Indigenous kidney health care. The review focused on research from New Zealand, Australia, Canada, and the United States, dating back to 2002, when Cultural Safety as a term was proposed by Ramsden 2 and gained academic acceptance.
Smith et al 1 introduce the academic knowledge system in the methodology of the paper, but first comes the story of Indigenous peoples lost, missing, and dying in care systems that fail them. Indigenous people seek access to health care as well as optimal health care interactions and outcomes, just as non-Indigenous people do. The reader is challenged to consider Cultural Safety in kidney care for Indigenous peoples, with a prompt to readers to recognize "triggered unease" as expected experiences when engaging "issues of racism." Racism has persisted in health systems and continues to harm Indigenous peoples in Canada, the United States, New Zealand, and Australia. 3 This paper is a timely exploration of the depth and breadth of knowing of Cultural Safety, with its source material harvested from academic knowledge repositories. The 15 articles that Smith et al 1 identified expose the differing assumptions of academic knowledge systems, in which academic repositories archive materials whose evidence ratings have been developed without the knowledge or values of Indigenous peoples. Nevertheless, this repository is used for purposes that fundamentally affect the nature of care provided to Indigenous peoples, including the teaching curricula of kidney health practitioners, designing health care systems, and accreditation of hospital practices. 4 In the area of kidney disease, the voices of Indigenous people have been largely absent from this repository, either as authors, researchers, or storytellers. Smith et al 1 invite readers to reflect on sustained legacies of colonialism when considering the body of Cultural Safety in kidney care publications deposited in academic archives. Processes that intentionally make space (through shifting power) for Indigenous peoples' self-determination in health care and those that promote improved health outcomes need research funding prioritization and the editorial commitment of assigning editors and journal reviewers to support research archiving. Utilization by editorial teams of research quality appraisal tools, which are designed by First Nations people, 5,6 can systematically promote the epistemic publication value of research produced from and by First Nation communities. Important power shifts and process shifts are occurring in international journals, such as Kidney Medicine, to amplify Indigenous peoples' knowledge. The kidney health writing collaboration of Smith et al 1 was First Nation-led and multidisciplinary. Through this lens, readers are invited to witness the profound cultural strength of Indigenous peoples. This strength was revealed in the clustered identification of the Cultural Safety concepts of relationality, engagement, and health care self-determination. These profound strengths occur among systemic issues (barriers and access) that are an ongoing legacy of colonialism and resonate with Australian experiences among Indigenous peoples as health care users. Smith et al 1 advocate that Cultural Safety in kidney care within Indigenous communities requires further understanding and delineation. We agree but also provide an Australian example of Indigenous leadership in transforming kidney care. Australian processes that enable access to specialized kidney treatments, specifically to kidney transplantation, have only recently invited engagement of Aboriginal and Torres Strait Islander people. 7,8
In early 2019, the Transplant Society of Australia and New Zealand secured federal government funding to establish a National Indigenous Kidney Transplant Taskforce, the first in more than 50 years of Australian transplantation. This Taskforce was formed to improve Aboriginal and Torres Strait Islander peoples' access to kidney transplantation, recognizing the inequity of being a minoritized population (3% of the Australian population), with a 4-5 times higher incidence of end-stage kidney disease than other Australians, yet with lower access to kidney transplantation. 9 The Taskforce champions, who were Indigenous and non-Indigenous clinicians and transplantation leaders, met with the federal Minister of Indigenous Health, Hon Ken Wyatt AM, MP, in the preceding year, and all parties supported the Aboriginal and Torres Strait Islander peoples' self-determination for equitable kidney transplant care 8 and recognized that systemic cultural bias contributed to lower access to transplantation. 10 This work has been shared in community meetings, conferences, and a recent publication. 11 With 3 years of funding (2019-2022), the Taskforce includes 25 members from the Aboriginal Community Controlled Health Sector, primary and tertiary health care, transplant units, medical, nursing and allied health professionals, and patient leaders. We committed to 4 principal activities: 11 (1) convening an Aboriginal and Torres Strait Islander Consumer community network; (2) defining data variables and data capture to define barriers to access transplant workup; (3) Access and Equity Sponsorships, which provisioned $1 million across 7 pilot projects to support regionally defined initiatives to improve access to transplant waitlisting, and which privileged community-led and/or community-health care partnerships; and (4) a review of Cultural Bias initiatives in Australian renal units. For this activity, the Taskforce commissioned the Lowitja Institute to review the depth of publications describing Cultural Bias initiatives in kidney transplantation in Aboriginal and Torres Strait Islander peoples in Australia. 12 Consistent with Smith et al, 1 there is a critical need for Indigenous peoples to self-determine Culturally Safe Kidney Care as well as a critical need for research documentation that supports Indigenous academic, clinical, and methodological leadership and authorship. In conclusion, Smith et al 1 entreat us in their systematic review, advocating for shifting power differentials so that all people (health providers and Indigenous recipients of health care) feel safe and respected in the health care system. There is a critical need to improve Cultural Safety in all health care interactions and for Indigenous peoples' kidney care. Creating a body of knowledge within the academic repositories is crucial; we look forward to the day when systematic reviews can find hundreds, not a dozen, reports supporting programs that address these issues. Indigenous peoples' self-determination in improved kidney health outcomes is critical; engagement, involvement and leadership in research, and design and service improvement of health care systems, will be required to ensure that kidney health care is fit for this purpose.
Financial Disclosure: The authors declare that they have no relevant financial interests.
Adaptation of Bird Communities to Farmland Abandonment in a Mountain Landscape
Widespread farmland abandonment has led to significant landscape transformations of many European mountain areas. These semi-natural multi-habitat landscapes are important reservoirs of biodiversity, and their abandonment has important conservation implications. In multi-habitat landscapes the adaptation of communities depends on the differential affinity of the species to the available habitats. We use nested species-area relationships (SAR) to model species richness patterns of bird communities across scales in a mountain landscape in NW Portugal. We compare the performance of the classic-SAR and the countryside-SAR (i.e. multi-habitat) models at the landscape scale, and compare species similarity decay (SSD) at the regional scale. We find a considerable overlap of bird communities in the different land-uses (farmland, shrubland and oak forest) at the landscape scale. Analysis of the classic and countryside SAR shows that specialist species are strongly related to their favourite habitat. Farmland and shrubland have higher regional SSD compared to oak forests. However, this is due to the opportunistic use of farmlands by generalist birds. Forest specialists display significant regional turnover in oak forest. Overall, the countryside-SAR model had a better fit to the data, showing that habitat composition determines species richness across scales. Finally, we use the countryside-SAR model to forecast bird diversity under four scenarios of land-use change. Farmland abandonment scenarios show little impact on bird diversity, as the model predicts that the complete loss of farmland is less dramatic, in terms of species diversity loss, than the disappearance of native Galicio-Portuguese oak forest. The affinities of species to non-preferred habitats suggest that bird communities can adapt to land-use changes derived from farmland abandonment. Based on model predictions we argue that rewilding may be a suitable management option for many European mountain areas.
Introduction
Changes and loss of biodiversity can directly influence ecosystem structure and functioning [1], reduce ecosystem resilience to disturbances such as global warming [2], and jeopardize vital ecosystem services that support human well-being [3]. Currently, land conversion is recognized as the main factor driving global biodiversity change [4]. In Europe, during the last decades, agricultural intensification and industrialization of former extensively managed arable lands have promoted land abandonment and marginalization of many remote mountain areas [5]. This socio-ecological trend is mostly driven by human migration to urban areas [6], reflects the generalized demand for better life conditions (namely material well-being; [7]), and exhibits a high chance of irreversibility [8]. The reduction of use or the complete abandonment of farmland has had a profound impact on the dynamics of many mountain landscapes. As land is abandoned, vegetation disturbance is highly reduced and secondary succession takes place, allowing the regeneration of native vegetation. The secondary expansion of shrubs and the regeneration of forest on former farmland and pastures lead to a simplification of the traditional landscape mosaic [5,9], which affects regional biodiversity [10,11]. Responses to farmland abandonment vary across and within taxonomic groups [12][13][14][15].
Interestingly, the development of forest in former farmland may not necessarily favour all forest specialist taxa. Evidence exists that some forest beetle and bat species [e.g., 16,17] benefit from certain traditional management practices that restrain excessive vegetation closure and maintain open areas. In birds, the diversity of performed ecological functions [18] and the differential affinity of species to different habitats underlie the wide range of responses to farmland abandonment observed within this group: while some species suffer detrimental effects, other species increase their abundance [9,15,[19][20][21]]. The observed trends also differ between regions. For instance, as agricultural intensification in lowlands increases, uplands may be the only remaining suitable grounds for open-habitat bird species [21]. At the same time, rewilding of traditional farmland in mountain areas may bring advantages such as improved ecosystem services [22,23] and increasing bird species richness as a function of forest development towards climax [19,20,24]. Birds are vital mobile links for maintaining ecosystem function [25], acting as ecosystem service providers at the genetic, resource and process levels [18]. Consequently, it is crucial to understand how landscape dynamics affect bird diversity patterns in order to understand, and remediate, the dramatic declines of bird populations registered across Europe during the last decades (see PECBM, the Pan-European Common Bird Monitoring scheme: http://www.ebcc.info/pecbm.html, and references therein). Species-area relationships (SAR) constitute a valuable framework to study biodiversity patterns. Nested SAR are curves constructed by estimating mean richness across sampled subplots within larger areas, which assist in understanding the processes underlying patterns of biodiversity across scales [26]. However, SAR have been used mainly at large spatial scales, to estimate the biodiversity of large regions [27], since at small scales habitat heterogeneity is a major determinant of diversity patterns [28]. For example, bird diversity is highly influenced by habitat diversity. In order to accommodate this habitat effect, several studies proposed the incorporation of the multi-habitat context in the SAR framework [29][30][31][32]. The countryside-SAR model proposed by Pereira and Daily [31] is unique in considering that different species groups differentially use the extant set of habitats in a given area. In spite of being a powerful tool to study species diversity patterns [33], nested SAR focus only on numerical species gains and do not explicitly consider the loss of species in additionally sampled area [34]. The composition of species assemblages between two areas changes through various processes derived from species traits (e.g., dissimilar dispersal strategies) and landscape/regional characteristics (e.g., diversity of habitats and their spatial configuration) [35,36]. Therefore, the difference in species composition between two areas, or the species similarity decay (SSD) with distance, is a fundamental aspect of species spatial patterns [35,36] and should be taken into account in landscape- and regional-scale studies. The SSD constitutes a good surrogate for understanding beta-diversity patterns of species groups, complementing the SAR analyses.
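The two models being compared can be made concrete with a small fitting sketch. Below, the classic SAR, S = cA^z, and a one-group countryside-SAR variant in which richness responds to an affinity-weighted sum of habitat areas, S = c(Σ_j h_j A_j)^z, are both fit by non-linear least squares. The affinities and data are hypothetical placeholders, not the study's estimates, and the full countryside-SAR model sums such terms over several species groups:

```python
import numpy as np
from scipy.optimize import curve_fit

def classic_sar(A, c, z):
    """Classic power-law species-area relationship, S = c * A**z."""
    return c * A**z

def countryside_sar(A_by_habitat, c, z, h_shrub, h_forest):
    """One-group countryside-SAR: S = c * (sum_j h_j * A_j)**z, with the
    affinity for the group's preferred habitat (farmland here) fixed at 1."""
    A_farm, A_shrub, A_forest = A_by_habitat
    return c * (A_farm + h_shrub * A_shrub + h_forest * A_forest) ** z

# Hypothetical sample: total area (ha) and per-habitat areas at nested scales.
A_total = np.array([0.3, 7.56, 189.0, 4725.0])
A_hab = np.array([[0.1, 1.0, 25.0, 570.0],     # farmland
                  [0.15, 5.5, 140.0, 3450.0],  # shrubland
                  [0.05, 1.06, 24.0, 705.0]])  # oak forest
S_obs = np.array([4.0, 12.0, 30.0, 60.0])      # mean species richness

p_classic, _ = curve_fit(classic_sar, A_total, S_obs, p0=[5.0, 0.25])
p_country, _ = curve_fit(countryside_sar, A_hab, S_obs,
                         p0=[5.0, 0.25, 0.5, 0.5], bounds=(0, [np.inf, 1, 1, 1]))
print("classic z =", p_classic[1], "| countryside z =", p_country[1])
```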
Following the tendency observed within many European mountain areas [5], the Peneda-Gerês Mountains (NW Portugal) have been subject to farmland abandonment during the last decades, which led to land-use alteration across the region's landscape. In this study we aim at predicting the effects of current (and possible future) landscape transformations on species richness patterns of bird communities in the Peneda-Gerês Mountains. We predict that bird species use multiple habitats in the landscape and can adapt to land-use change caused by farmland abandonment. To address this prediction we analyse classic and multi-habitat SAR at the landscape level and SSD at the regional level. Specifically, we ask the following questions: (i) what are the bird diversity patterns in the different land-uses at the landscape scale? (ii) does species composition similarity decay with distance at the same rate for different land-uses at the regional scale? (iii) are species richness patterns better described by the classic or by the (multi-habitat) countryside-SAR model? (iv) what are the consequences of different land-use change scenarios for the regional bird communities? Ethics Statement Permission to access privately owned land was given by all the land owners. This study did not require any approval for animal care and use because it was an observational field study, not involving the capture and handling of wild animals nor their maintenance in captivity. Study Region Our study region consists of the Peneda-Gerês National Park (PNPG), NW Portugal (Figure 1A). The region encompasses the Peneda and Gerês Mountains, covering an area of 69 592 ha of extensively managed woodland-pasture-agriculture mosaic. The region is located in the transition between the Mediterranean and Eurosiberian biogeographic zones in the proximity of the Atlantic coast. Topographic relief is complex, with a high plateau, slopes with various bedrocks and narrow valleys, with an elevation ranging from 300 m to 1340 m. The core area of our study is the landscape of the Castro Laboreiro Valley (ca. 42°01'N, 8°09'W) (Figure 1B), which covers 4725 ha. Formerly harbouring a self-sufficient community based on agriculture and pastoralism, this area has been characterized by a marked rural exodus since the 1960s that triggered the abandonment of traditional agricultural practices. Despite this trend, agriculture is still the main economic activity in the region. Most of the land is privately owned but some areas are communal and mainly used for pasture. Land-Use Characterization The definition and categorization of the different land-uses was based on available land-use maps for Portugal (IGEOE: http://www.igeoe.pt) and Galicia (SITGA: http://sitga.xunta.es). Similarly to other European mountain landscapes, our study region has a complex structure composed of a large set of natural and semi-natural habitats that resulted from the anthropogenic modification of the natural landscape. All these habitats were grouped into three main land-use categories: farmland, shrubland and oak forest. In this paper we use habitat and land-use interchangeably. Farmland: nowadays few fields are used for crops and most farmland is occupied by semi-natural pastures used for cattle grazing or fodder production. Some vegetable patches and fruit trees are maintained. Human-made structures (villages and scattered houses) were included in this category. Shrubland: this broad category includes areas dominated by heaths (Erica sp.), gorses (Ulex sp.)
and Genista tridentata, and areas of tall shrublands of brooms (Cytisus sp.), gorses and heaths. Some of these areas also include bedrocks and/or dispersed trees. Oak Forest: the native Galicio-Portuguese oak forests of Quercus robur and Q. pyrenaica constitute the climax vegetation of the region. Although the area of Galicio-Portuguese oak forest is much reduced relative to its biogeographic potential, in the PNPG there are extremely well preserved patches. Within the Castro Laboreiro Valley, native oak forests represent 92% of the forested area, with the remaining area corresponding to small pine plantations of Pinus sylvestris and P. pinaster, and scattered patches of other natural broadleaved species. Therefore, the total area of forest was included in this category, although bird data were only collected in oak forests. The region is clearly dominated by shrubland which, acting as the matrix land-use in the landscape, represents 73% of the study area (Figure 1B). Farmland and oak forest are roughly equally represented, accounting respectively for 12% and 15% of the Castro Laboreiro Valley's landscape. Bird Data and Experimental Design Bird data were obtained from 30 m fixed-radius point-counts (approximately 0.3 ha) [37]. We set our sampling unit to 0.3 ha due to the particular fine-grained aspect of the Castro Laboreiro Valley landscape, as 30% of the agricultural fields in our study area are smaller than or equal to 0.3 ha [38]. Although 0.3 ha may be small relative to the territories of some open-area bird species, we believe that all species occurring in farmlands were effectively sampled. Point-counts were visited once by the same observer (JLG) to avoid between-observer variations, during the breeding season of 2009 (from late April to mid-June). All the birds heard or seen in a ten-minute period were recorded. No counts were performed under strong wind, rain or cold weather. Birds of prey, nightjars and owls, and aerial feeders (swifts and swallows) were excluded from the statistical analysis, as this survey method is not adequate for these groups [39]. Juvenile birds were also excluded from the analysis. In order to study SAR at the landscape scale, point-counts were set according to a nested sampling scheme (Figures 1B, 1C): point-counts (approximately 0.3 ha) were aggregated in groups of five forming the centre and corners of a 275×275 m plot (approximately 7.56 ha), such that within each plot the minimum distance between point-counts was 152 m (distance from centre to corner point-counts); five such plots form the centre and corners of a 1375×1375 m local-square (approximately 189 ha); finally, five local-squares form the centre and corners of a landscape polygon (4725 ha) corresponding to the Castro Laboreiro Valley. We assume that breeding and foraging territories of the species used in our analysis are within the Castro Laboreiro Valley landscape unit. We studied SSD based on five 1375×1375 m local-squares placed in the study region according to a gradient of distance (0, 5, 10, 20 and 40 km). The central and the 5 km distant local-squares were the same as used in the SAR study, whilst three additional local-squares were placed 10, 20 and 40 km from the centre of the landscape (Figures 1C, 1D). Local-squares were placed strategically to have variable representation (percentage) of each land-use, with the number of point-counts in each local-square stratified according to the area of each land-use category.
Our experimental design totals 200 point-counts distributed in the region; however, two point-counts were excluded since they were inaccessible. We surveyed 54 point-counts in farmland, 76 in shrubland and 68 in oak forest. Species Groups Description For the definition of species groups by their habitat affinity we performed a correspondence analysis (CA) using data from all the 198 surveyed point-counts (i.e. regional scale). The Levins index, $B = 1/\sum_i x_i^2$, where $x_i$ is the relative abundance of each species (individuals/point-count) in land-use category $i$ in relation to the species' total abundance across the three land-use categories [40], was used as a measure of habitat breadth to sort generalist species from specialists. The correspondence analysis was robust (15.3% of explained variation) in identifying the species associated with the three land-uses (Figure 2). The first axis (CA1, exp. variance = 8.1%; eigenvalue = 0.64) distinguishes oak forest from shrubland, while the second axis (CA2, exp. variance = 7.2%; eigenvalue = 0.57) discriminates farmland. Based on the CA outputs and the habitat breadth calculated for each species (Table S1), four bird species groups were identified: three groups were considered habitat specialists (farmland, shrubland and forest species), while the species equally distributed across land-uses (i.e., with a wide habitat breadth) formed a fourth group of generalists. Of the 43 bird species found, 10 were classified as farmland species, 7 as shrubland species, 16 as (oak) forest species and the remaining 10 were considered generalists (Table S1). Diversity Patterns Analysis For studying bird diversity patterns at the landscape scale we analysed species-area patterns. Classic and countryside SAR curves were fit to the data using non-linear regressions. For the classic-SAR we used the power model, $S_{classic} = cA^z$, where the number of species $S$ (response variable) grows with sampled area $A$ (predictor variable), influenced by $c$ and $z$, two parameters that are dependent on the taxonomic group and the sampling scheme, respectively [41]. The classic-SAR of total species and of each species group were fitted by adding average species richness values from presence-absence data recorded in all sampling units within the landscape (i.e. point-counts), accumulating data from 0.3 ha to 7.56 ha, 189 ha and 4725 ha (curve type IIIA, sensu [21]). We assumed that the nested clusters of 0.3 ha point-counts are appropriate for sampling each scale (e.g., 7.56 ha plots were sampled by five 0.3 ha point-counts in the centre and corners of the plot, and in turn each 189 ha local-square was sampled by grouping five plots corresponding to 25 point-counts, see Figure 1B). Classic-SAR of total species and of each species group were also obtained for each land-use. In each case, curves were fitted using only point-counts sampled in each of the land-uses in relation to the habitat area cover in every 7.56 ha plot, 189 ha local-square and the 4725 ha landscape. In order to consider the multi-habitat context of the landscape, the countryside-SAR model was fitted to the data set. The countryside-SAR model accounts for the differential use of habitats by different species groups, with species groups characterized by species with similar habitat preferences (i.e. affinity) [31]. Thus, the number of species in each group $S_i$ (response variables) depends on the raw affinity $h^*_{ij}$ of group $i$ to habitat $j$, with $A_j$ (predictor variables) representing the area of that habitat: $S_i = \left(\sum_j h^*_{ij} A_j\right)^{z_i}$.
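To make the two quantitative steps above concrete, the following is a minimal Python sketch (the paper's own analyses were run in R, which is not reproduced here) of the Levins habitat-breadth index and a classic power-model SAR fit; the species labels, abundances and richness values are hypothetical illustrations, not the study's data.

```python
# Hedged sketch: Levins habitat breadth and a classic-SAR power-model fit.
# All numbers below are illustrative placeholders, not the paper's data.
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

def levins_breadth(abundances):
    """Levins index B = 1 / sum(x_i^2), where x_i is the relative abundance in
    land-use i; B ranges from 1 (strict specialist) to 3 (full generalist
    across the three land-uses)."""
    x = np.asarray(abundances, dtype=float)
    x = x / x.sum()
    return 1.0 / np.sum(x ** 2)

# Mean individuals per point-count in each land-use (hypothetical values).
counts = pd.DataFrame(
    {"farmland": [2.1, 0.1, 0.2], "shrubland": [0.3, 1.9, 0.3], "oak_forest": [0.1, 0.2, 1.8]},
    index=["species_A", "species_B", "species_C"])
counts["B"] = counts.apply(levins_breadth, axis=1)

# Classic SAR: S = c * A^z, fitted to mean richness at the four nested scales.
areas = np.array([0.3, 7.56, 189.0, 4725.0])    # ha
richness = np.array([6.0, 14.0, 27.0, 43.0])    # illustrative mean richness values
(c_hat, z_hat), _ = curve_fit(lambda A, c, z: c * A ** z, areas, richness, p0=(5.0, 0.2))
print(counts.round(2))
print(f"classic-SAR fit: c = {c_hat:.2f}, z = {z_hat:.3f}")
```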
A non-linear regression was performed for each species group, estimating the raw habitat affinities $h^*_{ij}$. Next we normalized the habitat affinities by dividing each estimated affinity by the maximum estimated affinity: $h_{ij} = h^*_{ij} / \max_j h^*_{ij}$. The normalized affinities can be interpreted as the proportion of area of each habitat that can be effectively used by a species group (Figure 3). For comparison to the classic-SAR, we re-wrote the countryside-SAR in terms of the normalized affinities, $S_i = c_i \left(\sum_j h_{ij} A_j\right)^{z_i}$, and compared model fits using the corrected Akaike Information Criterion (AICc, computed with the number of parameters in the model, including the estimated variance). Species similarity decay (SSD) was studied at the regional scale by comparing the slope of the relationship (simple linear regression) between the turnover of species and the distance between samples [42]. Species turnover was measured using the Sørensen index, $Q_{sor} = 2a/(2a + b + c)$, where $a$ refers to the number of shared species in samples A and B, and $b$ and $c$ refer to the species solely found in samples A and B, respectively. The index was calculated for all pairwise comparisons at the point-count scale for total diversity, for intra-habitat diversity and for each species group in the intra-habitat context. All the analyses were performed in the R 2.15.2 environment [43]. Scenarios of Land-Use Change The estimated countryside-SAR model for the total number of bird species in the Castro Laboreiro Valley landscape was used to project the number of bird species in the landscape under four scenarios of land-use change for the PNPG. Although the scenarios were based on previous studies [7,44], they represent idealized situations. We assume the area lost by a habitat is replaced in equal proportions by the other two habitats [33]. The story lines for the four scenarios and details on land-use transitions are given in Table 1. Scenario 1 assumes the steady abandonment of agriculture as the human population ages, with the progressive homogenization of the landscape, due to the replacement of farmland by shrubland associated with early succession stages [38], and to the increase of native oak forest [7]. Scenario 2 assumes a dramatic depopulation of the study area leading to the nearly complete abandonment of agriculture (we considered a reduction of farmland to 1% of the study area), accompanied by progressive rewilding of the landscape as native oak forest matures and expands. Under this scenario, landscape management targets the re-establishment of ecological processes at the landscape scale, envisioning nature conservation and ecosystem services enhancement [44]. Scenario 3 assumes a reversal of the current population patterns as a consequence of the return of people to farming activities. A renewed society, concerned with the environment, would adapt innovative techniques to traditional farming knowledge, foreseeing high-quality farming products [7]. The area of farmland would increase, as would the area managed by each farmer. Finally, scenario 4 assumes that a global crisis could lead to a dramatic increase of farmland and agricultural intensification for high production of direct goods, with a dramatic decrease of oak forest (we considered a reduction to 1%) as a result of clearing for agriculture or its substitution by exotic forest plantations. Results We found significant species-area relationships for all species groups in each of the three land-uses (Figure 4).
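As an illustration of the model-fitting step just described, the sketch below fits a countryside-SAR for one species group and normalizes the estimated affinities, and defines the Sørensen index used for the SSD analysis. The habitat areas, richness values and function names are our own invented stand-ins, not the authors' data or code.

```python
# Hedged sketch of a countryside-SAR fit for one species group; data invented.
import numpy as np
from scipy.optimize import curve_fit

def countryside_sar(A, h_farm, h_shrub, h_oak, z):
    """S = (h_farm*A_farm + h_shrub*A_shrub + h_oak*A_oak)^z, raw affinities h*."""
    A_farm, A_shrub, A_oak = A
    return (h_farm * A_farm + h_shrub * A_shrub + h_oak * A_oak) ** z

# Habitat areas (ha) in six nested samples and the group's observed richness.
A = np.array([[0.05, 0.20, 0.05], [0.9, 5.5, 1.2], [2.0, 5.0, 0.6],
              [20.0, 140.0, 29.0], [35.0, 120.0, 34.0], [570.0, 3450.0, 705.0]]).T
S_obs = np.array([1.0, 4.0, 3.0, 9.0, 10.0, 16.0])

(hf, hs, ho, z), _ = curve_fit(countryside_sar, A, S_obs, p0=(0.05, 0.05, 1.0, 0.2),
                               bounds=([0, 0, 0, 0], [np.inf] * 4))
h_norm = np.array([hf, hs, ho]) / max(hf, hs, ho)   # normalized affinities in (0, 1]
print("normalized affinities (farm, shrub, oak):", h_norm.round(2), "z:", round(z, 3))

def sorensen(shared, only_a, only_b):
    """Sørensen similarity Q_sor = 2a / (2a + b + c)."""
    return 2 * shared / (2 * shared + only_a + only_b)
```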
Species richness of each specialist group, as given by the c-values of the tested SAR, was highest for the favourite land-use of the group. Generalist species had the highest c-value in farmland, but also showed relatively high values in oak forest and shrubland. The z-values of the classic-SAR suggest a lower degree of spatial turnover of each specialist species group in its favourite habitat within the landscape compared to the other land-uses (Figure 4). At the regional scale we found stronger species similarity decay in farmland and shrubland than in oak forest (Table 2). In farmland this pattern is primarily due to the decay of generalist and shrubland specialists, as farmland specialists do not display a pattern of regional species turnover (p-value n.s.). Moreover, farmland species also show non-significant relationships between turnover and distance in shrubland and oak forest. Shrubland, in spite of supporting a lower number of species, shows high SSD at the regional scale, due to the variation of shrubland specialists and generalist species. Oak forest, although exhibiting a much smoother pattern of total SSD compared to farmland and shrubland, displays a significant turnover with distance of generalists and forest specialists within the region (Table 2). When considering the multi-habitat context of the landscape, the results for the two tested models were different (Table 3). The c-values were much higher when the affinities of each species group to the different habitats were considered. Differences were very marked, with c-values of farmland and shrubland specialists two times higher when estimated by the countryside-SAR, and even higher for forest specialists. On the other hand, z-values were similar when estimated by both models. The affinities of each specialist species group, as estimated by the countryside-SAR, have maximum values in the respective preferred habitat, while generalist species showed similar preference for farmland and forest grounds. The countryside-SAR model was the best model (based upon AICc) to describe the data for each species group (Table 3). The countryside-SAR also had a much better fit for the total bird species richness compared to the classic-SAR ($AICc_{classic} = 321$ and $AICc_{countryside} = 282$, respectively). We used the countryside-SAR model to project the number of bird species that can be found in the Castro Laboreiro Valley according to different land-use scenarios (Figure 5; Table 1). Since the classic-SAR does not account for the multi-habitat context, the total number of species in the landscape would be the same (48 species) under the four scenarios. On the other hand, the countryside-SAR model forecasts different numbers of bird species as a consequence of landscape transformation, by considering the different conservation value of the available habitats and the affinity of the species groups to each habitat (Figure 5). Bird Diversity Patterns Bird richness and diversity were similar in farmland and native oak forest but lower in shrubland. There was a considerable overlap of species groups across the three land-uses, with about one-third of the species generalists, few specialist species (i.e., with narrow habitat breadth), and even fewer exclusive species (Figure 2; Table S1). In the Peneda Mountains agricultural fields are isolated and native oak forest is still fragmented into small patches embedded in a shrubland matrix [38].
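To illustrate how the scenario projections follow from the fitted model, here is a hedged sketch of the projection step; the affinities, c, z and scenario land-use shares below are assumed stand-ins, not the values estimated in the paper (those are in Table 3 and Figure 5).

```python
# Hedged sketch of projecting landscape richness under land-use scenarios,
# using assumed (not the paper's fitted) countryside-SAR parameters.
h = {"farmland": 1.0, "shrubland": 0.35, "oak_forest": 0.9}  # normalized affinities (assumed)
c, z = 9.5, 0.18                                             # assumed fitted parameters
TOTAL_HA = 4725.0                                            # Castro Laboreiro Valley area

scenarios = {                      # illustrative shares (farmland, shrubland, oak forest)
    "current":              (0.12, 0.73, 0.15),
    "abandonment (sc. 2)":  (0.01, 0.595, 0.395),
    "intensive ag (sc. 4)": (0.545, 0.445, 0.01),
}
for name, shares in scenarios.items():
    # Effective area = area of each habitat weighted by the group's affinity to it.
    effective_area = TOTAL_HA * sum(h_j * s_j for h_j, s_j in zip(h.values(), shares))
    print(f"{name}: predicted total richness = {c * effective_area ** z:.1f}")
```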
We suggest that the observed species overlap across land-uses is a consequence of the highly fragmented nature of the landscape, because fine-grained landscapes improve the connectivity of habitats, which allows more specialists to be found outside their preferred habitat. Compared to several other taxa, bird diversity is primarily influenced by landscape-scale heterogeneity, due to their dispersal ability [10]. Birds actively choose which habitats to explore and connect the habitats by actively moving in the landscape [25]. Moreover, in farmland-forest systems bird distributions may be influenced by interactions between distance to edge, habitat selection and dependence on shrubs [45]. For example, forest specialists such as coal tit Parus ater, crested tit P. cristatus, firecrest Regulus ignicapilla and short-toed treecreeper Certhia brachydactyla were observed on farmland, actively using the live hedges of broadleaved trees and shrubs that act like corridors connecting forest patches [46]. As to farmland species, like tree pipit Anthus trivialis, landscape fragmentation increases their occurrence in forest patches due to edge effects [47]. In the case of plants, Proença and Pereira [33] found that the degree of species overlap between habitats in the Peneda Mountains was much lower compared to birds. They found larger differences in the affinity values (given by the countryside-SAR) between the preferred and the alternative habitats of each species group than what we report here for birds, probably because the habitats are characterized by dominant plant formations [48]. Fragmented heterogeneous landscapes may favour generalist species and limit the landscape-scale diversity of specialists: first, there can be a saturation of the species pool at smaller scales in fragmented landscapes than in larger forest ecosystems [33], as forest bird diversity is a function of forest matureness, growing differentiation [19,24] and continuous area [46,47]; and second, as shrubland develops into early-stage forest, the community of birds may range only from generalist farmland species to forest specialists [15]. Still, at the landscape scale all species groups responded positively to increasing areas of their preferred habitat, with classic-SAR z-values between 0.17 and 0.22. Surprisingly, even higher z-values were found for each species group in their non-preferred habitats (Figure 4). However, these higher z-values do not necessarily correspond to a higher species turnover of specialists in non-preferred habitats, as the regional SSD of specialists is not consistently higher in their non-preferred habitats relative to their preferred habitats (Table 2). We hypothesize, instead, that this pattern is driven mainly by a sampling effect: at small scales specialist species go undetected (note the low c-values of specialists in their non-preferred habitat; Figure 4), but at larger scales the infrequent use of non-preferred habitats by these species can be detected [41]. Species-Area Relationships and the Multi-Habitat Context Species-area patterns of total species and of each species group in the Castro Laboreiro Valley landscape were better described by the countryside-SAR model.
The better performance of the countryside-SAR emphasizes the role of habitat heterogeneity as a key descriptor of species richness, as landscape composition and area are correlated and both contribute to species richness [41]. The dramatic loss of one habitat does not imply the complete disappearance of the species associated with that habitat, as species may survive in the landscape using alternative habitats depending on their affinity to each habitat [31]. Different authors have proposed modifications to the classic-SAR models in order to include various habitats. The models proposed by Tjørve [29] and Triantis et al. [30] account for the multi-habitat landscape, differing in that the former suggests the combination of multiple species-area curves (i.e. of different habitats) to describe species diversity, while the latter, the choros model, considers the number of existing habitats in a given area but ignores the available surface of each habitat. Although both models consider the role of habitat heterogeneity, they do not consider that different taxa use the available habitats differently. Recently, Koh and Ghazoul [32] proposed the matrix-calibrated model, which partitions the z-value of the power model into two components: a constant that describes the complete unsuitability of the matrix to the analysed taxa, and a parameter that represents the sensitivity of the taxa to modified habitats. The extinction risk of endemic birds across 20 biodiversity hotspots was better predicted by the matrix-calibrated model compared to the classic-SAR and the countryside-SAR [32]. However, the authors did not discriminate species groups and assumed the same affinity to human-modified habitats for all species analysed, which is an inadequate assumption to test the countryside-SAR model, because different species adapt and persist differently in the landscape after habitats are subject to land-use alteration [31]. The countryside-SAR estimates the affinity of selected taxa to the available habitats, and hence gives a more accurate understanding of the impacts of land-use change on avian and other taxa diversity. Many species have the potential to use and adapt to different habitats. However, many biological attributes (e.g., minimum population size, migratory strategy, habitat breadth) that give key information to understand the variation of species responses to land-use change [15] are not accounted for in the countryside-SAR model. Nonetheless, the model has the potential to adequately forecast the bird community dynamics derived from the ongoing rural abandonment common to most European mountain areas. Land-use Change and Conservation Implications Land-use scenario analyses stress the role played by native oak forest and farmland in sustaining bird diversity in the Castro Laboreiro Valley. High bird diversity is sustained along a gradient of loss and gain of these land-uses, and higher richness could be expected through the expansion of both habitats.
Figure 4. SAR of the species groups in the three land-uses (farmland, shrubland and oak forest). Symbols represent mean species number for each area category; error bars represent ±1 standard error. Parameters c and z are given for all the SAR; p-values of all regressions are shown with the z parameter (p<0.05*, p<0.01**, p<0.001***). The fit of the model to the data is given by the corrected Akaike Information Criterion (AICc).
Table 2. Regional species similarity decay of each species group and of the total species in the three land-uses.
According to the countryside model, the nearly complete loss of farmland or native oak forest results in the loss of bird diversity (scenarios 2 and 4, Figure 5). However, it does not imply the complete disappearance of the species associated with the receding habitat, as species may survive in the landscape using alternative habitats depending on their affinity to each habitat. The outputs of these extreme situations, however, are different, suggesting that the reduction of farmland is potentially less dramatic in terms of species richness loss compared to the clearing of native Galicio-Portuguese oak forest. In addition, this projection may be conservative, because the return to agriculture would imply the clearing of hedgerows to facilitate mechanization, while these natural corridors are important to maintain woodland connectivity and woodland species diversity [46]. Our projections agree with the theoretical predictions of Navarro and Pereira [23], suggesting that natural habitats may host as much species diversity as farmland, and that birds can adapt to land-use change caused by farmland abandonment. Two processes may explain these results: first, open-area species may use and persist in alternative natural habitats that mimic farmland (e.g., in forest clearings), compared to forest species that require more structured vegetation. Nonetheless, narrow farmland specialists could become locally extinct (e.g., skylark Alauda arvensis, due to increased negative forest edge effects [45]), contributing to a simplified bird community. Second, the community of farmland specialist birds may already be simplified. This hypothesis is supported by scenario 3 (Figure 5; Table 1), which projects a gain of species resulting from farmland expansion. Agricultural intensification is the main driver of the widespread bird declines observed in Europe [49]. Nonetheless, many studies report negative impacts of land abandonment on bird communities and call for the development of agri-environment schemes to preserve upland extensive farming systems [15,19,20]. Perhaps this approach is the main reference for European conservationists (and land stakeholders [44]) because true wilderness areas no longer exist in Western Europe. However, there is little evidence that agri-environment schemes are broadly successful and feasible in the long term [50,51]. Several authors are therefore suggesting alternative land management strategies, such as rewilding [23,52]. The rewilding of mountain landscapes undergoing farmland abandonment through secondary forest regeneration can bring benefits with regard to particular ecosystem services (e.g., carbon storage, high-quality timber and water cycle regulation [6,22]) and biodiversity conservation [22,23]. Forest bird specialists would benefit more from rewilding and forest spread than farmland birds. Nonetheless, having intermediate characteristics between the Atlantic and the Mediterranean climates, Galicio-Portuguese native oak forests have the potential to harbour highly diverse bird communities. The occurrence of natural and human disturbances (such as wildfires and grazing [19,21]) should result in sufficient heterogeneity of successional stages of forest dynamics within the landscape. As such, rewilding may maintain enough patches of open habitat, which are important for the persistence and dispersal of open-habitat bird species [53], especially so for specialists of Mediterranean origin [20].
Supporting Information Table S1. List of the bird species recorded during point-counts in the study region. For each species, the species code, the land-uses where the species was recorded, the habitat breadth and the species affinity group are indicated: FA - farmland species, SH - shrubland species, QF - forest species, Gn - generalist species. (DOCX) Table 3. Classic and countryside species-area relationships of each species affinity group in the multi-habitat landscape.
Financing obstacles for SMEs: the role of politics Purpose Using data on over 7500 units from the World Bank Enterprise Survey (WBES) for 2014, the paper assesses the impact of political connections on SME financing obstacles in India. Methodology Since the dependent variable has a meaningful response order and involves several categories, it is appropriate to use the ordered logit model (OLM). We employ the OLM routine in Stata for the estimation process. Findings The findings indicate that political connections help alleviate higher-order financing obstacles. In terms of magnitude, senior managers with political connections are 2.5 percentage points less likely to state that there are no financing obstacles, about 1 percentage point more likely to state it as a moderate or major obstacle, and 0.6 percentage points more likely to state it as a severe obstacle. These results differ across firm ownership type (i.e., male- versus female-owned) and firm size classes, and when additional state characteristics are taken on board. Limitations The analysis is limited to a single year based on data availability. A much richer analysis would need to assess how such political connections play out over time and their consequences for SME behaviour. Second, our measure of political connection is indirect, since no other measure is reported in the data. Originality To the best of our understanding, this is one of the earliest studies for a leading emerging economy to assess the interlinkage between SME behaviour and their political connections. Introduction A growing body of research in recent times has explored the role and relevance of political connections (Faccio, 2006; Asher & Novosad, 2017; Chahal & Ahmad, 2022). The role of such connections permeates multiple areas, ranging from regulatory to corporate and even to growth and development outcomes, and spans both developed (Bertrand et al., 2018; Brown & Huang, 2020; Hutton et al., 2014; Thakor, 2021) and emerging (Claessens et al., 2008; Khwaja & Mian, 2005; Kumar, 2020) economies. Such political connection is especially prominent in emerging economies, where firms develop informal networks to address the lack of well-functioning markets and institutions (Carpenter & Petersen, 2002; Cowling et al., 2015; Du & Girma, 2010; Holton et al., 2013). Political connections also help assuage financing challenges by ensuring that finance is available at competitive rates (Ayyagari et al., 2008; Banerjee & Duflo, 2014; De Mel et al., 2008). Such two-way interactions benefit both sides. On the one hand, political ties help firms to enjoy preferential access to credit from state-owned entities and favourable regulatory treatment (Faccio, 2006). On the other, providing politicians with pecuniary and non-pecuniary (e.g., campaign) support allows the latter to consolidate their power and improve re-election prospects (Frye & Iwasaki, 2011). As a result, forging political networks is important to ensure preferential access to credit (Khwaja & Mian, 2005) and to shield firms from the "grabbing hand" of the state (Shleifer & Vishny, 1998).
Even when political ties are less compelling, the overwhelming dominance of the state in important spheres of economic activity, including finance, allows it to allocate credit either on favourable terms (Sapienza, 2004) or to favoured firms (Li et al., 2008). The problem is all the more challenging for small and medium enterprises (SMEs). They account for a significant portion of business and are important contributors to employment and economic development. On average, they represent nearly 80% of value-added and more than 50% of employment worldwide. And yet, they face significant credit constraints (Beck et al., 2005, 2006). Estimates by the International Finance Corporation (2017) show that the financing gap for SMEs in emerging economies is US$5.2 trillion, or 16% of their 2017 GDP. Even in India, the credit gap for SMEs is substantial. The World Bank (2018) estimates this gap for SMEs to be US$350 billion, or 12% of the country's 2018 GDP. To address this challenge, the government has instituted a whole host of schemes to provide access to finance at low cost to eligible entities (Government of India, 2022). 1 However, availing of such finance involves closer and continuous interaction with government machinery, making SMEs susceptible to political pressures. Therefore, it is surprising that the role of political connections for small and medium enterprises (SMEs) has not received adequate attention, especially regarding their financing obstacles. To contribute to this debate, we assess the importance of political connections in alleviating SMEs' credit obstacles, using India as a case study. Accordingly, we use World Bank Enterprise Survey (WBES) data at the state-industry level for 2014. In the Indian context, this is the only available survey data which contains information on the key variables of interest. Most recently, Jabeen et al. (2021) exploited this database to assess the differences in business obstacles encountered by Indian SMEs. Using this database, we address three inter-related issues: first, do political connections influence SMEs' financing obstacles? Second, do important financial characteristics of states, such as their ease of doing business, credit penetration, and foreign banks' presence, affect this behaviour? And finally, does the gender of the SME owner play a role in this regard? Our key variable of interest is political connections, which is the response to the question "What percentage of senior management time was spent dealing with government regulations?" To elucidate, senior managers often exploit their contacts with government officials to improve the likelihood of credit access. Internationally, evidence suggests that political connections matter for firm behaviour. For example, Acemoglu et al. (2016) report that the appointment of a well-known individual in the USA as Treasury Secretary improved the cumulative abnormal returns of firms with which there was a prior connection by 6-12%. Brown and Huang (2020), utilising data on White House visitor logs of corporate executives, demonstrate that such visits significantly boosted stock prices. Even otherwise, closer links with government officials can significantly improve a firm's competitive advantage through cheaper loans or a greater quantum of loans at below-market rates (Li et al., 2008; Peng & Luo, 2000). We integrate this information on political connection with data on financing obstacles and control for other firm-level factors and industry- and state-fixed effects.
Our findings suggest that political connections exert a discernible influence on SME financing obstacles and that this impact is economically significant as well. An analysis of this issue in the Indian context is useful for several reasons. First, it is well recognised that the nexus between business and politics has become widely pervasive in India. Reflecting this fact, Sinha (2019) provides examples of how crony capitalism has been allowed to germinate, imposing high economic and financial costs. Second, even with regard to SMEs, certain products are exclusively reserved for their production. Although the list of products has gradually shrunk over time, SMEs depend highly on government funding for their business operations (Balasubrahmanya, 1995). Third, studies focusing on the Indian experience concerning SMEs are limited (Athaide & Pradhan, 2020; Ghani et al., 2014; Raj & Sen, 2015), and even where they exist, they do not address the relevance of politics. Given that these entities are an important driver of growth, assessing various facets of their performance is important to obtain a holistic picture (Government of India, 2019). Finally, COVID-19 significantly impacted SMEs in the country due to reduced (or lost) orders, unavailability of raw materials, and loss of markets. Reflecting this fact, estimates suggest that close to 50% of SMEs in India were adversely affected (United Nations Conference on Trade & Development, 2022). Reviving this sector with financial and logistical support requires a comprehensive assessment. The rest of the analysis unfolds as follows. "Theoretical framework" briefly outlines the theoretical motivation underlining the relevance of political connections, followed by the "Received evidence and contribution". Subsequently, we introduce the "Database and variables", followed by the "Empirical framework and results". The final section, "Conclusions and managerial implications", concludes. Theoretical framework From a theoretical standpoint, three strands of literature have emerged, highlighting the relevance of political connections. The first is based on the resource dependence theory (Pfeffer & Salancik, 1978). This theory observes that an organisation's behaviour is shaped by the external ecosystem and the resources it draws from it. One such external resource is politicians. Forging networks with politicians enables an organisation to gain access to scarce resources or, even when such resources are available, to obtain them on more favourable terms. The second argument is based on the pork barrel theory (Dixit & Londregan, 1998). It refers to the fact that politicians often allocate significant resources to improve local constituents' economic and social prospects, thereby securing their support and votes. In this regard, maintaining political connections is beneficial, since it can help facilitate the allocation of resources to desired entities (or groups). The final line of reasoning is based on the social capital theory, which refers to the network of relationships among people that facilitates the smooth functioning of society. In the context of SMEs, this theory observes that the quid-pro-quo relationship between SMEs and politicians acts as a means to improve the firms' competitive advantage (Johanson & Mattsson, 1988; Li et al., 2008) and, relatedly, as a means to distribute scarce resources. Each theory provides insights as to why SMEs need to forge political connections to further their business. Received evidence and contribution Our analysis makes two distinct contributions.
The first is the role of politics in affecting credit obstacles for SMEs. Employing data for Italy, Sapienza (2004) finds that state-owned Italian banks charge lower interest rates to state-owned firms. Other studies have highlighted the role and relevance of politics in examining banks' vulnerabilities to political exigencies (Dinc, 2005). Within a cross-country setup, Lashitew (2014) shows that greater political connections increase firms' access to credit. Using data for Vietnam, Minh et al. (2021) show that SMEs with political connections pay anywhere between 7 and 10% lower taxes than their non-connected peers. In the Indian setup, Dinc and Gupta (2011) show that political patronage compels governments to sidestep firm privatisation, especially when political competition is strong. Similarly, Asher and Novosad (2020) show that the construction of paved roads in rural India proceeds much faster provided the local politician is aligned with the government in power at the state level. We contribute to this literature by investigating whether politics matters for SMEs' financing obstacles. Second, we contribute to the literature on gender by assessing the interlinkage between politics and gender, especially for SMEs. In an influential study, Chattopadhyay and Duflo (2004) find that in West Bengal and Rajasthan, female-led village councils prefer to expend resources on infrastructure relevant to women in their community. Other studies explore the effects of female representation on related aspects, such as the allocation of public goods (Clots-Figueras, 2011), employment (Ghani et al., 2014), and crime reduction (Iyer et al., 2012). Chaudhuri et al. (2020) distinguish between women-owned and women-managed businesses and show that the performance of the latter category of SMEs is significantly weaker compared with the former. Akin to their analysis, we distinguish between women-owned and women-managed firms and explore whether gender matters for SME financing in the presence of political connections. Over and above these, we also address three related issues. First, we explore whether ease of doing business (EoDB) at the state level translates into lower financing obstacles for SMEs (World Bank, 2018). 2 Second, it is well recognised that credit penetration varies widely across states (Reddy, 2012). Therefore, it appears likely that SMEs located in states with lower credit penetration could be relatively more constrained for credit. We examine this aspect in our empirical analysis. Finally, we focus on the role of foreign banks. In particular, we contribute to this evidence by looking at the impact of foreign banks on SME financing obstacles in the presence of political connections. Data source The data source is the World Bank Enterprise Survey (WBES). The survey is an ongoing exercise that collects firm-level data across countries based on a standardised procedure. During 2002-2020, close to 150 countries were covered by the WBES. Besides balance sheet and sales details, the data also provides responses to questions related to government-business relationships, various types of obstacles facing firms, employment, and capital stock. The key sectors covered for each country are manufacturing, construction, and services. A two-stage stratification is employed to determine the sample size in each sector. The first stage is determined according to the sector's relative importance in the overall economy, and the second stage is based on firm size and geographical location.
For the manufacturing sector, which is the focus of our analysis, the industry grouping is based on the 2-digit ISIC classification. Firm sizes in the WBES are categorised based on full-time employees as small (between 5 and 19 employees), medium (between 20 and 99 employees) and large (over 100 employees). This standardisation ensures that the data is comparable over time and across countries. Our sample focuses on the Indian case, where the survey was conducted during June 2013-June 2014. The data was collected based on a sample of 9281 formal businesses in the private sector having a minimum of five employees, categorised by firm size and geography. After filtering and removing non-manufacturing firms, we have a total of 7796 firms across 23 states and 15 industries. 3 Figure 1 shows the distribution of manufacturing firms across states. There is significant regional variation in SMEs. The top three states account for a quarter of the total SMEs, and the share of the top five states is close to 40%. The high contribution of certain states is the result of several factors, such as a conducive policy environment, infrastructural support, a skilled workforce, and industry-friendly policies. Dependent variable The key dependent variable is the response to the question: "how much of an obstacle is access to finance?" The response to this question is qualitative, ranging over "No obstacle," "minor obstacle," "moderate obstacle," "major obstacle," and "severe obstacle." We transform these qualitative responses into a quantitative scale, ranging from one (severe obstacle) to five (no obstacle), so that higher values indicate a lower perceived obstacle by the firm. For each state-firm combination, we compute the average value of financing obstacles. We then divide the average values for each firm size class by 5 (the maximum value). As a result, the overall financing obstacle for each size class ranges from 0.2 (severe) to 1 (no financing obstacle). We plot the financing obstacle and, relatedly, show the "distance-to-frontier" by subtracting this value from 1 for each state-firm size combination. The taller the bar for each state-firm size combination, the lower the financing obstacle for firms of that size class within the state. From this standpoint, Fig. 2 shows that Bihar presents the highest degree of financing obstacle across all firm size classes, whereas such financing constraints are the lowest in Odisha and Punjab. Among others, financing obstacles for small firms are on the higher side in Uttar Pradesh and Goa, in Rajasthan for medium firms, and in Tamil Nadu for large firms (Government of India, 2019). Key independent variable The key independent variable is the response to the question which records the percentage of senior managers' time spent dealing with government regulations. The response to this question ranges from zero to 100 and also includes qualitative responses such as "do not know" (which we treat as missing values). We define a dummy variable which equals one if the senior managers of a firm spend more than the median time dealing with government regulation; the rest are classified as spending less time. We term this variable Political; it shows the proportion of senior managers who spend more time dealing with government regulations. Figure 3 plots the response. It shows wide variability in the time firms spend dealing with government regulations.
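A minimal pandas sketch of the two variable constructions just described (the dependent obstacle score and the Political dummy) might look as follows; the data frame and its column names are our illustrative placeholders, not the WBES variable names.

```python
# Hedged sketch of the dependent-variable recoding and the Political dummy.
import numpy as np
import pandas as pd

scale = {"Severe obstacle": 1, "Major obstacle": 2, "Moderate obstacle": 3,
         "Minor obstacle": 4, "No obstacle": 5}
df = pd.DataFrame({
    "state": ["Bihar", "Bihar", "Punjab", "Punjab"],
    "size_class": ["small"] * 4,
    "fin_obstacle": ["Severe obstacle", "Major obstacle", "No obstacle", "Minor obstacle"],
    "mgmt_time_pct": [10.0, 40.0, 0.0, np.nan],  # % of time; NaN = "do not know"
})

# Dependent variable: 1 (severe) ... 5 (none); state-size means scaled by 5.
df["obstacle_score"] = df["fin_obstacle"].map(scale)
summary = (df.groupby(["state", "size_class"])["obstacle_score"]
             .mean().div(5.0).rename("scaled_mean").to_frame())  # in [0.2, 1.0]
summary["distance_to_frontier"] = 1.0 - summary["scaled_mean"]

# Key regressor: Political = 1 if time spent exceeds the sample median.
median_time = df["mgmt_time_pct"].median(skipna=True)
df["Political"] = np.where(df["mgmt_time_pct"].isna(), np.nan,
                           (df["mgmt_time_pct"] > median_time).astype(float))
print(summary, "\n", df[["mgmt_time_pct", "Political"]])
```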
Firms in Bihar, Haryana, Maharashtra, and West Bengal typically spend more time on average dealing with such regulations, although this magnitude varies across firm sizes. Control variables To account for other factors, we employ several control variables. The first control variable is size. Evidence suggests that larger firms are typically better performers, all else equal (La Porta & Shleifer, 2014). That being the case, they should face lower financing obstacles. Based on the data, we define firm size as a categorical variable, taking values of 1, 2, and 3 for small, medium, and large firms, respectively. Age is a proxy for reputation (Diamond, 1991). Older firms have a better reputation and encounter fewer impediments in accessing finance. Their cost of borrowing is likely to be lower, and they therefore face lower financing obstacles. Evidence suggests that both the pattern of ownership and legal status are intricately linked with firm performance (Barbera & Moores, 2013). Taking this consideration on board, we control for firm ownership and legal status. The export orientation of a firm has been observed to positively impact its performance. As a result, we control for this fact by using a dummy variable which equals one if the firm's export-to-sales ratio is positive, else zero. To account for access to finance, we include a dummy variable that takes the value one provided the firm has access to a bank loan or any other credit line, else zero. Firm performance is also linked with its technological sophistication (Farrell, 2004; Grimm et al., 2012). In view of this, we use a dummy variable which equals one if a firm has some recognised certification from an international agency, else zero. Innovation is a key input of firm performance (Aas & Pedersen, 2011). By improving products and processes, innovative firms creatively disrupt less innovative incumbents and thereby generate profits, leading to a virtuous circle of innovation and performance. Therefore, we use a dummy variable that equals one if the R&D-to-sales ratio of the SME is positive, else zero. We include two variables to capture the importance of human capital (Gennaiolo et al., 2013): the natural logarithm of the number of years of work experience of the top manager and the share of temporary workers in total workers. As an indicator of professionalism, we utilise a dummy which equals one if a firm employs an external auditor, and zero otherwise. Finally, we control for the industry and state in which the SME is located to account for other unobservables. Table 1 shows the variable definitions, including summary statistics. The average value of the obstacle to finance is 3.8, suggesting that access to finance is, on average, somewhat more than a minor obstacle. Across firms, the average value of Political is 0.45, while senior managers appear, on average, to spend less than 1% of their time dealing with governments. Although the average values are low, the distribution is highly dispersed: in 55% of cases, senior managers spent no time dealing with the government, whereas among the remainder, 38% of the senior managers spent up to 10% of their time doing so. At the firm level, 34% of the firms are small, 44% are medium, and the remaining are large. Among others, the age of a firm is 21 years on average, suggesting that firms have been in operation for quite a substantial period. At the state level, the average EoDB is 43%; credit penetration is close to 50%; and foreign bank credit is just over 2%.
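A few of these control-variable constructions can be sketched in pandas as below; again, the column names are our placeholders for the underlying WBES items, not the survey's actual codes.

```python
# Hedged sketch of constructing some of the controls; columns are placeholders.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "exports": [0.0, 12.5, 0.0], "sales": [100.0, 250.0, 80.0],
    "rd_spend": [0.0, 5.0, 1.0], "mgr_experience_yrs": [10, 25, 3],
    "has_intl_cert": [0, 1, 0], "external_auditor": [1, 1, 0],
})
df["exporter"]   = (df["exports"] / df["sales"] > 0).astype(int)   # export-to-sales > 0
df["innovator"]  = (df["rd_spend"] / df["sales"] > 0).astype(int)  # R&D-to-sales > 0
df["ln_mgr_exp"] = np.log(df["mgr_experience_yrs"])                # top-manager experience
print(df[["exporter", "innovator", "ln_mgr_exp", "has_intl_cert", "external_auditor"]])
```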
Together, these numbers indicate a significant "distance to frontier" in doing business, moderate levels of credit penetration, and very low outreach of foreign bank credit. Table 2 presents the correlation matrix of the major variables. The key correlation is that of the financing obstacle with politics, which is negative and statistically significant, with a magnitude of 6.3%; in other words, the political outreach of senior managers does appear to ease credit obstacles. We also find that women-owned firms report credit obstacles to be more binding, and political outreach on their part does not alleviate financing obstacles. These raw correlations are less meaningful, since they do not control for firm-level factors. We therefore specify an empirical framework that can take these factors on board. It is to this aspect that we turn our attention next. Impact of political connections To assess the impact of political connections on financing obstacles for SMEs, for firm f, industry i, and state s, we estimate the following regression: $$Obstacle_{fis} = \alpha + \beta\, Political_{fis} + \gamma Z_{fi} + \delta F_{s} + \lambda_{i} + \mu_{s} + \varepsilon_{fis} \qquad (1)$$ In Eq. (1), Obstacle is the measure of the financing obstacle. The key independent variable is Political; Z and F are vectors of firm- and state-specific variables; λ and µ are industry- and state-fixed effects (which control for other unobservables at the industry and state level); and ε is the error term. As mentioned earlier, we transform the (qualitative) outcome variable into a (quantitative) scale, ranging from 1 to 5, with 1 indicating a "severe obstacle" (worst) and 5 proxying for "no obstacle" (best). Given this well-defined order of the outcome variable, we employ the ordered logit model. An important concern in our regression analysis stems from reverse causality from credit obstacles to political connections. This could be likely if senior managers increase their interactions with government officials after obtaining credit. To address this bias, we incorporate industry dummies. The main findings are presented in Table 3. Column (1) shows that senior managers who spend more time dealing with government regulations end up facing lower financing obstacles. Across columns, these findings manifest in small and medium firms, although there is no impact for large firms. To facilitate better interpretation, we report the average marginal effects (AMEs) for the key coefficients. The AMEs provide a summary statistic that reflects the full distribution of the independent variables (Williams, 2021). As a result, while we present the regression table and the AMEs for the baseline, we report only the AMEs for the key coefficients in subsequent regressions. Table 4 presents the AMEs, based on the estimates of the previous regression. In row (1), we present the results for all firms. The findings indicate that, on average, senior managers with political connections are 2.5 percentage points less likely to mention that there are no financing obstacles, about 1 percentage point more likely to mention it as a moderate or major obstacle, and 0.6 percentage points more likely to mention it as a severe obstacle.
Table 4. Average marginal effects of Political, by obstacle category:
               No obstacle   Minor       Moderate   Major      Severe
All firms      -0.025***     -0.001*     0.011***   0.009***   0.006***
Small firms    -0.048***     -0.004***   0.018***   0.022***   0.012***
Medium firms   -0.030**      -0.001      0.013**    0.011**    0.007**
Large firms     0.024        -0.002     -0.011     -0.006     -0.005
Next, we assess the relationship by firm size class. We find that managers of small firms are more likely to mention finance as a moderate to severe obstacle.
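Since the paper's estimation is done in Stata, the following is only a Python analogue of the ordered logit of Eq. (1), with the AME of the binary Political dummy computed manually as the change in average predicted category probabilities; the covariates and data are simulated for illustration.

```python
# Hedged Python analogue (the paper uses Stata) of the ordered-logit step and
# the AME of the binary Political dummy; data and covariates are simulated.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "Political": rng.integers(0, 2, n).astype(float),
    "log_age":   np.log(rng.integers(2, 50, n).astype(float)),
    "size_cat":  rng.integers(1, 4, n).astype(float),
})
# Latent index -> ordered outcome with categories severe (worst) ... none (best).
latent = -0.4 * X["Political"] + 0.2 * X["size_cat"] + rng.logistic(size=n)
codes = np.asarray(pd.cut(latent, bins=[-np.inf, -1, 0, 1, 2, np.inf], labels=False), dtype=int)
cats = ["severe", "major", "moderate", "minor", "none"]
y = pd.Series(pd.Categorical.from_codes(codes, categories=cats, ordered=True))

res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)

# AME of Political: mean predicted probabilities at Political = 1 minus at 0,
# holding the other covariates at their observed values.
X1, X0 = X.copy(), X.copy()
X1["Political"], X0["Political"] = 1.0, 0.0
ame = np.asarray(res.predict(X1)).mean(axis=0) - np.asarray(res.predict(X0)).mean(axis=0)
print(pd.Series(ame, index=cats).round(3))
```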
Similar is the case for medium firms, although the magnitudes are lower in this case. Large firms are less likely to encounter financing obstacles. To provide an example, small firms are 1.8 percentage points more likely to mention financing as a moderate obstacle (column 3) and, likewise, medium firms are 1.3 percentage points more likely to cite financing as a moderate obstacle. These findings support prior evidence which suggests that 90% of the overall credit gap for SMEs pertains to small and medium firms (World Bank, 2018). Collectively, these results suggest that financing is a non-negligible obstacle stated by senior managers of SMEs. Such obstacles are much more important for small and medium-sized firms. Relevance of state characteristics Next, we examine the relevance of state characteristics. Accordingly, for firm f, industry i, and state s, we estimate regressions of the following form: $$Obstacle_{fis} = \alpha + \beta_{1} Political_{fis} + \beta_{2} SC_{s} + \beta\,(Political_{fis} \times SC_{s}) + \gamma Z_{fi} + \delta F_{s} + \lambda_{i} + \mu_{s} + \varepsilon_{fis} \qquad (2)$$ Our coefficient of interest is β. This coefficient shows whether political connections influence financing obstacles for different state characteristics (SC). As discussed earlier, we consider three state characteristics: the ease of doing business (EoDB) score of the state, the credit-to-NSDP ratio as a proxy for credit penetration, and the credit share of foreign banks in the state. We report the AMEs of the interaction term for all firms and separately by size class. In Table 5, the estimates suggest that although the ease of doing business lowers financing obstacles for firms overall, moderate to severe financing obstacles are still pertinent. The magnitude of the impact is particularly pronounced for large firms. To illustrate, the coefficient on Political × EoDB in column (4) equals 0.184 for all firms and 0.196 for large firms. Therefore, in spite of spending more time on nurturing political connections, firms are 18 percentage points more likely to experience financing as a major obstacle in general, and this magnitude is close to 20 percentage points for large firms. Although ease of doing business creates a conducive environment and eases financing obstacles by increasing the space for the private sector, such obstacles are not eliminated altogether. The fact that improving the ease of doing business facilitates the growth of new firms has been reported in cross-country research (Canare, 2018). When we look at credit, we find that with an improvement in credit penetration, political connections help to lower financing obstacles. Although certain minor obstacles remain, other obstacles are greatly reduced, especially for large firms. The point estimates in panel B indicate that with an increase in credit penetration, managers of firms with political connections are 4 percentage points less likely to experience severe financing obstacles. This magnitude is close to 7 percentage points for large firms. This could occur because the overall improvement in credit penetration trickles down to SME borrowers, lowering financial obstacles. Finally, when we interact political connections with foreign bank penetration, we find that financing obstacles are still pertinent, especially for medium firms. By way of example, the coefficient on Political × FB under major obstacles for medium firms shows that managers with political connections are 7 percentage points more likely to experience major financing obstacles despite foreign bank presence (panel C).
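A compact, self-contained sketch of the Eq. (2) interaction specification (again only a Python stand-in for the Stata estimation, with invented data and an assumed eodb column for the state-level EoDB score merged to firms) is:

```python
# Hedged sketch of the Eq. (2) interaction specification; data are simulated.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 400
X = pd.DataFrame({
    "Political": rng.integers(0, 2, n).astype(float),
    "eodb":      rng.uniform(20.0, 70.0, n),        # state EoDB score (assumed column)
})
X["Political_x_eodb"] = X["Political"] * X["eodb"]  # the interaction term of Eq. (2)
cats = ["severe", "major", "moderate", "minor", "none"]
y = pd.Series(pd.Categorical.from_codes(rng.integers(0, 5, n), categories=cats, ordered=True))

res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(res.params.round(3))   # includes the coefficient on Political_x_eodb
```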
These results support the cherry-picking hypothesis, which observes that foreign banks pick the most creditworthy customers for lending transactions, thereby limiting the flow of credit to riskier segments such as SMEs with limited collateral and credit history (Berger et al., 2001; Clarke et al., 2006). To sum up, these results suggest that increasing credit penetration is the best antidote for alleviating the credit challenges faced by SMEs with political connections. Other considerations, such as improvements in doing business or enriching foreign bank credit penetration, are not really useful in addressing such obstacles. Relevance of gender Thus far, our estimations included gender as a control variable in the regressions. Several considerations may drive gender-based differences in loans. First, women's risk appetite might be lower than men's (Coleman & Robb, 2009). Second, women might have different specialisations (e.g., the service sector) as compared to men (Heilbrun, 2005). Third, women's human capital levels could be different compared to men's (Boden & Nucci, 2000). Finally, social norms and culture could also be responsible for women's lower reliance on external finance (Klugman et al., 2014). From a theoretical standpoint, two sets of theoretical arguments have highlighted the relevance of gender. The human capital theory argues that greater gender diversity helps to improve the efficacy of decision-making (Carter et al., 2010). That being the case, gender-diverse firms are less likely to encounter financing obstacles. In comparison, the agency theory observes that by addressing the informational biases in decision-making, gender diversity provides a fresh perspective, thereby helping to alleviate financing obstacles (Campbell & Minguez-Vera, 2008). To investigate this aspect, for firm f, industry i, and state s, we estimate the following regression: $$Obstacle_{fis} = \alpha + \beta\, Gender_{fis} + \gamma Z_{fi} + \delta F_{s} + \lambda_{i} + \mu_{s} + \varepsilon_{fis} \qquad (3)$$ where the notations are as earlier, and Gender is a dummy for firms with a woman owner (WO), a woman manager (WM), or a woman as both owner and manager (WOM). Three findings are of interest in Table 6. First, across all firms, the impact is manifest mainly with women as owners and as managers; there is limited impact when women perform dual roles. In terms of magnitude, women owners are 1.4-2.8 percentage points more likely to mention finance as a moderate to major obstacle. Second, across size categories, the impact is in evidence for medium and large firms (Chaudhuri et al., 2020). And finally, there is evidence of women managers citing finance as a moderate to severe obstacle, especially in large firms. Collectively, these findings highlight the challenges facing the "missing middle" of SMEs in India, which outgrow the size thresholds for government benefit schemes and are therefore unable to take advantage of them while, at the same time, not receiving adequate finance from institutional sources. Conclusions and managerial implications Using survey data for India, the paper assesses the importance of political connections in alleviating financing constraints for SMEs. The findings suggest that political connections alleviate minor financing obstacles but are not sufficient to assuage higher-order financing obstacles. To be more specific, political connections effectively address financing obstacles for small and medium firms, for whom these are most pertinent. From the standpoint of state characteristics, we find that greater ease of doing business and greater credit penetration help to assuage financing obstacles.
In contrast, the impact of foreign banks in redressing financing obstacles is not so compelling. Finally, from the lens of gender, financing obstacles are the least relevant when women perform the dual roles of owner and manager of firms.

Such evidence provides interesting policy implications. At a broader level, the findings reiterate prior research suggesting that, notwithstanding their political connections, small and medium firms bear the brunt of financing obstacles. In this milieu, political connections are often an antidote for alleviating financing obstacles. Second, at the level of states, the evidence shows that merely improving the ease of doing business or increasing foreign bank outreach might not necessarily lower financing constraints. What is important is to improve overall credit penetration, which ensures a trickle-down effect towards minimising financing obstacles for SMEs. In this regard, our findings contribute to the evidence on how political connections influence SME access to finance. Whether and to what extent politics interacts with other related policies to affect SME financing remains an important topic for future research.

[Table 6: Average marginal effect (AME) of financial obstacles, by gender of the SME. ***, **, and * indicate statistical significance at the 1, 5, and 10% levels, respectively.]

Over time, there has been a significant improvement in the business environment in India. Reflecting this fact, India's rank on the ease of doing business improved from over 100 during 2015-2016 to 77 in 2019. This is intended to improve the business environment and attract foreign capital. Despite these improvements, micro-level concerns remain prevalent. As a result, given the dependence of SMEs on government support, SME managers need to interact closely with government officials to ensure their businesses remain afloat. In this regard, this analysis provides valuable insights into the magnitude of such interactions and their impact on financing obstacles for SMEs, after controlling for other confounding characteristics. Secondly, an analytical assessment of the differential impact of business obstacles across firms suggests that not all SMEs can secure better policy support for their business operations. This has occurred despite the government having provided a significant number of support measures to promote SME development in the country. These findings echo recent research reporting that corruption is one of the major obstacles faced by Indian firms (Jabeen et al., 2021). In this respect, the study underscores key concerns faced by business enterprises across size classes, with a particular emphasis on the importance of political connections, necessitating policy measures that can address such challenges.

Several limitations of the study are in order. First, owing to data constraints, we are not able to study the evolution of firm behaviour over time. In addition, the secondary data provide limited choices in determining the variables relating to political connections faced by SMEs. Going forward, research can take on board theoretical advancements and construct more suitable variables by conducting in-depth interviews of respondents to arrive at more comprehensive measures.

Data availability: The analysis is based on the World Bank Enterprise Survey (WBES) data, which is publicly available at the World Bank website and can be made available upon reasonable request.
Code availability: The relevant Stata code can be made available upon request.
High fat diet impairs the function of glucagon-like peptide-1 producing L-cells

Highlights
• Long-term dietary changes impair the function of the gut endocrine system.
• High fat diet impairs nutrient-triggered GLP-1 release from the murine small intestine.
• L-cells from HFD-fed mice have reduced expression of many L-cell-specific genes.

Introduction

Hormones from the gut control food intake and insulin release as well as intestinal motility and secretion [1]. The post-prandial rise in the plasma concentration of GLP-1 signals to the brain that food has arrived in the gastrointestinal tract and to the pancreas that glucose is being absorbed. GLP-1 brings about the sensation of satiety together with the enhancement of insulin release [2]. As dietary habits have changed and people now consume more fat and sugar, this raises concerns about whether dietary intake affects the release of gut hormones, and whether this in turn might contribute to rising levels of obesity and diabetes.

GLP-1, like other gut hormones, is released from enteroendocrine cells (EECs) located in the intestinal epithelium. EECs have a life span of only about 5 days and are constantly replenished from stem cells in the intestinal crypts. Depending on their position along the gastrointestinal tract, EECs exhibit characteristic hormonal signatures. In the duodenum, for example, there is a high number of K-cells producing GIP, whereas in the distal ileum and colon there are more L-cells expressing GLP-1 and PYY [3]. A few published studies suggest that the number of EECs can be altered by environmental stimuli. In intestinal organoid cultures in vitro, for example, the number of L-cells is increased by exposure to short chain fatty acids [4], and in vivo, elevated L-cell numbers have been observed in germ-free mice [5] and in rodents on a high-fiber diet [6].

How EECs are affected by dietary exposure is an area of great interest. If it were possible to increase L-cell number, this could lead to increased GLP-1 release and improved glucose tolerance and satiety. Surgical models have been used to investigate how EEC numbers are affected by exposure of different gut segments to luminal nutrients. In rats, gastric bypass surgery has been reported to result in an increase in L-cell numbers in the roux (alimentary) and common limbs [7,8]. In both cases, it was found that the increase in L-cell numbers was attributable to mucosal hypertrophy, but that there was no change in L-cell density. EECs also developed normally in the biliopancreatic limb. These studies suggest that increased ileal nutrient exposure in the context of a chow diet results in mucosal growth and a corresponding increase in L-cell number, but that the frequency of L-cells is not altered.

It is less clear how the intestine responds to diets with a high fat content. In high-fat diet (HFD) fed rodents, it has been reported that there is a reduction in the density of cells staining for chromogranin A and GLP-1, and correspondingly reduced expression of Gcg and Pyy by PCR [9,10]. In this study we have examined the transcriptomic and secretory properties of L-cells from mice fed on a high fat (60%), high sugar diet.

Animals

All procedures were approved by the UK Home Office and the local ethical committee of the University of Cambridge. Male GLU-Venus mice [11] on a C57Bl6 background were weighed at age 8 weeks and divided into 2 groups (n = 4-6 per group), each with a balanced range of body weights.
For the 16-week study, they were then single housed and placed on either a standard control chow (Special Diets Services, Rat and Mouse Breeder and Grower diet) or a high fat diet (Research Diets, #D12331, containing 60% energy from fat). Mice were weighed weekly for 16 weeks. Mice on the 2-week study were treated similarly, but were not single housed. Mice were killed by cervical dislocation 2-4 h after lights on. Intestinal tissues were harvested and treated as described below for flow cytometry (FACS), mRNA extraction or secretion experiments.

Small intestine for FACS sorting

For purification of cell populations by FACS, intestinal pieces were stripped of the outer muscle layers. Tissue was chopped into 1-2 mm pieces and digested to single cells with 1 mg/ml collagenase in calcium-free Hanks Buffered Salt Solution (HBSS). Single cell suspensions were separated by flow cytometry using a MoFlo Beckman Coulter Cytomation sorter (FL, USA). Side scatter, forward scatter and pulse-width gates were used to exclude debris and aggregates, and Venus-positive cells were collected at ~95% purity, alongside negative (control) cells that comprised a mixed population dominated by other epithelial cell types.

FACS analysis

Single cell suspensions were prepared for FACS analysis as described previously (9). Briefly, cells were fixed with 4% paraformaldehyde (PFA) in phosphate buffered saline (PBS), permeabilised with 0.1% (v/v) Triton X-100, blocked with 10% goat serum and then incubated with primary antibody in PBS-10% goat serum overnight at 4 °C. The primary antibodies used were: anti-GIP (gifted by Prof. J.J. Holst; 1:1000), anti-CCK (gifted by Prof. G.J. Dockray; 1:500) and anti-PYY (Progen, London, UK; 16066; 1:100). Cells were rinsed 3 times and then incubated for an hour at room temperature with secondary antibody (Alexa-Fluor 555, 633 or 647; Invitrogen, USA; A-21435, A-21105, A-21428 or A-21070, all at 1:800). Cells were analysed using a CyAn ADP (advanced digital processing) flow cytometer (Dako Cytomation, CA, USA) or an LSR Fortessa (BD Biosciences). Events with very low side and forward scatter were excluded as these are likely to represent debris, and events with a high pulse width were excluded to eliminate cell aggregates.

RNA extraction and qRT-PCR

Tissues used for RNA and protein extraction were washed in PBS, placed in RNAlater (Ambion, Life Technologies, Paisley, UK) and frozen until processed. Samples were homogenized in Tri-reagent (Sigma) and then treated as below. Total RNA from FACS-sorted cells prepared from GLU-Venus transgenic mice was isolated using a micro-scale RNA isolation kit (Ambion). All samples were reverse transcribed according to standard protocols. Quantitative RT-PCR was performed with a 7900 HT Fast Real-Time PCR system (Applied Biosystems, Life Technologies), using verified Taqman primer/probe sets supplied by Applied Biosystems. In all cases, expression was compared with that of β-actin measured on the same sample in parallel on the same plate, giving a CT difference (ΔCT) calculated as β-actin minus the test gene. Means, standard errors and statistics were computed on the ΔCT data and only converted to relative expression levels (2^ΔCT) for presentation in the figures.
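A worked example of the relative-expression arithmetic just described may help; the CT values below are invented purely for illustration.

```python
# Toy illustration of the 2^dCT quantification used above, with made-up
# CT values: dCT = CT(beta-actin) - CT(test gene), so a higher dCT means
# higher expression of the test gene relative to beta-actin.
import numpy as np

ct_actin = np.array([18.1, 18.4, 18.0])   # hypothetical beta-actin CTs
ct_gcg   = np.array([24.9, 25.6, 25.1])   # hypothetical Gcg CTs

dct = ct_actin - ct_gcg                   # statistics are done on these
rel = 2.0 ** dct                          # converted only for plotting

sem = dct.std(ddof=1) / np.sqrt(len(dct))
print(f"mean dCT = {dct.mean():.2f} +/- {sem:.2f} (SEM)")
print(f"relative expression (2^dCT) = {rel.mean():.4f}")
```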
Microarray

The quality of RNA samples was determined by Total RNA Pico Chip (Agilent Technologies, Stockport, UK), and only those with RIN values >7.0 were selected for microarray analysis. Total RNA from the samples was amplified using the NuGEN Ovation Pico WTA system (NuGEN Technologies, Leek, Netherlands) according to the manufacturer's protocol, labeled with the NuGEN Exon and Encore Biotin Modules, and hybridized onto Affymetrix Mouse Gene ST 1.0 arrays. Raw microarray image data were converted to CEL files using Affymetrix GeneChip Operating Software. All downstream analysis of the microarray data was performed using GeneSpring GX 12.1 (Agilent). The CEL files were used for both the robust multi-array average (RMA) and Probe Logarithmic Intensity Error (PLIER) analyses. After importing the data, each chip was normalized to the 50th centile of the measurements taken from that chip.

To assess the purity of the L-cell sorts, each probe on the microarray was assigned a value indicating its relative expression in L-cells vs non-L-cells (combining all microarrays from HFD- and chow-fed mice), and the data were ordered to identify the top 50 probes most enriched in the non-L-cell population. For each L-cell microarray we then calculated the geometric mean of these 50 non-L-cell probes as a measure of contamination by non-L-cells (see the sketch below). One of the 4 microarrays from HFD L-cells was excluded from further analysis because its mean expression of non-L-cell probes was >2 SD above the mean of all L-cell microarrays. In further analyses, the number of individual microarrays used was: L-cells on chow (3), L-cells on HFD (3), non-L-cells on chow (3) and non-L-cells on HFD (4).

Primary intestinal culture

Small intestinal and colonic primary cultures were prepared as previously described [11]. Briefly, mice 3-6 months old were sacrificed by cervical dislocation and the small intestine or colon was excised. Luminal contents were flushed thoroughly with PBS and the outer muscle layer removed. Tissue was minced and digested with Collagenase Type XI, and the cell suspension was plated onto 24-well plates pre-coated with Matrigel (BD Bioscience, Oxford, UK).

GLP-1 secretion assay

18-24 h after plating, cells were washed and incubated for 2 h at 37 °C with test agents made up in 0.25 ml saline buffer containing (in mM): 138 NaCl, 4.5 KCl, 4.2 NaHCO3, 1.2 NaH2PO4, 2.6 CaCl2, 1.2 MgCl2 and 10 HEPES (pH 7.4 with NaOH), supplemented with 0.1% BSA. At the end of the 2-h incubation, supernatants were collected, centrifuged at 2000 rcf for 5 min and snap frozen on dry ice. Cells were mechanically disrupted in 0.5 ml lysis buffer containing 50 mM Tris-HCl, 150 mM NaCl, 1% IGEPAL CA-630, 0.5% deoxycholic acid (DCA) and complete EDTA-free protease inhibitor cocktail (Roche, Burgess Hill, UK) to extract intracellular peptides, centrifuged at 10,000 rcf for 10 min and snap frozen. GLP-1 was measured using an electrochemiluminescence total GLP-1 assay (MesoScale Discovery, Gaithersburg, MD, USA), and results were expressed as a percentage of total (secreted + lysate) GLP-1 and normalized to basal secretion in response to saline measured in parallel on the same day. Chemicals were purchased from Sigma (Poole, UK) unless otherwise indicated.

Data analysis

Results are expressed as mean ± SEM. Statistical analysis was performed using GraphPad Prism 5.01 (San Diego, CA, USA). For GLP-1 secretion data, one-way ANOVA with post hoc Dunnett's or Bonferroni's tests was performed on log-transformed secretion data, as these data were heteroscedastic. Values were regarded as significant when p < 0.05.
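The sort-purity filter described in the Microarray section reduces to a few lines of array arithmetic. The sketch below restates it with a hypothetical expression matrix; the thresholds follow the text (top-50 enriched probes, exclusion at >2 SD), but the data layout and numbers are invented.

```python
# Sketch of the sort-purity QC described above (hypothetical data layout):
# rows = probes, columns = arrays; 'expr' holds normalized intensities.
import numpy as np

rng = np.random.default_rng(0)
expr = rng.lognormal(mean=5, sigma=1, size=(20000, 13))  # stand-in matrix
is_lcell = np.array([1] * 6 + [0] * 7, dtype=bool)       # 6 L-cell arrays

# 1. Rank probes by enrichment in non-L-cells (all diets combined).
enrichment = expr[:, ~is_lcell].mean(axis=1) / expr[:, is_lcell].mean(axis=1)
top50 = np.argsort(enrichment)[-50:]

# 2. Contamination score per L-cell array: geometric mean of those probes.
sub = expr[top50][:, is_lcell]
geo_mean = np.exp(np.log(sub).mean(axis=0))

# 3. Exclude L-cell arrays scoring >2 SD above the mean of all L-cell arrays.
keep = geo_mean <= geo_mean.mean() + 2 * geo_mean.std(ddof=1)
print("L-cell arrays kept:", keep)
```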
HFD affects the expression of gut hormone mRNAs in tissue homogenates

Two weeks on HFD resulted in reduced expression of Gcg and Insl5 in colonic tissue, and a tendency toward reduced expression of Cck and Pyy, suggesting that HFD causes either a reduction in colonic L-cell density or impaired production of hormones by individual enteroendocrine cells (Fig. 1). No significant changes were observed in the expression of Gcg, Cck, Gip or Pyy in small intestinal tissue homogenates, although we did detect an increase in somatostatin (Sst) mRNA in the ileum of HFD-fed mice. Hormone expression was unaffected in the rectum.

HFD alters gut peptide production by individual L-cells

To evaluate the frequency of L-cells and their individual production of peptide hormones, we performed FACS analysis of cell suspensions from transgenic mice expressing the yellow fluorescent protein Venus driven by the gcg promoter, immunostained for different gut hormones. Quantification of Venus-labeled L-cells in different regions of the GI tract revealed that the large intestine (colon + rectum) of HFD-fed mice contained a significantly lower percentage of Venus-positive L-cells than their chow-fed littermates (Fig. 2A). In large intestinal cell suspensions co-stained with antibodies against CCK and PYY, we observed a particular reduction in the number of L-cells staining strongly for CCK, and a corresponding increase in the intensity of L-cell PYY staining, in mice fed on HFD for 2 weeks (Fig. 2B-D). In the upper small intestine, the frequency of Venus-labeled L-cells was not markedly affected by the HFD (Fig. 2E). Both the frequency of GIP-stained cells (Fig. 2E), however, and the proportion of L-cells that were immuno-positive for GIP (data not shown) were reduced in small intestinal cell suspensions from HFD-fed mice. Staining for PYY and CCK was not noticeably different between the chow and HFD groups in the small intestine (Fig. 2E and data not shown).

GLP-1 release is altered in intestinal cultures from HFD-fed mice

We next evaluated whether the function of L-cells was modified by high fat feeding. GLP-1 secretion was assessed in small intestinal cultures from mice fed for 2 weeks on either HFD or chow (Fig. 3).

[Displaced figure legend: ...cells were quantified as a percentage of the total cell number. Columns represent the mean, and error bars represent 1 SEM of n = 3 (colon) and n = 5 (small intestine). *p < 0.05, ***p < 0.001, by Student's t-test.]

Cultures from chow-fed mice had a basal secretory rate of 2.8% per 2 h, which was stimulated 3.5-fold by 10 mM glucose, 7.9-fold by 0.5% peptone, 1.6-fold by 100 µM linoleic acid, 20-fold by glucose/forskolin/IBMX, 3.2-fold by 20 mM Gly-Leu, and 5.6-fold by 1 µM PMA. Cultures from HFD-fed mice exhibited an increased rate of basal GLP-1 release (5.0% per 2 h vs. 2.8% in chow-fed animals, p < 0.05). The amplification of GLP-1 secretion above the basal rate by glucose, peptone, forskolin/IBMX and Gly-Leu was reduced in the HFD-fed group (Fig. 3), although absolute secretory rates (%) in the presence of the different stimuli were similar in chow-fed and HFD-fed mice. Basal and stimulated GLP-1 release from colonic cultures was not altered by placing mice on HFD for 2 weeks (data not shown). In mice fed for 16 weeks on HFD, however, basal GLP-1 secretion from colonic cultures was elevated ~2-fold, from 2.4 ± 0.2% per 2 h in chow-fed mice (n = 12 wells from n = 4 mice) to 5.6 ± 0.9% in HFD-fed mice (n = 9 wells from n = 3 mice, p = 0.0005).
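The secretion figures quoted above combine two normalizations: GLP-1 released into the supernatant as a percentage of total (secreted + lysate) content, then fold-change over the same-day saline basal. A minimal recomputation with invented per-well amounts:

```python
# Toy recomputation of the secretion measure used above, with made-up
# supernatant/lysate GLP-1 amounts (pg per well) for one plate. The
# numbers are chosen to reproduce the 2.8% basal / 3.5-fold glucose case.
secreted = {"saline": 28.0, "glucose": 95.0}   # hypothetical wells
lysate   = {"saline": 972.0, "glucose": 870.0}

pct = {k: 100 * secreted[k] / (secreted[k] + lysate[k]) for k in secreted}
fold = pct["glucose"] / pct["saline"]          # fold over same-day basal

print(f"basal: {pct['saline']:.1f}% per 2 h; glucose: {pct['glucose']:.1f}%; "
      f"fold-stimulation: {fold:.1f}x")
```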
Effect of HFD on L-cell gene expression

The effect of HFD on gene expression in L-cells was evaluated using GLU-Venus mice fed on a HFD or chow diet for 16 weeks. HFD feeding in this group resulted in a significant increase in body weight as well as fat-pad weight. Small intestinal L-cells and non-L-cells from 3 chow-fed and 4 HFD-fed mice were separated by FACS sorting and subjected to mRNA microarray analysis. Intensity values calculated by RMA analysis indicate the relative expression of individual genes in L-cells and non-L-cells, and were compared between the chow- and HFD-fed cohorts (Fig. 4). Gcg expression was not different between L-cells from HFD- and chow-fed mice. This was expected, as L-cells were collected based on their fluorescence of Venus driven by the gcg promoter. Expression of mRNAs for the gut hormones Gip, Cck, Pyy, secretin (Sct) and neurotensin (Nts) was significantly reduced in small intestinal L-cells from HFD-fed mice (Fig. 4A). Corresponding with these reduced mRNA levels for gut hormones, L-cells from HFD mice also exhibited lower expression of the prohormone processing enzymes Pcsk1 (prohormone convertase 1/3, p = 0.026) and Cpe (carboxypeptidase E, p = 0.034) (Fig. 4B), as well as members of the granin group, particularly chromogranin B (Chgb, p = 0.009) and secretogranin 2 (Scg2, p = 0.019) (Fig. 4C).

To confirm that the data do not represent a global reduction of gene expression in L-cells from HFD-fed mice, we examined expression of members of the fatty acid binding protein family. Fabp1, 2 and 6 were found to be ubiquitously expressed in the different gut cell populations, whereas Fabp5 expression was highly enriched in L-cells (Fig. 4D). Expression of the non-L-cell-enriched FABPs (1, 2 and 6) was high in both L-cells and non-L-cells and was unaffected by HFD. Expression of Fabp5, by contrast, was enriched in L-cells compared with control cells, and was significantly lower in L-cells from mice on HFD (p = 0.01). Our data therefore suggest that feeding mice a HFD results in reduced expression in L-cells of many genes that determine the characteristics and function of an L-cell.

We further examined whether the down-regulation of L-cell genes is accompanied by changes in expression of transcription factors known to be involved in determination of the enteroendocrine-cell lineage. As shown in Fig. 5C, L-cells from mice on HFD exhibited significantly reduced expression of Etv1 (p = 0.003), Isl1 (p = 0.017), Mlxipl (p = 0.009), Nkx2.2 (p = 0.045) and Rfx6 (p = 0.009).

Discussion

Our data show that mice fed a HFD exhibited changes in their gut endocrine system compared with mice on a chow diet. There were small changes in enteroendocrine cell number, altered hormone secretion, and reduced expression in L-cells of a number of genes that determine the properties and function of an L-cell. Examination of global gene expression in tissue homogenates from different regions of the guts of mice fed for only 2 weeks on HFD revealed changes in mRNAs for gut hormones, particularly in the colon. The numbers of mice used for these experiments were, however, small, and the data do not indicate whether the enteroendocrine cell number changed or whether the changes represent alterations in hormonal expression within EECs. FACS analysis with immuno-staining was therefore performed to quantify EECs in different regions of the gut and to confirm the production of gut hormones at the peptide level.
This revealed that the number of L-cells was lower in the colons of HFD-fed mice, corresponding with the reductions in gene expression observed in the whole tissue homogenates. By FACS analysis it also appeared that the hormonal signature of colonic L-cells was shifted, with L-cells from the fat-fed mice exhibiting reduced CCK production and a tendency for increased PYY production. In the small intestine, by contrast, L-cell numbers did not appear to be affected by high fat feeding, but we observed a reduced number of GIP-stained cells.

As it has been reported that there are sufficient L-cells in the proximal small intestine to account for post-prandial hormone release [12], we examined whether L-cell transcriptomics and hormone release were altered in small intestinal L-cells. A large number of transcriptomic changes were observed in L-cells from mice fed on HFD, with a significant down-regulation of mRNAs encoding L-cell-specific transcription factors, enteroendocrine hormones, prohormone processing enzymes, granins and nutrient-sensing machinery. The data do not appear to represent either a global reduction in gene expression in L-cells or contamination of the FACS-purified L-cell pool by non-L-cells. We therefore conclude that L-cells in mice fed HFD had impaired expression of many genes required for the normal function of an enteroendocrine cell. This coincided with reduced expression of a number of EEC transcription factors. Isl1, Nkx2.2 and Rfx6 are transcription factors well known to play a role in the EEC lineage [13-15]. As Rfx6 has been reported previously to promote Gip expression [15], the reduction in Rfx6 might account for our observation of reduced Gip mRNA in L-cells. Etv1 and Mlxipl (encoding ChREBP) were identified in our previous analysis of transcription factors enriched in L-cells [16].

The reduced expression in HFD L-cells of Slc5a1, Abcc8, Slc15a1 and Gpr120 would suggest that L-cells from mice on HFD might exhibit reduced nutrient responsiveness [17-19]. Indeed, the changes in gene expression corresponded with an apparently reduced responsiveness of GLP-1 release to L-cell secretagogues in primary small intestinal cultures. Basal GLP-1 release, by contrast, was elevated both in small intestinal cultures of mice fed with HFD for 2 weeks and in colonic cultures of mice fed on HFD for 4 months. It is unclear why the basal rate of GLP-1 release was increased in the HFD-fed models, but we have observed a similar finding in ob/ob mice fed on chow (AMH, FG, FR, unpublished observations). Whilst we cannot exclude the possibility that the L-cells from HFD and ob/ob mice are more fragile, resulting in the lysis of a few cells during the incubation period that would mimic a high basal secretory rate, we believe it is more likely that an imbalance of second-messenger signaling pathways underlies this increased rate of basal secretion in the absence of added nutritional stimuli.

Our data show that substantial changes in the properties of L-cells arose when mice were fed on a HFD. Compared with chow, the HFD we used contained a higher percentage of fat (60%) and free sugars, and no fiber. We cannot therefore be certain which of these dietary macronutrient changes was responsible for the alterations in L-cell gene expression. Diets rich in fat and sugar and low in fiber are, however, commonly consumed by humans in the western world. Our data would suggest that such diets may have adverse effects on our gut hormones.
Whether humans who eat a diet rich in fat and sugar for prolonged periods exhibit increased basal GLP-1 release and lowered responsiveness to food ingestion will be interesting to examine in the future. A reduced post-prandial elevation of GLP-1 would, however, tend to lower the body's ability to signal satiety, and could result in a vicious cycle of over-eating. Resetting gut hormone responsiveness by a period on a healthy diet might, therefore, be a strategy to increase post-prandial satiety and tackle obesity.
Blue organic light-emitting diodes realizing external quantum efficiency over 25% using thermally activated delayed fluorescence emitters

Improving the performance of blue organic light-emitting diodes (OLEDs) is needed for full-colour flat-panel displays and solid-state lighting sources. The use of thermally activated delayed fluorescence (TADF) is a promising approach to efficient blue electroluminescence. However, the difficulty of developing efficient blue TADF emitters lies in finding a molecular structure that simultaneously incorporates (i) a small energy difference between the lowest excited singlet state (S1) and the lowest triplet state (T1), ΔE_ST, (ii) a large oscillator strength, f, between S1 and the ground state (S0), and (iii) an S1 energy sufficiently high for blue emission. In this study, we develop TADF emitters named CCX-I and CCX-II satisfying the above requirements. They show blue photoluminescence and a high triplet-to-singlet up-conversion yield. In addition, their transition dipole moments are horizontally oriented, resulting in a further increase of their electroluminescence efficiency. Using CCX-II as an emitting dopant, we achieve a blue OLED showing a high external quantum efficiency of 25.9%, which is one of the highest EQEs among blue OLEDs reported previously.

The external quantum efficiency can be written as

η_EQE = γ × (Φ_p + Φ_d) × η_out,

where γ is the carrier balance ratio of holes and electrons, and Φ_p and Φ_d are the contributions from prompt fluorescence and delayed fluorescence to the photoluminescence quantum yield (Φ_PL), respectively:

Φ_PL = Φ_p + Φ_d.

The Φ_PL of a TADF emitter can be increased by reducing the energy difference ΔE_ST between the lowest excited singlet state (S1) and the lowest triplet state (T1) while simultaneously increasing the oscillator strength (f) between S1 and the ground state (S0) [25,41]. A small ΔE_ST and a large f are satisfied when the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) of a TADF emitter are moderately separated in space. This HOMO-LUMO separation can be realized in TADF emitters containing covalently linked electron-donating and electron-accepting units [9,25,41]. As well as a small ΔE_ST and a large f, blue TADF emitters have the additional requirement of a high S1 energy. These three requirements limit the choice of electron-donating and accepting units, making it difficult to achieve blue TADF-based OLEDs with a high IQE.

We chose a carbazole derivative with a deep HOMO level as the donor moiety and xanthone with a shallow LUMO level as the acceptor moiety. Figure 1 shows the molecular structures of CCX-I and CCX-II. Quantum chemical calculations were performed with density functional theory (DFT) implemented in the Gaussian 09 program package [43]. Geometry optimization of the S0 states of CCX-I and CCX-II was performed at the PBE0/6-31G(d) level of theory. The f values and excitation energies for S1 and T1 were calculated by the time-dependent DFT method implemented in Gaussian 09. ΔE_ST was calculated as the difference between the S1 and T1 excitation energies. The HOMOs and LUMOs of the geometry-optimized CCX-I and CCX-II were predominantly distributed over the electron-donating and electron-accepting units, respectively, and were well separated spatially, as shown in Fig. 1. The torsion angles (α) between the electron-donating and electron-accepting units of CCX-I and CCX-II were calculated to be 50.0° and 51.2°, respectively. These angles allow for moderate HOMO-LUMO overlap.
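To make the efficiency bookkeeping above concrete, a quick back-of-the-envelope check shows how an EQE in the mid-20% range is consistent with a near-unity Φ_PL and an enhanced out-coupling factor. The γ, Φ_p and Φ_d values below are illustrative placeholders; only η_out = 0.295 comes from the simulation quoted later in the paper for CCX-II-6B.

```python
# Back-of-the-envelope check of eta_EQE = gamma * (Phi_p + Phi_d) * eta_out.
# gamma, phi_p and phi_d are illustrative placeholders; eta_out = 0.295 is
# the simulated out-coupling factor quoted later for CCX-II-6B.
gamma   = 1.00          # assume balanced carrier injection
phi_p   = 0.35          # placeholder prompt-fluorescence contribution
phi_d   = 0.65          # placeholder delayed (TADF) contribution
eta_out = 0.295         # horizontal-dipole-enhanced out-coupling (simulated)

eta_eqe = gamma * (phi_p + phi_d) * eta_out
print(f"predicted EQE = {eta_eqe:.1%}")  # ~29.5%: an upper bound, vs. the
# 25.9% measured once non-ideal exciton utilisation is taken into account.
```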
As shown in Table 1, the calculated ΔE_ST values of CCX-I and CCX-II are smaller than that of 2,4,5,6-tetra(9H-carbazol-9-yl)isophthalonitrile (4CzIPN) [44]. We fabricated an OLED using CCX-I as the emitting dopant dispersed in bis[2-(diphenylphosphino)phenyl] ether oxide (DPEPO), a widely used host material for blue TADF-based OLEDs, with indium tin oxide (ITO) as the anode, 4,4′-(cyclohexane-1,1-diyl)bis(N,N-di-p-tolylaniline) (TAPC) as the hole-transport layer, 3,3″,5,5″-tetra(pyridin-3-yl)-1,1′:3′,1″-terphenyl (BmPyPhB) as the electron-transport layer, lithium quinolin-8-olate (Liq) as the electron-injection layer, and Al as the cathode. The device structure was ITO (50 nm)/TAPC (70 nm)/6 wt% CCX-I:DPEPO (30 nm)/BmPyPhB (40 nm)/Liq (1 nm)/Al (80 nm), termed CCX-I-6A (Device A in Fig. 2a). CCX-I-6A showed poor device performance: the maximum η_EQE was 8.2% and decreased drastically at luminances (L) greater than 100 cd m⁻² (x marks in Fig. 2b). Figure 3a shows the EL spectra measured at current densities (J) of 1, 25, 60 and 100 mA cm⁻². The EL intensity in the range of 500-800 nm increased with increasing J. The bottom panel of Fig. 3b shows difference EL spectra obtained by subtracting the EL spectrum measured at J = 1 mA cm⁻² from those measured at J = 25, 60 and 100 mA cm⁻². Two emission bands, with maxima at 520 and 580 nm, appeared in the difference spectra. The latter emission band can be assigned to emission from the TAPC layer [45]. The former emission band may be assigned to emission from an exciplex formed between TAPC and CCX-I; the peak wavelength of 520 nm (2.4 eV) corresponds to the energy difference between the HOMO of TAPC and the LUMO of CCX-I. To verify this exciplex emission, we measured the PL spectra of 50 wt% CCX-I:TAPC, neat CCX-I and neat TAPC films fabricated by vacuum deposition. The PL spectrum of the 50 wt% CCX-I:TAPC film was clearly different from those of the CCX-I and TAPC neat films, suggesting that the emission from the 50 wt% CCX-I:TAPC film arose from the exciplex formed between CCX-I and TAPC (Fig. 3b). The PL spectrum of the 50 wt% CCX-I:TAPC film also agreed well with the emission band at 520 nm in the difference spectra, while the EL spectrum of TAPC agreed well with the emission band at 580 nm. These observations suggest that emission from the TAPC layer and from the exciplex is responsible for the EL emission in the 500-800 nm range, which leads to the poor performance of CCX-I-6A.

To prevent the formation of the exciplex, 9,9′-(2,2′-dimethyl-[1,1′-biphenyl]-4,4′-diyl)bis(9H-carbazole) (CDBP) was inserted as an interlayer between the TAPC and emissive layers (CCX-I-6B, Device B in Fig. 2a). CDBP has a high T1 energy (3.0 eV [46]) and functions as an exciton-blocking layer. In addition, we replaced DPEPO with dibenzo[b,d]furan-2,8-diylbis(diphenylphosphine oxide) (PPF). PPF has a higher T1 energy than DPEPO, and hence T1 excitons are more effectively confined in the emissive layer when PPF is used as the host. To avoid T1 energy transfer from the emissive layer to the BmPyPhB layer, a thin PPF layer was inserted as an exciton-blocking layer between the emissive and BmPyPhB layers. The resulting device structure was ITO (50 nm)/TAPC (70 nm)/CDBP (10 nm)/6 wt% CCX-I:PPF (20 nm)/PPF (10 nm)/BmPyPhB (30 nm)/Liq (1 nm)/Al (80 nm). Figure 3c shows the EL spectra of CCX-I-6B measured at J = 1, 25, 60 and 100 mA cm⁻². Unlike for CCX-I-6A, no notable changes were observed in the 500-800 nm range.
The EL spectrum shows a single emission band with a peak at 468 nm, assigned to emission from the 6 wt% CCX-I:PPF layer. Importantly, the η_EQE-L characteristics were considerably improved: the maximum η_EQE of CCX-I-6B was more than twice that of CCX-I-6A, reaching 17.6% (triangles, Fig. 2b). The η_EQE-L characteristics for the CCX-II-based OLEDs are also shown in Fig. 2b. At X = 6, we obtained a maximum η_EQE of 25.9%, which is the highest value reported for blue TADF-OLEDs to date. The J-V-L characteristics of CCX-II-6B are shown in Figure S3a, Supplementary Information. The peak wavelength of the EL spectra for CCX-II-6B was 471 nm, corresponding to CIE coordinates of (0.15, 0.22) (left photograph, Fig. 2b). The η_EQE and colour purity are comparable to those obtained with the blue phosphorescent emitter FIr6, which has CIE coordinates of (0.14, 0.23) [42]. When using a light out-coupling sheet (CCX-II-6B-OC), we obtained an η_EQE of 33.3%. The η_EQE remained at 21.9% even at a high luminance of 1000 cd m⁻² (open circles and right photograph, Fig. 2b). Increasing the doping concentration improved the roll-off in the η_EQE-L characteristics. For X = 15, a maximum power efficiency of 52.5 lm W⁻¹ and a current efficiency of 47.5 cd A⁻¹ were obtained (Figure S3b, Supplementary Information). These values are high compared with those of other blue TADF-based OLEDs reported to date [10-13,15-20]. The device characteristics of the CCX-I- and CCX-II-based OLEDs are listed in Table 2.

From angular-dependent PL measurements of 6 wt% CCX-I- and CCX-II-doped PPF films, we found that the transition dipole moments of CCX-I and CCX-II were horizontally oriented with respect to the glass substrate, which enhanced their light out-coupling factors. The ratios of the horizontal dipole (Θ) for the 6 wt% CCX-I- and CCX-II-doped PPF films were determined to be 0.75 and 0.83, corresponding to order parameters (S) of −0.17 and −0.36, respectively (Figure S4a, Supplementary Information). Optical simulations based on these S values showed that CCX-I-6B and CCX-II-6B can potentially exhibit η_out of 25.0% and 29.5%, respectively (Figure S4b,c, Supplementary Information). Using the relation IQE = η_EQE/η_out, we calculated the IQE values of CCX-I-6B and CCX-II-6B to be 70.4% and 87.8%, respectively. These IQE values are higher than those of conventional fluorescent OLEDs. Thus, the high performance of CCX-I-6B and CCX-II-6B results from efficient TADF and the horizontal orientation of the CCX-I and CCX-II molecules.

Figure 4a shows ultraviolet-visible (UV-vis) absorption and photoluminescence (PL) spectra of CCX-I and CCX-II in toluene solution (1.0 × 10⁻⁵ M). The UV-vis absorption intensity was larger for CCX-II than for CCX-I, reflecting the greater f value of CCX-II (Table 1). From the absorption edges (423 nm for both CCX-I and CCX-II), the HOMO-LUMO gaps of CCX-I and CCX-II were confirmed to be sufficiently wide for blue emission. The peak wavelengths of the PL spectra (λ_PL) for CCX-I and CCX-II were 453 and 450 nm, respectively. In dilute toluene solution, CCX-I and CCX-II showed pure blue emission. We also fabricated 6 wt% CCX-I:PPF and CCX-II:PPF thin films by vacuum deposition. The photoluminescence quantum yields (PLQYs) of the 6 wt% CCX-I:PPF and CCX-II:PPF doped films were both nearly 100% (97.2 ± 4% and 104.0 ± 4%, respectively) when CCX-I/CCX-II was directly excited.
When PPF was excited instead, the PLQYs decreased to 88.6 ± 4% and 96.8 ± 4%, respectively. This decrease suggests that some energy loss occurs during excited-state energy transfer from PPF to CCX-I/CCX-II. Figure S5, Supplementary Information, shows the PL spectra of the 6 wt% CCX-I:PPF and CCX-II:PPF doped films. The λ_PL values of the CCX-I:PPF and CCX-II:PPF doped films were 468 nm and 465 nm, respectively; the emission spectra of CCX-I and CCX-II in the PPF host were thus red-shifted by 15 nm compared with those in toluene solution. When CCX-I/CCX-II was excited, PLQYs of nearly 100% were obtained and no emission from PPF was observed, suggesting that PPF effectively confined the triplet excitons.

Figure 4b shows the temperature dependence of the transient PL decay curves for the 6 wt% CCX-II:PPF film. In addition to the prompt fluorescence, a long-tailed delayed fluorescence was observed. The delayed fluorescence increased with increasing temperature, indicating the involvement of a thermal activation process. Rate constants for prompt fluorescence, TADF, intersystem crossing (ISC) and reverse intersystem crossing (RISC), together with the contributions of the prompt and delayed components to the PLQY (Φ_p and Φ_d, respectively), are reported in Tables S1 and S2, Supplementary Information. Using a previously reported method [40], we calculated the IQE at 298 K to be 70.6% and 88.7% for CCX-I and CCX-II, respectively. The IQE calculated from the transient PL decays was in good agreement with that obtained from the optical simulations (70.4% and 87.8% for CCX-I and CCX-II, respectively). This agreement indicates that excitons are well confined in the emissive layers and that losses occur only in the emitting layer.

Figure 4c shows the temperature dependence of the PLQY, RISC efficiency and IQE of the CCX-II doped films. The PLQY of CCX-II is independent of temperature and remained at nearly 100%. The IQE also remained over 80%, suggesting that ΔE_ST is sufficiently small to induce RISC at room temperature. From Arrhenius plots of the rate constant of RISC, the ΔE_ST values of CCX-I and CCX-II were estimated to be 70 and 31 meV, respectively (Fig. 4d and Figure S6, Supplementary Information), which are smaller than that of 4CzIPN (83 meV).

In conclusion, we developed efficient TADF materials, CCX-I and CCX-II, with small ΔE_ST and large f. When doped into host matrices, CCX-I and CCX-II showed high PLQYs and blue emission. OLEDs containing CCX-II as an emitting dopant achieved an EQE of 25.9%, the highest reported to date among blue TADF-based OLEDs. The OLEDs also showed good colour purity, with CIE coordinates of (0.15, 0.22). Further device optimization, using host materials that produce a higher IQE than PPF, should allow additional improvements in the performance of these CCX-I- and CCX-II-based OLEDs.

Methods

Quantum chemical calculations. The ground-state geometries of CCX-I and CCX-II were optimized by DFT calculations. The minimum excitation energies of S1 and T1 were obtained by TD-DFT calculations. All calculations were performed at the PBE0/6-31G(d) level.

Synthesis and characterization. CCX-I and CCX-II were synthesized as detailed in Section 1 of the Supplementary Information, with yields of 76% and 100%, respectively. ¹H and ¹³C nuclear magnetic resonance (NMR) spectra were recorded on a Bruker Avance III 800-MHz spectrometer (800 MHz for ¹H, 201 MHz for ¹³C). CCX-I and CCX-II were used after purification by temperature-gradient sublimation.
Device fabrication and measurement of OLED performance. OLEDs with an active area of 4 mm² were fabricated by vacuum deposition at ~10⁻⁵ Pa on clean ITO-coated glass substrates with a deposition apparatus (SE-4260, ALS Technology, Japan). After fabrication, devices were encapsulated with a desiccant and a glass cap using epoxy glue in a N₂-filled glove box. The OLED characteristics were measured with a source meter (2400, Keithley, Japan) and an absolute EQE measurement system with an integrating sphere (C9920-12, Hamamatsu Photonics, Japan).
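As a worked illustration of the ΔE_ST extraction described above: the Arrhenius analysis amounts to a linear fit of ln k_RISC versus 1/T. The rate constants below are fabricated, chosen only so that the fit returns roughly the 31 meV reported for CCX-II; this is a sketch of the procedure, not the paper's data.

```python
# Arrhenius-style extraction of dE_ST from k_RISC(T):
#   ln k_RISC = ln A - dE_ST / (kB * T),
# so the slope of ln k vs 1/T gives -dE_ST/kB. Rates below are invented.
import numpy as np

kB = 8.617e-5                              # Boltzmann constant, eV/K
T = np.array([200.0, 250.0, 298.0])        # measurement temperatures, K
k_risc = np.array([2.5e5, 3.6e5, 4.5e5])   # hypothetical RISC rates, s^-1

slope, intercept = np.polyfit(1.0 / T, np.log(k_risc), 1)
dE_st = -slope * kB                        # activation energy in eV

print(f"dE_ST ~ {dE_st * 1e3:.0f} meV")    # ~31 meV for these toy rates
```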
Natural Lagrangians

In this paper, a probabilistic approach is used to derive a kind of abstract candidate for a natural Lagrangian in general relativity. The methods are very general, and the result is in a certain sense unique. However, to turn this abstract Lagrangian into an ordinary one, expressible in terms of the Riemann tensor, is so far an open problem. Some possible cosmological consequences are discussed.

Introduction

Is there a "natural" Lagrangian in the theory of general relativity? In other words, does the concept of "action" have a fundamental meaning, or should we just consider Lagrangians to be technical tools that may be designed more or less at our convenience to meet the needs of the specific situation that we are studying? Historically, there seems to have been a general belief that the action should be a natural and essentially unique concept. When the Hilbert-Palatini principle ([1,2]) was formulated, starting from the simplest possible choice of action, this belief was still quite common. However, already Eddington ([3]) was aware that many different Lagrangians would produce the same experimental predictions as the Hilbert-Palatini principle. Since that time, a large number of Lagrangians have been suggested, without any particular one of them having been able to prove itself essentially better than all the others.

The answer that I will advocate in this paper is that there should be a natural Lagrangian. However, it may at the same time be that this Lagrangian is something complicated, and that the chances of finding it by guessing are very poor. Thus, we are led to the next question: can we deduce the form of the natural Lagrangian? As it turns out, this may not be easy either. Therefore, what should we do then? Let us start by briefly recalling how it all started.

The Principle of Least Action

The principle of least action has a long history. Often, it is attributed to Maupertuis, who in the 18th century formulated the belief that the universe develops according to a principle of ultimate economy (see [4]). However, already Leibniz had formulated something similar several decades earlier, and Fermat's principle (which can be viewed as a special case) is still older. Moreover, the mathematical formulation, stating that a physical system must develop in such a way that the action of the system is (locally) minimized, rather seems to be due to Euler and Lagrange. That Maupertuis is the one most closely associated with this principle may partly depend on the fact that he, more than others, emphasized the metaphysical aspects of the idea: to him, the fact that nature in a sense always chooses the most economical way of developing was the ultimate and indisputable proof of God's existence. However, not all of Maupertuis' contemporaries were convinced. Already Euler had noticed that the action of a system can sometimes in a sense instead be maximized, something that apparently does not fit too well with the idea of ultimate economy. In spite of this, the principle of least action can only be regarded as a success story in science. However, as this principle has developed from a metaphysical principle into an efficient instrument in any physicist's toolbox, the philosophical aspects have so to speak faded away: not only would no scientist today take Maupertuis' proof of God's existence seriously, but metaphysical ideas about nature's economy in general may also seem somewhat out of date.
Many physicists have come to the conclusion that questions about minimizing the action are essentially meaningless, and that the best we can do is to look for stationary solutions of suitable Lagrangians. However, maybe something got lost on the way? It may in fact appear somewhat unsatisfactory that perhaps the most universal principle of physics that we have is essentially a technical recipe; although references to nature's economy may seem unmotivated from a scientific point of view, the supremacy of stationary solutions in classical physics does not seem to have any acceptable motivation at all.

With the advent of quantum physics, however, the principle of least action acquired a new meaning. Feynman's "democracy of all histories" approach to physics can in a sense be said to once more convert the principle of least action into a problem about optimization: the macroscopic developments that are actually realized are the ones that maximize the probability, and these turn out to be precisely those that are stationary with respect to the Lagrangian. Part of the purpose of this paper is to suggest something similar for general relativity. The problem, however, is that we (in spite of numerous attempts) obviously do not know enough about the relationship between quantum mechanics and general relativity. For this reason, I will in this paper start by making use of a semi-classical probabilistic approach. This will permit us to formulate a kind of abstract form for a natural Lagrangian. There will however still be a non-trivial step to get from this abstract form to a Lagrangian in the usual sense (expressed as a natural function of the Riemann tensor).

The Random Curvature Ensemble

In this section, I consider a probabilistic approach to general relativity, which is in a sense a substitute for Feynman's democracy of all histories approach. Thus, let us consider the probability space of all possible metrics on a certain spacetime manifold, subject only to the condition that the total four-volume is a fixed number. Scalar curvature is essentially additive over separate regions. Therefore, what can we say about the probability for a certain value of the total scalar curvature in a region D that is a union of many smaller regions? For each such smaller region, we assume that there is a certain probability distribution for the different possible metrics. Exactly what this probability distribution looks like on the micro-level is of course difficult to know, but the point is that, under quite general assumptions, this will not be important. Let us just suppose that it depends only on the scalar curvature. This is in fact very much in the spirit of the early theory of general relativity, where R plays a central role, e.g., in the deduction of the field equations from the Hilbert-Palatini principle (compare [5]). We also suppose, starting from the idea that zero curvature is the most natural state, that the mean value of this distribution is zero. If we now consider the total curvature R in D to be the sum of the contributions from all the smaller subregions, and if we (roughly) treat these contributions as independent variables, then the central limit theorem (see Fischer [6]) says that the probability for a certain value of R is

P_Δ(R) ∼ exp(−R²/2µ_Δ),   (1)

where µ_Δ is a constant depending on the volume Δ of D. In the following, I will simply take this as the natural probability weight for the metric g in D. What about the probability weight P of the metric g on a larger set U = ∪_α D_α?
Assuming multiplicativity (which means that different regions are treated as essentially independent of each other) and that all the regions have roughly the same volume Δ, we get the (unnormalized) probability

P(g) ∼ Π_α exp(−R_α²/2µ_Δ)   (2)
     ≈ exp(−(1/2µ) ∫_U R² dV).   (3)

Here we have, in the transition from sum to integral, made use of the additive property of the variance in normal distributions, which in this context means that µ_Δ ≈ µΔ for some fixed constant µ. Therefore, what we obtain is a kind of ensemble of all possible metrics in Ω, where each metric gets a probability weight as above. The word ensemble originally comes from statistical mechanics, and the idea is now to apply methods from classical statistical mechanics to the probability space of all metrics (see, e.g., Huang [7] for some background about ensembles). First, compute the "state sum"

Ξ = Σ_g exp(−(1/2µ) ∫_U R_g² dV).   (4)

Minus the logarithm of the state sum,

L = − log Ξ,   (5)

is what is usually referred to as the "Helmholtz free energy". According to standard wisdom in statistical mechanics, the macrostates that minimize L (among all states with a given volume) are by far the most probable ones, i.e., the ones that will be realized in practice.

Remark 1. Note that these ideas are here applied to four-dimensional states, not to three-dimensional ones as in usual statistical mechanics. In particular, the Helmholtz free energy is not directly connected to ordinary energy. Rather, I use the term here to relate to a very general statistical principle and a traditional way of thinking.

Therefore, what does the free energy L look like in our case? Again according to standard wisdom in statistical mechanics, the sum in (4) above is usually dominated by its largest term together with all terms corresponding to nearby metrics. The number of such metrics g + δg near a given metric g (which give approximately the same value of R) defines what is usually called the "density-of-states". If we let Ω_α denote the number of such states in each set D_α as above, then we can heuristically compute the state sum in the following way. First, note that according to the very definition of the density-of-states, for all terms in the sum in (5) that significantly contribute, the exponential factors will be essentially the same. If we in addition suppose that the metric g + δg can be viewed as given by an independent choice δg_α in each D_α, then Ξ can formally be rewritten as

Ξ ≈ (Π_α Ω_α) · exp(−(1/2µ) ∫_U R_g² dV).   (6)

Writing as before µ_Δ = µΔ, we note that the approximate independence of the cells D_α means that Ω_α is an exponential function of the volume of D_α. Hence, it is natural to write log Ω_α = log Ω_g ΔV, where log Ω_g is now a measure of the density-of-states of g itself, which is essentially independent of the particular choices of the D_α's. Summing up, after a transition to an integral as in (3), we arrive at

Ξ ≈ exp(−∫_U (R_g²/2µ − log Ω_g) dV),   (7)

or equivalently,

L = − log Ξ ≈ ∫_U (R_g²/2µ − log Ω_g) dV.   (8)

The principle of minimizing the free energy now gives us a natural, although of course still heuristic, foundation for the following.

Principle 1 (Of least action). The metric g that is realized in U must minimize

∫_U (R_g²/2µ − log Ω_g) dV.   (10)

In general, finding the states that minimize the free energy can be very difficult, since they are determined by a sensitive interplay between the size of the terms in the state sum and the corresponding density-of-states function. In the present situation, however, this difficulty may be overshadowed by a still more difficult problem: how do we compute Ω_g? This is a very non-trivial problem in infinite-dimensional differential geometry.
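Principle 1 balances an energy-like term against a density-of-states term. The finite toy model below (emphatically not the infinite-dimensional problem discussed in the text, and with an invented density-of-states function) illustrates how the minimizer of L shifts away from the pure R = 0 state as soon as log Ω varies with R:

```python
# Toy, finite-dimensional illustration of Principle 1: minimize
# L(R) = R**2/(2*mu) - log(Omega(R)) over a grid of "curvature" values.
# Omega(R) is an invented density-of-states; nothing here is the real
# infinite-dimensional computation discussed in the text.
import numpy as np

mu = 1.0
R = np.linspace(-3, 3, 601)

log_omega = 0.8 * R            # invented: more states at larger R
L = R**2 / (2 * mu) - log_omega

R_star = R[np.argmin(L)]
print(f"minimizing 'curvature': R* = {R_star:.2f}")
# -> R* = 0.80, not 0: the density-of-states term drags the minimum
#    away from zero curvature, exactly the interplay described above.
```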
In fact, there is also the problem of how the density-of-states function should be defined. To the mind of the author, and again referring to standard methods in statistical mechanics, this second problem may be less serious, since in statistical mechanics the consequences are usually very insensitive to the details in the exact definition of Ω. Before we consider these questions in general, let us in the next section first consider the simplest case.

The Field Equations in a Vacuum

Clearly, the very least one must ask from a Lagrangian is that it should be able to reproduce the field equations in a vacuum. In general, the metric that minimizes the action integral in (10) is determined by the interplay between the R² term and the density-of-states term log Ω_g. However, if the general situation and the boundary conditions allow for a metric g that satisfies Ricci = 0, one can argue that this metric must also be minimizing. The reason is that such a metric will in fact simultaneously minimize both terms in (10). For the first term, this is obvious: clearly, Ricci = 0 implies that R = 0, which of course minimizes the R² term. For the second term, the reason is more subtle, and I can here only give a heuristic argument. From statistical mechanics, it is well known that − log Ω will be minimized (log Ω will be maximized) when the minimum of

∫_U R_g² dV   (11)

is as "flat" as possible, i.e., in this case when the derivatives of R_g (with respect to various directions in the space of all metrics) are as small as possible. Consider therefore a differentiable one-parameter family g_s = g_0 + s·h of metrics passing through a given extremal metric g = g_0. Computing the s-derivative of the scalar curvature R(s) along this one-parameter family gives that

dR/ds = − Σ_{i,j} R^{ij} h_{ij} + (divergence terms)   (12)

(see, e.g., [8]; for a more complete, but also more difficult to read, account, see [9] or [5]). In the present context, the scalar curvature is always integrated. If we consider variations h with support small enough to be contained in the domain of integration, the divergence terms will disappear when integrated, and we will be left with

d/ds ∫_D R dV = − ∫_D Σ_{i,j} R^{ij} h_{ij} dV.   (13)

It is clear that if the minimum is flat in all directions, in the sense that the left-hand side in (13) vanishes in all directions, then Ricci = 0, since if the sum on the right-hand side vanishes for all possible choices of h_ij, then all R_ij must vanish. Put into other words, this would mean that − log Ω should be minimized exactly when the vacuum field equations are fulfilled.

Towards the General Classical Case

It is not always the case that the situation allows for Ricci = 0 to be fulfilled. This could be for global or topological reasons, but most commonly simply because there is matter present. Therefore, what can be said in this case? The first thing to observe is that, according to general relativity, mass will affect the first term in (10). In fact, any body with mass should give rise to a non-trivial Schwarzschild metric far away from it. However, closer to the particle, such a metric will then by necessity have a non-zero Ricci tensor and, in general, also a non-zero scalar curvature.

Remark 2. It can of course be argued that this may not be true if we allow for singularities, as in, e.g., the Schwarzschild metric. However, if we want the theory to allow for singularities, we must at the same time find a consistent way of giving sense to, and computing, generalized integrals of the curvature tensor at such points.
This may be very difficult, so for the time being, all metrics will be assumed to be non-singular.

It is much more difficult to say what will happen to the density-of-states term. A direct computation would have to be carried out in the infinite-dimensional space of all metrics. So far, we have not even defined any specific differential structure on this space, and there are certainly many possible choices for such a structure. Even if it seems reasonable to expect the minimizing metric g to be essentially independent of this choice, this nevertheless appears to be an extremely difficult problem.

There may however be another way to proceed. Let us start by asking ourselves what properties − log Ω_g should have. First of all, let us observe that it should be an essentially invariant, locally-defined concept, which should only depend on the geometric properties of g. Thus, it seems reasonable to expect it to be expressible in terms of the Riemann tensor. Moreover, according to the discussion in the previous section, it seems very natural to suppose that this tensor should, in a vacuum, be locally minimized precisely when Ricci = 0. Therefore, instead of trying to compute − log Ω_g directly, we could start by asking if these properties determine this tensor more or less uniquely, or, if this is not the case, at least reduce the number of possible candidates to a small number. Even if it may still be difficult to know exactly what constraints the presence of mass puts on the metrics (except that they should reduce to the ordinary field equations on a large scale), I would like to suggest this as an interesting open problem:

Problem 1. Find and characterize all tensorial densities that are locally minimized exactly when Ricci = 0.

A solution to this problem may not give us the final form of the natural Lagrangian, but it could in fact be a major step towards it. Unfortunately, little seems to be known about this. In fact, it does not even seem to be known whether such a tensor exists at all. Nevertheless, this essentially algebraic problem seems to be easier to work with than the direct computation method.

Remark 3. In principle, it is of course possible to relax the condition that minimization should be equivalent to Ricci = 0, by requiring only that Ricci = 0 implies minimization. Such tensors are easier to find (a trivial example being (R_ij R^ij)²). However, it is not clear what kind of physics this would lead to, so equivalence would seem desirable.

The Concept of Mass-Energy

An early idea of the meaning of action can be said to have been the integral

∫ M dτ,   (14)

typically along the path of a certain body or, more generally, along the paths of several bodies. Although the interpretation of this concept may have changed somewhat during the development of modern physics, it can be interesting to see what (14) would lead to if combined with the action in (10). In fact, reversing the original idea implicit in (14), we are led to the following:

Definition 1. The total mass-energy of a system, as measured during the time interval [T₁, T₂] and in a certain region U in space, can be computed as

E = (1/(T₂ − T₁)) ∫_{[T₁,T₂]×U} (R²/2µ − log Ω_g) dV.   (15)

As it stands, this energy also contains vacuum energy: even if R = 0 in a vacuum, the second term will in general be expected to be non-zero. However, in most ordinary physical applications, we do not want to include this vacuum energy. In particular, we can study the case of a close-to-flat space-time with the constant vacuum density − log Ω₀ subtracted away.

Definition 2.
In the case of a single isolated particle at rest in the given frame of reference, Formula (15) can be used to define its rest mass in terms of curvature alone as: For this to make sense, it is of course necessary to suppose that whatever influence the particle may have on its surroundings far away from the particle, the contribution to the integrand in (16) will be negligible. For the rest of this section, it will be assumed that the vacuum density − log Ω 0 has been subtracted away from the action as in (17). Is this a reasonable definition of mass? First of all, it should be kept in mind that we are here only concerned with gravitation: whatever other forces could contribute is so to speak left out from the beginning. Having said this, there are still many questions that have to be answered, for example: Will particles travel along lines or, more generally, along geodesics? This can hopefully be proven true at least for suitable classes of particles; however, it is not obvious from the above, and the question turns out to be surprisingly complex. Part of the reason for this is that although geodesics are stationary and from the point of view of the democracy of all histories should be probability maximizing, there is also the R 2 -term in (17). This term in general tends to be proportional to the length of the path, which means that it rather tends to maximize the action along geodesics, since in general relativity, these tend to have maximal length. Therefore, will the result still minimize the action (maximize the probability)? It is not possible to estimate the relative importance of the two terms in general without additional information. In the opinion of the author, it is quite possible that one can construct examples of situations where the answer is no. On the other hand, however, it is also quite possible that for certain classes of metrics, the answer will be yes. II. Will mass energy be conserved? Assuming a positive answer to I. above, the answer in this case will in a certain sense also be yes. However, in situations with very high curvature or that, e.g., concern the universe as a whole, it may be that the usual idea of conservation must be modified (see Section 7). However, let us start by considering the most common situation in an essentially flat setting. Principle 2 (Conservation of mass-energy). Consider a collection of particles in some region in space-time where the deviation of the geometry from flat space-time is negligible except in the immediate vicinity of the particles. We assume that they move independently along straight lines except that they may momentarily interact by emission and absorption of (real or virtual) particles, thus changing their states of motion and other properties. If we define the total action as the sum of all actions along the world-lines of all the involved particles, then the principle of least action implies that the usual energy of the system, defined as: is a conserved quantity, where M i :s denote the masses of the particles as in (17) at the given time and u i :s denote the speeds of the corresponding particles. To motivate this principle, we first note that the contribution of a particle with rest mass M to the action, during a time interval of length ∆t where it is not interacting with anything, can be written as MT, where T = ∆t 2 − |∆x| 2 is the proper time elapsed. Now, consider the mass-energy E(t) as measured during a time interval [t, t + ∆t] of length ∆t, where no interaction takes place. 
Then: where ∆Ξ is the sum of the actions of the individual particles computed in the time-interval [t, t + ∆t]. It is now claimed that E(t) must be constant as a function of t, because otherwise, one can easily construct a volume preserving infinitesimal transformation, which decreases the action by contracting the time scale at some time t where E(t) is large and simultaneously expanding the time scale at some other time t where E(t) is smaller. Remark 4. Although this is a standard idea in statistical mechanics, it is worth pointing out that the argument depends heavily on the fact that the metric is supposed to be minimizing along the paths of the particles. In fact, an infinitesimal deformation as above will bend the path, which may effect the curvature. However, exactly because the metric is supposed to be minimizing along the straight line (geodesic), this contribution will be of second order, hence be negligible in comparison with the first order contribution resulting from the changes in the time scales. We conclude that the time-derivate of the total action must be constant. Hence, with: we can now compute the conserved quantity as: where: is the speed of the ith particle. Thus, the claim follows. Remark 5. Here, we only consider energy conservation. However, since everything is obviously Lorentz invariant, we must have a similar conservation law for the momentum. Is Mass-Energy Conserved Globally? Is mass-energy always conserved in general relativity? Although we may in general want the answer to be yes, the question is more complicated than it may seem to be at first sight (compare, e.g., [10]). In particular, there are situations on the global scale where the behavior that we observe seems to contradict conservation. As an example, one may ask what happens to the negative potential energy between galaxies when they drift apart with accelerating speed? The definition of mass-energy in (15) offers a possible explanation: In usual general relativity, the relation between the mass-energy of particles and the underlying geometry is somewhat asymmetric. However, here, they appear on a more equal footing. In fact, the influence of both comes from their contribution to the integral: and in the same way and on equal terms. In other words, the global curvature of space-time will also contribute and can hence be considered as a kind of geometric energy. Thus, the total conserved quantity E may be considered to consist of two parts: This may give a more fundamental version of the conservation law for mass/energy. For cosmology, this means that each one of the two terms need not be conserved, only the sum. To illustrate this, let us consider an extremely simple model for a closed, homogeneous, isotropic universe with a given fixed volume, where we neglect all physics except the part that comes from gravitation and curvature. The metric of such a universe can be written as: Since we are dealing with a closed model, the function a(t) can naturally be thought of as the radius of the universe at time t. What form will such a universe have if we start from the principle of least action as in Section 3? This is a non-trivial question due to the difficulties in calculating the density-of-states term. If we, however, for the sake of argument, temporarily leave out this term and concentrate on the scalar curvature, then the action can be computed as follows. 
Computing the scalar curvature of (25) gives: From this, we obtain, in view of the assumption about isotropy and during a certain interval of time I (which essentially could be the lifespan of the universe), In the case of no mass/energy, it is easy to see that this integral will be minimized when the universe is a four-sphere. In fact, the scalar curvature in this case is identically zero. However, what happens if we add to the action an energy term corresponding to a homogeneous mass distribution? If the amount of mass does not change with time, then the mass itself will not influence the minimizing of the action. What will influence it however is the potential energy. Taking this energy into account in a completely classical way amounts to adding in (24) a term that is inversely proportional to the radius of the universe at the given time, we get the following expression for the total action: for some constant β > 0. The potential energy term in (28) can be thought of as a kind of semi-classical substitute for the density-of-states term log Ω. In a more complete theory, one could hopefully do without references to such concepts as potential energy. However, we are not there yet. Minimizing this expression under the condition of a constant volume V, according to the classical theory of the calculus of variation, means computing the Euler-Lagrange equation for the functional: For a general functional of the form: where F(u, v, w) is some sufficiently regular function, the Euler-Lagrange equation (see [11]) is given by: Using Mathematica, we can now compute the Euler-Lagrange equation associated with the functional in (28) and obtain (after multiplying with a(t) 2 /2π 2 ): Analyzing all solutions of this equation for all choices of the parameters is a huge task, which I will not attempt here (see [12] for more technical details about these computations). Just as an example, I used Mathematica to plot one more or less typical solution with β = 1.5 and λ = 1.525 and with a phase of accelerating expansion in Figure 1. Remark 6. It should be noted that the model here does not contain enough assumptions to give a realistic picture of the behavior close to the Big Bang. In particular, it is not possible to solve the Euler-Lagrange equation starting from a point where a(t) is zero, without additional assumptions. Rather, the solution in Figure 1 was obtained starting from the point where a(t) is maximal. In this model, the scalar curvature decreases during the expansion phase, as can be seen by inserting the solution of (32) into (27). Hence, it is only natural that the mass-energy part grows during the same period of time according to the formula: Computing E global geometry in the above model (neglecting the density-of-states term) leads to a qualitative picture as in Figure 2 for the mass-energy that we may observe. Again, the behavior close to the ends may be very inaccurate. Remark 7. The exclusion of the density-of-states term significantly simplifies the computations in this section. However, how realistic is this assumption? To give a definite answer to this question definitely requires more research. However, it can be noted, as in Section 4, that in the case of the vacuum, the scalar curvature term and the density-of-states term both tend to be minimized simultaneously, which at least makes it plausible that for low curvature, these two terms will tend to have similar growth properties in various directions. 
This might indicate that the Euler-Lagrange equation for their sum could be similar to the one studied above. Conclusions The model in the previous section does of course not claim to give an exact picture to be compared with experimental data, but rather aims at an explanation for the underlying mechanism for the conservation and possible non-conservation of mass/energy. In general, this paper is clearly only a preliminary attempt to formulate a natural Lagrangian for general relativity. It would of course be very interesting to replace the essentially classical ensemble in Section 3 by a quantum mechanical one. This may not be impossible, but there are still many technical problems that have to be solved on the way. One may also ask what the connection is between the ideas in this paper and other current areas of research, like extended theories of gravity and in particular f (R) gravity theories ( [13][14][15][16]). Clearly, there is an obvious common goal, but the approaches are quite different. The main stream in physics has always been to suggest explicit theories and then test them against reality. In general, the method has been enormously successful, but the success depends on our ability to launch the right candidates. The starting point for this paper is the opposite one: it may be very difficult to find the right candidate without previously having reduced the number of possibilities drastically. However, this way of approaching the problem has other drawbacks, and in particular, it seems to lead to extremely difficult mathematical problems. In the mind of the author, it is not impossible that these quite different approaches could complete each other. In fact, the ideas in this paper could lead to a preferred f (R) theory or to some subclass of such theories. Furthermore, and perhaps more likely, they could lead to some larger class of Lagrangians that is not presently included in the f (R) approach.
2021-08-23T20:31:53.017Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "499bdb4ada44431ba0be12991bfa37dd5589dd0b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2218-1997/7/3/74/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "7e20ae48706b394c5181d23c09a5ff2188e0df89", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
17081586
pes2o/s2orc
v3-fos-license
Maternal morbidity measurement tool pilot: study protocol Background While it is estimated that for every maternal death, 20–30 women suffer morbidity, these estimates are not based on standardized methods and measures. Lack of an agreed-upon definition, identification criteria, standardized assessment tools, and indicators has limited valid, routine, and comparable measurements of maternal morbidity. The World Health Organization (WHO) convened the Maternal Morbidity Working Group (MMWG) to develop standardized methods to improve estimates of maternal morbidity. To date, the MMWG has developed a definition and provided input into the development of a set of measurement tools. This protocol outlines the pilot test for measuring maternal morbidity in antenatal and postnatal clinical populations using these new tools. Methods In each setting, the tools will be piloted on approximately 250 women receiving antenatal care (ANC) (at least 28 weeks pregnant) and 250 women receiving postpartum care (PPC) (at least 6 weeks postpartum). The tools will be administered by trained health care workers. Each tool has three modules as follows: personal history – socio-economic information, and risk-factors (such as violence and substance abuse) patient symptoms – WHO Disability Assessment Schedule (WHODAS) 12-item, and mental health questionnaires, General Anxiety Disorder, 7-item (GAD-7) and Personal Health Questionnaire, 9-item (PHQ-9) physical examination – signs, laboratory tests and results. Discussion This pilot (planned for Jamaica, Kenya and Malawi) will allow for comparing the types of morbidities women experience between and across settings, and determine the feasibility, acceptability and utility of using a modified, streamlined tool for routine measurement and summary estimates of morbidity to inform resource allocation and service provision. As part of the post-2015 Sustainable Development Goals (SDGs) estimating and measuring maternal morbidity will be essential to ensure appropriate resources are allocated to address its impact and improve well-being. Electronic supplementary material The online version of this article (doi:10.1186/s12978-016-0164-6) contains supplementary material, which is available to authorized users. Plain English summary While there has been a lot of attention to preventing women from dying during pregnancy and childbirth, less attention has been paid to women who survive pregnancy but have health problems. We developed a tool to collect information on the kinds of health problems women may have during pregnancy. This tool includes questions on the woman's pregnancy history; how she feels (emotionally and physically); and an examination. The tool will be tested in three countries (Jamaica, Kenya and Malawi). Approximately 1500 women, who are currently pregnant (28 weeks) or who recently had a birth (six weeks ago), will be asked to participate in testing the tool. Most questions and the examination, are part of normal care for pregnant women. We will analyse the information collected with the tool to understand the most common conditions women experience in each of the three countries, and to figure out the best ways to measure the problems women may experience related to pregnancy. We will share results of this project with the facilities where we conducted the study as well as with the health and academic communities. 
Background Improving maternal health and reducing related mortality have been key concerns of the international community, particularly as part of the 5 th Millennium Development Goal (MDG-5) and now of the 3 rd Sustainable Development Goal (SDG-3) [1,2]. However, maternal mortality accounts for only a small fraction of the overall burden of poor maternal health as it excludes maternal morbidity. The true extent and burden of maternal morbidity is not known. It has been suggested that for each maternal death, 20 or 30 women suffer from maternal morbidity [3,4]. However, these calculations are not based on standardized, well-documented, or transparent methodologies. There have been significant recent advances in monitoring and improving women's quality of care related to severe maternal morbidity, or near-miss events [5]; however accurate and routine measurements of less-severe maternal morbidity are lacking. Better measures to document and monitor maternal morbidity will help inform policy and program decisions and resource allocations to improve maternal health. This protocol describes a study aiming to develop and test a tool to measure maternal morbidity during the antenatal and postpartum periods. The tool was developed by the Maternal Morbidity Working Group (MMWG) established by World Health Organization (WHO) to improve conceptual and operational understanding of maternal morbidity. Defining maternal morbidity The MMWG, composed of medical professionals, researchers, country programme implementers, and patient advocates, was brought together to develop a definition, identification criteria, a tool and indicators to systematically measure maternal morbidity. Figure 1 visually details the continuum of outcomes from healthy pregnancies to death [6]. The objective of the MMWG was to capture the less severe parts of the morbidity spectrum, excluding mortality and maternal near miss. The detailed methodology of the group's work is documented elsewhere [7]. Based on a consensus process, the MMWG developed and adopted the following operational definition of maternal morbidity: "any health condition attributed to and/or complicating pregnancy, and childbirth that has a negative impact on the woman's wellbeing and/or functioning" [4]. The MMWG operationalized this definition by creating a maternal morbidity matrix (Additional file 1: Table S1; Additional file 2: Table S2; Additional file 3: Table S3; Additional file 4: Table S4). The matrix was informed by literature reviews, and the tenth revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10), including the WHO Application of ICD-10 to deaths during pregnancy, childbirth and the puerperium: ICD-Maternal Mortality (ICD-MM) [4,8]. Setting the foundation for the measurement tools: operationalizing the maternal morbidity definition The matrix highlights three dimensions of maternal morbidity which create the foundation for the measurement tools. The first dimension is composed of 121 conditions, 58 symptoms, 29 signs, 44 investigations and 35 management strategies. The following criteria were developed and agreed upon for inclusion in the matrix: 1) Conditions associated with a negative maternal outcome that are either exclusive to pregnancy, childbirth, or the postpartum state, 2) Conditions that occur in >0.1 % in pregnancy; 3) Conditions that are not exclusive to pregnancy, childbirth, or postpartum but which occur more frequently during pregnancy (i.e. 
pregnancy is a risk factor for the condition). The identified conditions are grouped in line with the ICD-MM, with the intent of showing how data on signs, symptoms, investigations and management strategies may be aggregated together and to ensure continuity between the spectrum of morbidity through mortality [8]. The second dimension of the matrix measures functional impact and disability related to pregnancy, as defined in the International Classification of Functioning, Disability and Health (ICF), and is measured using the WHO Disability Assessment Schedule 2.0 (WHODAS 2.0) [9,10]. The WHODAS covers six domains in line with ICF (cognition, mobility, self-care, getting along, life activities and participation) and produces standardized disability levels and profiles using a short, simple and easy to administer 12-item questionnaire [10]. The third dimension measures maternal history, focusing on social-and health-related characteristics, which might help identify the maternal morbidity as well as influence the risk and severity of the morbidity. Some examples include socio-economic status, pre-existing health conditions, and care seeking during pregnancy. These measures allow for a more comprehensive understanding of the "woman as a whole". Development of maternal morbidity measurement tools Based on the matrix, a set of tools was developed to measure maternal morbidity at two time periods -one to administer during antenatal care (ANC) and another during postpartum care (PPC). Wherever possible, previously validated scales were used such as the WHODAS 12-item for functioning, the 7-item Generalized Anxiety Disorder (GAD-7) scale and the 9-item Patient Health Questionnaire (PHQ-9) diagnostic instruments for anxiety and depression, respectively [10][11][12]. The study is designed to pilot the tool to: 1. determine the feasibility, acceptability and utility of implementing a modified, streamlined tool for measurement and summary estimates of morbidity to inform resource allocation and service provision 2. compare the types of morbidities women experience between and across settings. Study design The study will be cross-sectional, providing a snapshot of maternal morbidity in two study populations (ANC and PPC) in three country settings (Jamaica, Kenya and Malawi). The study will involve the administration of a questionnaire (the aforementioned maternal morbidity tool, presented in Additional files 5 and 6) at the appropriate visit where women are already coming to the facility for care. To describe the different types of morbidity, and stratification by country setting and time of administration (ANC vs PPC), 500 women per country (250 each for ANC and PPC), were deemed adequate for capturing a range of morbidities. Without pooling the data across sites or populations, we will have a 6 % margin of error. Tool development and data quality A systematic literature review was conducted to identify existing tools and scales to measure aspects of maternal morbidity. Existing measures were brought together to ensure all elements of the maternal morbidity matrix were covered. A draft version of the tool was then reviewed by the Principal Investigators (PIs) from each site, for applicability and feasibility, including the burden on participants. Mock interviews were conducted in each setting to evaluate the flow, content and timing for administering the tool. These mock interviews provided preliminary information on the questions in the tool and participant burden. 
In each of these steps, the questionnaire was further refined and streamlined. The final pilot questionnaire includes three sections: 1) woman's history, 2) current symptoms, and 3) a physical examination, including a brief review of her medical records, where available. The tools will focus on the index pregnancy and the woman's perception of her pregnancy and health. The physical examination will include: a general overview, breast, abdominal, obstetric (for ANC patients) and pelvic (where appropriate) evaluations, in line with routine ANC and PPC examinations. Each country pilot will be led by local investigators who will be responsible for adapting and, where appropriate, translating the questionnaires to ensure their validity and reliability in the study area. Enrolment, training and consent Women attending designated facilities for routine maternal health care will be invited to participate in the study. Women for the ANC tool will be invited to participate if they are in their third trimester of pregnancy (28 or more weeks). Women for the PPC tool will be invited to participate if they are approximately 6 or more weeks postpartum. A convenience sampling strategy will be used so that all eligible women will be invited to participate until 250 women are interviewed for each tool (ANC and PPC). Data collection is anticipated to last 2 months at each site. Non-complicated pregnancies Complicated pregnancies Maternal Death Potentially life-threatening conditions Life-threatening conditions Fig. 1 Maternal morbidity and disability spectrum [6] Local investigators will recruit, train, and supervise data collectors. Data collectors will be compensated for their participation in the research. As part of the training process in each country, teams will carefully review each question and conduct mock interviews with training participants (data collectors) who have experience in both ANC and PPC service delivery. The team will check the final version and update the consent forms as needed based on these experiences. Training will emphasize the importance of informed consent and procedures to reduce the risk of interviewers coercing patients to participate in this study. Data collectors trained specifically for this project, will administer informed consent forms (verbal and paper based) to eligible women. Participation will be completely voluntary and non-participation will not affect a woman's access to or the type of care due to her. This will be expressed to all potential participants during both recruitment and the informed consent session. Additionally, informed consent will ask for access to the woman's medical records, those available at the facility and those she brings with her (mother-baby book, etc). If the woman is unable to give consent due to mental or physical impairment, she will not be asked to participate in the study. Additionally, data collectors will be trained to exclude minors under the age of 15. The data collectors will also be responsible for referring women to appropriate services when their answers and/or physical exam deem it necessary. The local research team will identify the most appropriate places for referring women, in accordance with local standard of care. In cases where referrals will need to be outside of the facility where data collection is taking place, local PIs will contact the referral sites to confirm that the services are available prior to commencing data collection. 
Local supervisors will monitor and conduct random checks of interviewers to ensure informed consent and appropriate referral procedures are being followed. The team expects that each woman's interview will last approximately 45 to 65 min total for the administration of the tool. The physical exam should take between 15 to 25 min, while the interview portion of the questionnaire should take approximately 30 to 40 min. Information being sought on the PPC tool is more comprehensive than the routinely collected data at standard postpartum visits and participants will be informed of this during the consent process. Data management and statistical analysis Data collectors will receive and be trained to use a tablet for administering the questionnaire/tool and entering the woman's data. The tablets will support prompt data collection, transmission, verification, storage and analysis. In addition to the tablets, data collectors will have access to paper forms of the tool, as back up. All tablets will be password protected to ensure confidentiality. Project data will be inputted into electronic forms of either the ANC or PPC survey using Open Data Kit (ODK) an open source data management application on the tablets. The uploaded data will not include any identifying information on the woman, and only an ID number will be used to identify participants. Data from the tablets will be uploaded to a secure, password protected cloud-based storage system owned by WHO (https://whodcp.org). This system allows for both data entry and uploading and remote review and management of collected data. Using tablets for administration of the tool will help ensure data quality with range checks and reduce mistakes associated with manual data entry. Real-time uploading of data to a cloud server will ensure data quality is continually monitored, by the local team and at WHO. The team based in Geneva, in conjunction with site coordinators and PIs, will be responsible for the data analysis. The process will begin while data collection is still on-going in order to assess progress and determine any data collection problems and/or patterns. Once data collection and clean-up are complete the team will perform in-depth analyses using STATA analytical software in order to synthesize and present results. In addition to the Geneva-based team, core MMWG members will be involved in interpreting the data and providing expertise when necessary. Ethical considerations Ethical approval for this study was provided by the WHO's Research Ethics Review Committee (ERC) as well as by the RHR Research Project Panel (RP2), the external review body of the Department of Reproductive Health, and Research (RHR) including the UNDP/UNFPA/WHO/ World Bank Special Programme of Research, Development, and Research Training in Human Reproduction (HRP) (Additional file 7). Furthermore, relevant entities at each of the three country sites also provided approval. There will be no risk to the women who decide not to participate in the study, they will receive the same standard of care as those who participate in the study. For women who chose to participate, this study may cause some discomfort in terms of the routine physical exams, or when answering personal questions if they are associated with negative experiences (i.e. medical and obstetric history questions about domestic violence or psychological issues). Potential benefits for participants include possible diagnosis and treatment for any reported morbidity or other condition. 
Only the study team will have access to the information collected and it will remain confidential. Site coordinators will work in conjunction with data collectors to protect participant anonymity. All participants will receive a small token of appreciation for their participation. Discussion Data gathered from this effort will provide better information as to the breadth and depth of pregnancy-related morbidity and disability in the three study settings. By identifying current gaps in the care of pregnant women, this study can enable researchers, policymakers and health professionals to inform program and resource planning to address women's reproductive health needs. This study will pilot and assess the feasibility of employing a tool to measure the health consequences of pregnancy. This pilot study is a step towards finding such a tool and will provide evidence for the first standard global definition and classification of non-severe maternal morbidity. Ultimately, the goal of this project is to produce a valid, comparable, and routine tool for measurement of maternal morbidity. Plans for dissemination and use of project results When the data analysis is complete, the results will be disseminated in pilot study countries, as well as through scientific journal articles. Furthermore, according to the findings, the tool will be revised, simplified and finalized as a standard measurement for monitoring maternal morbidity in country programmes. Conclusion This paper describes a study designed to test a tool measuring the impact pregnancy and childbirth have on the health of women. We describe the design of the study, the tool, and how we will invite women to participate in the study. Also, we discussed ethical issues, including that even if women refuse to participate, they will still receive the same care at the facility. Our objective in conducting this study is find out the health conditions women may experience in the three countries. Based on this study, we will make changes to the tool so that it can be used to improve the health care of pregnant women and those who have recently given birth.
2018-04-03T02:48:13.407Z
2016-06-09T00:00:00.000
{ "year": 2016, "sha1": "c0954b0d175280b9eca713b694915dbb5d0a163f", "oa_license": "CCBY", "oa_url": "https://reproductive-health-journal.biomedcentral.com/track/pdf/10.1186/s12978-016-0164-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c0954b0d175280b9eca713b694915dbb5d0a163f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235269243
pes2o/s2orc
v3-fos-license
Electronic cigarette use intensity measurement challenges and regulatory implications Assessing tobacco use intensity allows researchers to examine tobacco use in greater detail than assessing ever or current use only. Tobacco use intensity measures have been developed that are specific to tobacco products, such as asking smokers to report number of cigarettes smoked per day. However, consensus on electronic cigarette use intensity measures that can be used for survey research has yet to be established due to electronic cigarette product and user behavior heterogeneity. While some survey measures that attempt to assess electronic cigarette use intensity exist, such as examining number of ‘times’ using an electronic cigarette per day, number of puffs taken from an electronic cigarette per day, volume of electronic cigarette liquid consumed per day, or nicotine concentration of electronic cigarette liquid, most measures have limitations. Challenges in electronic cigarette measurement often stem from variations across electronic cigarette device and liquid characteristics as well as the difficulty that many electronic cigarette users have regarding answering questions about their electronic cigarette device, liquid, or behavior. The inability for researchers to measure electronic cigarette use intensity accurately has important implications such as failing to detect unintended consequences of regulatory policies. Development of electronic cigarette use intensity measures, though not without its challenges, can improve understanding of electronic cigarette use behaviors and associated health outcomes and inform development of regulatory policies. Self-report surveys are the approach used most commonly to examine tobacco-related knowledge, attitudes, beliefs and behaviours. One advantage of using surveys in tobacco research is that if consistent survey items and response options are used across studies and years, researchers can compare the results of one study with other studies to identify changes over time in tobacco use. For example, in 1964 the first US Surgeon General's Report on the Health Consequences of Smoking 1 was published. At that time, more than 50% of men and 30% of women were current smokers, defined as those who had smoked at least 100 cigarettes in their lifetime and reported 'currently' smoking. 2 In 2018, 15.6% of men and 12.0% of women in the USA were current smokers (ie, defined as having smoked at least 100 cigarettes in their lifetime and currently smoked cigarettes 'every day' or 'somedays'). Importantly, because nearly identical measures of current cigarette use were used across the 44-year span, researchers are able to document the immense progress that has been made in reducing cigarette smoking prevalence in the USA. Similar core survey items for other tobacco products, such as cigars, waterpipe, smokeless tobacco and electronic cigarettes (e-cigarettes), have been developed and used to examine ever use and current use of tobacco products. Data from surveys that use these measures can be used to monitor trends over time, identify priority areas for research and inform regulatory policy. However, there are not yet standard e-cigarette use measures due, in part, to surveys needing to adapt to the evolving marketplace of e-cigarette products. TOBACCO USE INTENSITY MEASURES While useful for examining prevalence, measuring ever and current tobacco use has limitations. 
For example, both an individual who smokes 20 cigarettes every day and an individual who smokes 1 cigarette per week can be considered 'current' smokers, despite greatly differing behaviours and exposures to toxicants in cigarette smoke. Thus, survey items have also been developed and used to measure tobacco use intensity (eg, cigarettes smoked per day). These measures serve many purposes, such as identifying more dependent tobacco users or identifying differing levels of risk associated with tobacco use. For example, while other factors are also important, research shows that those who smoke on more days and smoke more cigarettes per day have higher levels of dependence 3 and greater risk for negative health outcomes 4 5 than those who smoke fewer cigarettes or smoke on fewer days. Comparing cigarette smokers based on cigarette smoking intensity measures relies on the key assumption that all cigarettes are approximately the same. That is, each cigarette contains comparable amounts of tobacco leaf with similar nicotine content, thus exposing users to comparable amounts of the dependence-causing chemical nicotine and other toxicants. This assumption is reasonable for cigarettes because while some changes in cigarette design and smoking behaviour have occurred over time 6 and cigarettes are not uniform, the nicotine content, cigarette size and cigarettes per pack are similar across brands. 7 Although differences in these elements between different types of tobacco products (eg, cigars, cigarettes, smokeless tobacco) present challenges in comparing use across users of different tobacco products, tobacco use intensity survey measures allow for comparisons between users of the same tobacco product. COMMON METHODS AND CHALLENGES TO ASSESSING E-CIGARETTE USE INTENSITY As e-cigarette use has increased in recent years, 2 8-17 ever and current e-cigarette use have been examined in surveys using items similar to those used for cigarettes. While the use of numerous terms to describe e-cigarettes (eg, electronic cigarettes, e-cigarettes, electronic nicotine delivery systems, vapes and so on) presents challenges for ensuring that researchers and participants are referring to the same products when developing and answering survey questions, the use of pictures and preambles that describe all products considered to be e-cigarettes can improve assessment of e-cigarette use. 18 Some have called for consensus measures to be established for e-cigarette use 18 and work has been conducted by researchers participating in the National Institutes of Health and the Food and Drug Administration's (FDA) Tobacco Centers of Regulatory Science grant programme to identify core items that might be used to assess key e-cigarette use domains. Importantly, e-cigarette measurement must account for the great heterogeneity of e-cigarette devices on the market, including disposable 'cigalikes' that resemble cigarettes, refillable 'vape pens', variable wattage 'box mods', 'pod mods' that use disposable cartridge/pods filled with e-cigarette liquid, and disposable vapes that resemble computer flash drives, to name a few of the many device type categories. While core items for measuring e-cigarette current and ever use have been identified, 19 the group noted that the development of survey measures to assess e-cigarette use intensity presents a more challenging problem, precisely due to e-cigarette device and liquid heterogeneity. 
The purpose of this commentary is to discuss possible approaches to measuring e-cigarette use intensity and challenges associated with each approach, as well as to offer considerations for researchers who aim to examine e-cigarette use intensity in the future. Puff topography One approach to measuring e-cigarette use intensity is to examine the number of puffs taken from an e-cigarette per day. There are approximately 10-15 puffs in a single combustible cigarette 20 and standard procedures have been developed to examine toxicant emissions associated with puffs taken from a single cigarette. 21 As a result, researchers can calculate approximate daily puff counts for cigarette smokers based on the number of cigarettes smoked per day. However, this approach cannot be used for e-cigarettes. Unlike cigarette smokers who typically smoke a cigarette from start to finish in a single session, e-cigarette users often puff from the same e-cigarette in multiple sessions, with sessions not being consistent in total puff duration or number of puffs. 20 While some researchers attempt to address this issue by asking e-cigarette users to report number of puffs per day, this approach presents challenges. Research is needed to verify whether participants can recall accurately puffs taken per day. Some puff-activated device product marketing claims that a single cartridge contains an approximate number of puffs (such as 200 22 ), though products may not provide a consistent number of puffs across devices due to poor manufacturing standards or counterfeit devices. 23 Additionally, some e-cigarette devices that use a button to activate the heater include 'puff counters' that record each time the button has been pressed, but these data also needed to be studied to determine their accuracy. Indeed, self-reported times per day and device puff counters appear to be correlated moderately, but some e-cigarette users appear to provide extreme/not reliable self-reported values 24 and not all devices use puff counters. However, even if the validity of these approaches is confirmed, relying on number of puffs per e-cigarette cartridge, puff counters or participant recall of puffs per day are problematic because the length of each session, number of puffs per session and individual puff characteristics vary considerably. While puff duration can vary for other tobacco products, like cigarettes, ultimately total puff duration of a cigarette is limited by the amount of tobacco that can be burned in the cigarette. Therefore, cigarettes smoked per day remains a viable option for examining cigarette smoking intensity. E-cigarettes allow for greater variation in puffs, both in duration and number, which complicates comparing puffs between users. Indeed, laboratory studies where detailed topography data can be collected demonstrate this: one study found that average e-cigarette puff volume ranges from 96.81 to 133.92 mL, 25 whereas another study reported average puff volumes ranging from 331.2 to 519.6 mL. 26 Although these studies used similar protocols (10-puff directed bouts) and only varied devices and liquid characteristics, some puffs were more than five times larger than others . Number of e-cigarette use sessions per day Another approach to assess e-cigarette use intensity is to ask participants to report the number of 'times' they use their e-cigarette or the number of use sessions per day. This approach allows researchers to compare e-cigarette users by number of sessions per day regardless of device type. 
Indeed, some surveys have attempted to do this and even define a session/time (eg, 'assume that one 'time' consists of around 15 puffs or lasts around 10 minutes' 27 ) and report on e-cigarette use times per day as a measure of e-cigarette use intensity or frequency (eg, Refs. 24 and 28) with greater intensity associated with greater dependence. However, e-cigarette use and cigarette smoking have important differences. Because a cigarette must be lit, cigarette smokers must either smoke an entire cigarette in a single session or extinguish and relight the same cigarette. Previous research suggests less than half to as many as 69% of smokers report ever relighting their cigarettes, 29 30 though this behaviour may not occur regularly among those who do relight. 30 However, because e-cigarettes are activated by puffing or by using an on/off switch and button, e-cigarettes are used intermittently on a regular basis. Thus, e-cigarettes more readily allow for use sessions that can range from a single puff to hundreds of puffs. E-cigarette users may also engage in different use patterns depending on the situation. For example, some surveys indicate that e-cigarette users may 'vape more before going into a situation where vaping is not allowed' 31 such as right before entering a building. Other studies have noted that many e-cigarette users' behaviours are 'far from homogenous' and users have 'difficulty tracking their own use'. 32 Qualitative data demonstrate this heterogeneity of behaviours with some experienced e-cigarette users reporting using e-cigarettes more or less frequently compared with cigarette smoking, some inhaling more deeply and others less deeply compared with cigarette smoking, and some reporting they take longer puffs compared with cigarette smoking. 33 Because an e-cigarette use session could be dependent on the user, device type and characteristics, situation or other factors, defining a standard e-cigarette use session between, or even within, e-cigarette users is challenging. Amount of e-cigarette liquid consumed Almost all e-cigarette liquids contain the same primary ingredients (propylene glycol, vegetable glycerin, nicotine and chemical flavorants 34 ), although in different concentrations. Some researchers have used survey items assessing amount of liquid used per day as a measure of e-cigarette use intensity. Using this approach, it might be assumed that higher amount of liquid used per day is associated with greater use intensity and thus exposure Special communication to nicotine or other toxicants. However, even if the liquids have the same concentration of nicotine and other compounds, this approach cannot account for the effects of highly variable e-cigarette device characteristics across the range of e-cigarettes available, which can have dramatically different abilities to aerosolize liquid. In a study comparing differences between e-cigarette users based on device type, 'third generation' (eg, box mod) e-cigarette users reported using 2.5 times more e-cigarette liquid per week than 'second generation' (eg, vape pen) e-cigarette users. However, third generation device users' cotinine levels were only 1.4 times higher than second generation device users, likely due to the fact that third generation device users' average e-cigarette liquid nicotine concentration was 4.1 mg/mL vs 22.3 mg/mL for second generation device users. 
35 These data demonstrate that amount of liquid consumed may be a useful indicator of amount of aerosol inhaled by users, but not necessarily an accurate measure of exposure to nicotine and other toxicants in the aerosol. Thus, when considering e-cigarette intensity measures, researchers must determine whether their goal is to assess quantity of use or exposure to aerosol emitted from e-cigarettes. Additionally, validation is needed to determine whether e-cigarette users can accurately quantify the amount of e-cigarette liquid they consume in a given amount of time. Nicotine concentration in e-cigarette liquid Because nicotine is the primary dependence causing substance in e-cigarettes, some researchers have focused on examining exposure to nicotine as a measure of e-cigarette use intensity. One approach is to measure nicotine in e-cigarette liquids in addition to volume of liquid consumed. Survey items have been developed to examine the content of e-cigarette liquid, specifically the concentration of nicotine. For example, the Population Assessment of Tobacco and Health Study asks e-cigarette users if the e-cigarette 'you use contain[s] nicotine' and 'what concentration of nicotine do/did you use?'. 36 This question presents challenges as the labelling of nicotine content varies across e-cigarette products and liquids (eg, only a number provided without context, concentrations in milligrams of nicotine per millilitre of solution (mg/mL) or a percentage of the total volume of the liquid) and may be difficult to interpret if units are not provided. For example, '3' is a feasible nicotine concentration in mg/mL or per cent nicotine, but 3 mg/mL and 3% represent nicotine concentrations that differ by a factor of 10. Another concern is that e-cigarette liquid nicotine concentrations may be labelled incorrectly. 37 Some liquids advertised as nicotine free have been found to contain quantifiable amounts of nicotine and nicotine labelling may differ by 5%-20% of actual nicotine concentrations in liquids, 38 further complicating attempts to assess nicotine exposure. Additionally, users may mix homemade e-cigarette liquids (ie, 'do-it-yourself ' liquids) resulting in unknown nicotine concentrations or inconsistent concentrations between batches. In cases where the assumption can be made that e-cigarette users know the nicotine concentration in their liquid, assessing e-cigarette liquid nicotine concentration may be useful for assessing nicotine exposure. Previous reports indicate that most experienced or regular e-cigarette users report they know their e-cigarette liquid nicotine concentration, 39 even though they may not be fully aware of how nicotine concentration relates to nicotine 'strength'. 40 Laboratory research demonstrates that when holding other factors constant, increased nicotine concentration in e-cigarette liquid results in increased nicotine exposure for users. [41][42][43] While some e-cigarette users associate lower nicotine concentration in e-cigarette liquid with lower nicotine exposure, 44 in real world settings, a higher liquid nicotine concentration is not necessarily associated with greater nicotine exposure or greater e-cigarette use intensity. A study found that users of higher power e-cigarette devices used liquids with mean nicotine concentrations that were 5.4 times lower than lower power devices, but higher power e-cigarette device users had cotinine levels (a metabolite of nicotine) that were 1.4 times higher than users of lower power devices. 
35 This is because nicotine yield from an e-cigarette (and therefore user nicotine exposure 45 ) is dependent on the e-cigarette device, liquid characteristics and user behaviour, 46 rather than e-cigarette liquid nicotine concentration alone (see figure 1). IMPLICATIONS FOR ACCURATE E-CIGARETTE INTENSITY MEASUREMENT These challenges in developing accurate e-cigarette intensity measures have many implications. For example, the inability to measure e-cigarette intensity may impact clinical laboratory researchers' ability to 'screen potential participants, report participant use history, and study factors that influence user toxicant exposure'. 47 E-cigarette use intensity measurement challenges also complicate the development and evaluation of regulatory policies. For example, the European Union established a policy that prohibited the sale of e-cigarette liquids with nicotine concentrations greater than 20 mg/mL with the goal of limiting e-cigarette users' nicotine exposure by only allowing 'delivery of nicotine that is comparable to the permitted dose of the nicotine derived from a standard cigarette…'. 48 To examine the impact of this policy, surveys that measure e-cigarette use intensity solely by examining e-cigarette liquid nicotine concentration in products used before and after the policy may not yield an accurate picture of changes in nicotine exposure, due to e-cigarette device heterogeneity. That is, if e-cigarette users who used devices containing liquids with nicotine concentrations greater than 20 mg/mL transitioned to devices that contained liquids with nicotine concentrations of less than 20 mg/mL, the policy may be viewed as effective in decreasing nicotine exposure. However, data demonstrate that nicotine delivery from devices that operate at higher electrical power (in watts) using liquids with nicotine concentrations of 4 mg/mL can result in nicotine delivery that exceeds that of a cigarette. 35 In this scenario, without an understanding of the relationship between nicotine delivery and device characteristics, e-cigarette users, researchers and policy makers may perceive erroneously that reducing e-cigarette liquid nicotine concentration will result necessarily in a decrease in nicotine exposure. Furthermore, high powered devices that can expose users to large amounts nicotine from low nicotine concentration liquids also expose users to greater concentrations of toxicants relative to lower power devices 49 and may result in unintended adverse health effects. 50 Thus, a nicotine concentration limiting policy may cause e-cigarette users to inhale more aerosol and hence increase toxicant exposure. Indeed, compensatory puffing behaviours that are associated with increased carcinogenic carbonyls and increased exposure to formaldehyde 51 were recorded among e-cigarette users who were assigned to use an e-cigarette device with adjustable wattage using a liquid with nicotine concentration lower than their usual liquid. 52 In order to more fully understand the implications of a policy limiting nicotine concentration in e-cigarette liquids, studies should assess changes in e-cigarette use intensity before and after the policy was implemented. Another issue is that researchers and public health professionals looking to help e-cigarette users decrease their e-cigarette use may have difficulty in determining whether e-cigarette users are reducing their consumption, especially if users transition between e-cigarette products. 
Additionally, if common measures cannot be used across devices researchers may have difficulty in identifying e-cigarette devices that put users at greatest risk for dependence. For example, how does one compare the e-cigarette use intensity and dependence between a user of a podbased 4.08 W e-cigarette device who uses one pod per day that contains 0.7 mL of liquid with a nicotine concentration of over 69 mg/mL 53 54 with an e-cigarette user of a 'box mod' device that operates at 71.6 W with a liquid nicotine concentration of 4 mg/ mL and uses 7.8 mL of liquid per day (as in Ref. 35)? Current survey measures used to assess e-cigarette use intensity are likely insufficient and further research is needed to inform the best approaches for comparing e-cigarette use intensity across device types. SUMMARY AND FUTURE DIRECTIONS FOR E-CIGARETTE USE INTENSITY MEASUREMENT There is urgent need to measure e-cigarette use intensity. Due to the extreme device heterogeneity that is a hallmark of the e-cigarette product class, developing a set of standard items that assess e-cigarette use intensity equally well across all products may be challenging. Survey items that measure e-cigarette user puff topography, number of use sessions, amount of e-cigarette liquid consumed or e-cigarette liquid nicotine concentration all have limitations. Future studies that use these measures with others in combination may be most effective for assessing e-cigarette use intensity. As of December 2020, FDA is reviewing e-cigarette product applications submitted to the agency as part of the Premarket Tobacco Product Application process. FDA marketing authorisation decisions may lead to some consolidation of the number and variety of e-cigarettes available on the market, but a wide range of products is likely to remain available. This will continue to present challenges for measuring e-cigarette use intensity. In order to understand e-cigarette use intensity, three factors must be considered simultaneously: (1) device characteristics, (2) liquid characteristics and (3) user behaviours. As illustrated in figure 1, e-cigarette emissions and user exposure are all influenced by these three factors. Importantly, with e-cigarettes, all three of these factors can be modified. Using nicotine emissions as an example, a given combination of e-cigarette device settings (eg, device wattage), liquid ingredients (eg, nicotine concentration) and user behaviours (eg, puff duration) is associated with specific and predictable 55 nicotine emissions. Importantly, there are numerous combinations of e-cigarette device, liquid and user behaviour characteristics that can be employed to achieve a given quantity of nicotine emitted from an e-cigarette. However, unlike combustible cigarettes, e-cigarettes allow users to modify virtually all the factors that impact emissions. Thus, regulations that focus on only one or two of these factors will have difficulty in achieving desired outcomes, especially when many e-cigarette devices are 'open-systems' 56 . Researchers may need to use a combination of survey measures that better capture the interaction between e-cigarette devices, liquid characteristics and use patterns given their influence What this paper adds ⇒ Surveys are used to measure tobacco use intensity by researchers for many purposes. ⇒ Measures of tobacco use intensity have been developed for tobacco products, such as number of cigarettes smoked per day. 
Additionally, the specific survey items to capture these measurement domains will vary according to study purpose and population. Novel approaches and methods may also improve the ability to obtain data necessary for examining e-cigarette use intensity when used in combination with self-reported survey methods. One approach may be to incorporate user-uploaded images or videos into surveys, for example, asking e-cigarette users to provide images of their devices and liquids. When feasible, researchers may consider asking participants to provide videos or demonstrations of 'typical' puffs if puff counters are to be used, which would allow researchers to extrapolate total puff duration in a given period of time. Combining device and liquid characteristics obtained from images and typical puffing behaviours obtained from videos might enable more accurate calculation of the amount of aerosol, nicotine and toxicant exposure for individual e-cigarette users. In laboratory settings, researchers may also examine whether any of the currently available survey measures are more associated with biomarkers, such as plasma nicotine, urine cotinine, urine propylene glycol or vegetable glycerin, or other toxicants found in e-cigarette aerosol. These data may inform which survey measures are most useful for comparing e-cigarette use intensity and exposure to e-cigarette aerosol and toxicants. Finally, e-cigarette use intensity comparisons between users or timepoints may be most useful when as many factors are held constant as possible, such as using the same measures over time and only comparing users of a single e-cigarette device and liquid combination.

What this paper adds
⇒ Surveys are used to measure tobacco use intensity by researchers for many purposes.
⇒ Measures of tobacco use intensity have been developed for tobacco products, such as number of cigarettes smoked per day.
⇒ Despite attempts to develop electronic cigarette use intensity measures, great heterogeneity in electronic cigarette device and liquid characteristics and user behaviour makes measuring electronic cigarette use intensity challenging.
⇒ This paper describes the challenges, limitations and implications of the current methods used to measure electronic cigarette use intensity.
⇒ Researchers should consider electronic cigarette device and liquid characteristics as well as user behaviours when attempting to measure electronic cigarette use intensity.

Contributors ES wrote the first draft of the manuscript. All authors provided feedback on the first draft and approved the final version of the manuscript.
Funding Funding for this work was supported by the Food and Drug Administration (FDA), Centre for Tobacco Products (CTP) and the National Institutes of Health (NIH). ES was supported by National Institute on Drug Abuse (NIDA) grant number U54DA036105. MB-T and SM were supported by National Cancer Institute (NCI) grant number U54CA228110. SP was supported by NIDA grant U54DA046060, and JBU was supported by NCI grant number U54CA180905. KW and RG were not supported by grant funding for their contributions to this manuscript. All authors contributed substantially to the writing of this manuscript.
Disclaimer The content is solely the responsibility of the authors and does not necessarily represent the official views of NIH or the FDA.
Identification of Disruptions and Associated Resilience Strategies in Blood Supply Chain Using a New Combined Approach
INTRODUCTION: Supply chains face various disruptions, from human-made to natural disasters, that prevent the proper flow of materials and products. This problem is more important in healthcare supply chains, especially blood supply chains, in which human lives are at risk. Making supply chains resilient, an issue recently addressed by managers and researchers, can be a good way to tackle such disruptions. This study aimed to identify the most important disruptions and associated resilience strategies in the blood supply chain of Tehran, Iran, and to prioritize the identified strategies based on the disruptions.
METHODS: In the current study, important disruptions and associated appropriate resilience strategies were determined using previous studies and the Delphi method. Then, the most important resilience strategies were identified and prioritized by proposing the House of Quality and Importance-Performance Analysis (IPA) as a novel hybrid approach.
FINDINGS: A total of 9 disruptions and 16 resilience strategies were determined by reviewing previous studies and asking for expert opinions. The suggested hybrid model also contributed to determining the importance of each strategy in addressing disruptions and prioritizing them in the blood supply chain of Tehran.
CONCLUSION: Strategies with high importance and low performance, such as flexibility or risk management culture, should be prioritized by managers and improved according to the IPA. In addition, other strategies, such as social responsibility or redundancy, should be continued in the current way.

Introduction
Iran is exposed to various natural and man-made disruptions. The human-related consequences of disasters and their impact on healthcare service delivery have made this issue very important (1). On the other hand, rapid changes, increased uncertainty, environmental fluctuations, and unexpected risks have increased the likelihood of severe disruptions in supply chains (2). These disruptions can affect organizational performance (3). Over the past two decades, several destructive events, such as financial or economic crises, natural disasters, or supplier bankruptcies, have put organizations at risk (4). For example, the earthquake, tsunami, and subsequent nuclear crisis in Japan in 2011 caused Toyota to reduce production by 40,000 units and lose 72 million dollars of profit a day (5). Such disasters disrupt supply chains (6). The likelihood of the above-mentioned events is very low, and they are difficult to predict (7). However, if they occur, they will have a significant impact on businesses (8). Supply chain disruptions are unplanned and unpredictable events interrupting the normal flow of products and materials in the supply chain (9). They are caused by various factors, such as natural disasters, fires, loss of suppliers, wars, and terrorist attacks (7). A disruption in the healthcare supply chain means an unwanted event that can make it difficult to deliver services (10). Disruptions, such as earthquakes, floods, hurricanes, and other hazards, affect many countries and resources around the world, although this impact is greater in countries that are heavily populated, less prepared, and fragile.
This has encouraged international organizations, such as the World Health Organization and the United Nations, to launch campaigns (e.g., hospitals safe from disasters) to promote attention toward protecting healthcare facilities from natural hazards (11). Therefore, in recent years, disruption management of the supply chain has become one of the most important concerns of many organizations and managers (12), leading researchers to address disruptions by creating resilient supply chains (13). Resilience can be defined as the ability of a system to return to its normal state or move to a new and more desirable state after a disruption (14). The resilience of organizations and their supply chains has become a common point of interest among many researchers. This interest has been due to the effects of disruptions on companies in the short and long term (15). Resilience in healthcare services is more important than in production because failure to deliver on-time services to patients can have lethal effects (11). Healthcare is usually the first thing that comes to mind when dealing with the consequences of major hazards (16). Although the supply chain and disasters have recently received more attention, a limited number of studies have simultaneously addressed both disasters and the healthcare supply chain (17). Considering the vital role of blood, the management of blood supply chain disruption is very important. A major challenge for public health systems around the world is to provide adequate healthcare and blood during disasters (18). A disaster in blood transfusion services indicates a situation in which the ability of the supply chain to receive and supply blood temporarily or completely stops; in other words, a condition causing a sudden or higher-than-usual demand for blood products in hospitals and problems for the blood collection system (19). Therefore, disruptions in the blood supply chain lead to disasters that can cause injury, destruction, loss of life, human suffering, or deterioration in the supply chain (20). These disasters illustrate the way disruptions can affect the performance of the blood supply chain and operational services (6). Consequently, it is important to try to respond appropriately to these catastrophes using resilience strategies (21). Studies carried out on supply chain resilience are increasing, and many of these studies address various issues of supply chain vulnerability and risk (22). In this regard, some studies (23,24) have applied mathematical modeling. Several studies identified resilience factors and strategies in the supply chain by using approaches such as SWOT analysis, literature review, and focus groups (25,26). However, there have been a limited number of studies on the resilience of the blood supply chain. Alora and Barua identified and prioritized the risks associated with disruptions in the Indian supply chain: in the first stage, supply chain disruption risks were identified and finalized through a literature review and expert opinion using the Delphi method; in the second stage, the relationships between risk factors were extracted using interpretive structural modeling (ISM) (27). Tang studied robust strategies for the reduction of supply chain disruptions and presented robust strategies with two properties (28). Chowdhury et al. used the analytic hierarchy process (AHP) and quality function deployment (QFD) to identify and prioritize vulnerabilities related to supply chain resilience (29).
An extensive literature review was carried out by Sangari and Dashtpeyma to build a comprehensive set of supply chain resilience strategies, in which ISM and fuzzy network analysis were used to analyze the identified factors (30). Bradaschia and Pereira examined the strategy of flexibility in the supply chain of a hospital and attempted to investigate the resilience of the supply chain with this strategy (31). According to the results of a study carried out by Mandal on the effect of organizational culture dimensions on supply chain resilience, culture can have positive effects in this regard (16). A review of the literature indicates that several studies have investigated supply chain disruptions and their impacts on supply chain performance. However, few of them have examined supply chain strategies for controlling disruptions in the blood supply chain. Therefore, the determination of the disruptions and resilience strategies of the blood supply chain would be useful in increasing resilience in this regard. In addition, the relationship between these strategies and disruptions, and the effect of each of these strategies on disruptions, are also important subjects. Therefore, the present study aimed to answer the following questions: 1. What are the common disruptions in a blood supply chain? 2. What are the resilience strategies to cope with these disruptions? 3. How can disruptions and resilience strategies be related to each other and prioritized? Importance-Performance Analysis (IPA) is suggested in order to answer the above-mentioned questions. However, the conventional IPA does not clearly identify the importance of strategies; therefore, House of Quality (HOQ) concepts were used to measure the importance of strategies. Some studies used a variety of approaches to improve IPA, such as a modified IPA based on partial correlation analysis and natural logarithmic transformation (32), a combination of IPA and the Decision-Making Trial and Evaluation Laboratory (33), IPA and the Kano model (34), and a combination of IPA and fuzzy sets to reinforce the IPA model (35). The major difference between the present study and the above-mentioned studies is that none of them used the concepts of the HOQ to determine the importance of variables in IPA. In fact, in addition to studying the major disruptions and strategies of resilience in the blood supply chain, the innovation of the current study is the development of a new hybrid HOQ-IPA approach. The research methodology is described in the following sections, and then the findings of the study are reported. Finally, the discussion and conclusion are presented.

Methods
The purpose of this study was to investigate the disruptions of the blood supply chain and analyze the associated resilience strategies. A review of the literature and the Delphi method were used for the identification of the blood supply chain disruptions and effective strategies. Then, the IPA approach was adopted to prioritize the identified strategies. Therefore, it was possible to investigate the position of each blood supply chain strategy based on its performance and importance. It is very important to evaluate the importance of strategies in terms of their role in controlling the disruptions; however, in the conventional IPA, the importance of a strategy is determined using a questionnaire regardless of the effectiveness of the strategy in controlling disruptions. Therefore, the concepts of the HOQ were used to determine the importance of strategies.
In other words, the importance of strategies is determined through their effectiveness in controlling disruptions. In the present study, the selection of experts was conducted using purposive sampling. From among the managers in the blood supply chain of Tehran, Iran, 11 experts were chosen in order to employ the Delphi method and examine the relationships between strategies and disruptions in the HOQ. Each part of the methodology is explained in the following sections.

Importance-Performance Analysis
The IPA was introduced by Martilla and James as a method for the development of effective marketing programs (36). Organizations can examine different types of quality attributes based on each of the four quadrants in the IPA matrix and formulate strategies and plans.

Developed-IPA methodology
The IPA steps proposed in the present study are similar to those of the typical IPA, except that the degree of importance is determined using HOQ concepts in the second step.
Step 1: This step identifies the strategies related to blood supply chain disruptions. In this study, disruptions and resilience strategies were identified using a literature review and the Delphi method. The experts were 11 members of the blood supply chain with more than 15 years of experience.
Step 2: This step determines the degree of importance and performance of each factor. In the present study, the performance and importance of each strategy were measured by a questionnaire and the HOQ, respectively. In the HOQ model, a set of requirements (i.e., WHATs) and responses (i.e., HOWs) are expressed, and each response can satisfy one or more requirements. One of the steps in the HOQ is the relationship matrix, used to determine the relationships between disruptions (i.e., WHATs) and resilience strategies (i.e., HOWs) (37). As previously mentioned, the current study used the relationship matrix to determine the importance of strategies; accordingly, each strategy has a relationship to a disruption or can be used to deal with it. In this matrix, the relationships between disruptions and resilience strategies are rated as strong (9), normal (3), weak (1), or no relationship (0). Table 1 depicts the symbols of the aforementioned relationships (37). For the determination of the importance of strategies using the HOQ, the following steps are taken:
A. This step identifies the disruptions that may occur in the blood supply chain. This is performed by reviewing the literature and asking for expert opinions. These disruptions, as the WHATs, fall into the rows of the HOQ matrix, as shown in Table 2.
B. This step determines the strategies using a literature review and expert opinions. The important point is that more than one strategy can deal with a disruption. These strategies, as the HOWs, are placed in the columns, as presented in Table 2.
C. This step identifies the relationship between each strategy and the disruptions. Then, the importance of each strategy is calculated by summing the relationships specified in the matrix.
Step 3: In this step, the opinions of all decision-makers are integrated using the geometric mean. Therefore, bj is the final value of importance, and cj is the final value of performance (38). As a result, there will be one degree of importance and one degree of performance for each strategy.
Step 4: This step determines the threshold value. The threshold value is used to determine the IPA matrix quadrants.
The arithmetic mean is utilized for the determination of the threshold values. The threshold values of importance and performance, denoted by $\bar{I}$ and $\bar{P}$ respectively, are obtained as follows:

$\bar{I} = \frac{1}{m}\sum_{j=1}^{m} b_j, \qquad \bar{P} = \frac{1}{m}\sum_{j=1}^{m} c_j \qquad (1)$

where m is the number of resilience strategies to cope with disruptions.
Step 5: This step constructs the IPA matrix and determines the relative position of each strategy on the matrix. There are four quadrants, as follows:
"Concentrate here", representing the strategies of high importance and low performance;
"Maintain", representing the strategies of high importance and high performance;
"Possible overkill", representing the strategies of low importance and high performance;
"Lower priority", representing the strategies of low importance and low performance (39).
Step 6: This step determines the priority of each strategy for improvement. The gap between the importance and performance of strategy j, multiplied by the importance value, represents the weight of strategy j, denoted by OWj:

$OW_j = (b_j - c_j) \times b_j \qquad (2)$

Then, normalization is performed for ease of analysis as follows:

$NW_j = \frac{OW_j}{\sum_{j=1}^{m} OW_j} \qquad (3)$

As a result, strategies with higher NWj should be given higher priority for improvement (38).

Findings
Considering the IPA steps, the results are presented in this section.
Step 1: According to the expert opinions and literature review, 9 disruptions (Table 3) and 16 resilience strategies of the blood supply chain (Table 4) were identified. They were finalized using the Delphi method in three rounds. The definitions and explanations of the resilience strategies are presented as follows:
1-Redundancy: This means maintaining the capacity to respond to disruptions in the supply chain, mostly by investing in capacity and capital before it is required (54). It is the strategic use of capacity and surplus inventory that can be drawn on in times of crisis, such as shortages or increased demand (14). Since this strategy is costly, it should be used when the disruption is predictable or likely to last for a short time (8).
2-Flexibility: Flexibility is, based on the literature, the most frequently used strategy for the reduction of supply chain disruptions (40). Flexibility is the ability of a company to respond to fundamental or long-term changes in the supply chain or market environment by adjusting the supply chain configuration (55). Datta et al. stated that flexibility is needed in all parts of the supply chain (56). With infrastructures such as flexible transportation systems, flexible manufacturing facilities, and flexible capacity, organizations can increase the degree of resilience in their supply chains (30,57).
3-Information Sharing (2,30): Information sharing can reduce supply chain uncertainty and vulnerability to disruptions (2). It means that team members collectively utilize available information resources (58). This strategy and collaboration among different entities in the supply chain are prerequisites for the achievement of visibility (2).
4-Collaboration: This is the ability to work effectively with other entities for mutual benefit (30). Collaboration between different groups can help manage risks effectively (13).
5-Visibility: Visibility is the ability to be perceived by the eye or the mind. Supply chain visibility concerns information about the entities and events determining orders, inventories, shipments, and distribution, up to any events in the environment (59). Increasing the visibility of demand information across the supply chain reduces risks (60).
6-Agility: Agility is defined as the ability of an organization's supply chain to respond quickly to unpredictable changes in demand and supply (2,14,61). It is the ability to efficiently change operating conditions in response to an uncertain environment or volatile market conditions (40).
7-Anticipation: This is the ability to detect potentially damaging future events (25). Supply chain operational managers should anticipate disruptions and prepare the supply chain for any expected and unexpected changes (57). In some studies, anticipation is also referred to as sensing (62). This capability prepares organizations to cope with the negative impacts of future events, thereby making the supply chain resilient (30).
8-Risk Management Culture in the Supply Chain: Just as many organizations have recognized that the only way to implement total quality management is to create a quality culture in which everyone is concerned with quality, it is now necessary to create a risk management culture (14). Therefore, risk management should be embedded in the organizational culture of any company (63), and this includes the support of the organization's senior management, especially on critical issues (61).
9-Security: This is protection against intentional disruptions, such as thefts, terrorist attacks, and cyber-attacks (25,61). The purpose of the security strategy is to increase the supply chain's capability to identify suspicious and unusual elements (64). In this regard, secure information is provided to all stakeholders for the prevention of attacks and intrusions (65). Security is an essential feature of any supply chain and should be designed in advance to reduce the occurrence of disruptions (66).
10-Lean and Efficient: Efficiency means producing outputs with the least resources (25) and having low wastage while fully meeting anticipated demand (67). The adoption of lean approaches leads to reduced waste and increased productivity (68).
11-Use of Information Technology Capabilities: Information technology (IT) can enhance communication and support other resilience strategies (61); owing to this importance, it is expressed as a separate strategy.
12-Financial Strength: This means the ability to deal with financial fluctuations (25).
13-Dispersion: This means decentralizing resources and customers (25). Decentralization allows local communities to be more responsive and reduces the risk of disruption, despite an increase in costs (50).
14-Quality Management: Some studies have pointed out that quality management and control, in addition to related tools and approaches, can lead to resilience (68), especially for perishable products such as foods (69).
15-Human Resource Management.
16-Spreading Social Responsibility Among Individuals: Soni et al. describe corporate social responsibility as one of the resilience strategies (13). However, it can also be expressed among individuals. Expanding social responsibility is one of the common ways to increase donations (71). Steel et al. stated that raising donors' social responsibility can have a positive effect on blood donation behavior (72).
Step 2: This step determines the importance and performance of each strategy. As noted, the HOQ was used to determine the importance of the strategies with regard to their relationships with the blood supply chain disruptions. It should be noted that the numbers were normalized for an easier understanding of the importance of strategies in the matrix. Experts' evaluations of strategy performance were obtained using a conventional IPA questionnaire.
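Before turning to the results, the developed-IPA computation described in Steps 2-6 can be summarized in a short sketch. This is a minimal illustration rather than the authors' implementation: the array shapes and toy ratings are placeholders, and the min-max rescaling of importance onto the performance scale is an assumption (the paper states only that the importance numbers were normalized).

```python
import numpy as np

def hoq_ipa(relationship: np.ndarray, performance: np.ndarray):
    """Developed IPA: HOQ-based importance + questionnaire performance.

    relationship[k, i, j]: expert k's HOQ rating (9/3/1/0) of how strongly
    strategy j controls disruption i.
    performance[k, j]: expert k's performance score for strategy j.
    """
    eps = 1e-9
    # Step 2: importance of each strategy = column sum of the HOQ matrix.
    imp = relationship.sum(axis=1)                        # (experts, m)
    # Step 3: integrate expert opinions with the geometric mean.
    b = np.exp(np.log(np.maximum(imp, eps)).mean(axis=0))
    c = np.exp(np.log(np.maximum(performance, eps)).mean(axis=0))
    # Rescale importance onto the performance scale (assumed normalization).
    b = (b - b.min()) / (b.max() - b.min() + eps) * (c.max() - c.min()) + c.min()
    # Step 4: arithmetic-mean thresholds; Step 5: quadrant of each strategy.
    b_bar, c_bar = b.mean(), c.mean()
    quadrant = ["Concentrate here" if bj >= b_bar and cj < c_bar else
                "Maintain" if bj >= b_bar else
                "Possible overkill" if cj >= c_bar else
                "Lower priority" for bj, cj in zip(b, c)]
    # Step 6: priority weight OW_j = (b_j - c_j) * b_j, then normalize (NW_j).
    ow = (b - c) * b
    nw = ow / ow.sum()
    return b, c, quadrant, nw

# Toy example: 2 experts, 3 disruptions, 4 strategies (placeholder ratings).
rel = np.array([[[9, 3, 0, 1], [3, 9, 1, 0], [0, 1, 3, 9]],
                [[9, 1, 0, 3], [3, 9, 3, 0], [1, 0, 3, 9]]], float)
perf = np.array([[4, 2, 3, 5], [5, 3, 3, 4]], float)
b, c, quadrant, nw = hoq_ipa(rel, perf)
print(quadrant, nw.round(3))
```

The only inputs are the expert-filled relationship matrix and the performance questionnaire; every downstream quantity (thresholds, quadrants, priority weights) follows mechanically, which is the appeal of the hybrid approach.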
Step 3: The importance and performance values of the blood supply chain resilience strategies and their rankings are presented in Table 6.
Step 4: The threshold values of importance and performance are listed in the bottom row of Table 6. These values are used in drawing the IPA matrix.
Step 5: In this step, the importance-performance matrix is plotted in Figure 1 according to the threshold values and the importance and performance values of each strategy. As illustrated in Figure 1, the strategies of redundancy (S1), agility (S5), collaboration (S3), social responsibility (S16), and financial strength (S12) are in the "Maintain" quadrant. In other words, these strategies are of great importance in controlling the disruptions and show high performance in the supply chain; therefore, they should be preserved as they are. Anticipation (S7), security (S9), risk management culture (S8), and flexibility (S2) are in the "Concentrate here" quadrant, indicating that, despite their high importance, they are not well implemented and therefore require investment and improvement. Dispersion (S13), quality management (S14), IT (S11), and lean and efficient (S10) are of little importance but performed well. As a result, given their low importance in controlling disruptions effectively, they do not require more investment, and some of the costs and resources spent on these strategies should be redirected to other strategies, especially those in the "Concentrate here" quadrant. Human resource management (S15), information sharing (S6), and visibility (S4) are in the "Lower priority" quadrant. In other words, given their role in controlling disruptions, they show low performance; however, due to their low importance, there is no need for investment to improve their performance.
Step 6: In order to prioritize strategies for improvement, the weight of each strategy was calculated using formula (2), normalized using formula (3), and is presented in the fourth column of Table 6. According to the results in Table 6, it is suggested to focus on the improvement of the "Concentrate here" quadrant strategies based on the weight calculated for each strategy, namely flexibility (S2), risk management culture (S8), anticipation (S7), and security (S9).

Discussion and Conclusion
With the spread of disruptions across different supply chains, strategies for dealing with them are increasing. Resilience strategies have recently received the attention of researchers and managers, using methods such as QFD and AHP (29) or ISM and the fuzzy analytic network process (30). However, less attention has been paid to these disruptions and their coping strategies in healthcare supply chains, especially blood supply chains. To the best of our knowledge, the combined method used in the present study has never been used in other studies. Therefore, the present study investigated appropriate resilience strategies to deal with disruptions in the blood supply chain using a new hybrid method of IPA and HOQ, and prioritized each strategy after identifying its importance and performance. According to the results of the current study, the strategies at the bottom had lower importance in this supply chain; nevertheless, some of them, such as visibility, have received attention in other supply chains (60,61). In the "Maintain" quadrant, unlike strategies such as financial strength (25,45) or redundancy (8,14,54), social responsibility was not mentioned in other studies.
Strategies to focus on in the "Concentrate here" quadrant include flexibility (30,40,45), risk management culture (14,63), security (61,64,65), and anticipation (25,30,67). The strategies that fall into this quadrant should be given more attention. For example, flexibility has been cited in the resilience literature as one of the most important strategies for dealing with disruptions (40). Infrastructures such as a flexible transportation system in different parts of the supply chain, from blood collection centers to blood centers and distribution to hospitals, or various types of vehicles (e.g., helicopters), can help the network remain flexible in different situations. Given the equipment of the Iranian Blood Transfusion Organization, collaboration with other organizations, such as the Iranian Red Crescent Society or municipalities, is required. Although these collaborations occur in times of disaster, their protocols, processes, and prerequisites should be specified in advance. In addition, given the current situation, managers should monitor various parts of the supply chain and provide sufficient flexibility in processes, blood centers, and suppliers. Due to the problems and conditions existing under sanctions, special attention should be paid to suppliers' flexibility; in this regard, multi-sourcing is one of the common approaches. Another important issue is creating a risk management culture in the supply chain, ensuring that all members of the organization, especially its management, have embraced supply chain risk management. Some blood supply chain managers do not have a management background; therefore, senior managers need to become more familiar with the concept of risk management and how to spread this culture across the supply chain. Holding risk management courses for staff at different levels can also be beneficial. Overall, everyone in the supply chain should perceive the importance of risk management in their working life. Some points can be concluded from the HOQ alone. Some strategies are good for particular disruptions and are not suitable for others. For example, expanding individuals' social responsibility is a useful strategy for some disruptions, such as severe climate change or contagious diseases (e.g., coronavirus disease 2019). In these situations, a sense of compassion and responsibility can help a great deal in addressing blood shortages. However, how to expand this sense of responsibility and keep donors loyal are subjects beyond the scope of this study. The flexibility of the supply chain in providing special methods under such circumstances can also be beneficial. It is recommended that future studies further investigate the strategies ranked in each quadrant by categorizing them according to different criteria using multi-criteria decision-making methods. Moreover, these strategies may be related to each other, and investigating how one strategy affects others would be beneficial for managers and a good direction for future studies. Some strategies are also close to the boundaries of the quadrants, and using fuzzy numbers in the calculation of the values may improve the results. In this study, the HOQ approach was added to determine the weights of each strategy; however, further approaches can be used in future studies.
In addition, if the number of disruptions is larger, the disruptions can first be clustered using different criteria, and strategies can then be identified and appropriately prioritized for each cluster.
Stress analysis of engine mounting assembly of a three-wheeler
One of the essential parts of an automotive vehicle is the engine mounting bracket, which helps in mounting the engine on the chassis. This paper analyses and compares the existing bracket design of a well-known three-wheeler of Scooters India Limited, the Vikram 750D, with five new alternative models using finite element analysis. In the first part, linear structural finite element analysis of the engine mounting bracket of the existing design was carried out to determine the maximum stresses and deformations in the existing model, and then identical analyses were conducted for five proposed models of the mounting bracket. The maximum stresses and deformations of the proposed models were compared with those of the existing model to identify a better design. The weights of the existing model and the five proposed models were also compared to obtain an optimal design.

Introduction
The engine is one of the most vital components of a transport vehicle. It is generally supported by an engine mounting bracket system. The improvement of the engine bracket system design has been a subject of great interest for many years. As suggested by Ghorpade et al. [1], automotive engine mounting systems are very important due to different aspects of performance. They presented a finite element (FE) analysis of a basic engine mounting bracket of a car to determine the natural frequency of the bracket. Adkine and Kathavate [2] performed static as well as modal analysis of an engine mounting bracket using ANSYS to investigate whether the natural frequency of the bracket is lower than its self-excitation frequency. Dhillon et al. [3] proposed that the exact geometry and positioning of engine mounting brackets on the chassis ensure good riding quality and performance of the vehicle. They described the solid modelling, finite element analysis, modal analysis and mass optimization of engine mount brackets for an FSAE car. Sebastian et al. [4] noted that finite element analysis of jet engine mounts usually deals with the stress analysis of the mount; however, stress analysis alone is not sufficient, as the displacement (elongation) of the mount is also a pivotal factor in real-life scenarios. Kolte et al. [5] performed structural analysis to check the durability of a specified part for given loading and support conditions. For a component to be structurally safe, in any domain, the stresses generated should not exceed the yield strength of the material. In an automobile, the engine mounting brackets are connected to the main frame of the vehicle and support the engine. While in operation, the unwanted stresses and vibrations generated by the engine and road roughness can be transmitted to the frame directly through the brackets. These vibrations may cause discomfort to the passengers sitting inside the vehicle or might even damage the chassis of the vehicle [6]. The main purpose of an engine mounting bracket is to support the engine; therefore, it must be designed properly. The stresses and vibrations produced in an engine mounting bracket have continuously been a matter of great concern and may lead to structural failure. If the vibration in the engine mounting exceeds permissible limits, it may cause fatigue and sometimes damage the vehicle [1].
In this paper, first, the existing engine mounting design of the three-wheeler Vikram 750D is analysed using finite element analysis on a three-dimensional model of the bracket to determine the stress distribution and displacements at various points of the mounting. In the second part, five alternative designs of the bracket are proposed and analysed using finite element analysis under identical loading conditions on their respective three-dimensional models. The maximum stresses and displacements of each proposed design are calculated and compared with the results obtained for the present design.

Engine Mounting
The engine mounting assembly of the existing model of the Vikram 750D consists of three main parts: 1. Bracket 2. Channel 3. Square bar. The manufacturing of each part is discussed as follows.

Bracket
The bracket is fabricated by shearing off a 5 mm thick sheet according to the dimensions shown in the bracket drawing in figure 1 and then bending the sheet at 130°. Finally, two holes and two slots are also pierced in the sheet.

Channel
The channel is fabricated from a 1 mm thick sheet cut off using a shearing machine, and then a notching operation is performed according to the drawing given in figure 2. Finally, it is bent according to the drawing. The two brackets fabricated earlier are welded at the two ends of the channel through MIG welding.

Square Bar
This component is made from a standard 16 x 16 mm² bar. A square bar of 325 mm length is cut off using a press machine as per the drawing shown in figure 3. This square bar is welded inside the U-shaped channel to increase the strength of the channel. The dimensional details of the bracket geometry are listed in Table 1. Finally, all three parts of the engine mounting described above are assembled and joined as per the drawing using MIG welding. Real images of the engine mounting bracket assembly are shown in figures 4 and 5.

Static Structural Analysis
In static structural analysis, the displacements and stresses in the structure due to the load distribution are determined. The steps used in the analysis are shown in figure 6. The basic steps involved in the development and analysis of the existing and proposed designs are discussed in the following subsections.

Modelling
A three-dimensional solid model of the engine mounting bracket was developed using Solidworks solid modelling software. The dimensions of the assembly, as mentioned in Table 1, were taken from the actual mounting bracket used in the Vikram 750D by Scooters India Limited, Lucknow. Figure 7 shows the three-dimensional solid model of the actual bracket assembly used in the three-wheeler.

Meshing
The solid model of the engine mounting assembly was then imported into the ANSYS Workbench environment. The properties of the material used in the fabrication of the assembly, as given in Table 2, were assigned. The model was then meshed using tetrahedral elements with the help of the mesh tool provided in ANSYS Workbench. This tetrahedral element consists of four nodes, with each node having three degrees of freedom in the X, Y and Z directions. The meshed model of the existing mounting assembly has 17119 nodes and 8767 elements.

Boundary Conditions
In an automobile, the engine is mounted on the mounting bracket by tightening four bolts on the tapered side walls provided at both sides of the bracket. For ease of mounting, four elliptical slots (30 x 9 mm) are provided, two on each of the tapered walls.
To apply the boundary conditions to the model, these four elliptical slots were fixed by setting all degrees of freedom to zero. Figures 9 and 10 display the left and right tapered walls of the bracket with the elliptical slots fixed in the model.

Loading
The weight of the existing engine block of the Vikram 750D is 703 N. This is distributed over the two horizontal flanks provided in the bracket to support the load of the engine block. In the FE model of the bracket, the entire load was divided in half and applied as a uniformly distributed load over the horizontal flanks in the vertically downward direction. Figure 11 displays the FE model of the bracket with the applied load.

Solution
After the loads and boundary conditions were specified, the model was solved by linear, static structural analysis. Once the solution was obtained, Von Mises stress and maximum deformation contour plots for the model were generated. Figures 12 and 13 display the contour plots for the Von Mises stress distribution and maximum deformation, respectively. Table 3 lists the maximum values of Von Mises stress and maximum deformation obtained in the analysis of the existing design.

ANALYSIS OF PROPOSED DESIGNS
The proposed designs were analyzed by the method described in the previous section under identical load and boundary conditions, and the resulting stresses and displacements at various points were compared with those of the existing model. A total of five proposed designs were studied in this work. The details of each proposed model are as follows:

Model-1
In Model-1, an "L"-shaped channel is used instead of the "U"-shaped channel of the existing model, as shown in figure 14. The geometry of the bracket remains the same as in the present design; however, the square bar is not used in this design.

Result & Discussion
The finite element analysis of the existing bracket was conducted in two parts. In the first part, a solid model of the existing bracket used in the Vikram 750D was developed and a linear, static FE analysis was carried out to determine the base values of stresses and deformation in the bracket. In the second part, five new designs of the bracket were proposed and finite element analysis was performed on each using the method discussed in section 2.2 under identical boundary conditions. Figures 19 to 28 depict the Von Mises stress distribution and maximum deformation in all proposed designs. The mass of the engine mounting also plays a very critical role in the design of the mounting assembly; low mass is always preferable for engine parts. Therefore, the masses of the existing design as well as the five proposed designs were calculated and are tabulated in Table 5. From Table 4 and figures 28 and 29, it is evident that the minimum stress was induced in Model-2 and Model-5, but the deformation in these models is slightly greater than in the existing model. The minimum deformation was observed in Model-4, but the induced stress in this model is very large. In Model-1 and Model-3, both stress and deformation are greater than in the existing model.
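The design-screening logic applied above can be expressed as a short sketch. All numeric values below are illustrative placeholders, not the measured results of Tables 3-5; the sketch only encodes the selection rule: a candidate qualifies if its peak Von Mises stress is below that of the existing design, and among qualifying candidates the one with the lowest peak stress is preferred, tolerating slight increases in deformation and mass.

```python
# Sketch of the design-screening rule above. All numbers are illustrative
# placeholders (units: MPa, mm, kg) -- substitute the measured values from
# Tables 3-5 of the paper.

existing = {"stress": 100.0, "deformation": 1.00, "mass": 3.0}

candidates = {
    "Model-1": {"stress": 120.0, "deformation": 1.20, "mass": 2.7},
    "Model-2": {"stress":  80.0, "deformation": 1.05, "mass": 2.9},
    "Model-3": {"stress": 110.0, "deformation": 1.15, "mass": 2.8},
    "Model-4": {"stress": 150.0, "deformation": 0.90, "mass": 3.1},
    "Model-5": {"stress":  75.0, "deformation": 1.08, "mass": 3.2},
}

# A candidate qualifies only if its peak Von Mises stress is below that of
# the existing design; slight increases in deformation and mass are tolerated.
feasible = {name: d for name, d in candidates.items()
            if d["stress"] < existing["stress"]}

# Among qualifying designs, prefer the lowest peak stress.
best = min(feasible, key=lambda name: feasible[name]["stress"])
print("Feasible designs:", sorted(feasible))  # e.g. ['Model-2', 'Model-5']
print("Preferred design:", best)              # e.g. Model-5
```

With the placeholder values chosen to mirror the qualitative pattern described above, the rule reproduces the paper's shortlist (Model-2 and Model-5) and its final choice (Model-5).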
Conclusion
The finite element analysis of the existing model and the five proposed models of the engine mounting bracket was conducted, and the values of stress and deformation were compared. It can be concluded that the two models that may be considered as replacements for the existing model are Model-2 and Model-5, as under working conditions the maximum stress developed in them is less than the maximum stress developed in the existing design. However, on comparing these two models, it was concluded that the better choice for the engine mounting bracket is Model-5 (the design with the square bar channel throughout), as the maximum stress in this model is less than that in Model-2. As far as maximum deformation and mass are concerned, these parameters are slightly greater in Model-5, but this increase in deformation and weight has little impact on the design. Another reason for choosing Model-5 is that the manufacturing of Model-2 involves two additional processes, viz. shearing of the sheet and two bending processes, which require additional operation cost as well as time, whereas the manufacturing of Model-5 is simple, as it requires only parting-off and bending, which is very simple and less time-consuming compared with the other design.

Acknowledgement
The authors would like to thank the management and the staff of Scooters India Ltd., Lucknow, especially Mr Anil Kumar (Retd. Chief Engineer), for their steady support in the execution of this work.
A Current-Commutation DC Circuit Breaker with Adaptive Reclosing Capability
DC faults are critical events in a flexible high-voltage dc (HVDC) grid. Thus, ensuring that the power system returns to normal operation rapidly and reliably after fault isolation is very important, and this requires an HVDC breaker. In overhead line systems under temporary faults, reclosing is often required. However, if the dc circuit breaker (DCCB) is reclosed directly, a large second overcurrent may occur, which could damage the power electronic devices. To avoid this problem, a current-commutation DC circuit breaker with adaptive reclosing capability is proposed. Compared with the traditional auto-reclosing strategy, the proposed DCCB can identify the fault property and thus avoid the second damage under permanent fault conditions. Compared with the hybrid DCCB, the power electronic breaking branch composed of many IGBTs is replaced by the current-commutation branch, which is employed to interrupt bi-directional dc fault current. Moreover, a bypassing branch is configured to reduce the energy dissipation of the arrester and effectively shorten the fault isolation time. Finally, simulation cases in PSCAD/EMTDC verify the effectiveness and superiority of the proposed DCCB.

Introduction
Voltage-source-converter-based HVDC grids are expected to integrate large-scale renewable energy located far from load centers in the future [1]-[2]. In particular, with the development of the modular multilevel converter (MMC) [3], applying flexible HVDC transmission has become more feasible. Overhead lines will be widely applied in future HVDC grids because of their economic advantage. Compared with dc cables, the probability of temporary faults on overhead lines is much higher. Hence, effective reclosing strategies should be applied to improve HVDC system reliability. Moreover, the dc line impedance is much smaller than that in an ac system [4], causing the dc fault current to rise rapidly within a few milliseconds. Therefore, one of the major challenges of HVDC grids is the fast and reliable isolation of dc faults [5]. The hybrid DCCB is an effective solution to interrupt dc fault current quickly [6]. However, its power electronic breaking branch, composed of many IGBTs and diodes, needs to withstand the dc system voltage. Thus, the hybrid DCCB comprises a large number of power electronic devices, leading to a high cost. Reducing the peak fault current reduces both the energy dissipation and the capital cost of the DCCB. Once the fault is isolated, the key problem for reclosing is to identify the fault property (temporary or permanent) reliably. Generally, the second damage under permanent fault conditions in a flexible HVDC grid is much more severe, while the power electronic devices are very sensitive. Thus, it is not appropriate to reclose the DCCB directly to recover the system. Above all, a low-cost DCCB that can identify the fault property still needs further research. Reference [7] designed a low-cost voltage-clamping-based DCCB, but it can only interrupt the current in one direction. Reference [8] proposed a DCCB with an H-bridge circuit composed of diode groups to achieve bi-directional current breaking. Compared with the hybrid DCCB proposed by ABB, the number of IGBTs is halved, but the fault current is not suppressed significantly.
Reference [9] proposed a DCCB with an H-bridge circuit in which the fault current is suppressed dramatically. However, there are still a large number of IGBTs in the power electronic breaking branch. Besides, it is worth mentioning that permanent faults and temporary faults cannot be distinguished in this scheme, which carries the risk of reclosing directly under permanent fault conditions, so that a large second overcurrent may occur and damage the electronic devices in the grid. In order to further reduce the cost of the DCCB and avoid reclosing directly under permanent fault conditions, a current-commutation DC circuit breaker with adaptive reclosing capability is proposed.

A current-commutation DC circuit breaker with adaptive reclosing capability
The topology of the proposed DCCB is shown in Figure 1; it is composed of the current-flow, current-commutation, bypassing and reclosing identification branches. In addition, the current-limiting inductor of the dc line is represented as Ldc.
Stage 1: t0 < t < t1. The fault is assumed to occur at t0. Considering the delay of fault detection, the fault has been detected and located at t1; T1a and T1b are then switched off immediately to commutate the fault current from the current-flow branch to the current-commutation branch, and UFD1 is subsequently opened.
Stage 2: t1 < t < t2. At t1, V1 and V2 are triggered, ensuring the reliable opening of UFD1 and the LCS at zero voltage and zero current. At the same time, UFD1 is switched off and UFD2 is switched on. Cg is used to regulate the charging voltage of Cb; Cb is charged to a negative voltage for the reliable turn-off of V1 in the next stage. When the charging process is finished, V2 turns off automatically.
Stage 3: t2 < t < t3. At t2, the contactor of UFD1 has separated completely and the contactor of UFD2 has connected completely. At the same time, V1 turns off after a few milliseconds because of the negative voltage of the commutation capacitor Cb, and the current starts to commutate from the V1 branch to the UFD2 branch. With the reverse charging of Cb, the voltage of Cb increases gradually. When the capacitor voltage equals the HVDC system voltage, the current of Cb starts to decrease, and the voltages on Ldc and LL become negative. As time increases, the voltage exerted on Cb becomes higher than the HVDC system voltage.
Stage 4: t3 < t < t4. At t3, when the voltage exerted on Cb reaches the triggering threshold of the MOA, the fault current starts to commutate to the dissipation branch, as shown by i5 in Figure 2(d). Meanwhile, V3 is triggered to bypass the line inductor Ldc, so that the energy dissipation of the MOA is reduced and the fault clearing time is shortened. When the fault current of the bypassing branch falls to zero, V3 automatically turns off. At t4, the fault current drops to zero, which means the dc fault has been isolated successfully. At the same time, UFD2 starts to be switched off, in preparation for reclosing identification.
Stage 5: t5 < t < t6. At t5, UFD3 and UFD4 are turned on to drain the energy of Cb and identify the fault property. At t6, the contactors of UFD3 and UFD4 have connected completely (the gap between t5 and t6 is 2 ms), and V3 is triggered at this moment. Considering that electromagnetic induction phenomena cannot be ignored in large-scale dc grids, the current of the reclosing identification branch hardly decays to zero even after a temporary fault has been cleared.
Hence, the identification threshold value Ith is introduced to avoid misjudgment; it can be set to 2 kA. When the fault is temporary, the fault point no longer exists at t5. Thus, the current of the reclosing identification branch is less than Ith. When the fault is permanent, the fault point still exists at t5, and the discharging branch and the reclosing identification branch can both provide discharging paths for Cb; the current of the reclosing identification branch is then greater than the identification threshold value Ith. When the discharging process is finished, V3 automatically turns off. When the current of UFD4 falls to zero, UFD4 can be switched off. Obviously, whether the fault is temporary or permanent can be identified according to the current of the reclosing identification branch. In the following stage, the corresponding UFDs are switched at t7 and their contactors have connected completely at t8 (the gap between t7 and t8 is 2 ms). At t9, when the discharging process is finished, UFD3 and UFD5 start to open. In order to ensure that the HVDC system returns to normal operation reliably after fault isolation, UFD1 and UFD2 should be switched on first, and then T1a and T1b are triggered. During the dc steady state, the current-commutation branch, bypassing branch and reclosing identification branch can be considered open circuits.

Parameter analysis
Design of the capacitance
The commutation capacitor Cb is required to be charged with a reverse voltage during stage 2, so that V1 can withstand reverse voltage and turn off properly at t2. The charging of the commutation capacitor Cb and the grounded capacitor Cg depends on the dc line voltage Udc. The charging path is as shown by i3 in Figure 2(b). During t2-t3, the charging path of Cb is as shown by i4 in Figure 2(c). The fault current of the V1 branch is considered to be commutated to the UFD2 branch immediately at t2. The initial conditions are id(t2) = idc(t2) = I1 and Ud(t2) = Udc(t2) = U1, ignoring the time taken for the fault current of V1 to drop to zero. The current of Cb in the time domain can then be described as

$i_{Cb}(t) = I_1\cos\omega(t-t_2) + \frac{U_{dc}-U_1}{Z_0}\sin\omega(t-t_2) \qquad (1)$

with $\omega = 1/\sqrt{(L_{dc}+L_L)C_b}$ and $Z_0 = \sqrt{(L_{dc}+L_L)/C_b}$. Based on equation (1), the voltage of the commutation capacitor Cb in the time domain can be expressed as

$u_{Cb}(t) = U_{dc} - (U_{dc}-U_1)\cos\omega(t-t_2) + I_1 Z_0\sin\omega(t-t_2) \qquad (2)$

According to equation (2), as the value of Cb decreases, the charging speed of Cb increases. At t3, the voltage exerted on Cb reaches the triggering threshold of the MOA, and the current starts to commutate from the Cb branch to the dissipation branch. The triggering time of the MOA is usually taken as the turn-off time in a traditional hybrid DCCB, which is about 5 ms. According to equation (5), if the turn-off time is 5 ms, the upper limit Cb,max of Cb can be calculated as 30 μF.

Design of the bypassing resistor Ry
As mentioned in the previous section, the bypassing thyristor V3 is turned on in stage 4 and stage 5. The design of the bypassing resistor Ry is related to the current of the bypassing branch; the bypassing-branch currents in stage 4 and stage 5 need to be discussed separately. In stage 4, V3 is triggered to bypass the line inductance Ldc at t3. The initial conditions are id(t3) = idc(t3) = I2, from which the current of the bypassing branch in stage 4 in the time domain can be given as equation (3). In stage 5, when the fault is permanent, the fault point still exists at t5; the discharging branch and the reclosing identification branch can both provide discharging paths for Cb, and the current of the bypassing branch decays exponentially.
The initial conditions are id(t5) = iCb(t5) = I3 and Ud(t5) = UCb(t5) = U3, from which the current of the bypassing branch in stage 5 in the time domain can be expressed as equation (4). According to equations (3) and (4), the maximum currents of the bypassing branch, denoted id2,max and id3,max, can be calculated, respectively. In view of the current limit of the bypassing thyristor V3, the maximum currents of the bypassing branch should be limited to no more than the overcurrent limit of V3. The bypassing resistor Ry can be taken as 100 Ω.

Case study
In order to verify the effectiveness of the proposed DCCB, a four-terminal dc grid model was built in PSCAD/EMTDC, whose topology is shown in Figure 2.

Fault isolation process
A pole-to-ground fault occurs at t0 = 1.0 s, located at the head of line 1_4, as Figure 3 shows. The fault isolation process of the proposed DCCB is shown in Figure 6. As shown in Figure 3(a), the current of the current-flow branch increases rapidly once the fault occurs. Considering the operation delay caused by fault detection and location, V1 is triggered at 1.001 s. Meanwhile, T1a and T1b are turned off, so that the current of the current-flow branch falls to zero, providing UFD1 with a zero-current turn-off condition. At 1.003 s, the contactors of UFD1 have separated completely and the contactors of UFD2 have connected completely, so that the fault current starts to commutate from the V1 branch to the UFD2 branch. At 1.005 s, the voltage exerted on Cb reaches Utrigger, and the fault current starts to commutate from the Cb branch to the dissipation branch. At the same time, V3 is triggered to bypass the current-limiting inductor Ldc. When the current of the bypassing branch falls to zero, V3 automatically turns off. As shown in Figure 3(b), Cb and Cg are charged by the dc line at 1.001 s. At 1.003 s, Cb is charged reversely due to the commutation of the fault current. Unlike Cb, Cg is not charged any further because V2 has already turned off before 1.003 s.

Reclosing identification process
Assuming that the temporary fault has been cleared before 1.310 s, the currents of the reclosing identification branch and the discharging branch are shown in Figure 4(a) and Figure 4(b), respectively. At 1.310 s, UFD3 and UFD4 are turned on to drain the energy of Cb and identify the fault property. At 1.312 s, the contactors of UFD3 and UFD4 have connected completely, and V3 is triggered at this moment. When the fault is temporary, the fault point no longer exists at 1.312 s. Correspondingly, the maximum current of the reclosing identification branch is 0.95 kA, clearly less than the identification threshold value Ith. The discharging branch provides a discharging path for Cb, and the maximum current of the discharging branch is 2.95 kA. When the fault is permanent, the fault point still exists at 1.312 s; the maximum current of the reclosing identification branch is 7.95 kA, clearly larger than the identification threshold value Ith, so that the maximum current of the discharging branch is 0.05 kA, less than in the temporary-fault case. At 1.362 s, UFD5 is turned on to drain the energy of Cg.
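The capacitance-sizing argument of equations (1)-(2) can be checked numerically. The sketch below assumes the series-LC commutation loop described earlier (an equivalent inductance L_eq = Ldc + LL charged from the dc system) and steps the capacitor voltage of equation (2) until it reaches the MOA trigger level. All component values are illustrative assumptions, not the parameters of the simulated grid.

```python
import math

# Sketch: time for the commutation capacitor voltage to reach the MOA
# trigger level, assuming a series L-C loop (L_eq = Ldc + LL) charged from
# the dc system. All values below are illustrative assumptions.

U_dc = 500e3            # dc system voltage (V), assumed
U_trigger = 1.5 * U_dc  # MOA trigger level, assumed 1.5 pu
L_eq = 0.15             # H, assumed Ldc + LL
I1 = 10e3               # initial loop current at t2 (A), assumed
U1 = -50e3              # initial (negative) Cb voltage at t2 (V), assumed

def trigger_time(C_b: float) -> float:
    """First time at which u_Cb(t) from equation (2) reaches U_trigger."""
    w = 1.0 / math.sqrt(L_eq * C_b)
    Z0 = math.sqrt(L_eq / C_b)
    t, dt = 0.0, 1e-6
    while t < 0.1:
        u = U_dc - (U_dc - U1) * math.cos(w * t) + I1 * Z0 * math.sin(w * t)
        if u >= U_trigger:
            return t
        t += dt
    return float("inf")

for C_b in (10e-6, 20e-6, 30e-6):
    print(f"Cb = {C_b * 1e6:.0f} uF -> MOA trigger at "
          f"{trigger_time(C_b) * 1e3:.2f} ms")
```

Running the sketch shows the trigger time growing with Cb, which is the qualitative basis for the 30 μF upper bound quoted above for a 5 ms turn-off.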
This adaptive identification is a core advantage of the proposed DCCB compared with the traditional approach, which recloses the DCCB directly.

Conclusion. A novel current-commutation dc circuit breaker with adaptive reclosing capability is proposed in this paper. The fault isolation and reclosing identification processes of the proposed DCCB are analyzed, and the parameters of the devices are derived. The conclusions are as follows. 1) Compared with the hybrid DCCB, the power electronic breaking branch composed of numerous IGBTs is replaced by the current-commutation branch, which is employed to interrupt bi-directional dc fault current. 2) In addition to providing a current flow path for the reclosing identification branch, the bypassing branch is employed to reduce the energy dissipation of the MOA and accelerate the decay of the residual fault current. 3) Different from the auto-reclosing strategy, the proposed DCCB can distinguish permanent from temporary faults by the current of the reclosing identification branch, which avoids reclosing under a permanent fault condition.
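As a supplement to the parameter analysis, the sketch below illustrates the capacitance-sizing argument numerically, assuming the simple series U_dc–L_dc–C_b charging loop behind equations (1) and (2). All numerical values are placeholders chosen for illustration; the paper reports only the 5 ms turn-off target and the resulting 30 μF upper bound.

```python
# Hedged sketch: charging of the commutation capacitor C_b through the line
# inductance from the dc-line voltage, per the series L-C loop assumed in
# equations (1)-(2). All numbers below are illustrative assumptions; the
# paper does not report U_dc, L_dc, I_1, U_1 or U_trigger values here.

U_DC = 320e3       # dc line voltage [V] (assumed)
L_DC = 0.1         # current-limiting inductance [H] (assumed)
I_1 = 5e3          # branch current at t_2 [A] (assumed)
U_1 = -50e3        # C_b voltage at t_2 [V] (assumed, reverse-charged)
U_TRIGGER = 500e3  # MOA triggering threshold [V] (assumed)

def time_to_trigger(c_b: float, dt: float = 1e-6) -> float:
    """Integrate di/dt = (U_dc - u_C)/L and du_C/dt = i/C until the
    capacitor voltage reaches U_trigger (capped at 50 ms for safety)."""
    i, u, t = I_1, U_1, 0.0
    while u < U_TRIGGER and t < 0.05:
        i, u, t = i + (U_DC - u) / L_DC * dt, u + i / c_b * dt, t + dt
    return t

for c_b in (10e-6, 30e-6, 60e-6):
    print(f"C_b = {c_b*1e6:4.0f} uF -> MOA threshold after {time_to_trigger(c_b)*1e3:5.2f} ms")
```

Running the sketch shows the trigger time growing with C_b, which is why the 5 ms turn-off target translates into an upper bound on the capacitance.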
Age and Gender Differences in the Social Patterning of Cardiovascular Risk Factors in Switzerland: The CoLaus Study Objectives We examined the social distribution of a comprehensive range of cardiovascular risk factors (CVRF) in a Swiss population and assessed whether socioeconomic differences varied by age and gender. Methods Participants were 2960 men and 3343 women aged 35–75 years from a population-based survey conducted in Lausanne, Switzerland (CoLaus study). Educational level was the indicator of socioeconomic status used in this study. Analyses were stratified by gender and age group (35–54 years; 55–75 years). Results There were large educational differences in the prevalence of CVRF such as current smoking (Δ = absolute difference in prevalence between highest and lowest educational group: 15.1%/12.6% in men/women aged 35–54 years), physical inactivity (Δ = 25.3%/22.7% in men/women aged 35–54 years), overweight and obesity (Δ = 14.6%/14.8% in men/women aged 55–75 years for obesity), hypertension (Δ = 16.7%/11.4% in men/women aged 55–75 years), dyslipidemia (Δ = 2.8%/6.2% in men/women aged 35–54 years for high LDL-cholesterol) and diabetes (Δ = 6.0%/2.6% in men/women aged 55–75 years). Educational inequalities in the distribution of CVRF were larger in women than in men for alcohol consumption, obesity, hypertension and dyslipidemia (p<0.05). Relative educational inequalities in CVRF tended to be greater among the younger (35–54 years) than among the older age group (55–75 years), particularly for behavioral CVRF and abdominal obesity among men and for physiological CVRF among women (p<0.05). Conclusion Large absolute differences in the prevalence of CVRF according to education categories were observed in this Swiss population. The socioeconomic gradient in CVRF tended to be larger in women and in younger persons.

Introduction In high income countries, cardiovascular disease (CVD) disproportionately affects the lower socioeconomic groups [1], probably reflecting an unequal distribution of cardiovascular risk factors (CVRF) across society [2,3,4] and differential access to and/or use of treatment [5]. However, the magnitude of socioeconomic inequalities in relation to CVD mortality differs substantially between countries [6,7]. In Europe, there appears to be a North-South gradient in socioeconomic inequalities in CVD, with larger differences in Northern than in Southern European countries [7]. Between-country variations in the magnitude of socioeconomic inequalities in CVD tend to mirror cross-country differences in the social patterning of CVRF. Indeed, strong socioeconomic inequalities in CVRF have frequently been reported in Northern European regions such as in Scandinavian countries or in the United Kingdom [8,9,10], while in several Southern European countries such as Italy, Greece or Spain the association between socioeconomic indicators and CVRF seems to be weaker [11,12,13,14]. For example, Schroder et al. [14] and de Vogli et al. [13] reported a lack of educational/occupational differences in CVRF in Spain and Italy, respectively. Stringhini et al. [15] showed large occupational inequalities in the prevalence of unhealthy behaviors among British civil servants but small inequalities among French employees of the national gas and electricity company. Cavelaars et al. [2] noted a North-South pattern in the social distribution of smoking and vegetable consumption with small associations with educational level in Southern European regions.
These North-South differences might be explained by the fact that CVRF were originally more prevalent in the higher socioeconomic groups and the direction of this association has gradually reversed over the last century [16,17]. The "social transition" of CVRF from the higher to the lower socioeconomic groups appears to have started earlier in Northern than in Southern Europe, and to have occurred in men before women [18]. In some Southern European countries certain CVRF such as smoking (among women) or low consumption of fruit and vegetables are still more prevalent in the higher socioeconomic groups [11,19,20]. For example, Huisman et al. reported large educational differences in current smoking in both Northern and Southern Europe, but in Italy, Spain, Greece and Portugal the socioeconomic gradient in women was inverted, the prevalence of smoking being higher among higher educated women [10]. However, most studies examining the social patterning of CVRF in Southern European countries, including Switzerland, are based on data from the 1990s [2,11,14,19,21,22]. In the French-speaking region of Switzerland, the most recent comprehensive assessment of social inequalities in CVRF dates back to the early 2000s [22]. It showed small but significant socioeconomic differences in the prevalence of several CVRF such as current smoking, physical inactivity, obesity and hypertension (but not hypercholesterolemia) among men. Among women, a similar pattern was observed, but current smoking was not socially patterned. More recent studies examining only one risk factor at a time reported decreasing educational inequalities in smoking [23], but increasing educational differences in overweight and obesity [24]. The overall aim of our study is to provide an updated and comprehensive assessment of social inequalities in major risk factors for lifestyle-related diseases (current smoking, heavy drinking, physical inactivity, overweight and obesity, hypertension, dyslipidemia and diabetes) in a French-speaking Swiss town. As the French-speaking region of Switzerland is generally assimilated to Southern European countries for its CVD profile [25], this study allows assessing whether it is still the case that social inequalities in major CVRF are small in Southern Europe. A key feature of this study is that it additionally examines whether socioeconomic differences in CVRF vary by age and gender.

Study Population and Design The CoLaus study is a cross-sectional population-based study conducted in Lausanne, Switzerland (approximately 180,000 inhabitants). Details of the study have been previously described [26]. Briefly, a simple random sample of 19,830 individuals was drawn, corresponding to 35% of the source population, of whom 6738 participants were eventually included. The following inclusion criteria applied: (a) written informed consent; (b) age 35-75 years; (c) willingness to take part in the examination and donate a blood sample; and (d) Caucasian origin. Recruitment began in June 2003 and ended in May 2006. The age and sex distributions of the 6738 participants included in the CoLaus study were similar to those of the 19,830 individuals originally sampled. Participants attended the outpatient clinic at the University Hospital of Lausanne (CHUV) in the morning after an overnight fast. Data were collected by trained field interviewers during a single visit lasting about 60 minutes.
Venous blood samples were drawn after an overnight fast, and assays were performed by the CHUV Clinical Laboratory on fresh plasma samples within 2 hours of blood collection in a Modular P apparatus (Roche Diagnostics, Switzerland). Information on demographic data, socioeconomic and marital status, lifestyle factors, personal and family history of disease, CVRF and treatment was collected. The study was approved by the Institutional Ethics Committee of the University of Lausanne (Switzerland).

Measures Socioeconomic status (SES). Education was the indicator of socioeconomic status used in this study. It was assessed as the highest qualification achieved and categorized as "high" (tertiary education), "middle" (upper secondary education or post-secondary non-tertiary education, including vocational education) and "low" (lower secondary education or lower) [27]. Cardiovascular risk factors (CVRF). Current smoking was assessed using questions on current smoking status and was classified as yes/no. Former smokers were included in the nonsmokers category. For current smokers, the number of pack years of smoking was calculated by multiplying the number of packs of cigarettes smoked per day (average number of cigarettes smoked per day divided by 20) by the number of years the person reported to have smoked. Alcohol consumption was assessed using questions on the number of alcoholic drinks consumed in the past week, then categorized as "abstainers" (0 units/week), "moderate drinkers" (1-21/1-14 units/week for men/women) or "heavy drinkers" (>21/>14 units/week for men/women). We considered both abstaining from alcohol and heavy drinking as CVRF. Participants were classified as physically active if they reported participating in a physical activity of more than 20 minutes once a week or more, and as physically inactive otherwise. Body weight and height were measured with participants standing without shoes in light indoor clothing. Body weight was measured in kilograms to the nearest 0.1 kg using a Seca scale (Hamburg, Germany), which was calibrated regularly. Height was measured to the nearest 5 mm using a Seca height gauge (Hamburg, Germany). Waist circumference was measured twice with a non-stretchable tape over the unclothed abdomen at the mid-point between the lowest rib and the iliac crest. The mean of the two measurements was used for analyses [26]. Body Mass Index (BMI) was calculated and categorized in three groups (normal <25; overweight 25-29; obese ≥30 kg/m²) based on the World Health Organization recommendations [28]. Abdominal obesity was considered as a waist circumference ≥102 cm for men and ≥88 cm for women. Blood pressure (BP) was measured three times on the left arm after at least 10 minutes of rest in a seated position using a clinically validated automated oscillometric device (Omron HEM-907, Matsusaka, Japan) with a cuff adapted to the arm circumference. Three readings were obtained and the average of the last two BP readings was used. Hypertension was defined as systolic/diastolic BP ≥140/90 mmHg or use of antihypertensive medication. Low HDL-cholesterol was defined for values <1.0 mmol/l in men and <1.2 mmol/l in women; high LDL-cholesterol for a value ≥3.4 mmol/l; high triglycerides for a value ≥1.7 mmol/l. Diabetes was defined as fasting plasma glucose ≥7.0 mmol/L or glucose-lowering treatment. Other covariates. Place of birth was classified as "born in Switzerland" or "not born in Switzerland".
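To make these definitions concrete, the following is a minimal sketch that encodes the stated CVRF thresholds as classification helpers; the function names and argument layout are illustrative and not taken from the study's analysis code.

```python
# Hedged sketch of the CVRF definitions in the Measures section above.
# Thresholds are those stated in the text; names are illustrative.

def bmi_category(weight_kg: float, height_m: float) -> str:
    """WHO BMI groups: normal <25, overweight 25-29, obese >=30 kg/m^2."""
    bmi = weight_kg / height_m ** 2
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    return "obese"

def abdominal_obesity(waist_cm: float, sex: str) -> bool:
    # waist circumference >=102 cm for men, >=88 cm for women
    return waist_cm >= (102 if sex == "M" else 88)

def hypertension(sbp: float, dbp: float, on_treatment: bool) -> bool:
    # systolic/diastolic BP >=140/90 mmHg or antihypertensive medication
    return sbp >= 140 or dbp >= 90 or on_treatment

def diabetes(fpg_mmol_l: float, glucose_lowering_tx: bool) -> bool:
    # fasting plasma glucose >=7.0 mmol/L or glucose-lowering treatment
    return fpg_mmol_l >= 7.0 or glucose_lowering_tx

print(bmi_category(92, 1.75))        # -> obese (BMI about 30.0)
print(hypertension(138, 92, False))  # -> True (diastolic criterion)
```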
Statistical Analysis Statistical analysis was conducted using Stata v.12 (StataCorp, College Station, TX, USA). With the few exceptions mentioned below, all analyses were performed separately for men and women and in two age groups (35-54 years and 55-75 years). We used least squares regression to calculate age- and place of birth-adjusted prevalence rates or mean values of CVRF for each educational group. Differences in CVRF prevalence and mean values between the lowest and the highest educational group, with their 95% confidence intervals (CI), were also calculated. As suggested in previous studies [29,30], relative inequalities in CVRF were examined using the Relative Index of Inequality (RII) calculated by log-binomial regression [31]. The RII is a regression-based index taking into account both the size and relative position of each educational group in the educational hierarchy. To compute the RII, education was transformed into a summary measure ranging from zero (highest level of education) to one (lowest level of education). The population in each educational category was assigned a score corresponding to the midpoint of the relative position of their category in the cumulative population distribution. For example, if the highest educational category comprises 24% of the population, all participants in this category are assigned a value of 0.12 (0.24/2), and if the second category comprises 30% of the population, the corresponding value is 0.39 (0.24 + [0.3/2]), and so forth. The RII was calculated using log-binomial regression, as the RII by logistic regression has been shown to produce biased estimates of relative inequalities when the prevalence of the health outcome is relatively high (i.e., >10%) [30]. As such, the RII can be interpreted as the prevalence ratio between the two ends of the educational hierarchy [30]. Log-binomial regressions were adjusted for age (treated as a continuous variable) and place of birth. Analyses including HDL-cholesterol were additionally adjusted for oral contraceptive intake among women. In order to test whether the associations between education and CVRF differed by gender or by age, interaction terms between education (lowest versus highest education in analysis of absolute inequalities and RII in analysis of relative inequalities) and sex or between education and age group were included in the different regression models described.

Results From the initial 6738 participants, 435 (6% of the original sample) were excluded because of missing values on one or more covariates (N = 18 for education, N = 157 for alcohol consumption, N = 114 for physical inactivity and N < 20 for the other CVRF, categories not mutually exclusive). Hence, 6303 participants (53% women) were included in the present analyses. Excluded women were slightly older than those included in the study (p = 0.03), but there were no age differences between included and excluded men. Excluded participants were more likely to have CVRF than those included in the analysis (for example, OR = 1.54; 95%CI: 1.25; 1.90 for smoking, OR = 1.29; 95%CI: 1.00; 1.66 for obesity and OR = 2.14; 95%CI: 1.55; 2.94) and they were also more likely to be in the lowest educational group than those included in the study (OR = 1.69; 95%CI: 1.27; 2.27). However, educational inequalities in CVRF were similar in both the excluded and included samples (p for interaction between education and inclusion status > 0.05 for smoking, obesity or diabetes).
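The RII scoring described in the Statistical Analysis section can be illustrated with a short sketch. The three educational shares below reproduce the worked example (24%, 30%, and a remaining 46%); the log-binomial fit is shown only in outline, assuming the statsmodels library, and the variable names are illustrative.

```python
# Hedged sketch of the RII midpoint ("ridit") scoring described above.
import numpy as np

def ridit_scores(shares):
    """Midpoint of each education category's band in the cumulative
    population distribution, ordered from highest (0) to lowest (1)."""
    shares = np.asarray(shares, dtype=float)
    cum = np.concatenate(([0.0], np.cumsum(shares)))
    return cum[:-1] + shares / 2

# Worked example from the text: 24% high, 30% middle, remaining 46% low.
print(ridit_scores([0.24, 0.30, 0.46]))  # -> [0.12 0.39 0.77]

# Outline of the log-binomial fit (assumes statsmodels and a data frame
# 'df' with one row per participant): exp(coefficient) of the score is
# the RII, i.e. the prevalence ratio between the two extremes of the
# educational hierarchy.
# import statsmodels.api as sm
# glm = sm.GLM(df["cvrf"], sm.add_constant(df[["edu_score", "age"]]),
#              family=sm.families.Binomial(link=sm.families.links.Log()))
# rii = np.exp(glm.fit().params["edu_score"])
```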
Table 1 shows the characteristics of the participants included in the study. Mean age was 52 years for both men and women. One half of participants reported "lower than secondary" education (52.8% of men and 57.8% of women). The distribution of participants across educational categories was similar in the two age groups for men, while women in the older age group tended to report a lower educational level than those in the younger group. The majority of men and women were born in Switzerland.

Absolute Inequalities in CVRF For men, age- and place of birth-adjusted prevalence and mean values of CVRF by educational level and age group are presented in Table 2. Lower education was associated with higher levels of CVRF with a marked dose-response pattern (p for linear trends <0.05 for all CVRF apart from LDL-cholesterol in the younger age group and alcohol consumption, LDL and HDL-cholesterol in the older age group). There was a 15% (95% CI: 10.0; 20.2) difference in the prevalence of smoking between the lowest and the highest educational group in the youngest age group, but there were no significant educational differences in smoking prevalence in the oldest age group [Δ = 3.1% (95% CI: −3.0; 9.2)]. In both age groups, the number of pack-years smoked increased with decreasing educational level. Physical inactivity, overweight, obesity and abdominal obesity were also far more prevalent in the lowest as compared with the highest educational group (Δ = 25.3%/19.4% in the youngest/oldest age group for physical inactivity; 14.7%/12.5% for overweight; 8.6%/14.2% for obesity and 9.3%/4.6% for abdominal obesity). Large differences were also seen for hypertension (particularly in the oldest age group (Δ = 16.7%)), but less so for dyslipidemia and diabetes. Absolute educational differences in CVRF tended to be larger in the younger than in the older age group for smoking, heavy drinking, physical inactivity and abdominal obesity, but they were larger in the older age group for obesity and hypertension. For women, the prevalence and mean values of CVRF according to educational level and age group are presented in Table 3. As for men, most CVRF showed a linear association with educational level (p for linear trends <0.05 for all CVRF apart from heavy drinking in the younger age group and smoking, diastolic blood pressure, HDL and LDL-cholesterol and diabetes in the older age group). In the younger age group, but not in the older, large absolute inequalities were observed for current smoking (Δ = 12.6%). Physical inactivity (Δ = 22.7%/21.2% in the younger/older age group), overweight (Δ = 22.9%/27.9%), obesity (Δ = 10.3%/14.8%), abdominal obesity (Δ = 15.7%/21.6%), hypertension (Δ = 8.6%/11.4%) and dyslipidemia (Δ = 9.5%/7.8% for high LDL-cholesterol) were more prevalent in the lowest educational group in both age groups.

Relative Inequalities in CVRF Results for relative educational inequalities in CVRF are shown in Table 4. Participants at the bottom end of the educational hierarchy were more likely to be current smokers than those at the top, but in analysis stratified by age group the association of smoking status with education was evident only in the younger age group (p for interaction between education and age group <0.05). In general, relative educational inequalities in CVRF were larger in the younger age group (although interaction terms reached statistical significance only for smoking, heavy drinking and abdominal obesity in men and for smoking, hypertension, and dyslipidemia in women).
For example, men at the bottom of the educational hierarchy were more than four times more likely to have diabetes than those at the top. Educational inequalities in alcohol abstinence, hypertension and dyslipidemia in the younger age group and in abdominal obesity in the older age group were larger in women than in men (all p<0.05).

Sensitivity Analyses About 40% of participants were not born in Switzerland. As education can have different meanings in different populations, depending on the school system and the level of economic development, we repeated the analyses for relative inequalities in CVRF stratifying for place of birth (Switzerland or not Switzerland). In general, there were no substantial differences in educational inequalities in CVRF by place of birth (Table S1). However, inequalities in heavy drinking and diabetes were larger among men not born in Switzerland, and inequalities in obesity were larger among women born in Switzerland (all p<0.05). About 6% of participants had missing values on one or more covariates. As missingness was found to be patterned by education, we assessed whether missing data could have biased our results. Analyses for relative educational inequalities in CVRF were rerun using multiple multivariate imputation (Stata procedures "ice/micombine") to replace missing values. Results did not differ from those reported in the main analysis. Although socioeconomic status is a complex concept, we focused on educational level in this study. However, analyses were also performed using occupational position as the indicator of SES for the 4512 participants who were currently working. Overall, results were very similar to those using education as an indicator of SES. However, in general socioeconomic differences in CVRF tended to be more pronounced for education than for occupational position, especially among women. Finally, we repeated all analyses adjusting for marital status and results were virtually unchanged. All results from sensitivity analysis not shown in Table S1 are available upon request.

Discussion We found large absolute differences in the prevalence of CVRF according to educational level in this Swiss population. Moreover, relative inequalities by education differed by gender and tended to be greater in the younger than in the older age group, particularly for behavioral risk factors among men and for physiological risk factors among women. Overall Prevalence of CVRF Prevalence estimates of smoking, physical inactivity, obesity, and hypertension in our study were comparable to other population-based estimates (several of them telephone health surveys) of the Swiss general population [23,32,33,34]. On the other hand, the prevalence of measured hypercholesterolemia and diabetes in CoLaus was higher than self-reported prevalence from the Swiss health surveys [35,36], or than that measured in the neighboring region of Geneva [22,37]. The prevalence of overweight and obesity, hypertension, high triglycerides and diabetes was higher in men than in women, as reported previously in Switzerland [38]. Absolute Educational Differences in CVRF Overall, the prevalence of CVRF was lower in higher socioeconomic groups, consistent with general findings in high income countries [2,3,4]. Absolute socioeconomic differences were particularly large for behaviors such as smoking and physical activity, and anthropometric measures such as weight (reflecting the balance between physical activity and diet). This was particularly true among women.
This result is in line with findings from recent studies reporting strong educational inequalities in physical inactivity and obesity in Switzerland [32,39]. Absolute educational differences were also large for hypertension but smaller for dyslipidemia and diabetes, as observed previously [40]. Relative Educational Differences in CVRF Relative educational inequalities differed by age and gender for several CVRF. Among both men and women, the educational gradient in current smoking was stronger in the younger than in the older age group. It has been observed that the smoking epidemic initially spread in the high socioeconomic groups, later reached the lower socioeconomic groups, and started declining first in the high socioeconomic group [18,21]. In addition, the "social transition" of smoking usually starts earlier in men than in women, and in Europe it was delayed in Southern Europe as compared with Northern Europe [18]. In a study conducted in Geneva (Switzerland) in the 1990s, smoking was still more prevalent among the higher educated women [19], but there were no educational differences in current smoking among young participants (35-44 years in 1993-95). The current study suggests that the social transition in smoking is now completed in Switzerland. Gender and age differences were also observed for the association between education and heavy drinking. Low-educated young men were more likely to report heavy drinking than young men with high education, but the inverse was observed among older men. Among women, heavy drinking tended to be more common among the higher educated [39,41], but the associations were not statistically significant. Relative inequalities in physical inactivity were very large but did not differ by age and gender. Conversely, the social patterning of obesity was stronger in women, as previously reported [39,42]. This was mostly due to the very low prevalence of obesity among the highly educated women. It has been hypothesized that this might reflect a stronger social pressure for thinness on women with a high socioeconomic status than on women with a low socioeconomic status, in addition to greater health consciousness [43]. Hypertension was also strongly socially patterned, as reported previously in Switzerland [22]. Although absolute inequalities in dyslipidemia and diabetes were not large, relative inequalities were strong in Lausanne compared with other countries [44]. For example, young women with a low educational level were more than 10 times more likely to have low HDL-cholesterol and 5 times more likely to have diabetes than their more advantaged counterparts. This might be related to the observed inequalities in physical inactivity and obesity among younger women. For most CVRF and for both genders, relative educational inequalities were stronger in the younger (35-54 years) than in the older (55-75 years) age group. This could either mirror cohort differences in the social patterning of CVRF, with greater social inequalities in younger cohorts, or reflect a decrease in social inequalities in CVRF with ageing. As reported earlier, several studies conducted in Southern Europe (including one study in the French-speaking part of Switzerland, mostly based on data from the early 1990s) found a small or null socioeconomic gradient in CVRF [2,11,13,14,19,21,22].
Our study is one of the first conducted in a Southern European country to find large socioeconomic differences in CVRF, which may hint at either a new situation in Southern Europe or at a difference between Switzerland and other Southern European countries. If the first hypothesis is true, the fact that inequalities in CVRF tended to be stronger among younger than older participants may translate into an increase in social inequalities in adverse CVD outcomes over the next decades. Alternatively, smaller inequalities in CVRF in the older age group could be explained by the fact that relative inequalities in CVRF might decline with age as a result of increasing prevalence of adverse CVRF across socioeconomic groups or because of selection effects. However, both explanations remain speculative as the cross-sectional nature of the study precludes distinguishing between age and cohort effects. Evidence for a prominent role of behavioral and biological risk factors such as those examined in this paper in explaining social inequalities in cardiovascular disease incidence and mortality is accumulating [45,46,47]. The determinants of the uneven distribution of CVRF across socioeconomic groups remain poorly understood, but likely include socioeconomic differences in several domains such as social norms, physical living and working environments, health education, health consciousness, attitude and motivation, psycho-social characteristics, and access to and utilization of health care [48,49,50]. We could not examine the role of this broader context in relation to our findings, as these factors were not assessed in our study. Further studies will be needed to elucidate the relative importance of specific factors in the social patterning of CVRF if effective policies to reduce social inequalities in health are to be implemented.

Strengths and Limitations The main strength of this study was the availability of a large number of CVRF in a population-based survey covering a wide age range. This study also has potential limitations. The first relates to the inability of the cross-sectional design to distinguish between cohort and age effects. While we speculate that cohort-related changes might be taking place in the social patterning of CVRF, consistent with data from cohort studies or from repeated cross-sectional surveys in other populations, we cannot exclude that the observed cohort differences in our study can be accounted for by age-related changes in behaviors. Second, measurement of socioeconomic position is challenging. Education is a valid indicator of SES as it allows for comparison of men and women and is applicable to the non-working population. However, it can have a different meaning for different birth cohorts, due to secular trends in educational attainment across generations [51]. Our sensitivity analysis using occupational position showed that our findings hold across indicators of socioeconomic status. Finally, health behaviors (smoking, alcohol consumption and physical activity) were self-reported and it has been shown that questionnaire-based measures are not entirely reliable [52,53].

Conclusions This study shows that large socioeconomic differences exist in the prevalence of several CVRF in a country enjoying one of the highest life expectancies at birth and one of the highest gross domestic products per capita in the world [54]. Although the overall prevalence of several CVRF was higher in men than in women, social inequalities tended to be greater in women.
Socioeconomic gradients in CVRF were larger in the younger than in the older generations, suggesting that social inequalities in CVD might widen over the next decades. Further research is needed in order to elucidate the mechanisms underlying social inequalities in CVRF. Supporting Information Table S1 Relative educational inequalities in cardiovascular risk factors by gender and place of birth. (DOCX)
Models of asthma: density-equalizing mapping and output benchmarking Despite the large number of experimental studies already conducted on bronchial asthma, further insights into the molecular basics of the disease are required to establish new therapeutic approaches. As a basis for this research different animal models of asthma have been developed in the past years. However, precise bibliometric data on the use of different models do not exist so far. Therefore the present study was conducted to establish a database of the existing experimental approaches. Density-equalizing algorithms were used and data was retrieved from a Thomson Institute for Scientific Information database. During the period from 1900 to 2006 a number of 3489 filed items were connected to animal models of asthma, the first being published in the year 1968. The studies were published by 52 countries with the US, Japan and the UK being the most productive suppliers, participating in 55.8% of all published items. Analyzing the average citation per item as an indicator for research quality, Switzerland ranked first (30.54/item) and New Zealand ranked second for countries with more than 10 published studies. The 10 most productive journals included 4 with a main focus on allergy and immunology and 4 with a main focus on the respiratory system. Two journals focussed on pharmacology or pharmacy. In all assigned subject categories examined for a relation to animal models of asthma, immunology ranked first. Assessing numbers of published items in relation to animal species, it was found that mice were the preferred species followed by guinea pigs. In summary it can be concluded from density-equalizing calculations that the use of animal models of asthma is restricted to a relatively small number of countries. There are also differences in the use of species. These differences are based on variations in the research focus as assessed by subject category analysis.

Introduction Allergic asthma is a complex inflammatory condition of the lung with an increasing prevalence and incidence. Amongst other lung diseases [1-7], the disorder constitutes a major occupational burden of disease [8]. The disease is often concomitant with other allergic diseases such as allergic rhinitis, atopic dermatitis and allergic eye diseases. The direct medical costs arising from allergic airway inflammation have increased over the past decades and constitute an estimated 1-3% of U.S. health expenditure. The economic burden amounts to roughly 12 billion dollars [9][10][11]. Despite the large number of experimental studies already conducted on allergic asthma, further insights into the molecular basics of the disease are required in order to develop new therapeutic strategies. To establish these novel therapeutic approaches, animal models of asthma have been developed and refined in the past years [12]. Different animal species have been used so far for these models. Starting with guinea pig models of allergic airway inflammation to assess pharmacological aspects, new models including rats and mice have been developed which mimic major features of allergic asthma. The mouse seems to be the presently preferred species for the investigation of the immunological basis of the disease [12]. However, no in-depth bibliometric analyses of the current state of research in this field are available.
Therefore the present study was carried out to evaluate the role of animal models in the field of asthma research using large scale data analysis and bibliometric approaches including density-equalizing mapping. Data source Data was retrieved from the Web of Science database of the Thomson Institute for Scientific Information (ISI) [13,14]. Search strategies For the different searches, phrases were joined together with Boolean operators (AND, OR and NOT) using the words "asthma", "allergic airway inflammation" and "animal model". Also, the species used for experimental studies, such as guinea pigs, rats or mice and other species, were used as search terms. In order to approximate the overall number of published items on animal models of asthma, the following phrase was used: "asthma*" OR "allergic airway inflammation". This search routine was then combined with the following phrase: "animal* model*" OR "ovalbumin" OR "murine* model*" OR "mouse* model*" OR "mice* model*" OR "rat* model*" OR "guinea pig model*" OR "monkey* model*" OR "dog model*". The asterisk was used to replace the word ending in order to encompass all possible endings (e.g. asthmatic, asthmaticus). Also, the term "ovalbumin" was used in the search strategies since it is the most prominent allergen used in animal models to induce allergy. In addition, each search was limited to preferred document types using a "refine results" function in order to include only original articles, reviews or abstracts, excluding publication types such as letters, editorials and news reports. Time span The initially analyzed time span included the period from 1900 to 2006. 2007 was not included since the data acquisition was not yet complete. To examine particular aspects of the retrieved data the time span was partly restricted to the period between 1990 and 2006. Citation quantities Published items were also analyzed using the "citation report" method. This method was used to assess the citations per year of citation, and the average citations per item, indicating the average number of citing articles for all items in the set. It is the sum of the times cited divided by the number of results found. Data categorization All data files were examined concerning a variety of different aspects, e.g. the origin (publishing countries), the publication date, the source title and the subject category. Data was transferred to Excel charts and visualized in diagrams. Density-equalizing mapping Density-equalizing mapping was used according to a recently published method. In brief, territories were resized according to a particular variable, i.e. the number of published items. For the re-sizing procedure the area of each country was scaled in proportion to its total number of published items regarding animal models of asthma. The specific calculations are based on Gastner and Newman's algorithm [15]. Total number of published items The number of published items was used as an index of quantity of research productivity and large differences were found: during the period 1900-2006 a number of 78,860 items were published and included in the Web of Science database with the combined words "asthma*" and "allergic airway inflammation" identified in the title, abstract or key words. Within this data file, 3489 entries were also related to animal models of asthma, as defined by the search routine. The first studies were published in the year 1968 and numbers increased at the beginning of the 1990s (Figure 1).
Figure 1. Published items related to animal models of asthma in the Web of Science database 1900-2006.
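A minimal sketch of how such a Boolean search routine can be assembled is given below; the phrases are those listed above, while the helper function and the exact string format accepted by the Web of Science interface are assumptions for illustration.

```python
# Hedged sketch of the Boolean search routine described in the Methods.
# The phrases are quoted from the text; the query assembly is illustrative.

disease_terms = ['"asthma*"', '"allergic airway inflammation"']
model_terms = [
    '"animal* model*"', '"ovalbumin"', '"murine* model*"', '"mouse* model*"',
    '"mice* model*"', '"rat* model*"', '"guinea pig model*"',
    '"monkey* model*"', '"dog model*"',
]

def build_query(groups):
    """AND-combine groups of OR-joined search phrases."""
    return " AND ".join("(" + " OR ".join(g) + ")" for g in groups)

print(build_query([disease_terms, model_terms]))
# ("asthma*" OR "allergic airway inflammation") AND ("animal* model*" OR ...)
```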
Analysis of origin The 3489 entries originated from 52 countries with the US, Japan and the UK being the most productive countries (Figure 2), participating in 55.8% of all published items. The cumulated publications of the top ten publishing countries encompassed 82.9% of all published items, taking into account that 721 of all filed items were a collaboration of two or more countries. 18 publications could not be assigned to a certain country. Density-equalizing mapping of this set of data demonstrates that a relatively small number of countries is responsible for the majority of research efforts (Figure 3a). Citation parameters The average citation per item was used as an indicator for research quality and differences were found in relation to research quantity figures: when analyzing all 3489 published items regarding the average citation of each item in a country-specific manner, South Africa has the highest average citation rate (132/item), with Switzerland ranking second (30.54/item) and New Zealand ranking third (30.17/item) (Figure 3b). Differences to output quantity (Figure 3a) were apparent: when a threshold of at least ten publications is introduced, South Africa (8 publications) and Slovenia (1 publication) are no longer ranked in the top ten and Switzerland moves into first position. Additionally, Italy and Taiwan enter the list of the 20 countries with the highest ranking (Figure 4). To assess the reception of the subject matter over time, the citation rates per year were recorded from 1990 to 2006, and a trend of increasing citations was present since 1990 which parallels the increase in published articles (Figure 5). Publishing journals The ten most productive journals include four journals with a main focus on allergy/immunology, another four with a main focus on the respiratory system and two dealing with pharmacology/pharmacy. Those with a main focus on allergy and the respiratory system are the leading journals in their subcategory concerning their impact factor in 2006 (9.091 and 8.829) and are well established. The remaining journals with the topics immunology and pharmacology range mid-field in their category with impact factors of 2.522-6.293 (Figure 6). Analysis of assigned subject categories In all subject categories examined for published items related to animal models of asthma, immunology ranked first by far, followed by the categories respiratory system, allergy, pharmacology and cell biology (Figure 7). The amount of research with animal models conducted in the field of pharmacology and pharmacy constitutes 18.5% of all analyzed research categories (Table 1). Whereas the number of articles published in the subject category "immunology" increased remarkably since 1997, the subject categories "respiratory system", "allergy" and "pharmacology/pharmacy" showed a constant but less steep increase. Articles assigned to the subcategory "cell biology" though showed decreasing numbers since 2004 (Figure 8). Species analysis To analyze the role of different laboratory animal species for their use in asthma models, the ten most productive research categories were compared to species. It was found that mice are the overall preferred species, while guinea pigs are mainly used for studies in the field of pharmacy/pharmacology and toxicology. Rat strains were less relevant (Figure 9).
When the use of species was compared to the ten most productive countries, it was found that U.S. and German affiliations used mouse models of asthma in more than 85% of their studies, whereas in Japan, the UK and Canada, mouse models were not as dominant (Figure 10). Regarding the use of rats, mice and guinea pigs as animal models of asthma in the period between 1990 and 1994, all three species seemed to have rather similar priority. With the beginning of 1995, mice started to play a major role and became the most prominent species used in animal models of asthma (Figure 11).
Figure 2. Ranking of country total numbers of published items related to animal models of asthma (threshold of >10 published items).

Discussion The past decades of research in the field of asthma have been challenged by a number of revolutionary insights into immune mechanisms of the disease. Since this research was mainly performed in animal models using novel tools of molecular biology such as loss-of-function or gain-of-function approaches, the number of animal studies using mice increased, as shown in the present study. The present study provides a precise bibliometric evaluation of the role of animal models in the field of asthma research. So far, this aspect has not been investigated in detail and only reviews have focused on methodological and technical issues of animal models [12,16]. The present methodology is based on internationally established databases such as the Web of Science [13,14] and novel bibliometric tools including density-equalizing mapping [15]. The time span in some search routines was restricted to the period between 1990 and 2006. This was chosen because the worldwide number of published items before 1990 was relatively low. Generally, there has been a constant increase of interest in this field since the beginning of the 1990s. The large interest in the subject can also be seen when the most productive journals are analyzed. Data analysis of productivity parameters shows that research groups from the US maintain a leadership position in research productivity concerning asthma research in general and animal models of asthma in particular, along with the UK. It is notable that Japan ranks fourth in general asthma research (data not shown) and even second when animal models of asthma are considered. The tendency of only a relatively small number of countries contributing the majority of research can also be remarkably illustrated by density-equalizing mapping procedures. Whereas the number of published items was considered as an index of quantity of research productivity, the average citation per item was used as an indicator for research quality, as generally accepted. Therefore all articles were analyzed regarding the average citations of items published in each particular country. Using this average citation per item index without thresholds, South Africa appears to have the highest rank, followed by Switzerland, New Zealand, Belgium and Australia. It should be noted that the results for those countries with a very small number of published items appear disproportionately high. To adjust for these outliers, a threshold of ten published items was introduced and, as a result, South Africa (8 publications) and Slovenia (1 publication) are no longer included in the ranking. Switzerland then moves into first position.
Additionally, Italy and Taiwan enter the list of the 20 countries with the highest ranking as shown in Figure 4. The leading position of Switzerland seems reasonable when Swiss institutions are considered, given internationally renowned institutions such as the Swiss Institute of Allergy and Asthma Research in Davos (SIAF). Also, Belgium, despite being a relatively small country concerning size and population, houses a number of renowned institutions devoted to asthma research including the University of Ghent or the Catholic University of Leuven. When focusing on assigned categories in the Web of Science database related to animal models of asthma, the field of immunology plays a leading role with a steep increase of published articles since 1997. This trend is parallel to the enormous financial input in this field from public and private institutions and to the increasing overall numbers of published studies related to immune mechanisms (data not shown). In contrast, animal model research concerning cell biology seemed to stagnate as illustrated in Figure 8. This result might be biased by an increased focus on immune mechanisms in the field of cell biology, with an artificial denomination shift from the category cell biology to the category immunology in various studies.
Figure 3. A: Density-equalizing map illustrating the number of publications in each particular country; the area of each country was scaled in proportion to its total number of publications regarding animal models of asthma. B: Density-equalizing map showing the average citations per item of each particular country; the area of each country was scaled in proportion to its average number of citations per item regarding animal models of asthma.
When analyzing the role of different species, it is not surprising that murine models of asthma are the preferred species in all countries and subject categories. This trend is parallel to the increase in studies related to immunology since the mouse is the best species to generate gene-depleted strains. However, guinea pigs are still often chosen as asthma model species by countries such as Japan, the UK and the Netherlands as shown in Table 1. This is most probably due to the fact that a major interest of asthma research in these three countries is the area of pharmacology. For example, results for the UK show that the field of pharmacology constitutes 29.8% of overall research. Similar numbers can be found for the Netherlands (27.3%) and Japan (23.6%). The reason for this strong interest can also be attributed to single institutions in these countries. For example, the Dutch University of Utrecht harbors an internationally renowned department of pharmacology with a focus on airway pharmacology. Pharmacology is also a focus of established institutions in the UK such as the National Heart and Lung Institute in London. Thus, pharmacology as an area of research related to animal models of asthma ranks fourth when all publications and countries are assessed, but second in the UK or the Netherlands. Strikingly, rat models seem to have a noteworthy impact on research only in Canada as illustrated in Figure 10. The US and Germany, countries with a predominant use of mouse models and only a minor use of guinea pig models, also show less interest in the field of pharmacy and pharmacology (11.3% and 13.3%, respectively) when compared to the UK, the Netherlands or Japan as illustrated in Table 1.
The presently discovered enormous increase in studies using murine models of asthma is definitely related to the increase in immunological studies of the disease. However, most novel immunomodulatory drug classes for asthma therapy failed to reach clinical practice [17]. It may therefore be asked if the global research efforts that tried to identify novel single immune targets were too reductionistic. It needs to be stated in this respect that the current rate of introduction of novel compounds to the pharmaceutical market is lower than at any time in the past 50 years [17], although the overall number of new discoveries concerning immune mechanisms and murine animal studies rises. Within a complex disease such as asthma, not only inflammatory cells but also other systems might play a crucial role. Since mice do not cough and lack the glandular structures found in human airways, future research should reappraise other species. In this respect, the guinea pig offers a greater proximity to the human situation since guinea pigs share, for example, a similar airway innervation. They also show common symptoms of asthma, such as cough, which mice do not.
Figure 8. Time dynamics of five selected assigned subject categories; study period from 1900 to 2006.
Figure 9. Comparison of the use of different species for animal models of asthma in the ten most common assigned subject categories (in percent and total numbers).
Figure 10. Comparison of the preferred animal model species in the ten most productive countries (in percent and total numbers).

Conclusion The present study represents the first detailed bibliometric analysis of the role and impact of animal models of asthma. The data shows a strong increase in research productivity. Using science citation analysis, it can be assumed that there is an increase in the interest in results of animal studies. While the majority of data originates from the US, smaller countries such as Switzerland take a lead in citation per item rankings.
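The effect of the publication threshold on the citation ranking can be illustrated with a small sketch. The average-citation values and South Africa's count of 8 publications come from the results above; the item counts for Switzerland and New Zealand are assumed for illustration only.

```python
# Hedged sketch of the citations-per-item ranking with a minimum-output
# threshold, as used in the bibliometric analysis above.

countries = {
    # country: (published_items, average_citations_per_item)
    "South Africa": (8, 132.0),
    "Switzerland": (85, 30.54),   # item count assumed for illustration
    "New Zealand": (12, 30.17),   # item count assumed for illustration
}

def ranking(data, min_items=0):
    """Countries sorted by average citations per item, keeping only
    those with at least min_items published items."""
    eligible = {c: v for c, v in data.items() if v[0] >= min_items}
    return sorted(eligible, key=lambda c: eligible[c][1], reverse=True)

print(ranking(countries))                # South Africa leads without threshold
print(ranking(countries, min_items=10))  # Switzerland leads with threshold >=10
```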
Tuning the Musical Mind: Next Steps in Solving the Puzzle of the Cognitive Transfer of Musical Training to Language and Back A growing body of research has been studying cognitive benefits that arise from music training in childhood or adulthood. Many studies focus specifically on the cognitive transfer of music training to language skill, with the aim of preventing language deficits and disorders and improving speech. However, predicted transfer effects are not always documented and not all findings replicate. While we acknowledge the important work that has been done in this field, we highlight the limitations of the persistent dichotomy between musicians and nonmusicians and argue that future research would benefit from a movement towards skill-based continua of musicianship instead of the currently widely practiced dichotomization of participants into groups of musicians and nonmusicians. Culturally situated definitions of musicianship as well as higher awareness of language diversity around the world are key to the understanding of potential cognitive transfers from music to language (and back). We outline a gradient approach to the study of the musical mind and suggest the next steps that could be taken to advance the field. In this short discussion piece, we consider the current status quo of research concerned with the cognitive transfer between music and language and vice versa, although the studies discussed here are selective and by no means exhaustive. In this discussion, the idea of "musicianship" is key. We first discuss whether music and language are distinct or overlapping phenomena and what a cross-domain transfer between the two may look like before we focus on the implications of the idea of "the musician", often defined as someone with at least six years of (formal) musical training in a Western music tradition (Zhang et al., 2018). We outline alternative ways of classifying musicianship and diversifying what is considered to be musical skill. We then discuss some examples of social and communicative functions of music and language that are often neglected in existing research. Finally, we present future challenges and make some recommendations for studies in this growing field.

Empirical evidence is currently somewhat conflicting. On the one hand, experimental findings show that music learning and language acquisition develop at a similar speed in early infancy (see Brandt et al., 2012) and that there is a cross-domain cognitive transfer for second language learners (Milovanov et al., 2008) and for tonal speakers (Bidelman et al., 2013). This suggests that the two systems may recruit similar cognitive and/or auditory processes and rely on shared domain-general mechanisms (Asaridou & McQueen, 2013; Patel, 2008; Peretz et al., 2015; Perrachione et al., 2013). On the other hand, there is also evidence for music-specific neural pathways. For example, an fMRI study has found a specific brain region responding more strongly to music than to speech (Armony et al., 2015). Another recent fMRI and electrocorticography study suggests that a distinct neural population responds to singing, but not to instrumental music or speech (Norman-Haignere et al., 2022). Studies with clinical populations, such as individuals with congenital amusia, demonstrate impaired processing of pitch in music, but not (or at least not always) in speech (Ayotte et al., 2002; Peretz & Hyde, 2003; Zhou et al.
2017). While some existing findings point towards music and language sharing cognitive processes and neural pathways, compelling evidence is yet to be established. The formal boundary between music and language is also not always as clear-cut as has been suggested (Jäncke, 2012). It may be blurred in some contexts, cultures, or phenomena. Infant-directed speech (Fernald & Kuhl, 1987; Malloch, 1999), the speech-to-song illusion (Deutsch, 1995; Falk et al., 2014), whistled or drummed languages (e.g., Carreiras et al., 2005; Carrington, 1971; Rialland, 2005), calling tunes (Amha et al., 2021), or chanting (Cummins, 2018) give examples of those cases in which the boundary between language and music is blurred. Further evidence indicates that spectro-temporal acoustic markers of speech and song differ cross-culturally and that this impacts production and perception of speech and song across cultures (Albouy et al., 2023). Thus, whether or not language (or speech) and music (or song) can be considered to be distinct or overlapping remains an empirical question. Its answer may be highly dependent on some specific aspects of the two phenomena studied, which complicates the study of cross-domain transfer between music and language.

The Musical Mind and Unidirectional Cognitive Transfers In the quest to provide an answer to the question of whether music and language are distinct or overlapping, fascination with the puzzle of the musical mind has been steadily growing (e.g., Chobert & Besson, 2013; Kimel et al., 2020; Magne et al., 2016; Patel, 2011). The current assumption is that musical training fine-tunes the auditory system (Strait & Kraus, 2011a, b) and equips the brain with "auditory fitness" when it comes to processing complex sounds (Kraus & Chandrasekaran, 2010). The biological changes due to long-term musical training can lead to an increased resilience in the face of challenging listening environments or cognitive decline (Coffey et al., 2017; Parbery-Clark et al., 2009; Walsh et al., 2021; Yoo & Bidelman, 2019) and to a substantial learning advantage in the context of foreign language acquisition or language disorders (Christiner et al., 2022; Picciotti et al., 2018; Rathcke & Lin, 2021; Yuskaitis et al., 2015).
Yet, the progress toward a comprehensive account of the musical mind and its cognitive make-up has been impeded by overreliance on correlational evidence (e.g., Dittinger et al., 2016, 2017, 2019; Kühnis et al., 2013; Pinheiro et al., 2015; Silas et al., 2022; Swaminathan & Gopinath, 2013). To date, the exact mechanisms underlying possible causal relationships between musical training, language, and general cognitive benefits have remained poorly captured (Schellenberg, 2020). Non-correlational designs have so far produced mixed findings and do not straightforwardly support the hypothesized cognitive transfer from musical training to language processing (McKay, 2021; Mosing et al., 2016; Sala & Gobet, 2020; Schellenberg, 2020; Smit et al., 2022). Since linguistic studies often focus on typologically diverse languages and crosslinguistic comparisons, agreeing on a definition of what constitutes a musician has proven challenging (Trehub et al., 2015). Musicianship has mostly been studied with respect to a particular group of people rather than considering the diversity and cultural variability in music (Clayton et al., 2011). However, some accounts (e.g., Feld & Fox, 1994) suggest that a more culturally situated awareness of musicianship and musical skill may be required in experimental studies.

Cross-Cultural Diversity of Musicianship and a Continuum of Musical Skill The prevalent approach to studying the influence of musicianship on language and other cognitive domains is situated within music traditions of Western culture and mostly involves binary comparisons between groups of "musicians" and "nonmusicians." The decision about individual affiliation with either group often appears somewhat arbitrary (Cogo-Moreira & Lamont, 2018) and is predominantly (and for practical reasons) based on the number of years an individual spent in formal training (Zhang et al., 2018). However, formal musical training does not necessarily lead to musical skill and, vice versa, musical skill can be obtained without any formal instruction (Gagné & McPherson, 2016; Rickard & Chin, 2017; Tan et al., 2014). Generally, any musical encounter can potentially initiate implicit learning of musical principles and structures, as seen in music education around the world (Berliner, 1978; Folkestad, 2006; McLucas, 2010; Qureshi, 2000; Ross, 2013). Moreover, the specialist notion of a "musician" is absent in some musical contexts or cultures, such as in Turino's (2008) notion of participatory musical fields where there is no distinction between musicians and nonmusicians, or performers and audience; there are only participants. Similarly, in some cultures, such as the Aka and Mbuti peoples of equatorial Africa, music-making is experienced as a communal activity and not as a specialist performance (Ichikawa, 1999; Trehub et al., 2015). This notion of a communal and participatory musicality is likely more prevalent around the world than the pervasive musician-versus-nonmusician distinction in current research suggests. A departure from a binary definition of musicianship (cf. Cogo-Moreira & Lamont, 2018; Nayak et al., 2021), a move toward a stronger focus on individual differences in musical skill, and a stronger situational awareness of the studied culture and language under investigation will provide the foundation for cross-cultural comparison and generalization.
Considering the diversity of concepts of musicianship across cultures, as well as the individual variability in musical exposure, experience, practice, skill, and genetic predisposition (Cogo-Moreira & Lamont, 2018; Fiveash et al., 2022; Folkestad, 2006; Nayak et al., 2021; Wesseldijk et al., 2021), musicians may substantially differ from each other across a range of perceptual and motor skills relevant to music-making. Emerging evidence supports the present proposal that musicianship may indeed be best understood as continua of individual skills and aptitudes instead of a binary group affiliation (Nayak et al., 2021). Moreover, perceptual and motor skills commensurate with a high level of musical attainment have been observed in individuals who have never received any explicit instruction in playing an instrument (e.g., dancers, D'Souza & Wiseheart, 2018; Skoe et al., 2021) or enjoyed any musical training (Correia et al., 2022; Kragness et al., 2022; McKay, 2021; Swaminathan et al., 2017; Swaminathan & Schellenberg, 2020; Wesseldijk et al., 2021). Notably, various cultures do not have a clear-cut distinction between dance and music, if such a distinction is present at all. Examples include "dance-song" genres in indigenous musical practices in Australia's Northern Territories (Barwick, 2003) or the Ewe culture in West Africa, where rhythm is represented as a circle starting with gestures and ending in stylized gestures (Agawu, 1987). It can be argued that the same underlying capacities are involved in music and dance, while their multimodal representation is culturally mediated (Sievers et al., 2013). Engagement with music may thus be a combination of productive, interactive, as well as receptive behaviors (Merriam, 1964).

Therefore, some people can qualify as "musicians" without having had the formal musical training that is often used as the criterion of musicianship (Zhang et al., 2018). A multitude of studies has suggested alternatives to the dichotomous approach, leading to a large range of instruments (e.g., tests and questionnaires) to measure musical sophistication, training, and auditory skills; for example, the Goldsmiths' Musical Sophistication Index (GMSI) (Müllensiefen et al., 2014); the Music Use and Background Questionnaire (MUSEBAQ) (Chin & Rickard, 2012; Rickard et al., 2015); the Ollen Musical Sophistication Index (Ollen, 2006); the Profile of Music Perception Skills (PROMS) (Law & Zentner, 2012); and the Musical Ear Test (Wallentin et al., 2010), to name just a few. The variety of measures used to test musical ability shows that there is not yet a consensus on what musical ability exactly entails (Okada & Slevc, 2018). How self-reported musicianship relates to auditory skills is not always clear and will depend on the specific study and research question at hand. However, the possibility of testing both self-reported musicianship and musical abilities on continuous scales does exist, but may need to be used more consistently and comprehensively in future research. Importantly, the currently available tests often focus on Western musical contexts and on receptive, auditory skills, missing representations of (culture-specific) gestural and interactive productive capacities, which is not entirely consistent with the notion that music is an activity rather than an object (Small, 1998).
Correlational studies using large samples of well-described, multi-faceted individual differences will open the door to more sophisticated approaches for charting possible causal relations in subsequent case studies. Once the fine-grained individual differences in perceptual and motor skills representative of the musical mind have been adequately captured, we might be able to resolve conflicting evidence on cognitive transfer from music to language (and back) (Bigand & Tillmann, 2022). This progress will owe much to the enhanced statistical rigor of treating musicianship as multidimensional continua instead of a unidimensional dichotomy (Cogo-Moreira & Lamont, 2018). A dichotomization of the continua of individual skills threatens statistical rigor, as it may lead to loss of information about individual differences, missed or spurious effects, and errors in estimating effect sizes (Cogo-Moreira & Lamont, 2018; MacCallum et al., 2002; Maxwell & Delaney, 1993; Royston et al., 2006); see the brief simulation at the end of this section.

Bidirectional Cross-Domain Transfers

Recent studies have only started to highlight the potential for bidirectional relationships between music and language processing, showing that not only musical but also linguistic expertise may lead to cross-domain transfer (Ong et al., 2016). For example, an enhanced sensitivity to subtle changes in pitch can be acquired either through musical expertise or through the native command of a tonal language, and may equally benefit both domains (Ong et al., 2015, 2016, 2017a, 2017b, 2020). Studies with listeners of tonal languages (which distinguish word meanings by means of pitch contrasts) lend themselves especially well to examining the role of language for the musical mind (Cooper & Wang, 2012; Maggu et al., 2018). Because of a high correspondence between linguistic tone and musical melodies (Ladd & Kirby, 2020; Schellenberg, 2012; Schellenberg & Gick, 2020; Wong & Diehl, 2002; Zhang & Cross, 2021), and because of the crucial role of pitch in tonal languages (Yip, 2002), experience with a tonal language may have a strong positive influence on the perception of musical pitch (Bidelman et al., 2013; Chen et al., 2016; Wong et al., 2012; Zhang et al., 2020) and on singing accuracy in one's native tonal or non-native non-tonal language (Chen-Hafteck, 1999; Mang, 2006). This cross-domain transfer appears to be mediated by musicianship, given that a particular benefit for pitch processing arises for nonmusicians who are speakers of a tonal language (Choi, 2021; Cooper & Wang, 2012; Maggu et al., 2018).

Such findings highlight the complexity and the bidirectionality of cognitive links between language and music. Future work requires a better understanding of the notion of the musical mind and its many facets that may be tuned by different aspects of experience. The experience is not limited to Western musical contexts (Trehub et al., 2015). It may not even be musical per se (Chen-Hafteck, 1999; Mang, 2006; Ong et al., 2016), yet it may hone the same cognitive skill set as formal musical training.
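To make the statistical point above concrete, the following minimal simulation (not drawn from any of the studies cited here; all variable names are hypothetical) compares the power of an analysis that keeps a continuous "musical skill" score against one that median-splits the same score into "musicians" and "nonmusicians":

```python
# Minimal sketch: information loss from dichotomizing a continuous predictor.
# Assumes a linear association between a continuous skill score and a
# language outcome; normal-approximation tests keep the example short.
import numpy as np

rng = np.random.default_rng(0)
n, n_sims, true_beta = 100, 2000, 0.3
hits_continuous = hits_split = 0

for _ in range(n_sims):
    skill = rng.normal(size=n)                        # continuous skill score
    outcome = true_beta * skill + rng.normal(size=n)  # e.g., a language score

    # Continuous analysis: correlation test via the Fisher z-transform.
    r = np.corrcoef(skill, outcome)[0, 1]
    if abs(np.arctanh(r)) * np.sqrt(n - 3) > 1.96:
        hits_continuous += 1

    # Dichotomized analysis: median split followed by a two-sample z-test.
    group = skill > np.median(skill)
    a, b = outcome[group], outcome[~group]
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    if abs(a.mean() - b.mean()) / se > 1.96:
        hits_split += 1

print(f"power, continuous predictor: {hits_continuous / n_sims:.2f}")
print(f"power, median split:         {hits_split / n_sims:.2f}")
```

Across repeated runs, the continuous analysis detects the association noticeably more often than the median split, which is precisely the information loss that MacCallum et al. (2002) warn about.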
Future Challenges

The proposed recommendations toward skill-based and culturally situated continua of musicianship, as a cornerstone of the cognitive transfer from music to language (and back), come with three major challenges. First, there currently exists no unified, agreed-upon instrument for measuring musical abilities as multidimensional continua that also includes an embodied, social approach and listeners' linguistic background, which might affect both empirical findings and comparability across studies (Fiveash et al., 2022; Smit et al., 2022). Given the cultural and contextual diversity of musical ability, the goal should possibly not be to create one all-encompassing instrument. In order to move the field forward, we first need to focus on the multidimensionality of the musical abilities of interest and their relation to linguistic abilities. Second, a consistent acknowledgment of culturally situated definitions of the musical or linguistic phenomena under investigation is essential for any research on communicative processes such as music and language. Third, the process of emergence of musical and linguistic capacities (or its failure) during individual development is so complex that it is far from being well mapped out (Brandt et al., 2012; Pagliarini et al., 2020; Politimou et al., 2019; Protopapas, 2014; Ramus & Ahissar, 2012). A sound understanding of the cognitive architecture of language is, however, a prerequisite to an informed sampling of the musical skills that can be expected to transfer to the language faculty and to help in the clinical remediation of language disorders. Interdisciplinary collaboration between cognitive scientists from different disciplines is therefore key to tackling these challenges in future research, advancing theories of cognition and achieving replicability across empirical studies of the musical mind.

Finally, given their complementary functions as social communicative systems involving the manipulation of sounds and gestures, capacities for music and language may be subject to similar constraints that derive from links between perception and action. Such links may pave the way for "embodied" transfer effects, such as potential parallels between sensitivity to nonverbal communicative cues (e.g., back-channeling) across domains (Glenberg & Gallese, 2012; Hadley & Pickering, 2020; Levinson, 2016; Matyja & Schiavio, 2013; Moran et al., 2015). Individual differences in such cross-domain capacities may, moreover, be associated with more general socio-cognitive capacities related to aspects of personality, including empathy and emotional intelligence (Alispahic et al., 2022; Atkinson, 2002; Resnicow et al., 2004). New evidence on how such socio-cognitive capacities might be involved in language processes is continuously emerging (Franich et al., 2021; Herringshaw et al., 2016; Venker et al., 2019). We propose that, to the extent that language and music share a common embodied and socially embedded basis, the effects of musical training on "auditory fitness" might extend to broader social competence by also enhancing communication skills in (nonverbal) expression and comprehension. Thus, it is vital that the study of the effects of musical abilities on language skills, and vice versa, does not ignore the social and communicative aspects of the two phenomena.

Action Editor

Ian Cross, University of Cambridge, Faculty of Music.

Peer Review

Ian Cross, University of Cambridge, Faculty of Music; Graham Welch, University College London, Institute of Education.
Contributorship

ES wrote the first draft of the manuscript. All authors reviewed and edited the manuscript and approved the final version of the manuscript.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Ethical Approval

This research did not require ethics committee or IRB approval. This research did not involve the use of personal data, fieldwork, or experiments involving human or animal participants, or work with children, vulnerable individuals, or clinical populations.
2023-05-19T15:18:56.009Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "310776bd23f9956512937d4820f7aa3b97d70698", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/20592043231175251", "oa_status": "CLOSED", "pdf_src": "Sage", "pdf_hash": "92ab52e5bd8418b73cd167d8544f145a33ec8d15", "s2fieldsofstudy": [ "Linguistics", "Psychology" ], "extfieldsofstudy": [] }
34842395
pes2o/s2orc
v3-fos-license
Spoken Portuguese: Geographic and Social Varieties

The Spoken Portuguese: Geographic and Social Varieties project has as its main goal the teaching of Portuguese as a foreign language. The idea is to provide a collection of authentic spoken texts and to make it easy to use. Therefore, a selection of spontaneous oral data was made, using either already compiled material or material recorded for this purpose. The final corpus constitution resulted in a representative sample that includes European, Brazilian and African Portuguese, as well as the Portuguese of Macau and East-Timor. In order to produce a functional product, the Linguistics Center of Lisbon University developed sound/text alignment software. The final result is a CD-ROM collection that contains 83 text files, 83 sound files and 83 files produced by the sound/text alignment tool. The independence between sound and text files allows the CD-ROM user to manipulate the collection for purposes other than the educational one.

Introduction

This project was carried out by the Linguistics Center of Lisbon University (Centro de Linguística da Universidade de Lisboa) and was sponsored by Instituto Camões and by the European Program LINGUA-SOCRATES, which aims to promote the knowledge of foreign languages in the European Community, within ACTION VB - Development and Interchange of Teaching Materials. The Linguistics Center of Lisbon University is the coordinating institution of the project, in partnership with the Universities of Toulouse-le-Mirail and Aix-en-Provence, France. The work is already finished, and its publication on CD-ROM is foreseen for the coming months.

The recordings are samples of oral Portuguese, formal and informal, covering Portugal, Brazil, the different African countries with Portuguese as their official language, Macau and East-Timor, as one can see in Figure 1, in a total of 83 recordings corresponding to nearly eight and a half hours of speech. The CD-ROMs contain the sound files, their corresponding orthographic transcriptions in text files, and an application developed in the institution which aligns the sound with the text: a colored highlight runs over the transcription of the sequence being played. The user can control what he is listening to, repeating sequences or skipping parts of the text.

The Main Goal of this Project

Portuguese is one of the least taught languages in the European Union, in spite of being the third most spoken European language in the world, which is why this project can be truly important for the improvement of its teaching. As a matter of fact, in the last two decades, the importance of authentic documents and of oral material in the teaching of foreign languages has been growing. As far as oral language is concerned, a kind of prejudice has led teachers to look at it as something imperfect, a kind of denial of the grammar rules. Since everybody realizes that listening to and producing oral messages is crucial, the fabrication of oral texts became a common practice: texts are read by professional actors with an ideal pronunciation, and the apparent irregularities and particularities of spontaneous speech are deleted. As a consequence, students who learn a foreign language in their own countries often become used to the way of speaking of their teachers and, eventually, to the "artificial" pronunciation of these texts read by professionals. When they are confronted with a real communicative situation in the language they are learning, they often feel lost.
This happens because what they hear does not match what they are used to hearing: either their interlocutors speak too fast, or the communication strategies of the interlocutor are very different from those acquired by the learner in the classroom. The goal of this project is thus to provide authentic speech texts representing different varieties of the Portuguese language. Each text has a sound component (in a wave file) and a written one (in a text file). For the transcription, the orthographic representation was chosen (criteria being always controversial and possible solutions always showing advantages and disadvantages), considering the benefits it can bring to students who are usually used to the orthographic representation of the language. The student can listen to real speech situations without feeling frustrated: he can listen and read at the same time, having the orthographic support for any misunderstood part of the speech as well as control over the text/sound timing.

The Corpus Constitution

Bearing in mind that no language, despite its unity, is uniform, this collection of samples covers a large range of regional, social and situational realizations of the Portuguese language. Varieties from the different countries having Portuguese as their official language were selected, as well as different dialects within the Portuguese and the Brazilian territory. The informants were selected from different levels of education and different professional status, covering a wide range of ages. As far as European and Brazilian Portuguese is concerned, diachronic variation was also taken into account.

Documents Selection

Material selection involved many different factors. At the beginning of the project, a large amount of documents was selected from the oral sub-corpus of the Contemporary Portuguese Reference Corpus, including some material provided from Brazilian and Mozambican corpora. However, further variety samples had to be specifically collected for the project. The first criterion considered in a pre-selection was the sound quality of the material. This was very restrictive, since some documents had been recorded in the seventies and eighties and their sound quality was very poor. After that, the selection related mostly to the representation of language varieties, considering dialects and sociolinguistic factors such as age, gender and educational level, as referred to above. Figures 3, 4, 5 and 6 show the distribution of the final selected data according to these variables. The educational level was divided into three categories: up to 6 years of schooling; from 7 to 12 years of schooling; and more than 12 years of schooling. Informants were also divided into three groups according to their age: from 15 to 30 years old; from 31 to 45 years old; and more than 46 years old.

[Figure 3: Women Age / Educational Level Distribution]

The data shown in these figures only refer to the informants about whom there was available information. Some documents were collected from radio interviews and it was not always possible to determine the precise data. It is also worth mentioning that the number of informants does not exactly correlate with the number of documents, since in the radio interviews there is sometimes more than one informant. As one can see in the charts, the sociolinguistic variables were not always balanced with regard to the project's didactic aims.
The final criterion used was the appeal and diversity of topics, as well as the clarity of speech, sometimes leading to a re-evaluation of previously non-selected material.

The Final Result

The corpus ended up with 83 texts, corresponding to 30 Portuguese documents (about 3h of speech); 20 Brazilian documents (about 2h); 25 African documents (about 2h 50m); 5 documents from Macau (about 38m); and 3 documents from East-Timor (about 10m), in a total of nearly 8h30m of speech.

The Alignment Task

A software application had to be designed in the institution for the alignment task. This tool allows the association between the text image and the sound wave. The association is established by relating groups of characters to time intervals (a minimal illustrative sketch of such a record is given below). The program opens the transcription and the sound files. When the operator starts playing the sound, he can click on the part of the text he wants to associate with the corresponding speech interval. Some adjustments can be done manually. The alignment task considered three different types of units - syntactic, prosodic and rhythmic - which had to be coherently combined in a way that would allow a clear text/sound relation, i.e., whenever the prominent unit sounds, the corresponding part of the text is highlighted. The type of unit chosen in each case depended on the length of the graphical part of the text corresponding to the speech interval, in order to avoid delays between sound and highlighting, bearing in mind the student as the final user.

CD-ROMs Content

On 4 CD-ROMs the user can find usage instructions and a description of the project and materials. The documents are divided into folders according to their origin. Each document has 3 related files: a text file (txt format) with the transcription, preceded by a heading containing the interview-specific data (title, origin, year of collection) and informant specifications (sex, age, level of education, professional status), as well as different kinds of observations, such as the particular use of some expressions or details of the recording situation; a sound file (wav format); and a text/sound alignment file generated by the software tool described above (dat file). Another software application is available, this one allowing the user to listen to and read the selected document at the same time.

The Lingua Tool

This friendly application, designed for a Windows environment and also developed in the institution, permits the user to open a selected document and to manipulate it while listening to it. With the familiar buttons of a tape recorder, it is quite easy to work with. The user has the text image before him, and in the toolbar there is a play button to start the sound playing as well as the respective transcription highlighting. The sound can be controlled through a pause button for temporary suspensions, a stop button to finish the sound playing, and rewind and forward buttons to repeat or jump over parts of the sound; these last tasks can also be accomplished by clicking with the mouse on the desired part of the transcription.

Other Applications

Due to its diversity, this corpus constitutes a very useful tool not only for training listening and comprehension skills in the teaching activity and for textual analysis, but also for different kinds of work on the Portuguese language. Its characteristics allow users to choose the texts according to their needs and preferences, a fact that gives them a considerable degree of autonomy in their activity.
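To make the alignment record described above more tangible, here is a minimal sketch of one plausible way to associate character spans with time intervals and to look up the span to highlight during playback. This is an illustration only; the project's actual dat file format is not specified here, and all names are hypothetical.

```python
# Sketch of a character-span/time-interval alignment record and the lookup
# a player could use to decide which text span to highlight at a given time.
from bisect import bisect_right
from dataclasses import dataclass

@dataclass
class Segment:
    start_char: int  # first character of the text span
    end_char: int    # one past the last character of the span
    start_ms: int    # onset of the corresponding audio interval
    end_ms: int      # offset of the corresponding audio interval

# Hypothetical alignment for a two-segment utterance.
alignment = [
    Segment(0, 14, 0, 1200),
    Segment(15, 33, 1200, 2750),
]

def span_at(time_ms: int, segments: list[Segment]) -> tuple[int, int] | None:
    """Return the character span to highlight at a given playback time."""
    onsets = [s.start_ms for s in segments]
    i = bisect_right(onsets, time_ms) - 1
    if i >= 0 and time_ms < segments[i].end_ms:
        return (segments[i].start_char, segments[i].end_char)
    return None  # silence or an unaligned region

print(span_at(1500, alignment))  # -> (15, 33)
```

The binary search keeps the lookup cheap even for long recordings, which matters when the highlight has to track playback in real time.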
Since the materials were not selected with a restricted target audience in mind, besides their usefulness in the teaching of Portuguese as a foreign language, they can equally be of great interest for the training of translator-interpreters, as well as in first-language teaching. Due to the independence of the text and sound files, the 83 documents can also be explored in research projects on spoken Portuguese. The sound files can be used in phonetic and dialectological research, and the text files can be exploited as a useful corpus of spoken Portuguese. Given its constitution, it is a good sample of a wide set of Portuguese varieties. It therefore constitutes a reliable source of information, allowing the extraction of different kinds of data, such as concordances, lexical and syntactic associations, and frequencies. For all these characteristics, this project will be an original contribution to the knowledge of spoken Portuguese.
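As a small illustration of the corpus exploitation mentioned above, the following sketch computes word frequencies and a simple keyword-in-context (KWIC) concordance from one transcription file. The file name, the file encoding, and the assumption that the metadata heading is separated from the transcription by a blank line are all guesses made for the example, not documented properties of the CD-ROM files.

```python
# Sketch: word frequencies and a KWIC concordance from a transcription file.
import re
from collections import Counter

def load_transcription(path: str) -> str:
    with open(path, encoding="latin-1") as f:  # encoding is an assumption
        _header, _, body = f.read().partition("\n\n")
        return body

def tokens(text: str) -> list[str]:
    # Keep accented characters, which are frequent in Portuguese.
    return re.findall(r"[a-záàâãéêíóôõúç]+", text.lower())

def kwic(text: str, keyword: str, width: int = 30) -> list[str]:
    lines = []
    for m in re.finditer(rf"\b{re.escape(keyword)}\b", text, re.IGNORECASE):
        s, e = m.start(), m.end()
        left = text[max(0, s - width):s]
        right = text[e:e + width]
        lines.append(f"...{left}[{m.group()}]{right}...")
    return lines

text = load_transcription("PT01.txt")         # hypothetical file name
print(Counter(tokens(text)).most_common(10))  # ten most frequent words
print("\n".join(kwic(text, "língua")))        # concordance for "língua"
```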
2015-06-05T01:59:53.000Z
2000-01-01T00:00:00.000
{ "year": 2000, "sha1": "eeda00591c7a611e9cb4648bf4923208e25ded57", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "eeda00591c7a611e9cb4648bf4923208e25ded57", "s2fieldsofstudy": [ "Geography", "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
245671801
pes2o/s2orc
v3-fos-license
Dysregulation of GABAergic Signaling in Neurodevelopmental Disorders: Targeting Cation-Chloride Co-transporters to Re-establish a Proper E/I Balance

The construction of the brain relies on a series of well-defined, genetically and experience- or activity-dependent mechanisms which allow adaptation to the external environment. Disruption of these processes leads to neurological and psychiatric disorders, which in many cases are manifest already early in postnatal life. GABA, the main inhibitory neurotransmitter in the adult brain, is one of the major players in the early assembly and formation of neuronal circuits. In the prenatal and immediate postnatal period, GABA, acting on GABAA receptors, depolarizes and excites targeted cells via an outwardly directed flux of chloride. In this way it activates NMDA receptors and voltage-dependent calcium channels, contributing, through the intracellular calcium rise, to shape neuronal activity and to establish, through the formation of new synapses and the elimination of others, adult neuronal circuits. The direction of GABAA-mediated neurotransmission (depolarizing or hyperpolarizing) depends on the intracellular levels of chloride [Cl−]i, which in turn are maintained by the activity of the cation-chloride importer NKCC1 and the exporter KCC2. Thus, the premature hyperpolarizing action of GABA, or its persistent depolarizing effect beyond the postnatal period, leads to behavioral deficits associated with morphological alterations and an excitatory (E)/inhibitory (I) imbalance in selective brain areas. The aim of this review is to summarize recent data concerning the functional role of GABAergic transmission in building up and refining neuronal circuits early in development and its dysfunction in neurodevelopmental disorders such as Autism Spectrum Disorders (ASDs), schizophrenia and epilepsy. In particular, we focus on novel information concerning the mechanisms by which alterations in cation-chloride co-transporters (CCC) generate behavioral and cognitive impairment in these diseases. We also discuss the possibility of re-establishing proper GABAA-mediated neurotransmission and a proper excitatory (E)/inhibitory (I) balance within selective brain areas by acting on CCC.

INTRODUCTION

In the adult mammalian central nervous system (CNS), γ-aminobutyric acid (GABA) inhibits neuronal firing by activating two different classes of receptors: GABAA and GABAB. While GABAA receptors are integral ion channels, GABAB receptors are coupled to ion channels via guanine nucleotide-binding proteins and second messengers. The opening of GABAA receptor channels by GABA leads to an inwardly directed flux of Cl− that, by hyperpolarizing the membrane, inhibits neuronal firing. Early in postnatal life, instead, GABA, via GABAA receptors, depolarizes and excites targeted cells by an outwardly directed flux of Cl− (Ben-Ari et al., 1989). This phenomenon is due to the high levels of intracellular Cl− ([Cl−]i) that result from the differential temporal expression of the cation-chloride cotransporters NKCC1 and KCC2, which are involved in Cl− uptake and extrusion, respectively. The low expression of the KCC2 extruder at birth leads to Cl− accumulation inside the neuron via NKCC1. The developmentally up-regulated expression of KCC2, which in rodents occurs toward the end of the first postnatal week, results in the extrusion of Cl−, causing the shift of GABA from the depolarizing to the hyperpolarizing direction (Rivera et al., 1999) (Figure 1).
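A back-of-the-envelope Nernst calculation illustrates why this switch inverts the polarity of GABA responses. The concentrations and resting potential below are generic textbook values chosen for illustration, not measurements from the studies reviewed here:

```python
# Chloride equilibrium potential for illustrative immature vs. mature [Cl-]i.
import math

def nernst_cl(cl_in_mM: float, cl_out_mM: float = 130.0,
              temp_C: float = 37.0) -> float:
    """E_Cl in mV; note the inverted in/out ratio for an anion."""
    rt_f = 8.314 * (273.15 + temp_C) / 96485 * 1000  # RT/F in mV
    return rt_f * math.log(cl_in_mM / cl_out_mM)

v_rest = -70.0  # mV, a typical resting potential
for label, cl_in in [("immature, high [Cl-]i (25 mM)", 25.0),
                     ("mature, low [Cl-]i (5 mM)", 5.0)]:
    e_cl = nernst_cl(cl_in)
    action = "depolarizing" if e_cl > v_rest else "hyperpolarizing"
    print(f"{label}: E_Cl = {e_cl:6.1f} mV -> GABA is {action}")
```

With a high internal chloride concentration, E_Cl sits above the resting potential, so opening GABAA channels lets Cl− flow outward and depolarizes the cell; once KCC2 lowers [Cl−]i, E_Cl falls below rest and the same conductance hyperpolarizes.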
At the network level, the interplay between the depolarizing action of GABA and glutamate generates a primordial form of synchrony whose specific patterns vary among different brain regions (Buzsáki and Draguhn, 2004; Griguoli and Cherubini, 2017). In the hippocampus, the so-called Giant Depolarizing Potentials (GDPs) are crucial for synaptic wiring and the refinement of local neuronal circuits (Ben-Ari et al., 2012). Principal neurons are driven by GABAergic interneurons (Mohajerani and Cherubini, 2005), which act as functional hubs to synchronize large neuronal ensembles (Bonifazi et al., 2009). Early Ca2+ signals associated with GDPs act as coincidence detectors for enhancing synaptic efficacy at emerging GABAergic (Kasyanov et al., 2004) and glutamatergic synapses (Mohajerani et al., 2007). GDPs are indeed instrumental in converting silent synapses into active ones (Kasyanov et al., 2004), a key mechanism for persistently increasing synaptic efficacy. Immediately after birth, at least in the rodent CA3 hippocampal region, GDPs are associated with intrinsic bursts driven by a persistent Na+ current (Sipilä et al., 2006) and are facilitated by the low expression of Kv7.2/Kv7.3 channels, responsible for the non-inactivating, low-threshold M current (IM) (Safiulina et al., 2008). GDPs disappear toward the end of the first postnatal week, when the polarity of GABA shifts from depolarizing to hyperpolarizing. It is therefore not surprising that the depolarizing action of GABA at early stages of postnatal development coincides with the period of maximal synaptogenesis (Huttenlocher, 1979; De Felipe et al., 1997; Virtanen et al., 2018). Interestingly, GABAergic signals operate before glutamatergic ones, which appear later, concomitantly with the development of dendritic arborization (Tyzio et al., 1999; Khazipov et al., 2001; Ben-Ari et al., 2007). The late switch of GABA polarity at the axon initial segment of principal cells favors more organized forms of network oscillations such as those occurring in the gamma range, as demonstrated in the somatosensory (Khirug et al., 2008) and in the prefrontal cortex (Rinetti-Vargas et al., 2017).

[FIGURE 1 | Cation-chloride co-transporters contribute to maintain a proper E/I balance in neuronal circuits. (A) Early in postnatal life, in the rodent hippocampus, GABA, released from GABAergic interneurons, exerts via GABAA receptors a depolarizing, excitatory action on targeted cells by an outwardly directed flux of Cl−. High [Cl−]i results from the differential temporal expression of the Cl− importer and exporter NKCC1 and KCC2, respectively, and the accumulation of Cl− inside via NKCC1. Early in postnatal life, GABAergic transmission is also controlled by microglia via ROS. The early depolarizing and excitatory action of GABA, which leads to E/I imbalance, is instrumental in stimulating synaptogenesis and in shaping early neuronal circuits. (B) In juvenile and adult animals, the upregulated expression of KCC2 contributes to maintain a low [Cl−]i, responsible for the hyperpolarizing action of GABA and for preserving a proper E/I balance in neuronal circuits. In (A,B) both pre-synaptic (GABAergic interneurons, green) and post-synaptic (targeted cells, violet) elements are represented. Red arrows indicate the direction of Cl− flux through GABAA receptor channels.]
The early depolarizing action of GABA and its developmental shift, mainly documented in in vitro studies, have been challenged because of the lack of direct in vivo demonstrations (Ben-Ari et al., 2012). Using combined electrophysiological and imaging techniques in anesthetized neonatal mice, it has been shown that, in spite of its depolarizing action, GABA inhibits cortical activity via a shunting inhibitory action (Kirmse et al., 2015). Similarly, an inhibitory effect of GABA on spontaneous glutamatergic events has been reported in the hippocampus of anesthetized animals during the first postnatal week, following photo-stimulation of GABAergic interneurons expressing channelrhodopsin (Valeeva et al., 2016). Evidence for an early depolarizing and excitatory action of GABA in vivo has been provided by Oh et al. (2016), who demonstrated that, in the developing mouse cortex, synapse formation requires GABA-mediated activation of T-type voltage-dependent Ca2+ channels. The early depolarizing and excitatory action of GABA in vivo has been further confirmed by Sulis Sato et al. (2017) and Murata and Colonnese (2020). Using a particular probe formed by the fusion of a Cl−- and pH-sensitive GFP mutant with an ion-insensitive red fluorescent protein, which allows the combined measurement of [Cl−]i and pH, Sulis Sato et al. (2017) demonstrated, by means of two-photon in vivo imaging of individual pyramidal cells in the mouse cortex, the developmental shift of GABAergic signaling; this effect could be mimicked by the selective NKCC1 antagonist bumetanide. In addition, using chemogenetic and optogenetic approaches, Murata and Colonnese (2020) proved that, at postnatal day 3 in non-anesthetized mice, GABA released from GABAergic interneurons increases the firing of CA1 principal cells. However, according to these authors, the shift of GABA polarity is region-specific, since at the same age GABAergic interneurons exert an inhibitory action on visual cortex principal cells.

Our review will focus on evidence concerning the functional role of GABAergic signaling in brain maturation and its alterations in neurodevelopmental disorders, highlighting the contribution of CCC to these effects. CCC are intrinsic membrane proteins that transport Cl− ions, together with Na+ and/or K+ ions, in an electroneutral manner due to the stoichiometric coupling and directionality of the translocated ions. Therefore, members of this family are prime regulators of [Cl−]i. After the initial discovery of the depolarizing action of GABA (Ben-Ari et al., 1989; Cherubini et al., 1991), a fundamental step forward in understanding why in immature neurons the equilibrium potential for chloride (ECl−) is positive relative to the resting membrane potential (Vm) was made by Rivera et al. (1999), who demonstrated that the delayed developmental expression of the cation-chloride exporter KCC2 allows Cl− to accumulate inside young neurons, and that its subsequent upregulation shifts GABA action from depolarizing to hyperpolarizing. It is worth mentioning that GABAA receptor channels are permeable not only to Cl− but also to HCO3− and, therefore, the equilibrium potential for Cl− (ECl−) does not correspond precisely to the equilibrium potential of GABA (EGABA), which usually shifts toward more positive values, since EHCO3− is less negative than ECl− (Kaila, 1994); the relevant expression is written out below.
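For two permeant anions, the Goldman-Hodgkin-Katz (GHK) voltage equation referred to in the next paragraph takes the following standard form (subscripts i and o denote intracellular and extracellular concentrations; note that for anions the internal concentrations appear in the numerator):

```latex
E_{\mathrm{GABA}} = \frac{RT}{F}\,
  \ln\!\left(
    \frac{[\mathrm{Cl^-}]_i + \frac{P_{\mathrm{HCO_3}}}{P_{\mathrm{Cl}}}\,[\mathrm{HCO_3^-}]_i}
         {[\mathrm{Cl^-}]_o + \frac{P_{\mathrm{HCO_3}}}{P_{\mathrm{Cl}}}\,[\mathrm{HCO_3^-}]_o}
  \right)
```

In effect, HCO3− adds (PHCO3/PCl) × [HCO3−]i, i.e., roughly 3 to 6 mM with the permeability ratio of 0.2-0.4 and the [HCO3−]i ≈ 15 mM quoted in the next paragraph, to the effective internal Cl− concentration, which is the 3-5 mM equivalence described there.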
Taking into account a HCO3−/Cl− permeability ratio of 0.2-0.4, the quantitative influence of HCO3− on EGABA can be estimated using the Goldman-Hodgkin-Katz equation, which shows that [HCO3−]i (∼15 mM at a pH of 7.1-7.2) influences EGABA in the same way as 3-5 mM of [Cl−]i. Thus, as compared to adult neurons, in the immature CNS, in which [Cl−]i is relatively high, the depolarizing influence of HCO3− on EGABA is negligible. In addition to controlling EGABA, the direction of GABA action and network excitability, CCCs regulate many physiological processes including cell volume, water transport and intracellular pH (Delpire and Gagnon, 2018).

DISTRIBUTION OF NKCC1 AND KCC2

Encoded by the SLC12 gene family, CCCs are glycoproteins that are widely distributed in all organ systems, including the brain (Kaila et al., 2014). Among CCCs, the main chloride extruder KCC2 has been found almost exclusively in the CNS. Consistent with a developmental gradient, in rodents KCC2 is already present at birth in the spinal cord and in the brainstem, while it starts to be upregulated later in more rostral regions of the brain (Kaila et al., 2014; Virtanen et al., 2020). In the human cortex, upregulation of KCC2 starts prenatally from the 25th postconceptional week and peaks at birth (Sedmak et al., 2016). In preterm infants, the relatively low expression of KCC2 is associated with a discontinuous type of EEG, organized in intermittent bursts of activity separated by silent periods reminiscent of GDPs, that disappears at birth (Khazipov and Luhmann, 2006). It is worth noting that KCC2 has also been detected in pancreatic β-cells, where it plays a crucial role in modulating insulin secretion (Kursan et al., 2017).

In the hippocampus, KCC2 is involved in regulating GDPs, which are driven by the synergistic depolarizing action of glutamate and GABA (Bolea et al., 1999; Ben-Ari et al., 2012) and by the intrinsic pacemaker properties of CA3 pyramidal neurons (Strata et al., 1997; Safiulina et al., 2008; Griguoli and Cherubini, 2017). Interestingly, during the first week of postnatal life, using whole-cell Cl− loading experiments, Spoljaric et al. (2019) reported that the selective KCC2 antagonist VU0463271 can increase the firing rate of CA3 principal cells as well as their synchrony during the rising phase of GDPs, suggesting that these neurons are able to actively extrude Cl− in a KCC2-dependent way, particularly when GABA is applied at the dendritic level. In addition to its canonical function of transporting Cl− out of the neuron, KCC2 plays a key role in controlling actin cytoskeleton dynamics and spinogenesis at early developmental stages (Blaesse et al., 2009; Virtanen et al., 2020). Of note, during the first postnatal week, KCC2 promotes spinogenesis in the somatosensory cortex independently of its Cl− transport function (Li et al., 2007; Fiumelli et al., 2013), but, at the same age, it constrains spine density in hippocampal CA1 neurons, an effect that is instead dependent on the transporter function. This difference might be explained by distinct membrane localizations of KCC2 in different neuron types and/or brain regions during early postnatal development. Nevertheless, it is safe to state that KCC2 plays multiple roles in brain development, contributing to shape neuronal circuits shortly after birth and network plasticity at later stages of postnatal development (Virtanen et al., 2021).
Unlike KCC2, the main Cl− importer NKCC1 is expressed in the brain already at birth, where it plays a key role in maintaining high [Cl−]i in immature neurons. NKCC1 is widely distributed not only in the CNS and in the peripheral nervous system (PNS) but also in a variety of different tissues including the inner ear, skeletal and smooth muscles, exocrine glands, epithelial cells and kidneys, where it contributes to regulate major physiological functions (Delpire and Gagnon, 2018; Virtanen et al., 2020). While in the CNS the majority of neurons express low levels of NKCC1, in the PNS sensory neurons such as dorsal root or trigeminal ganglion cells exhibit high amounts of the protein. When GABA from local interneurons is released at the terminals of sensory afferent fibers, it causes a membrane depolarization that leads to a reduction of glutamatergic transmission and to an inhibitory response. This effect is reduced in NKCC1 knock-out mice, indicating that the depolarizing value of EGABA is maintained via NKCC1. Such a mechanism may have a strong implication for nociception, as demonstrated by deficits in the thermal nociceptive threshold in NKCC1 knock-out mice (Sung et al., 2000). Primary sensory afferents (i.e., dorsal root ganglion fibers) contain GABA receptors, whose activation by GABA, released from interneurons localized in lamina I/II of the dorsal horn, causes a depolarization (primary afferent depolarization or PAD). This depolarization, maintained by the expression of NKCC1 and the lack of KCC2 (Alvarez-Leefmans et al., 2001), leads to suppression of nociceptive signals, mainly via inactivation of voltage-gated Na+ channels and the consequent reduction of transmitter release (Price et al., 2009). An inflammatory insult, after peripheral nerve injury, may cause an upregulation of NKCC1 activity in nociceptive afferent fibers, leading to an increased [Cl−]i and an excessive GABAA-mediated depolarization that would facilitate cross-excitation between low- and high-threshold nociceptive afferent fibers and nociception (Price et al., 2005).

In the CNS, the selective deletion of the SLC12a2 gene from hippocampal CA1 pyramidal cells leads to an attenuation of the depolarizing action of GABA, due to a reduction of [Cl−]i, and to a severe impairment of GDP activity, in a region-specific way (Graf et al., 2021). However, such deletion only slightly affects in vivo network dynamics or hippocampus-dependent behavioral tasks, suggesting that most of the effects observed in NKCC1 knock-out mice, or after pharmacological blockade of the transporter, may be attributed to the loss of the protein from non-neuronal cells or from cells localized outside the brain (Graf et al., 2021). In the brain, NKCC1 has been found to be expressed in several non-neuronal cell types such as choroid plexus epithelial cells, astrocytes, oligodendrocytes and microglia (Tóth et al., 2021). In particular, NKCC1 is highly expressed in microglia (DePaula-Silva et al., 2019) (Figure 1), where it plays a fundamental role in neuro-inflammation. Thus, the selective deletion of NKCC1 from microglia affects their cell volume and baseline morphology and boosts cytokine production in response to inflammatory stimuli (Tóth et al., 2021). Interestingly, it has been recently reported that long-term potentiation triggers, in potentiated synapses, the withdrawal of perisynaptic astrocytic processes, a process which involves the NKCC1 transporter and the actin-controlling protein cofilin.
This favors glutamate spillover and NMDA-mediated inter-synaptic crosstalk, crucial for LTP and memory formation (Henneberger et al., 2020).

TRANSCRIPTIONAL AND POST-TRANSLATIONAL REGULATION OF NKCC1 AND KCC2

Among the transcriptional regulators of CCCs, a key factor is represented by Brain-Derived Neurotrophic Factor (BDNF) and its tropomyosin kinase B (TrkB) receptor. Employing transgenic embryos that overexpress BDNF under the control of the nestin promoter, Aguado et al. (2003) demonstrated that, in embryonic hippocampal slices, BDNF powerfully controls the developmental switch of GABAergic transmission. At the network level, by upregulating KCC2 expression, BDNF reduced [Cl−]i and GABAA-activated Ca2+ transients. Furthermore, in immature cultured hippocampal neurons, BDNF enhanced KCC2 mRNA and protein expression levels via ERK1/2-dependent upregulation of the Egr4 transcription factor (Ludwig et al., 2011a). BDNF can further increase KCC2 activation by promoting the localization at the membrane of already synthesized KCC2 in the developing brain (Khirug et al., 2010; Puskarjov et al., 2015; Awad et al., 2018). Egr4 mRNA expression can also be triggered by the trophic factor neurturin, which leads to the developmental upregulation of KCC2 in an ERK1/2-dependent way (Ludwig et al., 2011b). Furthermore, TrkB-deficient mice exhibit a reduced number of GABAergic synapses associated with decreased expression levels of KCC2, further indicating that BDNF is determinant for its expression (Carmona et al., 2006). These data are inconsistent with those reported by Puskarjov et al. (2015) on BDNF-deficient mice, in which no developmental changes in the GABA shift were detected. Although the lack of developmental upregulation of KCC2 in BDNF-null mice may be related to compensatory processes, the reason for this discrepancy is still unclear.

While the impact of BDNF/TrkB signaling on KCC2 expression at early stages of postnatal development has been well documented, the role of this neurotrophin in NKCC1 expression is still debated. A recent study by Badurek et al. (2020) has, however, unveiled that the selective deletion of TrkB from immature dentate granule cells (DGCs), at the time when these cells integrate into the hippocampal circuit, induces a premature shift of GABA from the depolarizing to the hyperpolarizing direction at mossy fiber-CA3 synapses, which at birth are GABAergic (Safiulina et al., 2006a). A dysfunction in BDNF/TrkB signaling leads to downregulation of NKCC1 expression and low [Cl−]i, in the absence of any effect on KCC2. In agreement with a previous study on immature neocortical neurons (Cancedda et al., 2007), the premature hyperpolarizing shift of GABA prevents the establishment of proper synaptic connectivity in targeted neurons, an effect that persists in adulthood (Badurek et al., 2020). However, how BDNF/TrkB signaling regulates the expression of the Cl− importer NKCC1 in immature DGCs remains to be elucidated. Another trophic factor that controls KCC2 expression and the developmental GABA switch is insulin-like growth factor 1 (IGF-1), which presumably requires the protein tyrosine kinase c-Src (Kelsch et al., 2001). It is worth noting that, independently of its action on CCCs, the BDNF/TrkB signaling pathway is instrumental in tuning hippocampal wiring at emerging GABAergic (Sivakumaran et al., 2009) and glutamatergic (Mohajerani et al., 2007) synapses during spike-timing-dependent plasticity, a Hebbian form of learning.
Interestingly, neuroligin 2 (NLG2), a cell adhesion molecule involved in regulating GABAergic synaptogenesis, has recently emerged as a key modulator of the developmental GABAergic switch. It was unexpectedly discovered that knocking down NLG2 leads to a reduced expression of KCC2, which is in turn associated with a delayed switch of GABA from the depolarizing to the hyperpolarizing direction. The down-regulation of KCC2 was accompanied by a reduced number of dendritic spines and glutamatergic synaptic events, suggesting that, in neural networks, NLG2 may serve as a master regulator of the delicate balance between glutamatergic and GABAergic functions (Sun et al., 2013).

Among the post-translational mechanisms controlling the activity and stabilization of CCC, protein phosphorylation represents the main functional substrate. KCC2 and NKCC1 are regulated in a reciprocal fashion by phosphorylation/dephosphorylation of threonine residues, targeted by WNK (With-No-Lysine) kinases, that are responsible for increasing or decreasing [Cl−]i levels, respectively (Ben-Ari et al., 2012; Kahle et al., 2013; Kaila et al., 2014). The reciprocal regulation is probably determined by similar four-amino-acid phosphorylation motifs that are present on the C- and N-terminal domains of KCC2 and NKCC1, respectively (Rinehart et al., 2009). Interestingly, early in postnatal life, oxytocin, a hypothalamic hormone known to promote parturition and lactation and to be involved in social behavior, regulates the GABA switch by upregulating the activity of KCC2, through the promotion of its phosphorylation at Ser940 and its insertion into the plasma membrane, without impairing NKCC1 (Leonzino et al., 2016).

INVOLVEMENT OF NKCC1 AND KCC2 IN MAINTAINING A PROPER RATIO BETWEEN EXCITATION AND INHIBITION WITHIN NEURONAL CIRCUITS

By regulating the direction of GABA action and therefore the efficacy of inhibition, NKCC1 and KCC2 contribute to setting a proper E/I balance within selective neuronal circuits (Figure 1B). A proper ratio between excitation (E) and inhibition (I), the so-called E/I balance, is thought to be critical for controlling spike rate and information processing. It requires precise connections through dynamic processes involving neurotransmitter receptors, transporters, scaffolding proteins, and the cytoskeleton. Using open-source software to map the distribution and morphology of excitatory and inhibitory synapses along the dendritic tree of layer 2/3 mouse cortical pyramidal neurons, together with computational modeling, Iascone et al. (2020) unveiled that E/I synapses are highly regulated by molecular mechanisms operating locally to generate a relatively invariant E/I ratio across dendritic segments. Failure to maintain a proper E/I balance within key neuronal circuits is thought to account for behavioral deficits observed in several neurological diseases (Yizhar et al., 2011; Lee et al., 2017; Ghatak et al., 2021). Reduced inhibition or excessive excitation may cause an increased signal-to-noise ratio, with consequent neuronal hyper-excitability and seizures. Conversely, enhanced inhibition may lead to a reduced signal-to-noise ratio and to a lower level of activity (Sohal and Rubenstein, 2019). Both conditions would affect information processing. Changes occurring at the synaptic and circuit levels would influence the interplay between GABAergic interneurons and targeted pyramidal cells, leading to altered temporal integration and abnormal rhythmogenesis.
In cortical circuits, the E/I balance plays a critical role in regulating the responses of neuronal circuits to sensory stimuli. Thus, in juvenile mice carrying the human R451C mutation of the gene encoding neuroligin 3 (an adhesion molecule essential for synaptic stabilization), found in some families with children affected by Autism Spectrum Disorders (ASDs), the impairment of GABA release from parvalbumin-positive (PV+) basket cells was found to severely alter the E/I balance in the layer IV neuronal microcircuit of the somatosensory cortex (Cellot and Cherubini, 2014a). This represents a critical issue, since PV+ cells, which are innervated by the same thalamic afferents that target excitatory layer IV spiny neurons, play a crucial role in sensory information processing, acting as an inhibitory gate for incoming thalamic inputs via feedforward disynaptic inhibition (Cellot and Cherubini, 2014a). Changes in this inhibitory gate may alter sensory processing in ASD patients, leading to misleading sensory representations with difficulties in combining pieces of information into a unified perceptual whole.

Although an E/I imbalance has been implicated in various brain disorders, this concept is rather broad and oversimplified. It should indeed be used with caution, particularly in view of our progress in understanding, thanks also to the development of optogenetics, the functional role of selective neuronal circuits in behavior. Both excitation and inhibition are not unidimensional entities but originate from multiple sources, which are dynamically regulated in space and time (He and Cline, 2019). Differences in local circuit connectivity can produce various levels of inhibition or disinhibition in different pathways. Distinct classes of cortical GABAergic interneurons may differently contribute to seizures, by suppressing or prolonging them (Khoshkhoo et al., 2017). Therefore, in these cases a therapeutic intervention directed against a particular type of interneuron will be more effective than one aimed at inhibition in general.

CCC dynamically regulate GABAA-mediated synaptic strength in an activity-dependent way (Woodin et al., 2003; Fiumelli and Woodin, 2007). Hence, brief synaptic stimulation, or pairing pre- and postsynaptic activity, induces long-term plasticity changes at GABAergic synapses, exhibiting a positive shift in EGABA mediated by a decrease in KCC2 function and an increase of [Cl−]i, with a consequent decline of synaptic inhibition (Balena and Woodin, 2008). Activity-dependent changes in EGABA require Ca2+ influx through voltage-gated calcium channels and NMDA receptors (Balena and Woodin, 2008). CCC are also very labile, and they can be disrupted in several neuropsychiatric disorders, particularly in those originating early in development such as ASDs, schizophrenia and epilepsy (Fiumelli and Woodin, 2007). In the following sections, an outline of the involvement of GABAergic signaling in these disorders will be discussed, in line with possible therapeutic interventions aimed at targeting CCC to re-establish a proper E/I balance at the synaptic level and/or proper neuronal connectivity at the circuit level.

AUTISM SPECTRUM DISORDERS

ASDs comprise a heterogeneous group of neurodevelopmental disorders characterized by impaired social interactions, deficits in verbal and non-verbal communication, restricted interests and stereotyped behaviors, with a high incidence (∼1/70 children) and a significant economic and social burden for families and society.
Impaired chloride homeostasis, with consequent changes in the direction of the GABA shift during time-sensitive windows, may account for behavioral alterations found in some animal models of ASDs, reminiscent of those observed in autistic patients (Pizzarelli and Cherubini, 2011; Cellot and Cherubini, 2014b). A GABA-mediated enhancement in network excitability may account for the high co-morbidity of ASDs with epilepsy (Frye et al., 2013; Bozzi et al., 2018; Sierra-Arregui et al., 2020) and for the paradoxical action exerted by benzodiazepines in some ASD patients (Marrosu et al., 1987). The E/I imbalance in selective brain areas may result either from the persistent depolarizing and excitatory action of GABA beyond the critical period (Tyzio et al., 2014; Corradini et al., 2018; Fernandez et al., 2019) (Figure 2A), or from the early hyperpolarizing action of this neurotransmitter following downregulation of the chloride importer NKCC1 (Figure 2B). As already mentioned, the early depolarizing and excitatory action of GABA is essential for shaping neuronal networks, as demonstrated by the morphological impairment of cortical pyramidal neurons (Cancedda et al., 2007) and the premature termination of interneuron migration (Bortone and Polleux, 2009) following precocious expression of KCC2 by electroporation. A reduced expression of NKCC1 at birth has been recently demonstrated in a novel genetic mouse model (Ntrk2/Trkb), in which the selective deletion of TrkB from immature dentate granule cells (DGCs) leads to a disruption of downstream circuits, associated with a severe impairment of synaptic plasticity and cognitive processes (Badurek et al., 2020). BDNF, via the TrkB signaling pathway, is known to play a crucial role in the maturation of inhibition, as demonstrated in cortical and hippocampal neurons (Huang et al., 1999; Yamada et al., 2002).

Interestingly, as shown in Figures 1A, 2A, at early stages of brain development GABAergic signaling is controlled by Reactive Oxygen Species (ROS; Safiulina et al., 2006b), which in ASDs are produced at high levels by abnormally reactive microglia, often associated with a dysfunction of the immune system sustained by a strong inflammatory state (Pangrazzi et al., 2020). The exact mechanisms by which microglia alter the strength of inhibition are still unclear. One possibility is that microglia interact with CCC via the BDNF-TrkB signaling pathway. Thus, BDNF released from microglia would cause disinhibition via downregulation of KCC2 (Rivera et al., 2002; Coull et al., 2005). This effect may involve KCC2 de-phosphorylation, with a consequent reduction of surface protein expression and increased protein turnover (Wake et al., 2007). In addition, neuroinflammation-associated ROS and inflammatory cytokines would activate NKCC1, thereby enhancing neuronal excitation (Alahmari et al., 2015) (Figure 2A). Therefore, it is reasonable to hypothesize that targeting CCC may allow, at least in some cases, improving cognitive deficits in ASDs by re-establishing correct GABAergic signaling in neuronal circuits.

[FIGURE 2 | Alterations in the developmental GABA shift lead to neuropsychiatric disorders. (A) The persistent depolarizing action of GABA beyond the critical period impairs network excitability and the E/I balance. Abnormally reactive microglia produce high levels of ROS, often associated with an inflammatory state caused by a dysfunction of the immune system. BDNF released from microglia contributes to downregulating KCC2, with a consequent enhancement of network excitability.
(B) The early hyperpolarizing action of GABA at birth, caused by the reduced expression of NKCC1 following deletion of TrkB from immature DGCs at the time when these cells integrate into the classical tri-synaptic pathway, severely alters the morphology and circuitry downstream of DGCs. In (A,B) both pre-synaptic (GABAergic interneurons, green) and post-synaptic (targeted cells, violet) elements are represented. Red arrows indicate the direction of Cl− flux through GABAA receptor channels.]

One simple approach to restore physiological levels of [Cl−]i is to reduce NKCC1 activity with the selective high-affinity antagonist bumetanide. Therefore, this drug has been extensively tested as a potential treatment for a variety of neuropsychiatric disorders (Ben-Ari, 2017). In both syndromic (Fragile X) and non-syndromic (valproic acid and maternal immune activation) animal models of ASDs, which are known to be associated with a dysfunction of GABAergic signaling (Tyzio et al., 2014; Corradini et al., 2018; Fernandez et al., 2019), bumetanide, via maternal administration, is able to reduce chloride accumulation and to rescue behavioral deficits by re-establishing an appropriate E/I balance in the brain of the offspring (Tyzio et al., 2014). Similarly, bumetanide (0.5-2 mg twice daily for 3 months) has been demonstrated to ameliorate cognitive functions in autistic children (Ben-Ari, 2017). Following a pilot study by Lemonnier and Ben-Ari (2010), this drug has been tested by the same group in two placebo-controlled randomized studies of 60 and 88 children, respectively (Lemonnier et al., 2012, 2017). These and other studies on autistic children from different countries (Bruining et al., 2015; Hajri et al., 2019; Zhang et al., 2020; Fernell et al., 2021) have demonstrated beneficial effects of bumetanide on cognitive functions, as assessed by the Childhood Autism Rating Scale (CARS), the Clinical Global Impressions Improvement Scale (CGI-I), the Social Responsiveness Scale (SRS), and the Aberrant Behavior Checklist (ABC), with only a few minor side effects (such as mild hypokalaemia, loss of appetite, diuresis, dehydration, and asthenia). In one clinical trial, bumetanide was combined with Applied Behavior Analysis (ABA) training. In this case, more positive results were obtained in children treated with bumetanide and ABA with respect to those treated with ABA alone (Du et al., 2015). In adolescents with ASDs, chronic treatment with bumetanide has been shown to significantly improve the visual recognition of emotive figures and to reduce amygdala activation in response to eye contact (Hadjikhani et al., 2015). This would allow increasing the time spontaneously spent looking in the eyes to acquire the necessary information for social processing (Hadjikhani et al., 2018). However, in a recent double-blind randomized study of 92 participants (Sprengers et al., 2021), bumetanide did not differ from placebo on sociability effects (assessed by SRS) but, unlike placebo, exerted clear positive effects on repetitive behavior, a core symptom of ASDs (measured with the Repetitive Behavior Scale-Revised). Although the reason for the discrepant results between these studies is unclear, the possibility that the drug may be effective only in some forms of autism whose symptoms are more related to a GABAergic dysfunction cannot be excluded.
The high heterogeneity among ASD patients may explain why a large phase 3 clinical trial performed in 50 centers from 14 different countries failed to reach significant differences between bumetanide- and placebo-treated children, as recently announced by a press release from Servier and Neurochlore (France). Bumetanide has also been reported to attenuate autistic traits, but not seizures, in patients with Tuberous Sclerosis (Van Andel et al., 2020). Overall, bumetanide has been shown to exert a symptomatic action, mitigating, at least in a subpopulation of autistic children, the severity of symptoms, which reappear after interruption of the treatment. A limitation, however, in using bumetanide for treating neurodevelopmental disorders lies in the fact that this drug has poor pharmacokinetic properties and a low capability to cross the blood-brain barrier (BBB) to reach either neuronal or non-neuronal targets (Puskarjov et al., 2014a; Virtanen et al., 2020; Löscher and Kaila, 2021; Tóth et al., 2021). Moreover, an active efflux of bumetanide from the brain to the blood, which involves several transporters expressed at the BBB, including organic anion transporters, contributes to maintaining a very low concentration of the drug in the brain (Römermann et al., 2017). This raises the possibility that the observed beneficial effects of bumetanide are related to some still unknown peripheral-central type of communication. An alternative approach to attenuate the E/I imbalance in ASDs is to use KCC2 activators to selectively enhance the activity of KCC2, which is a key player in Cl− homeostasis and, unlike NKCC1, is expressed mainly by neurons (Schulte et al., 2018; Virtanen et al., 2021). Targeting KCC2 specifically may prevent the adverse side effects occurring in non-neuronal tissues or in other organs, as in the case of NKCC1. Among these compounds, the KCC2 activator CLP257 was able, by lowering [Cl−]_i, to restore chloride transport in neurons with reduced KCC2 activity and to alleviate hypersensitivity in a rat model of neuropathic pain (Gagnon et al., 2013). KCC2 expression is known to be upregulated by phosphatases, insulin-like growth factor 1 (IGF-1), and 5-hydroxytryptamine type 2A (5-HT_2A) receptors (Kelsch et al., 2001; Bos et al., 2013; Baroncelli et al., 2017). More recently, by generating a robust high-throughput drug screening platform that allows for the rapid assessment of KCC2 gene expression in genome-edited human reporter neurons, Tang et al. (2019) identified a group of small molecules that are able to increase KCC2 expression (KCC2 Expression-Enhancing Compounds, or KEECs). In an animal model of Rett syndrome (the MeCP2 mutant mouse), which exhibits reduced KCC2 activity, these molecules were able, by enhancing KCC2 expression levels, to restore a proper E/I balance and to ameliorate disease-associated respiratory and locomotion phenotypes. This study did not address whether KEECs can ameliorate other deficits in the mutant mice, in particular social behavioral deficits. Nevertheless, these very promising results pave the way to test, at preclinical and clinical levels, whether these drugs may restore cognitive functions in other neurodevelopmental disorders associated with [Cl−]_i imbalance.
SCHIZOPHRENIA
Schizophrenia is a debilitating psychiatric illness affecting 0.5-1% of the global population, which is characterized by positive (hallucinations and delusions), negative (lack of communication, social interaction, and motivation), and cognitive symptoms (Lewis and Lieberman, 2000).
Although positive symptoms are the most notable feature of this illness, cognitive disturbances are typically present before the onset of psychosis and are the best predictor of long-term functional outcome (Kahn and Keefe, 2013). The affected domains of cognition include working memory, executive function, learning and long-term memory, visual/auditory perception, and attention (Carter et al., 2008). In particular, working memory impairment is thought to be a core feature of schizophrenia, because it can influence all other observed cognitive deficits (Silver et al., 2003). Working memory function is associated with oscillatory activity in the gamma frequency range (30-80 Hz) in the prefrontal cortex. The power of gamma oscillations in the prefrontal cortex normally increases in proportion to working memory load (Howard et al., 2003; Jensen et al., 2007), but in individuals with schizophrenia this increase is reduced (Uhlhaas and Singer, 2010). Moreover, these deficits have been detected both in individuals with chronic illness (Cho et al., 2006) and in subjects with a first episode of psychosis (Minzenberg et al., 2010), suggesting that working memory impairments and disrupted gamma power reflect the disease process of schizophrenia and are not due to chronic illness or the use of antipsychotic medications. Gamma oscillations refer to the synchronous firing of large ensembles of excitatory glutamatergic pyramidal neurons within and across brain regions, which is paced by GABAergic interneurons (Gonzalez-Burgos and Lewis, 2008). In particular, fast-spiking PV+ GABAergic interneurons, connected by gap junctions, shape, via feed-forward inhibition, the spatial and temporal profile of pyramidal cell firing, thereby functionally impacting information processing (Hu et al., 2014). The coordinated activity of glutamatergic and GABAergic neurons in triggering gamma activity is commonly referred to as pyramidal interneuron network gamma (PING) (Gonzalez-Burgos and Lewis, 2008). The role of GABAergic cells in the generation and modulation of gamma oscillations is strongly supported by numerous studies using pharmacology- and, more recently, optogenetics-based manipulations both in vitro and in vivo (Whittington et al., 1995; Traub et al., 1996; Fuchs et al., 2007; Cardin et al., 2009; Sohal et al., 2009). It is thus reasonable to hypothesize that the altered gamma oscillation dynamics observed in schizophrenic individuals may be caused, at least in part, by functional abnormalities in GABA neurotransmission in the prefrontal cortex. Another observation supporting a putative role for altered GABAergic transmission in schizophrenia is the frequent comorbidity of schizophrenia and epilepsy. In fact, both individual and family histories of epileptic disorders appear to be associated with schizophrenia (Qin et al., 2005). A Danish population-based cohort study found that a history of juvenile febrile seizures was associated with a 44% increased risk of schizophrenia, while a history of both febrile seizures and epilepsy was associated with a 204% increased risk of schizophrenia. Overall, it has been estimated that 2-9% of epileptic patients are also diagnosed with schizophrenia, compared with an estimated 1% prevalence in the general population (Clarke et al., 2012; Schizophrenia Working Group of the Psychiatric Genomics Consortium, 2014).
This comorbidity may be explained by a common genetic architecture between the two disorders; however, this hypothesis needs to be explored more extensively by systematic genome sequencing of patients affected by both epilepsy and schizophrenia. Another potential cause of the comorbidity between epilepsy and schizophrenia might be environmental (or non-genetic). For example, neonatal seizures due to global cerebral hypoxia or intracranial hemorrhage are a strong risk factor for the later development of both epilepsy and cognitive impairment (Tekgul et al., 2006). Independently of the underlying causes, and as is the case for ASDs, it has been suggested that alterations in GABAergic transmission may account for the comorbidity of schizophrenia with epilepsy (Kalkman, 2011). Overall, numerous in vivo imaging and postmortem studies have consistently revealed alterations in GABA levels and in components of GABAergic circuits in the prefrontal cortex of schizophrenic subjects compared to control cohorts, in particular regarding PV+ basket and chandelier GABAergic cells (extensively reviewed by Dienel and Lewis, 2019). As described above, one of the factors regulating the function of GABA neurotransmission is its nature, which can be hyperpolarizing, depolarizing, or shunting depending on the flow of Cl− ions through GABA_A receptor channels once these are activated. This has prompted researchers to look at the reversal potential of GABA and/or the expression of the NKCC1 and KCC2 cation-chloride co-transporters and their associated regulatory pathways in mouse models of schizophrenia and in post-mortem human tissues. In subchronic phencyclidine (scPCP)-treated mice, a well-studied animal model mimicking the cognitive impairment symptoms associated with schizophrenia (Jentsch and Roth, 1999; Steeds et al., 2015), Kim et al. (2021) found that the reversal potential of GABA (E_GABA) recorded from pyramidal neurons in the infralimbic cortex (the more ventral portion of the prefrontal cortex) was more positive in scPCP-treated mice than in those treated with vehicle. This effect was highly specific, since it was not found in the prelimbic cortex, which is also part of the prefrontal cortex and sits just above the infralimbic cortex. The change in E_GABA in scPCP mice led to a depolarizing and excitatory action of the neurotransmitter, with a consequent increase in the firing of infralimbic cortex pyramidal neurons that, following a 20 Hz stimulation, generated twice as many action potentials as those from vehicle-treated mice. Using RNAscope in situ hybridization, they further reported that NKCC1 mRNAs were increased in infralimbic, but not in prelimbic, neurons of 5-week-old scPCP mice, while KCC2 mRNAs were not altered. Based on this observation, Kim et al. (2021) hypothesized that limiting NKCC1 function might rescue the electrophysiological and behavioral phenotypes. In keeping with this hypothesis, their data showed that intraperitoneal or focal intracortical injection of the selective NKCC1 blocker bumetanide, before initiating behavioral testing, ameliorated the performance of scPCP mice on different behavioral tasks assessing declarative memory, working memory, and executive function, more specifically in the novel object recognition, Y-maze spontaneous alternation, and operant reversal learning tests.
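To illustrate why an NKCC1-driven rise in intracellular chloride shifts E_GABA in the depolarizing direction, consider the Nernst potential for Cl−. The worked numbers below are standard textbook values chosen for illustration, not measurements from the study described above:

```latex
% Nernst potential for Cl- (z = -1); RT/F ~ 26.6 mV near 35 C:
\begin{align*}
E_{\mathrm{Cl}} &= \frac{RT}{zF}\ln\frac{[\mathrm{Cl}^-]_o}{[\mathrm{Cl}^-]_i}
  = -26.6\,\mathrm{mV}\times\ln\frac{[\mathrm{Cl}^-]_o}{[\mathrm{Cl}^-]_i}\\
\text{mature neuron: } [\mathrm{Cl}^-]_i \approx 7\,\mathrm{mM},\;
  [\mathrm{Cl}^-]_o \approx 130\,\mathrm{mM}
  &\;\Rightarrow\; E_{\mathrm{Cl}} \approx -78\,\mathrm{mV}\\
\text{NKCC1-loaded neuron: } [\mathrm{Cl}^-]_i \approx 15\,\mathrm{mM}
  &\;\Rightarrow\; E_{\mathrm{Cl}} \approx -57\,\mathrm{mV}
\end{align*}
```

Doubling [Cl−]_i thus moves E_Cl above a typical resting potential of about −70 mV, so GABA_A receptor activation depolarizes rather than hyperpolarizes the cell, which is the direction of change reported in the scPCP infralimbic neurons.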
Since bumetanide exhibits low brain penetration (Römermann et al., 2017), it has frequently been questioned whether its effects on behavior in different disease animal models were dependent on its specific inhibitory effect on NKCC1 in the brain or rather due to secondary, unspecific effects. In the scPCP mouse model, short hairpin RNAi-mediated downregulation of endogenous NKCC1 mimicked the effects of bumetanide on behavior, thereby supporting the hypothesis that a more depolarized GABA reversal potential in infralimbic cortex pyramidal neurons plays a role in the cognitive deficits observed in this mouse model. Further evidence supporting the occurrence of a dysregulated Cl− balance in schizophrenia-like animal models comes from the study of the sandy mouse, which carries a dysbindin null mutation. Dysbindin is encoded by DTNBP1. DTNBP1 polymorphisms are considered risk factors for schizophrenia onset (Straub et al., 2002; Van Den Bogaert et al., 2003), even if a consensus has not yet been reached (Schizophrenia Working Group of the Psychiatric Genomics Consortium, 2014; Farrell et al., 2015). Recent transcriptome studies showed that, in addition to alterations associated with PV+ GABAergic cells, dysbindin mutant mice show a reduction of both NKCC1 and KCC2 mRNA levels at the late embryonic stage (Larimore et al., 2017). In humans, clinical genetic studies support an association between NKCC1 (encoded by the gene SLC12A2) and schizophrenia. By correlating blood oxygen level-dependent (BOLD) signals in the prefrontal cortex of schizophrenia patients and controls during a working memory task with genotyping data from a genome-wide single nucleotide polymorphism (SNP) array, Potkin et al. (2009) found that the BOLD signal significantly correlated with the presence of two SNPs within the SLC12A2 gene. Kim et al. (2012) analyzed two independent case-control datasets of patients with schizophrenia and healthy controls and found that SNPs in Disrupted-in-Schizophrenia 1 (DISC1) and SLC12A2 interact epistatically to affect the risk for schizophrenia. Finally, one recent study identified a novel heterozygous NKCC1 missense variant in a French-Canadian cohort of schizophrenic patients; functional studies showed that this variant leads to a gain of function of the transporter (Merner et al., 2016). Conversely, post-mortem studies in human tissue have produced conflicting results. Hyde et al. (2011) found a reduction of KCC2-coding mRNA in the hippocampus, but not the dorsolateral prefrontal cortex, of schizophrenic patients. The same study did not find any significant difference in full-length NKCC1-coding transcripts. On the other hand, Sullivan et al. (2015) reported decreased KCC2 protein expression, by western blot, in the dorsolateral prefrontal cortex, but not the anterior cingulate cortex, of subjects with schizophrenia. Of note, this group also reported that individuals with schizophrenia who were off antipsychotic medication at the time of death had decreased KCC2 protein expression compared to both normal controls and subjects with schizophrenia on antipsychotics, suggesting that antipsychotics may affect KCC2 expression levels. Other studies found illness-associated alterations in the expression of NKCC1 transcripts. In particular, Dean et al. (2007) reported an upregulation of NKCC1 mRNA in prefrontal cortex tissue from schizophrenia cases compared to controls. On the other hand, Morita et al.
(2014) analyzed the expression of different NKCC1-coding transcripts produced by alternative splicing and reported reduced expression of the shorter variants of NKCC1 transcripts (NKCC1b and 1-2a) in the dorsolateral prefrontal cortex of subjects with schizophrenia compared to a control cohort. Consistently, Zhang et al. (2021) reported reduced NKCC1 mRNA levels in peripheral blood mononuclear cells of patients presenting with a first episode of schizophrenia, and thus not under the effects of antipsychotic medications. Finally, Arion and Lewis (2011) analyzed the mRNA expression levels of NKCC1 and KCC2 and of their associated regulatory kinases STK39, OXSR1, WNK1, WNK3, and WNK4 in the dorsolateral prefrontal cortex of subjects with schizophrenia compared to non-psychiatric subjects and reported no differences in the expression of either KCC2 or NKCC1 transcripts. However, they found an overexpression of OXSR1 and WNK3 transcripts in schizophrenia. These alterations in transcript levels were consistent across subjects with the illness. In addition, they were not altered in monkeys chronically exposed to antipsychotic medications. Together, these findings suggest that the observed differences in OXSR1 and WNK3 levels in schizophrenic individuals were not caused by other factors (medications, substance abuse, etc.). WNK3 is a potent activator of NKCC1, while it can inhibit KCC2 activity (Kahle et al., 2005; de Los Heros et al., 2006). OXSR1 binds to and phosphorylates NKCC1, resulting in an increase of NKCC1 activity (Vitari et al., 2006). Therefore, if the increased OXSR1 and WNK3 mRNA levels translate into increased protein levels and thus kinase activity, this would lead to enhanced NKCC1 and decreased KCC2 activity, respectively, steering neurons toward a predicted higher intracellular Cl− concentration (Arion and Lewis, 2011). The discrepancies in the findings reported by these studies could be explained in part by differences in the technical approaches used to analyze NKCC1/KCC2 levels and in the medication history of the subjects, by the relatively small sample size of each study, and by the heterogeneity of the disease. It is possible that the Cl− imbalance is restricted to specific brain regions or neuronal circuits, and thus that only some types of behavioral abnormalities may be affected by targeting the Cl− imbalance. More likely, given the heterogeneity of the disease, a Cl− imbalance mediated by dysregulation of chloride transporters may not be a common pathogenic mechanism in all patients, but may affect only a subset. If the latter hypothesis is correct, then it is essential to develop and characterize reliable in vivo biomarkers for measuring the excitation/inhibition imbalance with high spatial and temporal resolution during cognitive tasks. These biomarkers would help stratify patients into different populations and identify those with a higher probability of responding to Cl− balance-targeting medications.
EPILEPSY
Epilepsy is a neurological condition characterized by recurrent spontaneous seizures that can be classified, perhaps in an oversimplified manner, as primary generalized (e.g., those occurring in absence epilepsy) or focal (with possible secondary generalization), such as those observed in patients presenting with mesial temporal lobe epilepsy (MTLE) or focal cortical dysplasia (Avoli and Gloor, 1987). Here, we will mainly address the role of GABA_A receptor signaling in the epileptiform synchronization of limbic networks.
Accordingly, focal seizures recorded in MTLE patients arise from limbic structures such as the hippocampus, the amygdala, and the rhinal cortices (Gloor, 1997). Note that an epileptic brain with focal abnormalities generates interictal spikes (i.e., short electrographic events <2 s in duration, which are not accompanied by any detectable clinical symptom) (arrows and asterisks in Figures 3Aa,B) and ictal discharges or seizures (i.e., abnormal, hypersynchronous activity lasting up to several tens of seconds, which disrupts normal brain function) (continuous lines in Figures 3Ab,B) (Avoli and Gloor, 1987). Epilepsy is believed to result from a pathological shift of the E/I balance toward excitation. Accordingly, in the 1950s, clinical studies reported that interfering with GABA synthesis leads to convulsions (Coursin, 1954). In the following decades, in vitro experiments demonstrated that several convulsive drugs are GABA_A receptor antagonists (Schwartzkroin and Prince, 1980). In addition, in vivo studies have highlighted a decrease of GABAergic inhibition shortly before the onset of electrographic seizures recorded from the hippocampus (Ben-Ari et al., 1979) or neocortex (Kostopoulos et al., 1983).
FIGURE 3 | Epileptiform activity induced by 4AP in juvenile rat hippocampal slices maintained in vitro. (A) Field potential recordings obtained during perfusion with medium containing 4AP from the CA3 subfield of (a) adult and (b) young (14-day-old) brain slices. Note that only fast (arrows) and slow (asterisk) interictal discharges are recorded in adult brain slices, while both ictal (continuous line) and interictal epileptiform discharges occur in the young hippocampus; note also in (b) that the slow (asterisk) interictal discharge leads to the onset of a prolonged ictal event. (B) Field and intracellular (Intra) recordings obtained during perfusion with medium containing 4AP from the CA3 area of a 22-day-old rat hippocampal slice; arrows point to the fast interictal spikes, while the asterisk identifies the slow interictal discharge that initiates the ictal discharge (thick line). Selected portions of the intracellular recording are illustrated at an expanded time base in the panels below; there, small arrows and dots identify depolarizing potentials and fractionated spikes, respectively. (C) Effects induced by application of non-NMDA (CNQX) and NMDA (CPP) receptor antagonists on the 4AP-induced epileptiform activity recorded with field and K+-selective microelectrodes from the CA3 subfield of a 19-day-old rat hippocampal slice. Note that this pharmacological procedure blocks the fast interictal events as well as the short-lasting ictal discharge; however, the elevation in extracellular [K+] that correlates with the negative-going slow field potential is not modified. (D) Effects of the mu-opioid receptor agonist DAGO on the synchronous activity induced by 4AP in the CA3 subfield of a 15-day-old rat; note that DAGO abolishes the negative-going slow field potential along with the subsequent ictal discharge, while fast interictal spikes continue to occur. [(A) is modified from Fueta and Avoli, 1992; (B) is modified from Avoli et al. (1993); (C,D) are modified from Avoli et al. (1996b)].
These findings were in line with those indicating that the functional disconnection of interneurons from excitatory inputs impairs inhibition in animal models of focal seizures (Sloviter, 1987), and that alterations in GABA transporter function or in GABA_A receptor subunit composition occur in patients affected by focal epileptic disorders and in animal models mimicking them (McDonald et al., 1991; Johnson et al., 1992; Williamson et al., 1995; Brooks-Kayal et al., 1998). Therefore, in the 1990s, a weakening of inhibition was considered a conditio sine qua non for the occurrence of focal seizures and thus for the establishment of epilepsy. However, subsequent studies have challenged this view, since epileptiform activity can occur in hippocampal slices maintained in vitro during pharmacological manipulations that do not interfere with GABA_A receptor-mediated inhibition and even enhance it. These experimental procedures include the application of Mg2+-free medium (Mody et al., 1987; Derchansky et al., 2008) or of K+ channel blockers such as 4-aminopyridine (4AP) (Voskuyl and Albus, 1985; Rutecki et al., 1987; Avoli, 1991, 1992). As illustrated in Figure 3Aa, field potential recordings obtained from the CA3 subfield of adult rat hippocampal slices treated with 4AP have revealed the occurrence of two distinct types of interictal activity. The first type (arrows in Figure 3Aa), also identified as fast, occurs frequently, is driven by the CA3 network, and is abolished by non-NMDA glutamatergic receptor antagonists. The second type (asterisks in Figure 3Aa) has a lower rate of occurrence, can initiate in any hippocampal area, continues to occur when ionotropic glutamatergic excitatory transmission is blocked, and is abolished by GABA_A receptor antagonists (Avoli, 1991, 1992), suggesting its GABAergic origin; hence, these 4AP-induced interictal events have often been referred to as slow GABAergic spikes (Avoli, 1991, 1992). It has also been found that, in hippocampal slices obtained from young (10-24-day-old) rats, 4AP induces ictal discharges (Chesnut and Swann, 1988; Avoli, 1990), which are preceded by a slow GABAergic spike (Figure 3Ab, continuous line and asterisk, respectively). Shortly thereafter, intracellular recordings from CA3 pyramidal cells in young hippocampal slices demonstrated that this initial GABAergic spike is characterized by a prolonged depolarization, associated with sparse, fractionated (presumably ectopic) action potentials, leading to the generation of repetitive bursts of action potentials once the ictal activity is established (Avoli et al., 1993) (Figure 3B). The mechanistic link between such an initial synchronous GABAergic event and the onset of electrographic seizure activity in the young rodent CA3 area was supported by findings obtained in subsequent studies. As illustrated in Figure 3C, the initial GABAergic spikes were mirrored by sizeable elevations in extracellular [K+] that were not influenced by application of ionotropic glutamatergic receptor antagonists (Avoli et al., 1996b; see also Figure 5). In addition, GABA_A receptor antagonists, or pharmacological procedures interfering with GABA signaling such as the application of a µ-type opioid receptor agonist (Capogna et al., 1993), abolished both the initial GABAergic spike and the subsequent ictal activity, which was replaced by ongoing, short-lasting interictal spikes (Figure 3D) (Avoli et al., 1996b).
FIGURE 4 | Epileptiform activity induced by 4AP in the adult entorhinal cortex in in vitro brain slices. (A) Simultaneous field (Field) and intracellular (Intra) recordings obtained from the adult rat entorhinal cortex in vitro during bath application of 4AP; the recordings shown in the insert were obtained from a different neuron that was depolarized by intracellular current injection to a steady membrane potential of approximately −62 mV. (B) Low-voltage fast onset ictal discharges occur spontaneously during bath application of 4AP in the mouse entorhinal cortex (a) but are also triggered by optogenetic stimulation of parvalbumin-positive interneurons (b). The onset of the ictal discharges occurring under each condition is further expanded in the inserts below; note that the ictal discharges are preceded by one or two slow interictal spikes (arrows) under both spontaneous and stimulated conditions. (C) Field potential recordings obtained from the rat entorhinal cortex under control conditions (4AP) and 20 min after bath application of the carbonic anhydrase inhibitor acetazolamide (ACTZ, 10 µM); note that this pharmacological procedure greatly reduces the duration of the ictal discharges. (D) Effects of the KCC2 blocker VU0463271 on the 4AP-induced field potential activity recorded from the rat entorhinal cortex; note that addition of VU0463271 induces a pattern of continuous interictal spikes in the field potential recording. Enlarged examples of the field potential recordings are shown below. [(A) is modified from Lopantsev and Avoli (1998)].
Overall, these studies have demonstrated that during application of 4AP, the slow GABAergic spikes mostly reflect the synchronous activation of postsynaptic GABA_A receptors, which leads to increases in extracellular [K+] sufficiently large to trigger ictal activity (Morris et al., 1996; Lamsa and Kaila, 1997). Originally, it was proposed that such a mechanism only occurs in the juvenile hippocampus because of a decreased homeostasis of extracellular [K+] at this early stage of brain maturation (Avoli et al., 1993, 1996b). This hypothesis was, however, not confirmed in subsequent experiments performed in adult rodent brain slices that could include the entorhinal, perirhinal, piriform, and insular cortices, or the amygdala (see for review, Avoli and de Curtis, 2011). In all these cortical structures, 4AP was able to induce slow interictal spikes along with ictal discharges that were often preceded by a single (also termed "sentinel") slow GABAergic event (Figure 4A) (see also data obtained from the guinea pig whole brain preparation, Carriero et al., 2010). Seizures with a similar electrographic onset have been recorded with intracranial electrodes in the brains of adult patients affected by focal epileptic disorders, including MTLE (Perucca et al., 2014), as well as in in vivo animal models (Lévesque et al., 2012). These ictal discharges have been termed low-voltage fast onset seizures, since they initiate with a pattern of low-voltage oscillatory activity in the beta-gamma range (see for review, Avoli et al., 2016). As illustrated in Figure 4A, the onset of an ictal discharge recorded in vitro in the adult rat entorhinal cortex in the presence of 4AP coincides with a principal cell depolarization that, as observed in young rat CA3 pyramidal neurons (cf. the insert in Figure 3B), is associated with a few "fractionated" (and thus presumptive ectopic) action potentials.
Moreover, this initial depolarizing event becomes hyperpolarizing when the neuronal membrane potential is depolarized to values less negative than −60 mV by injecting a steady depolarizing current (Figure 4A; Lopantsev and Avoli, 1998). Indeed, it was known at that time that activation of GABA_A receptors can depolarize cortical neurons, since these receptors are permeable not only to Cl− but also to HCO3−, which has an equilibrium potential more positive than that of Cl− (Grover et al., 1993; Kaila, 1994). Similar intracellular patterns were also recorded at the onset of 4AP-induced ictal discharges in principal cells of the amygdala in an in vitro slice preparation (Benini et al., 2003). These results, therefore, demonstrate that, paradoxically, the initiation of electrographic seizures occurring in vitro coincides with, and thus may be caused by, a robust synchronous inhibitory event. In line with this hypothesis, studies performed in several laboratories have shown that interneurons fire at the onset of these ictal discharges and that GABA_A receptor antagonists can abolish them (see for review, Avoli and de Curtis, 2011; Avoli et al., 2016). Intense interneuronal firing, leading to GABA release and subsequent excessive activation of GABA_A receptors, as identified at the onset of 4AP-induced ictal discharges, also coincides in the adult entorhinal cortex with elevations of extracellular [K+] (Avoli et al., 1996a; Librizzi et al., 2017). In turn, these transient increases in extracellular [K+] depolarize, and thus recruit, neighboring neurons into the synchronous firing that is associated with the ongoing ictal activity (see for review: Avoli and de Curtis, 2011; Avoli et al., 2016). The essential role of interneurons in initiating 4AP-induced ictal discharges was later confirmed by optogenetic studies performed in several laboratories in the entorhinal (Shiri et al., 2015, 2016; Yekhlef et al., 2015) and somatosensory cortex (Chang et al., 2018). As shown in Figure 4B, optogenetic activation of parvalbumin- or somatostatin-positive interneurons initiates ictal discharges with an onset pattern similar to that identified in spontaneously occurring events.
FIGURE 5 | (A) GABA, abundantly released from GABAergic interneurons (green) that make synapses with a principal cell (yellow), binds to GABA_A receptor channels, which are permeable to chloride and HCO3−. The HCO3− permeability shifts the equilibrium potential for GABA (E_GABA) toward values more positive than E_Cl. The high firing rate of GABAergic interneurons, electrically coupled via gap junctions, leads to the accumulation of K+ in the extracellular space (high [K+]_o), which is instrumental in triggering ictal activity. By producing a further shift of E_GABA toward more positive potentials, high [K+]_o contributes to weakening inhibition. In addition, activation of GABA_A receptors causes an inward flux of Cl− that activates the cation-chloride exporter KCC2; by extruding both Cl− and K+, this further enhances neuronal excitability by further raising [K+]_o (B). (C) Field potential recording of a slow GABAergic event due to the synchronous activation of GABA_A receptors (slow GABAergic spike) preceding the elevation of extracellular K+ measured with ion-sensitive electrodes in the presence of the AMPA and NMDA receptor antagonists CNQX and CPP, respectively [modified from Avoli et al. (1996a)].
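The HCO3−-driven positive shift of E_GABA invoked above, and in Figure 5A, can be made quantitative with the Goldman-Hodgkin-Katz (GHK) voltage equation for a channel permeable to two anions. The following is a worked illustration under assumed textbook concentrations and an assumed relative permeability P_HCO3/P_Cl ≈ 0.3; none of these values are taken from the studies cited above:

```latex
% GHK reversal potential for a channel permeable to two anions (z = -1);
% for anions, intracellular concentrations enter the numerator:
\begin{align*}
E_{\mathrm{GABA}} &= \frac{RT}{F}\,
  \ln\frac{[\mathrm{Cl}^-]_i + r\,[\mathrm{HCO}_3^-]_i}
          {[\mathrm{Cl}^-]_o + r\,[\mathrm{HCO}_3^-]_o},
  \qquad r \equiv \frac{P_{\mathrm{HCO}_3}}{P_{\mathrm{Cl}}}\\
&\approx 26.6\,\mathrm{mV}\times
  \ln\frac{7 + 0.3\times 15}{130 + 0.3\times 25}
  \;\approx\; -66\,\mathrm{mV}
\end{align*}
```

With these numbers, E_GABA sits around −66 mV, well above E_Cl ≈ −78 mV, so a neuron held at potentials less negative than about −60 mV responds to GABA with a hyperpolarization, as in the current-injection experiment of Lopantsev and Avoli (1998) described above.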
Specifically, the onset of both spontaneous and optogenetically induced ictal discharges is characterized by one or two interictal-like spikes that are followed by fast, beta-gamma oscillations, i.e., the hallmark of low-voltage fast ictal discharges. In line with the participation of inhibitory cells in the initiation of this type of seizure activity, single-unit recordings from the seizure onset zone in epileptic patients have shown that GABAergic interneurons increase their firing rate during the onset of low-voltage fast seizures earlier than excitatory cells (see for review, Weiss et al., 2019). Of note, 4AP-induced electrographic seizures associated with an increase of extracellular [K+], presumably due to the high firing rate of interneurons resulting in the activation of postsynaptic GABA_A receptors by the massive release of GABA, occur in neocortical slices obtained from epileptic patients with Taylor-type focal cortical dysplasia (D'Antuono et al., 2004; Gigout et al., 2006) but not in brain slices from epileptic patients with no obvious structural abnormalities (Louvel et al., 2001). As mentioned above, findings obtained from the in vitro 4AP model point to the synchronous postsynaptic activation of GABA_A receptors, leading to membrane depolarizations (to which HCO3− efflux contributes; Figure 5A; Grover et al., 1993; Kaila, 1994) and to sizable increases in extracellular [K+] (Figure 5C; Avoli et al., 1996a,b; Morris et al., 1996; Lamsa and Kaila, 1997), as the fundamental mechanism for triggering low-voltage fast ictal activity (Avoli and de Curtis, 2011; Avoli et al., 2016). In line with this hypothesis, application of the carbonic anhydrase inhibitor acetazolamide reduced the duration and the interval of occurrence of ictal discharges induced by 4AP in the piriform and entorhinal cortices (Hamidi and Avoli, 2015a) (Figure 4C). Note also that Zuckermann and Glaser (1968) discovered over 50 years ago that elevations of [K+]_o induce neuronal hyperexcitability (Figure 5B) and seizures. Subsequent studies have demonstrated that increased extracellular [K+] weakens inhibition by causing a positive shift of the reversal potential of GABA_A receptor-mediated inhibitory currents (Jensen et al., 1993), and leads to neuronal network resonance, which generates oscillatory patterns in the beta-gamma range (Bartos et al., 2007). It has also been established that activation of GABA_A receptors leads to the accumulation of intracellular [Cl−], which in turn activates KCC2, which extrudes both Cl− and K+ from the intraneuronal compartment (Figure 5A; Viitanen et al., 2010). Therefore, the activity of KCC2 may play a role in seizure generation and epileptogenesis (see for review: Di Cristo et al., 2018). In line with this view, 4AP-induced electrographic ictal discharges recorded from the entorhinal and piriform cortices in vitro can be abolished or facilitated by inhibiting or enhancing the activity of KCC2, respectively (Hamidi and Avoli, 2015b; Chen et al., 2019). As illustrated in Figure 4D, the KCC2 antagonist VU0463271 transformed the dynamic pattern of 4AP-induced interictal and ictal activity into a continuous pattern of interictal-like epileptiform events. Similar effects (i.e., the induction of recurrent, robust spiking in both the 0-Mg2+ and 4AP in vitro models) have been reported by Moss' laboratory in the presence of KCC2 antagonists (Kelley et al., 2016; Moore et al., 2018).
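The dual behavior of KCC2 invoked above (Cl− extrusion that simultaneously exports K+, while high [K+]_o can stall the transporter) follows from the thermodynamics of an electroneutral 1K+:1Cl− cotransporter: net extrusion proceeds only while [K+]_i[Cl−]_i exceeds [K+]_o[Cl−]_o. A worked illustration with assumed textbook concentrations ([K+]_i = 140 mM, [Cl−]_o = 130 mM), not values from the studies cited above:

```latex
% Equilibrium condition for electroneutral K-Cl cotransport (KCC2);
% net Cl- efflux stops when the inner and outer K.Cl products are equal:
\begin{align*}
\Delta G &= RT \ln\frac{[\mathrm{K}^+]_o\,[\mathrm{Cl}^-]_o}
                        {[\mathrm{K}^+]_i\,[\mathrm{Cl}^-]_i}
  \;\Rightarrow\;
  [\mathrm{Cl}^-]_i^{\mathrm{eq}} = [\mathrm{Cl}^-]_o\,
  \frac{[\mathrm{K}^+]_o}{[\mathrm{K}^+]_i}\\
[\mathrm{K}^+]_o = 3\,\mathrm{mM} &\Rightarrow
  [\mathrm{Cl}^-]_i^{\mathrm{eq}} \approx 130\times\tfrac{3}{140}
  \approx 2.8\,\mathrm{mM};\qquad
[\mathrm{K}^+]_o = 12\,\mathrm{mM} \Rightarrow
  [\mathrm{Cl}^-]_i^{\mathrm{eq}} \approx 11\,\mathrm{mM}
\end{align*}
```

A rise of [K+]_o from 3 to 12 mM thus raises the intracellular chloride level at which KCC2 equilibrates roughly fourfold, so during intense interneuronal firing the transporter both feeds the extracellular K+ pool and loses its capacity to keep [Cl−]_i low.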
Furthermore, these investigators found that in vivo microinfusion of VU0463271 into the mouse dorsal hippocampus induces recurrent epileptiform discharges, thus confirming that KCC2 function is essential for keeping GABA_A receptor-mediated inhibition operative. More recently, Dzhala and Staley (2021) showed in an organotypic hippocampal slice model that the KCC2 antagonist VU0463271, by decreasing Cl− extrusion from the intracellular compartment, increases the intracellular Cl− elevations occurring during ictal events and discloses a pattern of continuous interictal-like discharges resembling status epilepticus. However, in one in vitro study performed by Moss' group, KCC2 antagonism could prolong the ictal events induced by 4AP in the entorhinal cortex (Kelley et al., 2016). Note also that Silayeva et al. (2015) reported that KCC2 activity is essential for controlling status epilepticus induced by systemic injections of kainic acid in vivo. In vitro electrophysiological recordings from resected human "epileptogenic" brain tissue have demonstrated that the shift of GABA_A receptor signaling from the hyperpolarizing to the depolarizing direction can contribute to network hyperexcitability and neuronal synchronization. Thus, in hippocampal slices from brain tissue removed from patients affected by drug-resistant forms of MTLE, a depolarizing action of GABA in a subset of pyramidal cells (∼30%) is present during interictal spikes recorded in the subiculum, the output structure of the hippocampus that projects to the temporal lobe, suggesting a perturbed homeostasis of intracellular [Cl−] (Cohen et al., 2002). This hypothesis was confirmed by Huberfeld et al. (2007) who, using combined intracellular recordings and KCC2 immunochemistry, found that subicular cells generating hyperpolarizing responses to GABA were immunopositive for KCC2, whereas cells generating depolarizing responses were immunonegative. Thus, the depolarizing responses to GABA presumably result from excessively high intracellular [Cl−], since they were abolished (together with spontaneously occurring field potential events) by bumetanide, a selective blocker of NKCC1; this pharmacological procedure caused a shift of E_GABA toward more hyperpolarized values, reinstating GABA_A-mediated inhibition (Huberfeld et al., 2015). A depolarizing action of GABA, resulting from altered intracellular [Cl−] homeostasis, has also been detected in cortical tissue samples obtained from pediatric patients undergoing surgical resection for the treatment of pharmaco-resistant forms of focal epilepsy due to cortical dysplasia (Abdijadid et al., 2015). An immature depolarizing GABAergic signaling, resulting from a downregulation of KCC2, has also been found in Scn1b−/− and Scn1a+/− mouse models as well as in brain extracts from patients affected by Dravet syndrome, a devastating developmental form of epileptic encephalopathy (Yuan et al., 2019). These data point to KCC2, the major Cl− extruder, as a key determinant for maintaining intracellular [Cl−] homeostasis and for regulating neuronal excitability. A rise of intracellular [Cl−] following KCC2 dysfunction impairs GABA_A-mediated inhibition, with a consequent shift of the E/I balance toward excitation, making the brain more prone to seizures. Interestingly, several mutations of the SLC12A5 gene encoding KCC2 have been identified in epileptic patients (Duy et al., 2019). Puskarjov et al.
(2014b) reported the R952H mutation of the SLC12A5 gene in an Australian family with early-childhood onset of febrile seizures. In rodent neurons, this mutation led to deficits in neuronal Cl− extrusion associated with an impaired formation of cortical dendritic spines. Febrile seizures, which might later lead to idiopathic generalized seizures, have been reported to occur in a French-Canadian cohort carrying both the R952H and the R1049C KCC2 mutations, located in conserved residues of the KCC2 cytoplasmic C-terminus, an important regulatory region of transport function (Kahle et al., 2014). The role played by the activation of GABA_A receptors in promoting epileptiform activity is also supported by the ability of hippocampal networks maintained in vitro to generate ictal discharges during pharmacological blockade of both GABA_B and ionotropic glutamatergic receptors (Uusisaari et al., 2002). Such an unexpected role of GABA_A signaling may explain the limited therapeutic efficacy of some antiepileptic drugs that were developed to potentiate GABA_A receptor function; these compounds include progabide, γ-vinyl-GABA, and tiagabine (Rogawski and Löscher, 2004). Moreover, benzodiazepines, which increase GABA_A receptor function in the brain by acting on the allosteric "benzodiazepine site" (Costa et al., 1975; Choi et al., 1977), can halt seizure activity and status epilepticus (Pang and Hirsch, 2005) but do not represent first-choice drugs for treating chronic epileptic conditions. It is also worth mentioning that, compatible with an excitatory action of GABA, benzodiazepines may paradoxically worsen seizures (Perucca et al., 1998). Bumetanide, by inhibiting NKCC1, may have a beneficial effect, restoring neuronal intracellular [Cl−], as demonstrated in the case of a girl affected by epilepsy, cortical dysplasia, and ASD (Bruining et al., 2015). Beneficial effects of bumetanide were also detected in a double-blind pilot study on 43 randomized babies affected by neonatal seizures caused by hypoxic-ischemic encephalopathy. This diuretic, added to phenobarbital in a dose-escalation design, was able to reduce, in a statistically significant way, the seizure burden in the group of subjects treated with phenobarbital and bumetanide (n = 27) with respect to the control group treated with phenobarbital alone (n = 16) (Soul et al., 2021). However, more work is required to establish bumetanide exposure-response and safety. Although in hypoxic-ischemic encephalopathy the BBB may be compromised (Römermann et al., 2017), bumetanide's effectiveness is usually limited by its poor brain penetration, an obstacle that may be overcome with new derivatives, currently under development, capable of better permeating the BBB (Savardi et al., 2021).
CONCLUSIONS AND FUTURE PERSPECTIVES
The discovery of CCCs as key regulators of neuronal Cl− concentration has made it possible to better understand how GABA, acting on GABA_A receptors, influences, via its depolarizing and excitatory action, several developmental processes whose alterations lead to pathological conditions including ASD, schizophrenia, and epilepsy. However, in spite of such progress, many questions remain open. Although bumetanide has been shown to have positive effects in a wide range of pathological conditions, its exact mechanisms of action are still poorly understood.
In addition, in humans, its beneficial effects are symptomatic, and it is unclear whether they are linked to its ability to shift GABA action from the depolarizing to the hyperpolarizing direction by restoring low [Cl−]_i and a proper E/I balance in neuronal ensembles. The high expression levels of NKCC1 on microglia open, however, new perspectives on the mechanisms of action of this transporter, especially in relation to the role played by activated glial cells in brain inflammation and oxidative stress, which are key features of many neuropsychiatric diseases. Furthermore, to better understand the mechanisms controlling the expression of CCCs in the brain and their contribution in shaping, via GABAergic signaling, neuronal circuits in both physiological and pathological conditions, it will be crucial to clarify how the BDNF/TrkB signaling pathway regulates neuronal NKCC1 expression early in postnatal life (Badurek et al., 2020). An alternative way to re-establish a proper E/I balance in neurodevelopmental disorders by acting on GABA_A-mediated neurotransmission may be the development of new therapeutic tools selectively targeting the chloride exporter KCC2, which in the CNS is expressed exclusively on neurons (Gagnon et al., 2013; Puskarjov et al., 2014a). By acting either on KCC2 membrane trafficking or on its intrinsic transport kinetics, these compounds would attenuate neuronal [Cl−]_i, reinstating an appropriate Cl− homeostasis in selective brain regions. However, new KCC2 activators should not interfere with the well-known structural function of KCC2 in dendritic spine formation and dynamics (Blaesse et al., 2009). These new molecules may be particularly useful for treating drug-resistant forms of epilepsy, such as focal cortical dysplasia, occurring at pediatric ages. It is worth mentioning that the use of KCC2 activators may be limited by their possible proconvulsant effects, triggered by the accumulation of K+ in the extracellular space. Studies of animal models have made it possible to identify early changes in GABAergic signaling as a contributing cause of the cognitive deficits observed in neurodevelopmental disorders. These are often associated with the loss of particular subtypes of inhibitory interneurons which, by pacing principal cells, give rise to coherent network oscillations, thought to support different behavioral states of the animal and higher cognitive tasks. However, with the exception of epilepsy, direct proof that these processes also occur in individuals affected by neurodevelopmental disorders remains to be provided, making it difficult to translate data obtained in preclinical studies of animal models to humans. In the latter case, evidence in favor of an altered GABA_A-mediated neurotransmission relies indirectly on: (i) genetic studies; (ii) immunohistochemistry from postmortem brain samples showing a reduction in GABAergic markers or in particular subtypes of GABAergic interneurons; (iii) the high incidence of epileptic activity as a comorbidity; (iv) alterations in oscillatory activity, particularly in the gamma power detected with EEG or MEG; and (v) in some cases, the paradoxical action of benzodiazepines (for a review, see Cellot and Cherubini, 2014b).
An innovative approach, consisting of reprogramming human dermal fibroblasts obtained through skin biopsies from patients (at progressive stages of the diseases) into induced pluripotent stem cells (iPSCs) and 3D cerebral organoids, will provide a platform to uncover disease mechanisms directly in humans. These approaches will make it possible to identify disease mechanisms in a personalized manner and to test novel strategies for prevention, diagnosis, patient stratification, therapy, and/or rehabilitation.
ACKNOWLEDGMENTS
We wish to thank our colleagues who contributed to the original works reported in this review. We are also grateful to the members of our labs for useful discussions and suggestions.
2022-01-05T14:23:54.197Z
2022-01-05T00:00:00.000
{ "year": 2021, "sha1": "e566cc1dfe5d5bd7a835ae41c21b9cf73e4f4a2c", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fncel.2021.813441/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "e566cc1dfe5d5bd7a835ae41c21b9cf73e4f4a2c", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
260946220
pes2o/s2orc
v3-fos-license
Gauging the mass of metals in the gas phase of galaxies from the Local Universe to the Epoch of Reionization
The chemical enrichment of dust and metals is a vital process in constraining the star formation history of the universe. Both are important ingredients in the formation and evolution of galaxies overall. Previously, the dust masses of high-redshift star-forming galaxies have been determined through their far-infrared continuum; however, equivalent, and potentially simpler, approaches to determining the metal masses have yet to be explored at z ≳ 2. Here, we present a new method of inferring the metal mass in the interstellar medium (ISM) of galaxies out to z ≈ 8, using the far-infrared [C ii]−158 µm emission line as a proxy. We calibrated the [C ii]-to-M_Z,ISM conversion factor based on a benchmark observational sample at z ≈ 0, in addition to gamma-ray burst sightlines at z > 2 and cosmological hydrodynamical simulations of galaxies at z ≈ 0 and z ≈ 6. We found a universal scaling across redshifts of log(M_Z,ISM/M_⊙) = log(L_[CII]/L_⊙) − 0.45, with a 0.4 dex scatter, which is constant over more than two orders of magnitude in metallicity. We applied this scaling to recent surveys for [C ii] in galaxies at z ≳ 2 and compared their inferred M_Z,ISM to their stellar mass (M_⋆).
Introduction
The metal enrichment of the interstellar medium (ISM) of a galaxy is an imprint left behind by stellar processing. It encodes information about the star formation history (SFH) of the galaxy and simultaneously provides insight into the complex processes that regulate the metal content, such as large-scale gas infall and outflows (Heintz et al. 2022a). Similarly, the amount of dust directly informs the dust yields from the explosions of core-collapse supernovae (e.g., Gall et al. 2014; Leśniewska & Michałowski 2019) and the efficiency of metals depleting into dust grains through ISM growth (e.g., Dwek 1998). Therefore, obtaining a complete census of the dust and metals in galaxies, particularly at the earliest epochs, is vital to constraining these processes and understanding the formation and evolution of the first generation of galaxies overall. Early attempts to gauge the metal census at high redshift reported that the bulk of the metals expected from the integrated SFH at z ≈ 2 were missing from the ISM, having likely been expelled through outflows (Pettini et al. 1999; Prochaska et al. 2003; Ferrara et al. 2005; Bouché et al. 2007). This led to the long-standing "missing metals problem," whereby only 30% to 90% of the expected cosmic density of metals could be accounted for in the intergalactic medium (IGM), the ISM, and in stars (Ferrara et al. 2005; Bouché et al. 2007). However, these early studies were significantly affected by biased selections of quasars and, as a consequence, they were unable to identify the most metal- and dust-rich foreground absorbers (e.g., Fall & Pei 1993; Heintz et al. 2018; Krogager et al. 2019). Accounting for this bias and for the total amount of metals locked into dust grains has yielded cosmic metal mass densities in the ISM of galaxies at z ≳ 2.5 that are consistent with the total integrated SFH metal yield (Péroux & Howk 2020). Yet even this approach is limited in the sense that quasar absorbers are increasingly less robust tracers of the ISM in galaxies at higher redshifts, mainly probing the diffuse circumgalactic medium in the outskirts of their absorbing galaxies (Neeleman et al. 2019; Stern et al.
2021; Heintz et al. 2022b). Furthermore, they are virtually impossible to detect beyond z ≈ 5 due to the near-complete suppression of the emission in the Lyman-α forest caused by the Gunn-Peterson effect. It is thus imperative that we establish a complementary approach to constrain the metal mass in the ISM of early galaxies. While several methods exist to determine the dust mass (e.g., Dwek 1998; Draine et al. 2007; Scoville et al. 2014; Sommovigo et al. 2021, 2022) and the metallicity through nebular emission strong-line ratios (e.g., Maiolino et al. 2008; Kewley et al. 2019; Sanders et al. 2021) of the ISM of individual galaxies, there is currently no simple way to directly measure the total metal mass. The ISM metal mass, M_Z,ISM, has previously been measured through a combination of the metallicity and the gas mass (Sanders et al. 2023a; Eales et al. 2023), but this approach has been limited by the difficulty of deriving metallicities from optical nebular emission lines with ground-based facilities beyond z ≈ 3 (though see recent efforts based on joint JWST and ALMA observations, e.g., Heintz et al. 2023). In this paper, we present a novel approach to infer the total ISM metal mass of individual galaxies using only the [C ii]−158 µm line luminosity as a proxy. The [C ii]−158 µm emission is advantageous due to its immense brightness as one of the strongest ISM cooling lines (Hollenbach & Tielens 1999; Wolfire et al. 2003; Lagache et al. 2018), and it has efficiently been used as a viable tracer of cold gas in both the local and the distant universe (e.g., Stacey et al. 2010; Madden et al. 1997, 2020; Cormier et al. 2015; Zanella et al. 2018; Dessauges-Zavadsky et al. 2020; Heintz et al. 2021, 2022b; Vizgan et al. 2022a,b; Liang et al. 2023). There are several pieces of evidence that point to [C ii] being a potentially effective tracer of the total ISM metal mass in galaxies. Firstly, carbon is the second most abundant metal by mass in the universe (following oxygen). Secondly, the ionization potential of neutral carbon (IP = 11.26 eV) is sufficiently below that of neutral hydrogen, such that the majority of carbon will be in the singly ionized state in the neutral ISM. Furthermore, since [C ii] has been observed to originate from multiple phases of the ISM, from the outskirts of molecular clouds to the neutral and ionized ISM (Pineda et al. 2014; Vallini et al. 2017; Pallottini et al. 2019; Ramos Padilla et al. 2022), the emission from [C ii] will probe metals in a large range of physical environments and gas properties. Finally, the observed anti-correlation with metallicity of the [C ii]-to-H i abundance ratio (Heintz et al. 2021; Vizgan et al. 2022b; Liang et al. 2023) points to a constant scaling between L_[CII] and M_Z,ISM, as also noted for other line-emission gas tracers (Eales et al. 2023). To gauge the robustness and the applications of [C ii] as a proxy for M_Z,ISM, we have structured the paper as follows. First, we provide an overview of the compiled observational samples, the adopted simulations, and the overall methodology to derive the [C ii]-to-M_Z,ISM calibration in Sect. 2. Then we present our results in Sect. 3, focusing on the M_Z,ISM-to-stellar mass content as a function of redshift, and in Sect. 4 we attempt to constrain the cosmic ISM metal mass density Ω_Z,ISM at z ≳ 5. We summarize our conclusions in Sect. 5.
Observational samples, simulations, and methods
In the following sections, we compile all the observational data and simulations used to measure the benchmark ISM metal mass, M_Z,ISM, and to calibrate the [C ii]-to-M_Z,ISM conversion factor in galaxies.
Observational samples, simulations, and methods In the following sections, we compile all the observational data and simulations used to measure the benchmark ISM metal mass, M Z,ISM , and to calibrate the [C ii]-to-M Z,ISM conversion factor in galaxies. Observational galaxy sample at z ≈ 0 We first considered the observational sample of galaxies at z ≈ 0 from the Herschel Dwarf Galaxy Survey (Herschel DGS; Madden et al. 2013), for which measurements of the metallicities (Madden et al. 2013, mainly et al. 2009) and H i gas masses M HI = 2 × 10 6 − 3.5 × 10 9 M ⊙ .These low-metallicity, gas-rich galaxies resemble the typical high-z galaxy population and therefore serves as an ideal benchmark.For these galaxies we derive the metal mass, M Z,ISM , as assuming solar abundance patterns, where 12 + log (O/H) ⊙ = 8.69 is the solar oxygen abundance and Z ⊙ = 0.0134 is the solar metallicity by mass.We only consider the H i gas since it reflects the dominant ISM gas mass contribution in both local (Leroy et al. 2009;Morselli et al. 2021) and high-z (Scoville et al. 2017;Heintz et al. 2021Heintz et al. , 2022b) ) galaxies, and to be consistent with the GRB measurements (see Sect. 2.3).Further, the majority of metals by mass are expected to be associated with the neutral gasphase with only minor contributions from molecular regions.We derive metal masses in the range M Z,ISM = 2.7×10 3 −2.3×10 7M ⊙ for this benchmark sample of galaxies at z ≈ 0. The results are shown in Fig. 1.We note that recent efforts to compile a more extensive benchmark local galaxy sample have been presented by (Ginolfi et al. 2020a;Hunt et al. 2020).However, this sample (so far) lacks comparable [C ii] detections, so we do not further consider it in this work. Simulations of galaxies at z ≈ 0, 6 To support the observational data, we further include the recent simulations of galaxies through the Simulator of Galaxy Millimeter/submillimeter Emission (SÍGAME) framework (Olsen et al. 2017) 1 .This simulation provides detailed modelling of the far-infrared line emission from galaxies extracted from the particle-based cosmological hydrodynamics simulation Simba (Davé et al. 2019).We consider the results derived for galaxies at z ≈ 6 as part of the SÍGAME version 2 (v2) presented by Leung et al. (2020) and Vizgan et al. (2022a), and the more recent v3 results applicable to galaxies at z ≈ 0 (Olsen et al. 2021). To derive the metal masses for these simulated sets of galaxies, we used Eq. 1 above.Following Vizgan et al. (2022b), we represent M HI by the total "diffuse" H i component in the SÍGAME-v2 simulations, which is equal to the sum of its ionized and atomic hydrogen gas mass and distinct from the "dense" fraction of the mass of each fluid element, and extract directly the H i component from the SÍGAME-v3 model.For both simulations, we adopted the star formation rate weighted gas-phase metallicity, Z SFR , which best represent the metallicities of the H ii regions inferred through the emission-line measurements. 
The [C ii]-to-M_Z,ISM calibration
We compared the results from the simulated data sets to the observational dwarf galaxy sample at z ≈ 0 in Fig. 1. Then, we used the linear regression module in scikit-learn to estimate the linear best-fit relation, in addition to the root mean square (RMS) and the r^2 scatter of the data. We found a slope consistent with unity (formally 0.91 ± 0.10) over more than five orders of magnitude in L_[CII] and M_Z,ISM, with a unique constant ratio of
log(M_Z,ISM/M_⊙) = log(L_[CII]/L_⊙) − 0.45 ± 0.40,   (2)
where the uncertainty represents the RMS scatter, with an r^2 value of 0.85. To further test any potential offsets in the L_[CII]−M_Z,ISM relation, we show this ratio as a function of metallicity in Fig. 2. Here, we also include the relative abundance measurements from GRB absorption-line spectroscopy derived by Heintz et al. (2021). This approach infers the "column" [C ii] luminosity measured along the line of sight from the spontaneous decay of the excited C ii* transition as L^c_[CII] = hν_ul A_ul N_CII*. Here, N_CII* is the column density of the 2P_3/2 state of C+, and ν_ul and A_ul are the frequency and Einstein coefficient, respectively, of [C ii]. We relate this column [C ii] luminosity to the inferred line-of-sight metal mass, M^c_Z = M_HI × 10^[M/H]_tot × Z_⊙, with M_HI being the H i column mass inferred from the column density as M_HI = m_HI × N_HI and [M/H]_tot the total absorption metallicity (equivalent to log Z/Z_⊙). These GRB measurements provide accurate estimates of the L_[CII]−M_Z,ISM ratios only, in pencil-beam sightlines through their host galaxies (see also Heintz & Watson 2020). In Fig. 2, we compare the GRB measurements to the observational and simulated galaxy samples described above. We observe a remarkable agreement, with a mean GRB-inferred ratio of log(M_Z,ISM/L_[CII]) = −0.44 ± 0.35 (the error denoting 1σ). Moreover, the GRB sightlines probe galaxies in a large redshift range, z ∼ 2 − 6, and expand the metallicity regime over which we can determine the L_[CII]−M_Z,ISM relation, reproducing the constant ratio down to metallicities of Z/Z_⊙ = 1%. We note, however, that the SÍGAME-v3 simulations potentially indicate an increasing M_Z,ISM/L_[CII] ratio around solar metallicities. This estimate is still within the overall scatter of the relation, yet it may indicate that [C ii] emission is suppressed for a given metal mass in the highest-metallicity galaxies. This could be due to more inefficient cooling through the [C ii]−158 µm transition or potentially to lower ionization states. The relation between L_[CII] and M_Z,ISM derived here is a purely empirical result, based on direct observations and independent simulations, revealing an approximately constant ratio between the two. Crucially, this ratio appears to be constant across redshifts, making the conversion factor universally applicable.
Results
To apply the [C ii]-to-M_Z,ISM conversion factor, we compiled the recent high-z observational samples surveyed for [C ii]: At z ∼ 2, we included the observations of main-sequence, star-forming galaxies from Zanella et al. (2018), whereas at z ∼ 4 − 6, we made use of the ALMA Large Program to Investigate C+ at Early Times (ALPINE) survey (Le Fèvre et al. 2020; Béthermin et al. 2020; Faisst et al. 2020), in addition to the sample of galaxies presented by Capak et al. (2015). At z ∼ 6 − 8, we consider the galaxies from the Reionization Era Bright Emission Line Survey (REBELS; Bouwens et al.
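A minimal sketch of how the calibration in Eq. 2 can be applied in practice is shown below. The input luminosity is hypothetical, and the 0.40 dex RMS scatter is propagated as a simple log-normal 1σ range:

```python
import numpy as np

def metal_mass_from_cii(l_cii_lsun, scatter_dex=0.40):
    """Eq. 2: log(M_Z,ISM/Msun) = log(L_[CII]/Lsun) - 0.45, with 0.40 dex RMS scatter.
    Returns the central value and the 1-sigma range implied by the scatter."""
    log_mz = np.log10(l_cii_lsun) - 0.45
    return 10.0 ** log_mz, 10.0 ** (log_mz - scatter_dex), 10.0 ** (log_mz + scatter_dex)

# Hypothetical z ~ 5 galaxy with L_[CII] = 3e8 Lsun:
mz, lo, hi = metal_mass_from_cii(3e8)
print(f"M_Z,ISM ~ {mz:.1e} Msun (1-sigma range {lo:.1e} - {hi:.1e})")  # ~1.1e8 Msun
```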
Results

To apply the [C ii]-to-M_Z,ISM conversion factor, we compiled the recent high-z observational samples surveyed for [C ii]: at z ∼ 2, we included the observations of main-sequence, star-forming galaxies from Zanella et al. (2018), whereas at z ∼ 4 − 6, we made use of the ALMA Large Program to Investigate C+ at Early Times (ALPINE) survey (Le Fèvre et al. 2020; Béthermin et al. 2020; Faisst et al. 2020), in addition to the sample of galaxies presented by Capak et al. (2015). At z ∼ 6 − 8, we considered the galaxies from the Reionization Era Bright Emission Line Survey (REBELS; Bouwens et al. 2022). In our analysis, we further include the individual measurements of A1689-zD1 (at z = 7.13; Watson et al. 2015; Bakx et al. 2021; Killi et al. 2023) and S04590 (at z = 8.496; Heintz et al. 2023), since both these high-z galaxies have robust estimates of their ISM gas masses and metallicities. The galaxy samples are all selected to have sufficient auxiliary data to enable derivations of the star formation rate (SFR) and stellar mass (M_⋆) of each source, and to follow the star-forming galaxy main sequence at their respective redshifts.

In Fig. 3, we show the inferred metal masses for the compiled sample of high-redshift (z ≳ 2) galaxies as a function of stellar mass. For all galaxies at z ≳ 2, except for A1689-zD1 and S04590, we infer the metal mass following the metallicity-independent conversion derived in Eq. 2, log(M_Z,ISM/M_⊙) = log(L_[CII]/L_⊙) − 0.45 ± 0.40. For A1689-zD1, we infer M_Z,ISM based on the derived approximately solar metallicity (Killi et al. 2023) and the H i gas mass M_HI = 1.8 × 10^10 M_⊙, using the [C ii]-to-H i conversion factor derived by Heintz et al. (2021), which yields M_Z,ISM = 2.45 × 10^8 M_⊙ following Eq. 1. For S04590, we adopt the metal mass inferred by Heintz et al. (2023) of M_Z,ISM = (3.2 ± 1.5) × 10^5 M_⊙, following a similar approach. These estimates are also in agreement with those inferred from the L_[CII] − M_Z,ISM calibration for these particular sources. Overall, we find metal masses in the range M_Z,ISM = 2 × 10^7 − 10^9 M_⊙, and observe that M_Z,ISM generally increases with M_⋆, which is expected given the increased metal yield from more abundant stellar populations. For comparison, we overplot the M_Z,ISM(M_⋆) function derived by Peeples et al. (2014) from the expected total Type II supernova metal production, based on the star-formation histories of Leitner (2012). Here, y is the nucleosynthetic yield, which we assume to be y = 0.033 (Peeples et al. 2014). We note that these SFHs might be more extended than what is possible for early galaxies at z ≈ 4 − 7, simply due to the young age of the universe at these early times. Using shorter SFHs, such that the enrichment is dominated by core-collapse supernovae (SNe), Sanders et al. (2023a) showed that the total metal production is simply proportional to the product of the SNe metal yield, the stellar mass, and (1 − R), where R is the return fraction. This method nevertheless yields a curve consistent with that derived by Peeples et al. (2014), and therefore seems to represent the expected metal yield well also at high z. Generally, high-redshift galaxies are observed to have M_Z,ISM values that are lower than this predicted curve at any given stellar mass. We also note the potentially more significant offset at low stellar masses in Fig. 3.

We go on to consider the metal-to-stellar mass ratio, M_Z,ISM/M_⋆, for the compiled sample of galaxies as a function of redshift in Fig. 4. This ratio represents how effective galaxies are at retaining metals, given that M_⋆ is approximately proportional to the total amount of metals produced, as prescribed in Eq. 3 (see also Peeples et al. 2014; Sanders et al. 2023a). For the galaxies at z ∼ 0, M_Z,ISM is measured directly through the metallicities and gas masses of the galaxies in the sample. We determine M_Z,ISM/M_⋆ = (2.5 +6.6/−1.5) × 10^−3 (median and 16th to 84th percentiles) at z ≈ 0.
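The summary statistic quoted here (a median with a 16th to 84th percentile range) is straightforward to compute; the values below are mock ratios for illustration, not the benchmark sample:

```python
import numpy as np

rng = np.random.default_rng(1)
ratio = 10 ** rng.normal(-2.6, 0.5, 38)   # mock M_Z,ISM / M_star values

p16, med, p84 = np.percentile(ratio, [16, 50, 84])
print(f"median = {med:.1e} (+{p84 - med:.1e} / -{med - p16:.1e})")
```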
This increases to M_Z,ISM/M_⋆ ≈ 10^−2, derived as the average for the sample of galaxies at z ∼ 2, and further to M_Z,ISM/M_⋆ ≈ 2 × 10^−2 at z ∼ 4 − 6 and M_Z,ISM/M_⋆ ≈ 5 × 10^−2 at z ≳ 6. As a reference point, we also include the stacked average of M_Z,ISM/M_⋆ ≈ 10^−2 inferred by Sanders et al. (2023a) for galaxies at z ∼ 2 − 3. This result is based on rest-frame optical emission lines for the metallicities and on gas masses inferred from CO, which may introduce a systematic difference compared to our method. Their results indicate that an increasing amount of metals resides in the gas-phase ISM of galaxies at higher redshifts, which seems to hold even for the most massive systems (Sanders et al. 2023a). This likely reflects that feedback mechanisms or outflows are not yet efficient in expelling the metals out of the galaxy ISM at these epochs. By contrast, most of the metals in local galaxies reside in stars (Peeples et al. 2014; Muratov et al. 2017).

The cosmic metal mass density in galaxies

To further quantify the chemical enrichment of galaxies through cosmic time, we now consider the cosmological metal mass density (Ω_Z,ISM) and its evolution with redshift. Previously, it has only been possible to infer Ω_Z,ISM at z ≳ 1 in quasar absorption-line systems (damped Lyman-α absorbers, DLAs; e.g., Péroux & Howk 2020), since the metal abundances of high-z galaxies are generally difficult and time-consuming to constrain through other approaches, such as nebular line emission and strong-line diagnostics (see e.g., Kewley et al. 2019; Maiolino & Mannucci 2019, for recent reviews). However, there is increasing evidence that DLAs mainly probe the outskirts of extended neutral, gaseous halos, in particular at z ≳ 3 (Neeleman et al. 2019; Stern et al. 2021; Yates et al. 2021; Heintz et al. 2022b). These pencil-beam sightlines therefore do not probe the central star-forming ISM of their galaxy counterparts, so it is imperative to establish alternative approaches to infer Ω_Z,ISM in galaxies at the highest redshifts.

We highlight our measurements in Fig. 5 as the red hexagons. For comparison, we show the total expected metal yield from stars, defined as Ω_Z,⋆(z) = yρ_⋆(z)/ρ_c, where y is the integrated yield of the stellar population, ρ_⋆(z) the stellar mass density, and ρ_c the critical density of the universe. Here, we adopt y = 0.033 from Peeples et al. (2014) (see also Péroux & Howk 2020), the evolutionary function of ρ_⋆(z) parametrized by Walter et al. (2020), converted to a Chabrier IMF, and ρ_c = 1.26 × 10^11 M_⊙ Mpc^−3 from the concordance ΛCDM cosmological framework (Planck Collaboration et al. 2020). We find that our measurements are in remarkable agreement with the metal yield predicted by Ω_Z,⋆(z) at z ≳ 4. This suggests that the majority of metals are confined to the central ISM of galaxies (as also discussed in Sect. 3) and have not yet been expelled to the outer regions through outflows and feedback effects. This in turn suggests that the origin of the extended [C ii] halos is likely not outflows (as previously proposed, e.g., Maiolino et al. 2012; Cicone et al. 2015; Ginolfi et al. 2020b; Herrera-Camus et al. 2021; Akins et al. 2022), but instead supports the scenario where the [C ii] emission traces the extended neutral gas reservoirs (Novak et al. 2019, 2020; Harikane et al. 2020; Heintz et al. 2021, 2022b; Meyer et al. 2022).
In this case, there would be some small, but not negligible, in situ star formation in the extended neutral gas disks. Further, these results provide additional evidence for the robustness of the [C ii]-to-M_Z,ISM scaling relation derived in this work. At z ∼ 0, we also find that only ≈ 10% of the expected metal yield resides in the ISM of galaxies, consistent with DLA studies (Péroux & Howk 2020).

In Fig. 5 we additionally compare our measurements to estimates of Ω_Z,ISM inferred through various approaches and for different baryonic phases: metals associated with the neutral gas probed via DLAs, metals located in the hot intracluster medium (ICM) and partially ionized gas, as well as the mass of metals locked in stars (see Péroux & Howk 2020, and references therein). We find that our Ω_Z,ISM estimates inferred from [C ii] are in good agreement with the DLA measurements at z ≳ 3, showing a consistent decrease in the metal mass density with increasing redshift, following that expected from the stellar yield, Ω_Z,⋆(z). Direct DLA measurements are, however, only possible out to z ≈ 5, due to the increasing suppression of the transmitted quasar flux in the Lyman-α forest caused by the Gunn-Peterson effect at these redshifts. Becker et al. (2011) attempted to partly alleviate this by measuring the density of O i absorption at z ≈ 6 in sub-DLAs (probing partly ionized gas down to N_HI = 10^19 cm^−2). This point is shown as a lower bound on Ω_Z,ISM in Fig. 5, since Ω_Z,ISM ≳ Ω_OI. The method presented here thus provides a complementary census of the high-redshift metal mass density, at previously inaccessible epochs. At z ≲ 1, most of the metals are observed to be captured in stars (Péroux & Howk 2020).

Comparing our measurements of Ω_Z,ISM at z ≈ 5 and ≈ 7 to Ω_dust at equivalent redshifts, we can further make a first prediction for the volume-averaged dust-to-metals (DTM) ratio at these epochs. Based on the recent simulations with GADGET3-OSAKA (Aoyama et al. 2018) and GIZMO-SIMBA (Li et al. 2019), in addition to the results from quasar DLA measurements (Péroux & Howk 2020), we find that Ω_dust/Ω_Z ≈ 10% at z ≳ 5, about a factor of 5 lower than the Milky Way average (DTM_Gal ≈ 50%, by mass).
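The bookkeeping behind the stellar-yield reference curve reduces to Ω_Z,⋆(z) = yρ_⋆(z)/ρ_c. A sketch is given below; note that rho_star() is a stand-in for the Walter et al. (2020) parametrization, which is not reproduced in the text, so the toy form used here is an assumption for illustration only:

```python
Y_YIELD = 0.033      # integrated stellar yield (Peeples et al. 2014)
RHO_C = 1.26e11      # critical density [M_sun Mpc^-3] (Planck cosmology)

def rho_star(z: float) -> float:
    """Placeholder stellar-mass density [M_sun Mpc^-3]; substitute the
    Walter et al. (2020) fitting function here."""
    return 5e8 / (1.0 + z) ** 3   # toy form, assumed for illustration only

def omega_z_star(z: float) -> float:
    """Expected cosmological metal density from stars at redshift z."""
    return Y_YIELD * rho_star(z) / RHO_C

for z in (0, 3, 5, 7):
    print(f"z={z}: Omega_Z,star ~ {omega_z_star(z):.2e}")
```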
Conclusions

In an attempt to establish an independent, complementary way of inferring the total ISM metal mass of galaxies, M_Z,ISM, we have derived an empirical scaling between the [C ii] 158 µm line luminosity and M_Z,ISM. This scaling is determined from an observational benchmark sample of galaxies at z ≈ 0, where M_Z,ISM could be estimated directly through the metallicity and H i gas mass of the galaxies, in addition to recent hydrodynamical simulations of galaxies at z ≈ 0 and z ≈ 6. The [C ii]-to-M_Z,ISM ratio appears universal across redshifts and constant through more than two orders of magnitude in metallicity, with a ratio of log(M_Z,ISM/M_⊙) = log(L_[CII]/L_⊙) − 0.45 (and 0.4 dex scatter).

We applied this calibration to recent high-z (z ≳ 2) surveys of the [C ii] line emission from main-sequence galaxies, reaching well into the epoch of reionization at z ≈ 8. We derived ISM metal masses in the range M_Z,ISM = 2 × 10^7 − 10^9 M_⊙ and found that the metal-to-stellar mass ratio of these galaxies increases with increasing redshift. This ratio effectively describes how galaxies are increasingly efficient at retaining the produced stellar metal yield at higher redshifts, suggesting that most of the metals produced at early cosmic epochs are confined to the ISM of galaxies. This has potentially important implications for outflow processes at these redshifts, which may be less substantial than previously reported.

Using the same [C ii]-to-M_Z,ISM calibration and recent estimates of the [C ii] luminosity density at z ≈ 5 and ≈ 7, we further placed indirect constraints on the cosmological metal mass density Ω_Z,ISM at these redshifts. We found that these estimates were consistent with predictions of the total metal yield from stars, based on a recent empirical parametrization of the stellar mass density. Our measurements were therefore able to account for the total expected metal budget at these redshifts, indicating that, on average, most of the metals produced by stellar explosions are still confined to the local ISM of these galaxies. At lower redshifts, z ≈ 0 − 2, most of the metals are found in other forms, predominantly stars (Péroux & Howk 2020; Sanders et al. 2023a). Further comparing our measurements of Ω_Z,ISM to Ω_dust measured from quasar DLAs, and with recent simulations, we found that Ω_dust/Ω_Z,ISM ≈ 10% at z ≳ 5, a factor of ≈ 5 lower than the Galactic average.

In the near future, the James Webb Space Telescope (JWST) will routinely enable measurements of the metallicity of galaxies well into the epoch of reionization at z ≳ 6, as previously demonstrated by the early release science data (see e.g., Trump et al. 2023; Schaerer et al. 2022; Rhoads et al. 2023; Curti et al. 2023; Arellano-Córdova et al. 2022; Brinchmann 2023; Heintz et al. 2023) and the recent results from the Cosmic Evolution Early Release Science (CEERS) survey (Finkelstein et al. 2023; see Heintz et al. 2022a; Nakajima et al. 2023; Fujimoto et al. 2023; Sanders et al. 2023b). In combination with the gas masses inferred from ALMA observations through proxies such as [C ii] (e.g., Heintz et al. 2022b) and the far-infrared continuum revealing the dust content (Inami et al. 2022; Dayal et al. 2022), it will soon be possible to directly measure the ISM metal mass and the DTM ratio of galaxies during the earliest cosmic epochs, as recently demonstrated in the case study by Heintz et al. (2023). This was enabled by combining ALMA and JWST observations of a lensed galaxy at z ≈ 8.5.

Fig. 1. Metal mass, M_Z,ISM, vs. [C ii] luminosity, L_[CII]. The observed galaxy samples at z ≈ 0 (see text) with direct measurements of M_Z,ISM and L_[CII] are shown by the blue circles. The grey- and orange-shaded 2D hexagonal histograms represent simulated galaxies from SÍGAME-v2 (z ≈ 6) and v3 (z ≈ 0), and their mean and 1σ distributions are marked by the grey and orange squares, respectively. The dashed line indicates the best-fit relation log(M_Z,ISM/M_⊙) = log(L_[CII]/L_⊙) − 0.45 between all data sets, and the grey-shaded region indicates the 0.4 dex RMS scatter.
Fig. 2. M_Z − L_[CII] relation as a function of metallicity. Red hexagons show the inferred line-of-sight measurements from GRB sightlines (see text for details), the blue circles denote the observed galaxy samples at z ≈ 0 with direct measurements of M_Z,ISM, and the average values of M_Z/L_[CII] in bins of 0.5 dex in metallicity predicted by the simulations are shown as orange squares (z ≈ 0) and grey squares (z ≈ 6). The constant M_Z − L_[CII] ratio and the RMS scatter of the data are shown by the dashed line and grey-shaded region, respectively.

Fig. 3. Metal mass M_Z,ISM as a function of stellar mass M_⋆ for the compiled high-redshift galaxy sample surveyed for [C ii]. Colors and symbols denote the respective surveys (see main text for details). The dashed line marks the expected total Type II supernova metal production as a function of M_⋆ (Peeples et al. 2014).

Fig. 4. Retained metal yield of the ISM, M_Z,ISM/M_⋆, as a function of redshift. Symbol notation follows that of Fig. 3, but now includes the benchmark z ≈ 0 galaxy sample, marked by dark- and light-grey boxes representing the 1σ and 2σ distributions of M_Z,ISM/M_⋆, in addition to the lensed galaxy S04590 from Heintz et al. (2023).

Fig. 5. Cosmological density of metals as a function of redshift. Red hexagons show the measurements from this work, based on the [C ii] luminosity densities at z ≈ 5 and ≈ 7, converted to Ω_Z,ISM. The other symbols represent the metal densities inferred by various approaches, color-coded as a function of the distinct gas phases they are probing, from the compilation of Péroux & Howk (2020), including results from Sanders et al. (2023a), and with the purple triangle denoting the lower bound derived from O i absorption at z ≈ 6 in sub-DLAs by Becker et al. (2011). The solid black line shows the expected yield of metals from star formation, assuming Ω_Z(z) = yΩ_⋆(z), with y = 0.033 being the integrated yield of the stellar population and Ω_⋆(z) the stellar mass density quantified by Walter et al. (2020). These measurements suggest that metals mostly reside in the ISM of galaxies at z ≳ 3.
2023-08-17T15:11:39.321Z
2023-08-15T00:00:00.000
{ "year": 2023, "sha1": "e3dc8906d896ab9eb5c102afe3aa534bbd68ef19", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1051/0004-6361/202346573", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "c1333065f975f1ae4e9fd6802ca82ff3bffcb316", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
10870417
pes2o/s2orc
v3-fos-license
Effective LSTMs for Target-Dependent Sentiment Classification

Target-dependent sentiment classification remains a challenge: modeling the semantic relatedness of a target with its context words in a sentence. Different context words have different influences on determining the sentiment polarity of a sentence towards the target. Therefore, it is desirable to integrate the connections between target word and context words when building a learning system. In this paper, we develop two target-dependent long short-term memory (LSTM) models, where target information is automatically taken into account. We evaluate our methods on a benchmark dataset from Twitter. Empirical results show that modeling sentence representation with standard LSTM does not perform well. Incorporating target information into LSTM can significantly boost the classification accuracy. The target-dependent LSTM models achieve state-of-the-art performance without using a syntactic parser or external sentiment lexicons.

Introduction

Sentiment analysis, also known as opinion mining (Pang and Lee, 2008; Liu, 2012), is a fundamental task in natural language processing and computational linguistics. Sentiment analysis is crucial to understanding user-generated text in social networks or product reviews, and has drawn a lot of attention from both industry and academic communities. In this paper, we focus on target-dependent sentiment classification (Jiang et al., 2011; Dong et al., 2014; Vo and Zhang, 2015), which is a fundamental and extensively studied task in the field of sentiment analysis. Given a sentence and a target mention, the task calls for inferring the sentiment polarity (e.g. positive, negative, neutral) of the sentence towards the target. For example, let us consider the sentence: "I bought a new camera. The picture quality is amazing but the battery life is too short". If the target string is picture quality, the expected sentiment polarity is "positive" as the sentence expresses a positive opinion towards picture quality. If we consider the target as battery life, the correct sentiment polarity should be "negative".

Target-dependent sentiment classification is typically regarded as a kind of text classification problem in the literature. The majority of existing studies build sentiment classifiers with supervised machine learning approaches, such as feature-based Support Vector Machines (Jiang et al., 2011) or neural network approaches (Dong et al., 2014; Vo and Zhang, 2015). Despite the effectiveness of these approaches, we argue that target-dependent sentiment classification remains a challenge: how to effectively model the semantic relatedness of a target word with its context words in a sentence. One straightforward way to address this problem is to manually design a set of target-dependent features, and integrate them into an existing feature-based SVM. However, feature engineering is labor-intensive, and the "sparse" and "discrete" features are clumsy in encoding side information like target-context relatedness. In addition, a person asked to do this task will naturally "look at" the relevant context words which are helpful to determine the sentiment polarity of a sentence towards the target. These observations motivate us to develop a powerful neural network approach, which is capable of learning continuous features (representations) without feature engineering and meanwhile capturing the intricate relatedness between target and context words.
Figure 1: The basic long short-term memory (LSTM) approach and its target-dependent extension TD-LSTM for target-dependent sentiment classification. w stands for a word in a sentence of length n; {w_{l+1}, w_{l+2}, ..., w_{r−1}} are target words, {w_1, w_2, ..., w_l} are preceding context words, and {w_r, ..., w_{n−1}, w_n} are following context words.

In this paper, we present neural network models to deal with target-dependent sentiment classification. The approach is an extension of long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) that incorporates target information. Such a target-dependent LSTM approach models the relatedness of a target word with its context words, and selects the relevant parts of the contexts to infer the sentiment polarity towards the target. The model can be trained in an end-to-end way with standard backpropagation, where the loss function is the cross-entropy error of supervised sentiment classification. We apply the neural model to target-dependent sentiment classification on a benchmark dataset (Dong et al., 2014). We compare with feature-based SVM (Jiang et al., 2011), adaptive recursive neural networks (Dong et al., 2014) and lexicon-enhanced neural networks (Vo and Zhang, 2015). Empirical results show that the proposed approach, without using a syntactic parser or external sentiment lexicons, obtains state-of-the-art classification accuracy. In addition, we find that modeling a sentence with standard LSTM does not perform well on this target-dependent task. Integrating target information into LSTM can significantly improve the classification accuracy.

The Approach

We describe the proposed approach for target-dependent sentiment classification in this section. We first present a basic long short-term memory (LSTM) approach, which models the semantic representation of a sentence without considering the target word being evaluated. Afterwards, we extend LSTM by considering the target word, obtaining the Target-Dependent Long Short-Term Memory (TD-LSTM) model. Finally, we extend TD-LSTM with target connection, where the semantic relatedness of the target with its context words is incorporated.

Long Short-Term Memory (LSTM)

In this part, we describe a long short-term memory (LSTM) model for target-dependent sentiment classification. It is a basic version of our approach. In this setting, the target to be evaluated is ignored, so that the task is considered in a target-independent way. We use LSTM as it is a state-of-the-art performer for semantic composition in the area of sentiment analysis (Li et al., 2015a). It is capable of computing the representation of a longer expression (e.g. a sentence) from the representations of its children with multiple levels of abstraction. The sentence representation can naturally be considered as the feature to predict the sentiment polarity of the sentence. Specifically, each word is represented as a low-dimensional, continuous and real-valued vector, also known as a word embedding (Bengio et al., 2003; Mikolov et al., 2013; Pennington et al., 2014).

Figure 2: The target-connection long short-term memory (TC-LSTM) model for target-dependent sentiment classification, where w stands for a word in a sentence of length n, {w_{l+1}, w_{l+2}, ..., w_{r−1}} are target words, v_target is the target representation, {w_1, w_2, ..., w_l} are preceding context words, and {w_r, ..., w_{n−1}, w_n} are following context words.
All the word vectors are stacked in a word embedding matrix L_w ∈ R^(d×|V|), where d is the dimension of the word vectors and |V| is the vocabulary size. In this work, we pre-train the values of the word vectors on a text corpus with embedding learning algorithms (Pennington et al., 2014) to make better use of semantic and grammatical associations of words.

We use LSTM to compute the vector of a sentence from the vectors of the words it contains; an illustration of the model is shown in Figure 1. LSTM is a kind of recurrent neural network (RNN), which is capable of mapping word sequences of variable length to a fixed-length vector by recursively transforming the current word vector w_t together with the output vector of the previous step, h_{t−1}. The transition function of a standard RNN is a linear layer followed by a pointwise non-linear layer such as the hyperbolic tangent function (tanh). However, standard RNNs suffer from the problem of vanishing or exploding gradients (Bengio et al., 1994; Hochreiter and Schmidhuber, 1997), where gradients may grow or decay exponentially over long sequences. Many researchers use a more sophisticated and powerful LSTM cell as the transition function, so that long-distance semantic correlations in a sequence can be better modeled. Compared with a standard RNN, the LSTM cell contains three additional neural gates: an input gate, a forget gate and an output gate. These gates adaptively remember the input vector, forget previous history and generate the output vector (Hochreiter and Schmidhuber, 1997). The LSTM cell is calculated as follows:

i_t = σ(W_i · [h_{t−1}, w_t] + b_i)
f_t = σ(W_f · [h_{t−1}, w_t] + b_f)
o_t = σ(W_o · [h_{t−1}, w_t] + b_o)
g_t = tanh(W_g · [h_{t−1}, w_t] + b_g)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ g_t
h_t = o_t ⊙ tanh(c_t)

where ⊙ stands for element-wise multiplication, σ is the sigmoid function, and W_i, W_f, W_o with biases b_i, b_f, b_o are the parameters of the input, forget and output gates. After calculating the hidden vector of each position, we regard the last hidden vector as the sentence representation (Li et al., 2015a). We feed it to a linear layer whose output length is the number of classes, and add a softmax layer to output the probability of classifying the sentence as positive, negative or neutral. The softmax function is calculated as follows:

softmax_c = exp(x_c) / Σ_{k=1..C} exp(x_k)

where C is the number of sentiment categories.

Target-Dependent LSTM (TD-LSTM)

The aforementioned LSTM model solves target-dependent sentiment classification in a target-independent way. That is to say, the feature representation used for sentiment classification remains the same without considering the target words. Let us again take "I bought a new camera. The picture quality is amazing but the battery life is too short" as an example. The representations of this sentence with regard to picture quality and battery life are identical. This is evidently problematic, as the sentiment polarity labels towards these two targets are different. To take the target information into account, we make a slight modification to the aforementioned LSTM model and introduce a target-dependent LSTM (TD-LSTM) in this subsection. The basic idea is to model the preceding and following contexts surrounding the target string, so that contexts in both directions can be used as feature representations for sentiment classification. We believe that capturing such target-dependent context information can improve the accuracy of target-dependent sentiment classification.

Specifically, we use two LSTM neural networks, a left one LSTM_L and a right one LSTM_R, to model the preceding and following contexts respectively. An illustration of the model is shown in Figure 1. The input of LSTM_L is the preceding contexts plus the target string, and the input of LSTM_R is the following contexts plus the target string.
We run LSTM_L from left to right, and run LSTM_R from right to left. We favor this strategy as we believe that regarding the target string as the last unit can better utilize its semantics when using the composed representation for sentiment classification. Afterwards, we concatenate the last hidden vectors of LSTM_L and LSTM_R, and feed them to a softmax layer to classify the sentiment polarity label. One could also try averaging or summing the last hidden vectors of LSTM_L and LSTM_R as alternatives.

Target-Connection LSTM (TC-LSTM)

Compared with the LSTM model, target-dependent LSTM (TD-LSTM) makes better use of the target information. However, we think TD-LSTM is still not good enough because it does not capture the interactions between the target word and its contexts. Furthermore, a person asked to do target-dependent sentiment classification will select the relevant context words which are helpful to determine the sentiment polarity of a sentence towards the target. Based on the considerations mentioned above, we go one step further and develop a target-connection long short-term memory (TC-LSTM). This model extends TD-LSTM by incorporating a target-connection component, which explicitly utilizes the connections between the target word and each context word when composing the representation of a sentence. An overview of TC-LSTM is illustrated in Figure 2.

The input of TC-LSTM is a sentence consisting of n words {w_1, w_2, ..., w_n} and a target string t occurring in the sentence. We represent the target t as {w_{l+1}, w_{l+2}, ..., w_{r−1}}, because a target can be a word sequence of variable length, such as "google" or "harry potter". When processing a sentence, we split it into three components: target words, preceding context words and following context words. We obtain the target vector v_target by averaging the vectors of the words it contains, which has been proven to be simple and effective in representing named entities (Socher et al., 2013a; Sun et al., 2015). When computing the hidden vectors of the preceding and following context words, we use two separate long short-term memory models, similar to the strategy used in TD-LSTM. The difference is that in TC-LSTM the input at each position is the concatenation of the word embedding and the target vector v_target, while in TD-LSTM the input at each position only includes the embedding of the current word. We believe that TC-LSTM can make better use of the connection between the target and each context word when building the representation of a sentence.

Model Training

We train LSTM, TD-LSTM and TC-LSTM in an end-to-end way in a supervised learning framework. The loss function is the cross-entropy error of sentiment classification:

loss = − Σ_{s∈S} Σ_{c=1..C} P^g_c(s) · log P_c(s)

where S is the training data, C is the number of sentiment categories, s denotes a sentence, P_c(s) is the probability of predicting s as class c given by the softmax layer, and P^g_c(s) indicates whether class c is the correct sentiment category, whose value is 1 or 0. We take the derivative of the loss function through back-propagation with respect to all parameters, and update the parameters with stochastic gradient descent.
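A compact way to make the TC-LSTM architecture concrete is a forward-pass sketch. The PyTorch code below is our illustrative reconstruction, not the authors' implementation: tensor shapes, dimensions and variable names are assumptions. Dropping the concatenated target vector (input size d instead of 2d) recovers TD-LSTM, and nn.CrossEntropyLoss applies the softmax internally during training.

```python
import torch
import torch.nn as nn

class TCLSTM(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 100, n_classes: int = 3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        # Input at each step: word embedding concatenated with v_target
        self.lstm_l = nn.LSTM(2 * dim, dim, batch_first=True)
        self.lstm_r = nn.LSTM(2 * dim, dim, batch_first=True)
        self.out = nn.Linear(2 * dim, n_classes)

    def forward(self, left_ids, right_ids, target_ids):
        # v_target: average of the target-word embeddings
        v_target = self.emb(target_ids).mean(dim=1)              # (B, d)

        def run(lstm, ids):
            x = self.emb(ids)                                    # (B, T, d)
            t = v_target.unsqueeze(1).expand_as(x)               # (B, T, d)
            _, (h, _) = lstm(torch.cat([x, t], dim=-1))
            return h[-1]                                         # last hidden

        h_l = run(self.lstm_l, left_ids)    # preceding context + target
        h_r = run(self.lstm_r, right_ids)   # following context + target,
                                            # fed in reversed word order
        return self.out(torch.cat([h_l, h_r], dim=-1))           # class scores
```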
Experiment

We apply the proposed method to target-dependent sentiment classification to evaluate its effectiveness. We describe the experimental setting and empirical results in this section.

Experimental Settings

We conduct experiments in a supervised setting on a benchmark dataset (Dong et al., 2014). Each instance in the training/test set has a manually labeled sentiment polarity. The training set contains 6,248 sentences and the test set contains 692 sentences. The percentages of positive, negative and neutral instances in both the training and test sets are 25%, 25% and 50%, respectively. We train the model on the training set, and evaluate the performance on the test set. Evaluation metrics are accuracy and macro-F1 score over the positive, negative and neutral categories (Manning and Schütze, 1999; Jurafsky and Martin, 2000).

Comparison to Other Methods

We compare with several baseline methods. In SVM-indep, an SVM classifier is built with target-independent features, such as unigrams, bigrams, punctuation, emoticons, hashtags, and the numbers of positive or negative words in the General Inquirer sentiment lexicon. In SVM-dep, target-dependent features (Jiang et al., 2011) are also concatenated into the feature representation. In Recursive NN, a standard recursive neural network is used for feature learning over a transformed target-dependent dependency tree (Dong et al., 2014). AdaRNN-w/oE, AdaRNN-w/E and AdaRNN-comb are different variations of the adaptive recursive neural network (Dong et al., 2014), whose composition functions are adaptively selected according to the inputs. In Target-dep, an SVM classifier is built based on rich target-independent and target-dependent features (Vo and Zhang, 2015). In Target-dep+, sentiment lexicon features are further incorporated.

The neural models developed in this paper are abbreviated as LSTM, TD-LSTM and TC-LSTM, as described in the previous section. We use 100-dimensional Glove vectors learned from Twitter, initialize the parameters from the uniform distribution U(−0.003, 0.003), set the clipping threshold of the softmax layer to 200 and set the learning rate to 0.01.

Experimental results of the baseline models and our methods are given in Table 1. Comparing SVM-indep and SVM-dep, we find that incorporating target information improves the classification accuracy of a basic SVM classifier. AdaRNN performs better than feature-based SVM by making use of dependency parsing information and tree-structured semantic composition. We find that Target-dep is a strong performer even without using lexicon features; it benefits from rich automatic features generated from word embeddings. Among the LSTM-based models described in this paper, the basic LSTM approach performs worst. This is not surprising, because this task requires understanding target-dependent text semantics, while the basic LSTM model does not capture any target information and therefore predicts the same result for different targets in a sentence. TD-LSTM obtains a big improvement over LSTM when target signals are taken into consideration. This result demonstrates the importance of target information for target-dependent sentiment classification. By incorporating the target-connection mechanism, TC-LSTM obtains the best performance and outperforms all baseline methods in terms of classification accuracy. Comparing Target-dep+ and Target-dep, we find that sentiment lexicon features can further improve the classification accuracy. Our final model TC-LSTM, without using sentiment lexicon information, performs comparably with Target-dep+. We believe that incorporating lexicon information into TC-LSTM could bring further improvement; we leave this as future work.

Effects of Word Embeddings

It is well accepted that a good word embedding is crucial to composing a powerful text representation at a higher level. We therefore study the effects of different word embeddings on LSTM, TD-LSTM and TC-LSTM in this part.
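Before turning to the embedding comparison, note that the evaluation metrics above (accuracy and macro-F1 over the three classes) can be computed with scikit-learn; the label vectors below are toy values, not the benchmark data:

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 2, 1, 0, 2, 2]   # 0 = negative, 1 = positive, 2 = neutral
y_pred = [0, 1, 2, 1, 1, 0, 2, 2]

print("accuracy:", accuracy_score(y_true, y_pred))
print("macro-F1:", f1_score(y_true, y_pred, average="macro"))
```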
Since the benchmark dataset from Dong et al. (2014) comes from Twitter, we compare sentiment-specific word embeddings (SSWE) and Glove vectors (Pennington et al., 2014). All these word vectors are 50-dimensional and learned from Twitter. SSWE_h, SSWE_r and SSWE_u are embeddings produced by different learning algorithms. SSWE_h and SSWE_r learn word embeddings by using only the sentiment of sentences, while SSWE_u takes into account both the sentiment of sentences and the contexts of words. We compare SSWE_h, SSWE_r, SSWE_u and Glove vectors. From Figure 3, we find that SSWE_h and SSWE_r perform worse than SSWE_u, which is consistent with the results reported for target-independent sentiment classification of tweets. This shows the importance of context information for word embedding learning, as both SSWE_h and SSWE_r do not encode any word contexts. Glove and SSWE_u perform comparably, which indicates the importance of global context for estimating a good word representation. In addition, the target-connection model TC-LSTM performs best for any given word embedding.

We also compare Glove vectors with different dimensions (50/100/200). Classification accuracy and time cost are given in Figure 3 and Table 2, respectively.

Table 2: Time cost of each model with Glove vectors of different dimensions.

Model      50 dims   100 dims   200 dims
LSTM          27        95         329
TD-LSTM       20        93         274
TC-LSTM       65       280       1,165

We find that 100-dimensional word vectors perform better than 50-dimensional word vectors, while 200-dimensional word vectors do not show significant improvements. Furthermore, TD-LSTM and LSTM have similar time costs, while TD-LSTM achieves higher classification accuracy as target information is incorporated. TC-LSTM performs slightly better than TD-LSTM, at the cost of longer training time, because TC-LSTM has more parameters.

Case Study

In this section, we explore to what extent the target-dependent LSTM models, TD-LSTM and TC-LSTM, improve the performance of the basic LSTM model.

Table 3: Examples drawn from the test set whose polarity labels are incorrectly inferred by LSTM but correctly predicted by both TD-LSTM and TC-LSTM. For each example, target words are in bold, "gold" is the ground truth and "LSTM" is the label predicted by the LSTM model.

Example                                                                 gold  LSTM
i heard ShannonBrown did his thing in the lakers game!! got ta love him   0    1
Hey google, thanks for all these great Labs features on Chromium,
but how about "Create Application Shortcut"?!                              1    0

In Table 3, we list some examples whose polarity labels are incorrectly inferred by LSTM but correctly predicted by both TD-LSTM and TC-LSTM. We observe that the LSTM model tends to assign the polarity of the entire sentence while ignoring the target to be evaluated. TD-LSTM and TC-LSTM take target information into account to some extent. For example, in the second example the opinion holder expresses a negative opinion about his work, but holds a neutral sentiment towards the target "lindsay lohan". In the last example, the whole sentence expresses a neutral sentiment while it holds a positive opinion towards "google".

We analyse the error cases that both TD-LSTM and TC-LSTM cannot handle well, and find that 85.4% of the misclassified examples relate to the neutral category. Positive instances are rarely misclassified as negative, and vice versa.
An example of such errors is: "freaky friday on television reminding me to think wtf happened to lindsay lohan, she was such a terrific actress , + my huge crush on haley hudson.", which is incorrectly predicted as positive towards the target "lindsay lohan" by both TD-LSTM and TC-LSTM.

Discussion

In order to capture the semantic relatedness between target and context words, we extend TD-LSTM by adding a target-connection component. One could also try other extensions to capture the connection between target and context words. For example, we also tried an attention-based LSTM model, which is inspired by the recent success of attention-based neural networks in machine translation (Bahdanau et al., 2015) and document encoding (Li et al., 2015b). We implement the soft-attention mechanism (Bahdanau et al., 2015) to enhance TD-LSTM. We incorporate two attention layers, for the preceding LSTM and the following LSTM respectively. The output vector of each attention layer is the weighted average of the hidden vectors of the LSTM, where the weight of each hidden vector is calculated with a feedforward neural network. The outputs of the preceding and following attention models are concatenated and fed to a softmax layer for sentiment classification. However, we could not obtain better results with such an attention model. The accuracy of this attention model is slightly lower than that of the standard LSTM model (around 65%), which means that the attention component has a negative impact on the model. A potential reason might be that the attention-based LSTM has a larger number of parameters, which cannot easily be optimized on a small corpus.

Related Work

We briefly review existing studies on target-dependent sentiment classification and neural network approaches for sentiment classification in this section.

Target-Dependent Sentiment Classification

Target-dependent sentiment classification is typically regarded as a kind of text classification problem in the literature. Therefore, standard text classification approaches such as feature-based Support Vector Machines (Pang et al., 2002; Jiang et al., 2011) can naturally be employed to build a sentiment classifier. Despite the effectiveness of feature engineering, it is labor-intensive and unable to discover the discriminative or explanatory factors of the data. To handle this problem, some recent studies (Dong et al., 2014; Vo and Zhang, 2015) use neural network methods and encode each sentence in a continuous and low-dimensional vector space without feature engineering. Dong et al. (2014) transform the dependency tree of a sentence into a target-specific recursive structure, and obtain higher-level representations based on that structure. Vo and Zhang (2015) use rich features including sentiment-specific word embeddings and sentiment lexicons. Different from previous studies, the LSTM models developed in this work are purely data-driven, and do not rely on dependency parsing results or external sentiment lexicons.

Neural Networks for Sentiment Classification

Neural network approaches have shown promising results on many sentence/document-level sentiment classification tasks (Socher et al., 2013b). The power of neural models lies in their ability to learn continuous text representations from data without any feature engineering. For sentence/document-level sentiment classification, previous studies mostly take two steps. They first learn continuous word vector embeddings from data (Bengio et al., 2003; Mikolov et al., 2013; Pennington et al., 2014).
Afterwards, semantic compositional approaches are used to compute the vector of a sentence/document from the vectors of its constituents, based on the principle of compositionality (Frege, 1892). Representative compositional approaches to learning sentence representations include recursive neural networks (Socher et al., 2013b; Irsoy and Cardie, 2014), convolutional neural networks (Kalchbrenner et al., 2014; Kim, 2014), long short-term memory (Li et al., 2015a) and tree-structured LSTM (Tai et al., 2015; Zhu et al., 2015). There also exist some studies focusing on learning continuous representations of documents (Le and Mikolov, 2014; Bhatia et al., 2015; Yang et al., 2016).

Conclusion

We develop target-specific long short-term memory models for target-dependent sentiment classification. The approach captures the connection between the target word and its contexts when generating the representation of a sentence. We train the model in an end-to-end way on a benchmark dataset, and show that incorporating target information boosts the performance of a long short-term memory model. The target-dependent LSTM model obtains state-of-the-art classification accuracy.
2016-09-29T09:40:39.000Z
2015-12-03T00:00:00.000
{ "year": 2016, "sha1": "0b0dc14b8a8dcccbfc62a38355fff2f6a361e9d2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "745426dc79c3a86f51f56ab8df6930694112613f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
225726700
pes2o/s2orc
v3-fos-license
Design and Implementation of IoT Based Remote Laboratory for Sensor Experiments

This article describes the design and implementation of a remote laboratory for learning sensor-based experiments and their applications using embedded systems and an Internet of Things (IoT) platform. The main objective of this remote laboratory is to enhance the learning on sensors in engineering education and in dealing with industrial automation applications. With the growing use of IoT platforms for automation, the proposed system can monitor the sensor data and allows the learner to work from anywhere and at any time using an Android mobile application. Thus, learners can develop knowledge of sensors and of the control algorithms required in the automation industries, and then deploy them on real industrial automation modules.

Introduction

Experimentation is a very useful and necessary component of learning science and technology. Traditional laboratories in universities, institutes and industrial/research centers are used for this purpose, but their capacity is limited compared to the number of aspiring learners. As developing such laboratory facilities requires substantial human and economic resources, remote laboratories may provide an acceptable alternative for learners who are deprived of direct laboratory access.

The concept of the remote laboratory was introduced in 1999. Since then, several researchers have proposed and developed various types of remote laboratories. In 2004, a remote laboratory for automatic control was developed at the University of Siena [1].

Human effort is reduced when processes are automated; automating laboratory resources reduces the need to work in a traditional laboratory [2,3]. Active learning by means of remote laboratories is especially valuable for distance education students and learners in the workplace [4]. An efficient complementary m-learning tool can take a whole new approach to the teaching and learning process itself [5,6]. Further, the attempt will help in achieving a futuristic model of the laboratory using IoT.

Embedded systems play an important role in IoT due to their unique features, such as real-time computing, and hence are increasingly used as an effective platform for industrial automation [7]. They provide a strong communication and computing platform for industrial machinery and sensors. In industrial automation, proximity sensors [8] play an important role in various applications such as position control, conveyor system control, process control, robotic welding and machine control. There are several types of proximity sensors, such as inductive, capacitive, magnetic and light sensors. For illustration purposes, experiments using an inductive sensor are described. This work designs and implements a remote laboratory to study the working of proximity inductive sensors [9] and their applications in industrial automation [10].

The paper is divided into five sections. In the remaining part, Sections 2 and 3 describe the system architecture and the details of the implementation, respectively. Section 4 provides the results of the remote laboratory experiments, and conclusions are drawn in Section 5.

System Architecture

The basic requirements for the implementation of a remote laboratory are a physical laboratory, an embedded system and an IoT platform. The general architecture can be represented as shown in Fig. 1. For the present work, the selected architecture for the implementation of the sensor laboratory is shown in Fig. 2.
The system is capable of communicating with the learner's Android mobile application from anywhere and at any time. The details of the implementation are described below.

Physical laboratory

The physical laboratory consists of experimental setups to study the working of sensors and their applications in industrial automation. The experimental stations are as follows:

• Station 1 consists of a purpose-built testing rig to study the different types of proximity sensors. For illustration purposes, a proximity inductive sensor is considered.
• Station 2 consists of an application setup for using sensors in industrial automation. For illustration purposes, a sorting system with a sensor testing unit is considered.

Embedded system

The embedded system mainly consists of a master card, a slave card and a relay card used to connect the microcontrollers to the physical laboratory.

• Master card: An ARM STM-32 microcontroller is used as the master card. Using the three available UARTs on the master card, communication is established with the slave card, the Wi-Fi module and the HMI display, respectively.
• Slave card: An ARM STM-32 microcontroller is used as the slave card. A block of relays connects the slave card to the physical experimental setup in order to control it.
• I²C communication protocol: The I²C communication protocol is used between the EEPROM and the master card to store the device password, Wi-Fi address, server address and port number.
• HMI display: The HMI display gives the status of the inputs and outputs of the experimental setup to the learner. Using the HMI touch panel, inputs can be given and outputs can be observed at the physical laboratory.

IoT platform

The Message Queuing Telemetry Transport protocol (MQTT) is an IoT connectivity protocol. The IoT platform consists of the following module:

• Wi-Fi module: An ESP12E Wi-Fi module is used to provide the SSID (service set identifier) and network key. With the MQTT protocol, the learner can select the experiment with the help of the Android mobile application.

Experimental Setup

The experimental setup is implemented in two stations: Station 1 and Station 2.

Station 1 - Proximity inductive sensor

The objective of this station is to design and develop a working proximity sensor testing rig. Various types of proximity sensors are used for detecting the presence or absence of an object. An inductive proximity sensor is a well-known device used for detecting the presence of a metallic (conducting) target. Fig. 3 shows a diagrammatic illustration of the sensor testing rig for inductive proximity sensors.

Fig. 3. Sensor testing rig

The sensor testing rig consists of inductive proximity sensors and different standard target materials such as metal, rubber, glass, wood and plastic, all of dimension 40×20×1 mm. These target materials are preloaded in the indexing unit, which is controlled by an indexing motor (IM).
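Before walking through the operation of the two stations, the MQTT link described above can be illustrated with a few lines of Python using the paho-mqtt client. The broker address, topic names and payloads below are invented for illustration; the paper does not specify them, and the real firmware runs on the STM-32/ESP12E hardware rather than in Python.

```python
import paho.mqtt.client as mqtt  # paho-mqtt 1.x-style client API

BROKER = "lab.example.org"           # assumed broker address
TOPIC_CMD = "rl/station1/switch"     # app -> lab: start command (assumed)
TOPIC_STATUS = "rl/station1/status"  # lab -> app: sensor status (assumed)

def on_message(client, userdata, msg):
    if msg.payload == b"ON":
        # ...drive the indexing motor via the slave-card relays here...
        client.publish(TOPIC_STATUS, "indexing unit: running")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC_CMD)
client.loop_forever()
```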
When the remote learner turns ON the switch using the Android RL mobile application, the signal is detected by the IoT embedded system and the preloaded program is executed. A signal then excites the IM, which starts and moves to the reference point near the proximity inductive sensor. After reaching the reference position, the indexing unit rotates for one revolution or until a target is detected as metal. Once a metal target element is detected in the indexing unit, the indexing unit stops rotating and the corresponding signal is sent to the mobile application screen. In case no metal target is found in the indexing unit, the indexing unit completes one revolution and stops.

Station 2 - Sorting of metal and non-metal elements for industrial application

The objective of this station is to understand the working of sorting systems in industry. Automated sorting systems can speed up the production process and give higher throughput rates. The metal/non-metal sorting process consists of a testing unit equipped with inductive, light and capacitive sensors, as shown in Fig. 4. When the input switch is turned ON, the conveyor motor starts and one test element is pushed from the stack magazine (which is preloaded with test materials), so that the object moves along the conveyor. When a test material (metal or non-metal element) moves near the capacitive sensor, the sensor detects the arrival of the material. The separation of metal and non-metal elements is then carried out by the inductive sensor. Inductive sensors detect only conductive materials and work on the principle of electromagnetic induction. Fig. 5 shows the testing unit with the different sensors used in the sorting system.

If the test material is detected as metal by the inductive sensor, the conveyor moves in the forward direction; the material is picked and dumped in Bin1. If the test material is non-metal, the conveyor moves in the backward direction, and the material is picked and dumped in Bin2. The process is repeated until all the test materials are sorted.

Results and Discussion

The embedded system with the IoT-based kit for the sensor laboratory is shown in Fig. 6. The input lines (from sensors or switches) and output lines (to motors or relays) are connected to the slave card. The slave card transmits these signals to the master card. The master card processes the data and communicates with the IoT platform and the slave card. The details of the experimental results are discussed below.

When the input (start signal) is given to the switch through the RL mobile application by the learner, the indexing unit moves to the reference position with the help of the index motor connected to the kit. The indexing unit starts rotating from the reference position, and when the inductive sensor detects a metal element, the motor stops rotating. The stopping of the indexing unit before completing one rotation indicates that the test material is detected as a metal. This is indicated to the learner in the RL mobile application. Fig. 7 shows the implementation of the proximity sensor testing rig.

Fig. 7. Implementation of Station 1 - Proximity sensor testing rig

As Fig. 8 shows, when the input is given to the switch, IRL1 = ON (RED), the index motor output ORL1 = ON (RED) and the indexing unit moves to the reference position. The workstation can be used for the demonstration of other types of proximity sensors in a similar way.
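The station-2 sorting decision demonstrated next can be summarized in a short sketch. The dictionaries and helper below are hypothetical stand-ins for the relay I/O, not code from the paper.

```python
def sort_elements(elements):
    """Route each element to Bin1 (metal, conveyor forward) or
    Bin2 (non-metal, conveyor reverse), as in Station 2."""
    bins = {"Bin1": [], "Bin2": []}
    for element in elements:             # pushed one by one from the magazine
        if element["inductive"]:         # inductive sensor: True only for metal
            bins["Bin1"].append(element["name"])   # conveyor forward
        else:
            bins["Bin2"].append(element["name"])   # conveyor reverse
    return bins

stack = [{"name": "steel", "inductive": True},
         {"name": "plastic", "inductive": False}]
print(sort_elements(stack))  # {'Bin1': ['steel'], 'Bin2': ['plastic']}
```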
Station 2 - Sorting of metal and non-metal elements for industrial application

Station 2 is developed for sorting metal and non-metal elements. When the input (start signal) is given to the kit shown in Fig. 10, one test material is pushed from the stack magazine onto the conveyor belt. The conveyor belt moves it past the testing unit, which has two sensors: capacitive and inductive. The capacitive sensor detects the arrival of a test material (whether metal or non-metal), and the corresponding indicator in the RL mobile application is turned ON. If the test material is a metal, the inductive sensor detects it, the corresponding indicator is ON, and the conveyor belt moves in the forward direction and stops. After checking the inductive sensor indicator, the test material is picked and dumped in Bin1. The stack magazine then pushes the second test material and the process is repeated. If the test material is non-metal, the inductive sensor indicator is checked and, as it is now OFF, the conveyor moves in the reverse direction and the test material is picked up and dumped in Bin2. The next test material is then taken from the stack magazine, and the process is repeated until all the test materials are sorted. As Fig. 13 shows, when a non-metal element passes on the conveyor, the inductive sensor reads IRL2 = OFF (GREEN) while the conveyor output ORL1 = ON (RED); the conveyor reaches the end, then moves in the reverse direction and stops, and the material is picked into Bin2. Hence the sorting of metal and non-metal elements is carried out.

Conclusion

The work presented in this article describes the concept of an IoT-based remote laboratory for sensor experiments. The sensor module is used for monitoring and controlling various parameters of industrial automation. The experimental results reported are encouraging. Such training and knowledge may lead to the design and development of more innovative automation systems.

Fig. 8. Output of Station 1 - Proximity sensor working in the mobile application
Fig. 9. Output of Station 1 - When the proximity inductive sensor detects a metal element in the indexing unit
2020-06-18T09:06:19.628Z
2020-06-17T00:00:00.000
{ "year": 2020, "sha1": "373e58bf43ad26bbad7f4aabb3143ecd7a2a9921", "oa_license": "CCBY", "oa_url": "https://online-journals.org/index.php/i-jim/article/download/13991/7187", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "93a861dceab79eb2f6cb5dd6916c07a842148b0a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
58856492
pes2o/s2orc
v3-fos-license
Bi-directional immuno-modulation by Matrix Metalloproteinase-7 (MMP-7) and A Disintegrin And Metalloproteinase-17 (ADAM-17) as transplantation rejection-tolerance spectrum

Theoretically, Matrix Metalloproteinase-7 (MMP-7) leads to allograft rejection, and A Disintegrin and Metalloproteinase-17 (ADAM-17) results in allograft tolerance. The research proposal utilizes the animal model of knock-out mice to perform transplant surgery and then detect or measure allograft rejection by selected serum biomarkers and tissue typing. Comparisons will be made between knock-out mice, wild-type mice, and wild-type mice treated with proteinase inhibitors. Methodological and theoretical details will be elucidated and revised as the research goes on. The long-term goal is immuno-modulator drugs that not only suppress but also enhance immunity in transplantation: we could quantify post-transplant patients' immunity status and adjust immunity levels bi-directionally whenever rejection or infection occurs.

The planned steps are:
1. Surgery: transplantation of a specified organ, such as kidney, heart, aorta, or lungs;
2. Drafting of immune synapse networks from MMP-7 and ADAM-17;
3. Measurement of rejection by serum marker and tissue typing;
4. Repetition of the above steps on mice treated with protease inhibitors;
5. Repetition of the whole experiment for replicability and reproducibility;
6. Results interpretation and report writing.

Work plan

The research will be carried out with the mentorship of Dr. Wei-Hsuan Yu, PhD, Laboratory of Matrix Biology, Institute of Biochemistry and Molecular Biology, College of Medicine, National Taiwan University, Taipei, Taiwan.
1. Decision of which organ to transplant in mice;
2. Decision of which serum marker and tissue typing to use;
3. Repetition of the whole experiment for replicability and reproducibility;
4. Report writing and publication.

Details for replicability and reproducibility

To achieve replicability and reproducibility, we will repeat the same surgery, sample collection, lab analysis, and statistical analyses on at least two individual mice. If time and funding permit, we will repeat the experiment on more mice. Experimental data obtained from our animal model and degradomics will be compared and explored with the use of MatrixDB (Launay et al. 2015).

Expected results and impact
2018-12-07T16:32:55.763Z
2016-05-26T00:00:00.000
{ "year": 2016, "sha1": "6ed45953fe06ae1a251dad183a83035729f0a222", "oa_license": "CCBY", "oa_url": "https://riojournal.com/article/9268/download/pdf/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6ed45953fe06ae1a251dad183a83035729f0a222", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Computer Science" ] }
145549959
pes2o/s2orc
v3-fos-license
Queering School, queers in school: An introduction

Queer studies of education have become a growing field with a range of theoretical and political positions and methodological approaches. Moreover, research with lesbian, gay, bisexual, transgender and queer (LGBTQ) kids is tightly connected to anti-homophobia, anti-transphobia and norm-critical activism. One of the key contentions within this field is what researchers and activists mean by "queer" in the context of education: is it a focus on queer/ed subjectivities? Is it about using queer theories to critique forms and norms of education in a given sociopolitical context? Who is queer/ed in schools? Is the language of homophobia and transphobia the best or even correct way to describe and analyse normative educational settings and frameworks? The ways in which queer education activists and researchers address normative school settings vary, but many are driven by hope for survival and better times. Education researchers Susan Talburt and Mary Lou Rasmussen have opened up for a serious evaluation of what they read as a "restorative agenda" in queer studies of education, questioning:

... the very repetitions we were struggling with: a relentless search for 'agency', a belief in pedagogical improvements to encourage diverse gendered and sexual subjectivities, and ideas of a future made better by new imaginings. 1

What Talburt and Rasmussen point out are the problems of a deep-rooted belief in change for the better that is based on the individual instead of on systemic changes. We learn from them to argue that such hopes for a future, which can take us towards experiences of education less pointedly marked by practices of exclusion, certainly require critical reflection and theoretical challenges. At the same time, we cannot do without those local interventions, albeit short-term, that are necessary just there, just then. One of the questions that remain is how we can build lasting conversations between these spaces. A participant in one of the editors' studies challenged her to organise a conference "to bring us all together." With this issue, we are attempting to be part of that conversation, and to pass on that challenge.

In this issue of Confero, we highlight both ethnographic investigations of queer and queered kids in school and critical views of schools' policy making and normative frameworks. Queer education research is a rapidly growing area of study. Where researchers and activists insist on the entanglements between, not least, sexual, gendered and racialised structural formations, we also insist on our expectation that principal values in schools meet the increasing challenges from queer activism and research. 2
Reviewing previous studies in this field, it is notable that statistics show that queer/ed kids are at risk of harassment and violence, 3 and experience an increased risk for depression, drug use and suicidality. 4 Recent studies address both the experiences discussed and the logic of victimhood inscribed. 5 In particular, several studies in North America discuss initiatives for creating safe schools or safe units within schools, with student support groups and the so-called gay-straight or queer-straight alliances as the most well-known and well documented. 6 Although these studies suggest that the presence of a gay-straight alliance is associated with less homophobic harassment, little is known about the causality. Are these groups prohibiting homophobic and transphobic harassment, or is it a less homophobic and transphobic environment that is required for a gay-straight alliance to be initiated? Other researchers argue that such initiatives, while important respites, are not much more than "band-aids" in contexts that eschew more structural changes. 7 Some call for other interventions to address heteronormative and cisnormative cultures in schools, such as incorporating LGBTQ issues in teacher education 8 or school counselling. 9 An important intervention in this debate is to fundamentally question the logic of queer kids as victims - and therefore subjects - of homophobia and transphobia. Instead, it is necessary to analyse processes of subjectivation through heteronormativity and cisnormativity in the context of education in schools. 10

Besides a core focus on safe school environments, several previous studies engage with LGBTQ issues in relation to sexuality education. According to many of these studies, sexuality education most often teaches compulsory heterosexuality, 11 sometimes, and typically for North America, with an abstinence-only-until-marriage mission, 12 or a one-sided focus on heterosexual experiences and prevention of STDs in heterosexual intercourse, 13 leaving non-heterosexually identified pupils' experiences, questions and needs unspoken. Furthermore, research on school cultures, teacher education and school policy covers some of the questions queer education researchers address. 14 A crucial node for intellectual work on queer education would be to work through conceptualisations both of childhood and youth, and of identity formation/subjectivation. It becomes more than obvious that queer education studies reach far beyond heteronormative perceptions in which LGBTQ subjectivity is perceived as a minority. 15

Our special issue

When initiating this special issue, we had a double aim: to address queer people's everyday experiences of school and to focus on the theorization of queerness in education. We have been fortunate to gather research(ers) and activist work that highlight a broad and deep range of queer perspectives on school. Taken together, the articles provide an overview of how heteronormativity permeates schools, from the abstract prescriptions of legislations, pedagogical methods, social edginess in classrooms or school yards, to self-conceited straightness in textbooks, manuals and implements. The origins of these articles are found in Australia, Canada, Slovenia, Sweden and the US. We wish to further engage in a discussion on the geopolitics of queer issues, without assuming that there is one recipe for dealing with heterosexual normativity, as has been earlier discussed in Jasbir Puar's critique of homonationalism. 16
Indeed, the liberal idea of schools as a platform for life-long learning of tolerance, inclusion and anti-mobbing seems to resist the influences that queer and feminist theories have had both in research and in activism, which is discussed in several of the articles in this issue. 17

16 Puar, 2007. 17 Bromseth and Darj, 2010.

In "Taking homophobia's measure," Australian researcher Mary Lou Rasmussen analyses manuals employed in sexuality education in Australian and US schools, where homophobia is presumed to be a condition that can be measured on various scales. Rasmussen's exposition of various methods to handle homophobia indicates that they often pinpoint certain groups and classify archaic personality types. Following Rinaldo Walcott's argument that what we understand as 'homophobia' is still in question, Rasmussen queries these methods and the scientification of the scale as a model for measuring homophobia. Unlike many scholars, who usually point out the problem but leave the tools of implementation to practitioners, Rasmussen suggests alternative ways of discussing LGBTQ issues in school.

The second contribution to this special issue also engages with text analysis. While Rasmussen focuses on scales where homosexuality is 'othered', Swedish researcher Malin Ah-King's article, "Queering animal sexual behavior in biology textbooks," draws on an analysis of how animal sexual behaviour is depicted in biology textbooks, showing texts where non-heterosexuality is systematically ignored. Given that any biology school textbook must simplify the richness of sexuality in nature, it is striking how the textbooks continue to show such simplification through the lenses of human heterosexual and gender norms. As Ah-King points out, biology gives us knowledge about nature and thus impacts on our ideas of what is 'natural'. When non-heterosexuality is left unmentioned, the impression of its nonexistence is easily given.

Similarly, invisibility of non-heterosexuality is central in the third contribution to this issue. Switching focus from text analysis to lived experiences, Slovenian researcher Ana Sobočan's research on the situation in school for children with homosexual parents in Slovenia is built on a unique interview study. Since Slovenia joined the European Union as a member state, there has been new legislation recognising same-sex relationships. However, according to Sobočan this has had limited impact on the level of hate speech, ignorance and defamation that queer people experience. In fact, Sobočan notices what she coins "moral homophobes", who use the protection of children as an excuse to express homophobic attitudes. This fundamentalist view imposed on children reproduces the well-worn idea that LGBTQ people are incapable of transferring good values to children, which affects the political debate in Slovenia. Sobočan also discusses a generation gap between older and younger homosexual parents, noting that the younger generation is more active in claiming openness and education on LGBT issues; this is what Sobočan calls a "denormalization", and key to moving away from harassment and hatred.

Another piece that engages with lived experiences is US-American researcher Mel Freitag's article "A queer geography of a school: Landscapes of safe(r) spaces." A US school, known by reputation as the "gay school", is the context for Freitag's ethnographic fieldwork. Drawing on the experiences of youth and staff in this school, she discusses notions of safety and safe spaces.
Freitag discusses how queering a space can provide a safe(r) space, not only for queers themselves, but for straights as well. Despite the school's reputation, and the researcher's expectations, most of the pupils did not identify as LGBTQ. Rather, the school is described as an area where pupils are able to self-identify in a broad spectrum of sexuality and gender positions, or not to self-identify their gender or sexuality at all. A safe(r) space seems to be a space where identities are not limited to a repertoire of alternatives that have been established beforehand; rather, a much more fluid and dynamic lived experience is depicted. The safe(r) space thereby provides a richness far beyond the fixed stages of "tolerating" or "celebrating" homosexuality, as in the homophobia-measuring scales discussed by Rasmussen in this issue.

From the almost comforting feeling of following Freitag through the corridors of the so-called "gay school", the reader must be ready for an abrupt shift to take in the second US contribution, the position paper "Safety for K-12 students: United States policy concerning LGBT student safety must provide inclusion." April Sanders takes as her point of departure one of the most serious consequences of homophobia in schools, namely young queers' suicide following homophobic harassment. Sanders argues that US policy documents directing school organisation should and must address homophobic harassment. Statistics and examples of non-heterosexual youth being exposed to violence and harassment due to homophobia are employed to show this alarming situation that demands necessary political and policy changes.

The final article in this issue shares with Sanders an activist point of departure. Rachel Epstein, Becky Idems and Adinne Schwartz are LGBTQ activists from Canada. Their contribution "Queer spawn on school" engages with school experiences of children with LGBTQ parents. 18 The authors show how homophobia affects those who are culturally queer, i.e. those growing up with non-heterosexual parents, regardless of whether they are emotionally queer or not. It is a gloomy read to take part in children's and teenagers' experiences of being bullied. However, it is also encouraging to hear queer spawn speak up about their obstacles, within the context of research. During the late 20th century, children in non-heterosexual (mainly lesbian) families were the subjects of interest in several studies. Specific experiences of these children, or any deviation from other children and youth, were however most often played down in these early studies, partly because an overt focus on difficulties was seen as a risk of feeding homophobes with arguments against queer families. With Epstein, Idems and Schwartz's text, queer spawn are able to speak in their own right, demonstrating a political and societal advancement of non-heterosexual families in Canada - and possibly encouraging further developments that are to come.

Working with this special edition has been an enormous pleasure for us. Thanks to the authors for their fierceness in activism and intellectual astuteness! We hope that the conversations in this issue can contribute to ongoing debates and challenges in education research and in schools.

Taking homophobia's measure

Mary Lou Rasmussen

To make the claim that there is not a universalized form of homophobia might strike some as strange. In fact, it might strike others as even stranger that what constitutes homophobia in one geopolitical space does not translate seamlessly to another geopolitical space.
And if homophobia is in question, the what and the how of the idea of homosexuality are also in question. - Walcott, 2010

My focus in this article is on the topic of homophobia and its place in the sexuality education classroom in Australia and the United States (US). This paper draws on research in anthropology, 1 law, 2 and studies of gender and sexuality 3 in an attempt to complicate predominantly psychological understandings of homophobia that may underscore the popular use of scales to measure homophobic attitudes in pre-service and in-service teachers. These interdisciplinary approaches to homophobia provide the basis for a critical reading of some contemporary pedagogical approaches to anti-homophobia education in diverse education contexts.

1 Murray, 2009. 2 Monk, 2011. 3 Butler, 1999; Hooghe, Dejaeghere, Claes and Quintelier, 2010; Hooghe, Claes, Harell, Quintelier and Dejaeghere, 2010; Puar, 2007, 2012; Walcott, 2010.

Clearly, Australia and the US provide different contexts in which to understand the place of homophobia in education. The concern of how to address problems related to homophobia and heterosexism in education has been more fraught in the US context than in the Australian context, where states have generally endorsed some form of comprehensive sexuality education. 4 This is not to say that homophobia is not seen as an issue in the Australian context, though attempts to address homophobia in teacher education and university education have not been confronted with as much organized resistance as in the US context. 5 It is also true to say that in both the US and Australia the question of how to deal with homophobia, and resistance to the inclusion of issues related to diverse genders and sexualities, has not been uniform. 6

In sexuality education it is often taken as read that homophobia is problematic, and the focus becomes ways in which to intervene against the reproduction of homophobic attitudes. 7 As a consequence, strategies are devised and implemented to help students and teachers become less homophobic. 8 Teachers and students who refuse this help may be seen as ineffective or a 'problem' in the battle against homophobia. 9 Those who stand up and confront homophobia are lauded. 10 Some of the resources I discuss below are illustrative of how Australians working to combat homophobia in diverse education contexts have sought to craft US scales so they are fit for purpose in the Australian context. 11

However, if what we understand to be homophobia is in question, as Walcott suggests, what does this mean for some of the tools used in anti-homophobia education? In this article I aim to consider how scales that measure homophobia 12 (a common tool deployed in anti-homophobia education in Australia and the US) might be read against the proposition that what we understand homophobia to be is still in question. In the first section of this paper I look at research from psychology, education, and sexuality studies in the US and Australia that attempts to situate homophobia on different scales.
My focus is on the conditions of possibility that have brought three particular scales into being: Daniel Witthaus' adaptation of Betty Berzon's classification of homophobic types for use in workshops (in and outside of schools in rural and regional Australia); Ollis' pedagogical use of Riddle's Scale of Attitudes in a national Sexuality Education Resource produced in Victoria, Australia; and Zack, Mannheim and Alfano's classification of archetypal responses to homophobic rhetoric, for use in teacher education in the United States.

My critique of these scales should not be read as a disavowal of the problem of homophobic bullying. I appreciate that for some young people experiences of homophobia are profound, frequent and devastating. Rather, my focus is on how particular truisms have developed about homophobia, and its treatment, manifest in scales organized to measure levels of homophobia in particular groups. It is these understandings that I want to complicate in this article.

Following on from an analysis of scales that have been developed to measure homophobia, I move to a consideration of the logics that underpin these scales. How is homophobia being interpreted in these scales? What is the relationship between anti-homophobia education and post-homophobic imaginings? How does homophobia intersect with cultural and religious difference in these scales, and what does this mean for the continued use of scales that purport to measure homophobia? Finally, I turn to some other ways of theorizing homophobia that might prompt educators and researchers to think differently about the question of homophobia, and their use of scales that measure homophobia.

Scaling Homophobia

Homophobia is commonly associated with psychological understandings of sexuality. There are hundreds of studies that use scales to measure homophobia; the following studies are just a few examples. 13 The scales generally originate in psychology, and their history in the measurement of homophobia goes back to at least . 14 It is beyond the scope of this article to provide a detailed analysis of the formation of these scales; for a history of the logic underpinning the development and validation of homophobia scales in the discipline of psychopathology, see Wright, Adams and Bernat's Development and validation of the homophobia scale. 15

In this article my focus is on the pedagogical use of these scales to educate people in such a way that it may assist them to become less homophobic. I situate such a rationale for the use of scales in educational contexts alongside contemporary research that is critical of how homophobia is conceptualized and sometimes utilized as part of "progressive" educational agendas. As indicated by Debbie Ollis, an education researcher working in the Australian context, sexuality educators may employ scales of homophobia as tools to support them in developing educational spaces that they perceive to be more affirming of sexual diversity. Ollis argues that:

The successful pre-service and in-service teacher education programs which do exist have demonstrated a number of elements that have been seen to have promoted their success. These include a group-teaching model, seen as effective in developing the key skills of working together and communication (Thomas & Jones ; Walker et al.
); and questionnaires and rating scales (including Riddle's scale of attitudes) on participants' own reactions, designed to provoke self-reflection amongst participants (Levenson-Gingiss & Hamilton ; Thomas & Jones ; Ollis ). 16

For Ollis, the scales are a means to provoke students to reflect on their own thinking about diverse sexualities. The scales are also held to be particularly pedagogically persuasive because they enable pre-service and in-service teachers to measure their own attitudes and to see how these measures might change in comparison to other points on the scale. In their work with teachers, Ollis, Harrison and Maharaj advocate the use of Riddle's scale. 17

Dorothy Riddle, the developer of Riddle's scale, was a psychologist and a part of an American Psychological Association Task Force that effectively lobbied for the removal of homosexuality as a psychiatric disorder in the Diagnostic and Statistical Manual. The Riddle scale of attitudes was developed in the early 1970s, when Riddle was based at the City University of New York. 18 The first published version of the scale did not appear until much later. It is worth noting the context in which the Riddle Scale was developed; it is now nearly 40 years old, but researchers and educators in Australia and the US still see the scale as having applicability within and outside the US. 19 Let me be clear in stating that Ollis' decision to use the scale in her pedagogy is in many ways unremarkable. For instance, Gay & Lesbian Health Victoria, the peak body for lobbying on issues related to enhancing the health and well-being of Victoria's Gay, Lesbian, Bisexual, Trans and Intersex communities, also employs Riddle's scale in its professional development programs. 20 However, researchers in counselling psychology have questioned the value of such scales, arguing:

The long-standing theoretical assumption that heterosexual attitudes can be understood only along the unidimensional, bipolar continuum ranging from condemnation to tolerance (Herek, ) has been challenged by these findings. We speculate that these results are not only a function of the evolution of heterosexual attitudes since Herek's seminal work in the area but also reflect an increasing need and interest in the precision of measurement in this area. 21

While Worthington and colleagues seek to develop a more precise measurement building on the research of Herek, in this article I seek to question the drive to measure such attitudes - at least through the employment of scales which employ continuums.

18 See http://newsarchive.woodstockschool.in/Alumni/DistAlum/riddle.htm accessed 20 April 2013. 19 Hirschfield, 2001; Ollis, 2010; Ollis et al., 2013. 20 See http://www.glhv.org.au/files/Training_session_plan.pdf accessed 29 April 2013. 21 Worthington, Dillon and Becker-Schutte, 2005, p. 116.

Ollis has identified, and I would concur, that some teachers are reluctant to "recognise and affirm sexual diversity" in public schools, and she has developed a series of workshops to help teachers think about what might cause this reluctance. 22 The workshops, which were part of a national Talking Sexual Health program, also feature in a more recent resource, Sexuality Education Matters 23 (an online resource for Australian teacher educators 24), which aims

…to present teachers with an examination of a range of discourses that have operated to position sexual diversity in a constraining and negative way…These include discourses of fear, illness, difference, and abnormality.
The workshop also aimed to present teachers with others [discourses], which Johnson () calls 'a way forward', that can enable teachers to deconstruct heterosexuality, affirm diversity and position sexual diversity as part of the normal spectrum of sexuality; in other words the positive subject positions. 25 (Emphasis mine)

In Ollis' workshop, as discussed in her 2010 article, participants position themselves and their school in response to heterosexuality and homosexuality using 'Riddle's Scale of Attitudes'. 26 The following attitudes in relation to both heterosexuality and homosexuality appear on Riddle's scale:

22 Ollis, 2010, p. 218. 23 Ollis et al., 2013. 24 See http://www.deakin.edu.au/arts-ed/education/teach-research/healthpe/projects.php accessed 20 April 2013. 25 Ollis, 2010, p. 220. 26 Ollis, 2010, p. 221.

Celebration: These people celebrate gay and lesbian people and assume that they are indispensable in our society. They are willing to be gay advocates. 27

Appreciation: These people appreciate and value the diversity of people and see gays as a valid part of that diversity. These people are willing to work to combat homophobic attitudes in others.

Admiration: This acknowledges that being gay/lesbian in our society takes strength. Such people are willing to truly look at themselves and work on their own homophobic attitudes.

Support: These people support work to safeguard the rights of gays and lesbians. Such people may be uncomfortable themselves, but they are aware of the implications of the negative climate homophobia creates and the irrational unfairness.

Acceptance: Still implies there is something to accept, characterised by such statements as 'You're not a gay to me, you're a person', 'What you do in bed is your own business', and 'That's fine as long as you don't flaunt it'. This attitude denies social and legal realities. It still sets up the person saying 'I accept you' in a position of power to be the one to 'accept' others. It ignores the pain, invisibility and stress of closet behaviour. 'Flaunt' usually means say or do anything that makes people aware. This is where most of us find ourselves, even when we'd like to think that we are doing really well.

Tolerance: Homosexuality is seen as just a phase of adolescent development that many people go through and most people 'grow out of'. Thus, gays are less mature than straights and should be treated with the protectiveness and indulgence one would use with a child. Gays and lesbians should not be given positions of authority (because they are still working through adolescent behaviours), as they are seen as 'security risks'.

Pity: Heterosexual chauvinism. Heterosexuality is seen as more mature and certainly to be preferred. Any possibility of becoming straight should be reinforced, and those who seem to be born 'that way' should be pitied, as in 'the poor dears'.

Repulsion: Homosexuality is seen as a 'crime against nature'. People who identify as homosexual are sick, crazy, immoral, sinful, wicked etc., and anything is justified to change them (e.g. prison, hospitals). You might well hear this expressed as 'Yuk! When I think about what they do in bed!'

The hierarchy at play in the scale is readily apparent; people who are repulsed by homosexuality appear at the bottom. In this structure it appears that the most desirable position a teacher might assume is that they come to celebrate homosexuality.
The desirability of achieving celebration on Riddle's scale is discussed below:

…teachers also talked about the importance of Riddle's scale in challenging their notion of what the attitudes 'tolerance' and 'acceptance' really meant in relation to being inclusive. Kim was one of the three teachers who, prior to the professional development, felt that her program did not need changes to be inclusive. Yet even for her, the 'Scale of Attitudes' activity challenged her understanding and attitudes and made her reflect on the possibility that she too had some movement towards inclusiveness to make. She could remember thinking: "I was so liberated in my thinking but I'm probably not yet at celebration, you know, that's still one step on for me. So I guess that struck home because I thought, well, everybody's got somewhere to go as far as their thinking on homosexuality". (Kim, Phase ) 28

Kim's statement that "everybody's got somewhere to go as far as their thinking on homosexuality" demonstrates that she has absorbed the lesson of the scale, namely that many people's thinking about homophobia is in need of advancement. Ollis is, I think, pleased with this outcome because it points to the productivity of these scales in helping people diagnose their own shortcomings in regards to affirming sexual diversity. What interests me, both in Ollis' and Kim's (the pre-service teacher participant) use of the scale, is their investment in the logic employed by Riddle in developing the scale, namely, that celebration should be every teacher's ultimate destination. Later in this paper, I critically consider this impulse to move us to celebration. But first, I want to illustrate some other scales that are currently being used in anti-homophobia education in Australia and the US.

Daniel Witthaus is a prominent Australian anti-homophobia activist who has been doing advocacy related to gay and lesbian issues since the early s. He spends a lot of time talking to school and community groups in rural and remote Australia. Currently he is endeavouring to develop support for NICHE (the National Institute for Challenging Homophobia Education). On his Beyond That's So Gay website, in a resource entitled The Faces of Homophobia: Everyday resistance quantified…, he states that he has adapted Betty Burzon's (sic) model of homophobic types for the Australian context as part of his Beyond that's so gay, Australia wide training program. In her text Setting them Straight 29, Berzon, an author and psychotherapist, developed a series of types in order to help readers who encountered homophobic messages in everyday conversations. Other researchers have also drawn on Berzon's types in their anti-homophobia work. 30

29 Berzon, 1996.

In creating types that draw strongly on Australian stereotypes, Witthaus is no doubt using a form of language that he thinks will engage his audiences in regional and remote Australia. Witthaus has developed the following descriptors of different personality types, which he relates in the following order.

The Romper Stomper 31: Feel vulnerable and constantly under attack; mobilised to counterattack those things and people that threaten their wellbeing; typically male, their definition of reality is described as 'narrow' and their outlook 'hateful'.

The Frustrated Bogan 32: Trouble coping with reality, and shows inflexibility in adapting within their environment; frustration is primarily handled using aggression; emotion is an important weapon, often shown by lashing out.
The Politician: Conservative individuals who jump onto the nearest 'bandwagon' (e.g. polls); desperate to fit in with the 'in-group' and be seen to distance themselves from the 'out-group'; avoid taking responsibility for their attitudes and actions.

The Sheep: Thinkers who are dependent upon the opinion of others (i.e. the flock); don't spend much time considering the consequences of discrimination; their lack of a self-determined belief system paired with their apathy makes them dangerous in the hands of the wrong shepherd.

The Stirrer: Attempts to exploit the fears and frustrations of the other homophobic types; exploits people's ignorance and fear of difference; adept at stirring up anger in others and expert in uniting and building cohesion against a 'common enemy'.

The Almost Ally: Invariably well-educated and older people, often females, who pledge their LGBT allegiance; often unaware of their own homophobia; unwilling to put themselves in situations where they, or others, could assess them as prejudiced. 33

30 Rostosky, Riggle, Horne and Miller, 2009; Wormer and McKinney, 2003. 31 The name Romper Stomper evokes the 1992 Australian film of the same name directed by Geoffrey Wright. The focus of the movie is racism enacted by a neo-Nazi skinhead group in a Melbourne working class suburb. 32 Bogan is an Australian pejorative used to denote somebody who is lacking in culture or manners.

These portraits portray people who are homophobic as paranoid, hateful, conservative, and unable to think for themselves. The 'type' classified as The Sheep, which appears to evoke religious metaphors of the shepherd and their followers (sheep), is constituted as unthinking and non-agentic. Akin to Ollis' use of Riddle's scale, for Witthaus, advancement of people along the scale is a clear goal of its use. This is apparent in the citation below:

Experienced LGBT advocate and friend to religious communities, Anthony Venn-Brown, is clear that in any everyday conversation he has with homophobic opponents he only has one goal: to identify where they are on this very scale and to shift them one step forward. 34

Ollis and Witthaus are both committed to anti-homophobia education, and they share a belief that anti-homophobia education can help people become less homophobic. Their scales are assembled within a liberatory framework which sees the value in progressing all people along a scale. In the logic of the scale, becoming less homophobic constitutes a more enlightened or liberatory position. Together with Harwood, I have previously argued that the expression of competing truths about homosexuality [including the expression of homophobia] is an important part of pedagogy and that to curtail speech that is homophobic privileges particular understandings of inclusion. 35 Consequently, I read these scales as imposing particular truths on people who are asked to participate in lessons based on their use vis-à-vis where they should situate themselves in relation to homophobia.

US education researchers J. Zack, Alexandra Mannheim and Michael Alfano have also designed a scale to measure "the varying levels of ability and willingness of the participants [student teachers] to address homophobia in their classroom. Ideally, we hoped that our participants would move from the lower levels of avoiders and hesitators to the higher levels of confronters and, ultimately, integrators". 36
Below are brief descriptors of each of the archetypal responses to homophobic rhetoric classified by Zack et al.:

35 Harwood and Rasmussen, 2012. 36 Zack et al., 2010, p. 102.

Confronters: Many student teachers took it upon themselves to take time from the scheduled lesson plan to address homophobic slurs that were leveled against students. It was the consensus among these student teachers that homophobic rhetoric was widespread, considered socially acceptable, and posed a challenge to them as educators that was nearly impossible to conquer singlehandedly - but they were willing to give it a try.

Integrators: A few student teachers sought to combat the issue of homophobia within the school by integrating homophobia reduction into the curriculum. These student teachers understood that queer culture is an important part of the multicultural repertoire and should not be excluded.

Hesitators: By far the largest archetype, "hesitators" describes those who felt a call to action to address the homophobia they witnessed, but lacked the set of skills necessary to create an atmosphere free of homophobic rhetoric or move students toward more accepting ideologies. The reasons for this lack of confidence varied among the student teachers, but were most commonly the result of 1) being accused of being gay by students, 2) encountering religious opposition in the students, and 3) feeling pressured to focus on content.

Avoiders: While there was heated discussion regarding homophobic rhetoric, made evident by the numerous student teachers who volunteered the topic and confirmed how rampant the problem was, some student teachers chose to remain silent during the discussions. It is impossible to state with any certainty the reasons for these participants' withdrawal from the conversation. The silence may imply that they were on some level complicit with the level of homophobia being exhibited by students and unwilling to address these behaviors…Some of the avoiders may have been struggling with their own sexual identity. Or, we hypothesized, perhaps some were uncomfortable talking about anything dealing with sex in a public forum. While no student teacher freely admitted to doing nothing when encountering homophobic speech at their schools, their silence was telling.

The archetypal responses developed by Zack et al. produce a hierarchy that measures people's capacity to address homophobia in a way that the researchers perceive as appropriate. The notion of progress is also apparent. The researchers, in talking about Confronters, observe that "we were pleased that many felt confident enough to address homophobic speech when it presented itself and had the knowledge and skills to move students in a positive direction". 37 So participants who were characterized as most able and willing to address homophobia were the ones who conceptualized themselves as having the capacity to move students on from homophobic attitudes. Avoiders, the archetype situated at the bottom of Zack et al.'s scale, are seen as potentially taking up this position for a multitude of reasons. Below they provide an account of the type of teacher education student who might take up the avoider position:

Knowing that the discourse within our program favors pluralism and a regard for diversity, it is likely that some participants in the discussion remained silent because their personal views were in opposition to homosexual lifestyles.
Perhaps they believed that the religiously, morally, and politically charged issue of homosexuality was outside the purview of public schooling. Or, maybe they were just too shy. Whatever the case, it seemed unlikely that these beginning teachers would be addressing the issues of homophobic hate-speech in any meaningful ways in the near future. 38

As opposed to the classifications describing the lowest points in Riddle's scale and Witthaus' types, this discussion allows that participants might have religious objections which would account for their being labelled as avoiders. There is also recognition that the space of the university classroom featured in the research, which is described as one that "favors pluralism and a regard for diversity", meant that "some participants in the discussion remained silent". 39

37 Zack et al., 2010, p. 104. 38 Zack et al., 2010, p. 103. 39 Zack et al., 2010.

This is a particularly salient observation because it indicates the ways in which religious objections to homosexuality have become unspeakable in some university classrooms. Avoiders read the classroom climate and know that homophobic utterances are unacceptable in this particular space, and thus they know to keep silent. This shared understanding, on the part of professors and their teacher education students, that homophobia is unutterable, sets up a space which sets specific limits on pluralism and diversity, no doubt with the best of intentions. Below, Zack et al. provide Confronters with tips on how to deal with religious beliefs of students that are perceived as discriminatory:

Student teachers should also be equipped with information that challenges the religious beliefs of students (when these beliefs are mired in discrimination) …Some organizations that can aid those entering the teaching profession in solidifying their responses to religious and legal arguments against homosexuality include freedomtomarry.org, which provides advice on how to talk about marriage equality, and informedconscience.com, a group that explores homosexuality and the Catholic Church and provides alternative interpretations of scripture. 40

I am concerned at what such directions might mean for teachers when they are working in schools and they encounter remarks that they perceive as homophobic from peers, parents or students. Such an approach could set teachers up to conclude that certain students' beliefs are in need of correction, or, at least, movement in a "positive direction". This prompts me to ask: When does saying no to homophobia become a means by which to discipline specific types of religious beliefs in the classroom?

The binaries at work in the production of scales utilized in anti-homophobic research and pedagogies are well summed up in a recent doctoral thesis entitled With us or against us: Using religiosity and sociodemographic variables to predict homophobic beliefs. 41 In this study Erin Schwartz, a graduate of the Indiana State University doctoral program in Counseling Psychology, utilizes a psychological scale to measure the homophobic attitudes of people in the US who were, and were not, religiously affiliated. By employing a particular scale, Schwartz found that people who identified as fundamentalists in Christian traditions were more likely to be homophobic. While the body of the thesis does not appear to make mention of its title, one interpretation might be that scales of homophobic beliefs are useful because they are helpful in determining who is "with us or against us".
What is not clear is who "us" is. Schwartz was surprised to note that level of education among people who were fundamentalist did not alter their level of homophobia - though age did.

The finding of no differences in homophobia based on level of education was surprising. It had been expected that having more education, and thus more exposure to various points of view from sources other than family-of-origin and one's religious congregation, would play an important role in differences in homophobic beliefs. This unexpected finding indicates that education alone may not have an important impact on changing prejudicial beliefs. 42 (Emphasis mine)

Such a finding is surprising to Schwartz, I would argue, because there is a firm belief that more education and exposure to gays and lesbians will have the effect of moderating people's homophobic tendencies. The strength of this belief, that people will become less homophobic when exposed to anti-homophobia education, is apparent in all the scales that I have discussed above. In the context of this discussion of homophobia and sexuality education, this belief is key because it reflects a repeated tendency to attribute homophobic beliefs to a lack of education, rather than to religiosity.

41 Zack et al., 2010, p. 109. 42 Schwartz, 2011.

In their research on homophobia among adolescents in Canada and Belgium, Hooghe, Claes, Harell, Quintelier and Dejaeghere 43 also trouble the belief that there is a link between homophobia and educational attainment. They note that

Despite arguments that hostility toward LGBT rights among Muslims can simply be attributed to their lower average education level or to a Mediterranean cultural factor, our study does not find support for these arguments. Our models included controls for educational background from two separate country samples with diverging immigration patterns. This allows us to isolate the religious factor quite unequivocally as an important element for the occurrence of negative feelings toward equal rights for LGBT groups. 44

It is clear in this study that level of education does not correlate with level of homophobia. Hooghe et al. state that their finding that religion and homophobic belief are correlated in some people of Christian and Muslim faiths is unremarkable. They go on to note that several research studies suggest "adherence to strict and fundamentalist forms of religion is positively associated to homophobia and anti-gay attitudes". 45 The correlations Hooghe et al. see between homophobia and religious fundamentalism lead them to question the assumptions that underpin scales that measure homophobia.

43 Hooghe et al., 2010. 44 Hooghe et al., 2010, p. 396. 45 Hooghe et al., 2010, p. 385.

In an article by Hooghe, Dejaeghere, Claes, and Quintelier, subtitled The Structure of Attitudes toward Gay and Lesbian Rights among Islamic Youth in Belgium, the researchers draw attention to the specific ways in which race, ethnicity and religion are often highlighted as markers of increased homophobia in studies using homophobia scales. Hooghe et al. seek to problematize this type of research, arguing that:

…the scales …all originate in a liberal, rights-oriented approach toward homosexuality, which is often at odds with a more religiously based understanding of homosexuality and homosexual behavior.
Basically, this would imply that the measurement scales for homophobia that are conventionally used are not sufficiently cross-culturally valid to allow for unbiased understanding of the feelings toward homosexuality among various religious groups. These scales indeed originate from a secularized Western research setting and very little effort has been devoted to the question [of] whether these scales can be used meaningfully in a more religious context. 46

For the purpose of this discussion of scales and homophobia in the context of sexuality education, Hooghe et al.'s comments are particularly salient. While continuing to employ scales in their research, there is also recognition by these researchers of the limitations of scales that measure homophobia. Hooghe et al. illustrate the complexities of defining just what homophobia is in quantitative and qualitative research. Their own research using these scales has prompted them to question how scales that measure homophobia are rooted in systems of belief that almost ensure particular groups of people will be classified as homophobic.

46 Hooghe et al., 2010, p. 50.

As I have asked elsewhere, "how might I understand religious reasoning on sex education, using a frame that eschews the authority of secular reason?" 47 In the context of this discussion, I am constructing scales that measure or classify particular types of homophobia as embedded in the authority of a secular reasoning in which a homophobic response is often construed as a combination of ignorance, irrationality, religiosity and miseducation. What, then, are the consequences of employing these scales in anti-homophobia research and pedagogy to once again, and often not surprisingly, identify particular members of specific populations as homophobic? To my mind, the repeated use of homophobia scales is problematic because, in a Butlerian 48 sense, the findings they produce are performative. Through the continued utilisation and production of the scales we come to know particular subjects first and foremost as homophobic; in this respect the employment of scales can be seen as a liberal mechanism of exclusion.

Thinking differently about homophobia in teaching and research

As David Murray notes, "Homophobia has gone global" 49 and it is "increasingly attached to moral, political, and economic agendas around the globe." Homophobia has, indeed, gone global, but as the epigraph to this article suggests, this is not to say that homophobia can be easily translated across geopolitical sites. In countries like Australia and the U.S. that both have large communities of new immigrants this is an important consideration, because if homophobia is not a universal phenomenon, then anti-homophobia education needs to be attuned to this. Though, as I discuss below, significant differences in how people understand the question of homophobia are by no means confined to immigrant communities. For instance, people within Protestant religious communities across the U.S. hold markedly different understandings of homophobia and heteronormativity.

47 Rasmussen, 2010, p. 701. 48 Butler, 1999. 49 Murray, 2009.

Daniel Monk, in an article entitled Challenging homophobic bullying in schools: The politics of progress, sees discourses related to homophobic bullying as first and foremost political, and therefore necessarily subject to critique.
He writes,

…while issues such as gay marriage and gays in the military are campaigns that have been exposed to lively critique within the LGBT community and academic literature, there has been very little similar debate about homophobic bullying, located as it is within the 'benign' emancipatory liberal discourses of education and future-focused discourses of innocent and universal childhood. 50

The critique of scales that are used to measure homophobia has been limited, partially because it is commonly understood that such scales are fundamentally benign. Monk goes on to make the point that anti-homophobic discourse is founded in "imaginations and representations of a post-homophobic time". 51 I construe scales that measure homophobia as part of a broader constellation of discourses that seek to challenge homophobia, and as I have tried to illustrate above, I do not perceive such scales as benign or emancipatory. By challenging the use of these scales I want to join with Monk in scrutinizing the politics that underpin anti-homophobia education.

50 Monk, 2011, p. 191. 51 Monk, 2011.

The progressive narratives implicit within scales that measure homophobia can be conceived as a technology explicitly designed to help students and teachers develop imaginings of post-homophobic time. Scales of homophobia very specifically construct responses to homophobia as something which might be improved, over time, by moving people along the scale from a position of repulsion to celebration 52 or from romper stomper to almost ally (Witthaus). The scales simultaneously produce, and are embedded in, imaginings of post-homophobic time. Homophobia (so the logic of these scales suggests) is, we can all agree, a problem. Consequently, it is also held to be true that individuals who are identified as holding homophobic beliefs via technologies such as scales can only benefit from exposure to anti-homophobia education. Part of my task here, then, is to elaborate why I think it is problematic to develop educational practices that are embedded in the reproduction of post-homophobic imaginings.

Imaginings of a post-homophobic time are problematic in part because such imaginings assume that some consensus has been derived on the subject of homophobia, yet recent anthropological studies of homophobia point to inconsistencies in the way that this concept is understood. 53 For instance, Constance Sullivan-Blum, in her study of contemporary American Christian homophobia, notes that the evangelical Protestants she interviewed consistently denied that they were homophobic. Sullivan-Blum accounts for this reticence in part by drawing attention to the way in which her participants conceptualized people who are homophobic. They believed that "homophobes harbor an irrational fear of homosexuals" and they did not perceive their attitude towards homosexuals as therefore homophobic. Rather, Sullivan-Blum notes, "most evangelical Protestants I spoke to are not afraid of homosexuals; rather they believe that homosexuality is sinful and must be rejected as morally wrong". 54

52 Ollis, 2010. 53 Murray, 2009.

Such distinctions in the way that people understand the concept of homophobia, and the ways in which they imagine themselves and others as homophobic (or not), point to the challenges of anti-homophobia education and imaginings of post-homophobic time. Scales of homophobia might suggest that particular groups of people, such as evangelical Protestants, are more likely to be homophobic.
However, if these people do not apprehend homophobia as something that is applicable to them, what does this mean for the application of the scale? Monk suggests that:

One might reasonably ask whether in highlighting the existence of homophobia in schools and developing strategies that enable it to be acknowledged by policy-makers it is necessary to engage with conflicting imaginations about an idealised post-homophobic world. The argument here is that it is, for if homophobic bullying is made speakable through discourses of heteronormativity, then those outcomes become the form through which its success is evaluated. 55

Monk rightly points out that the success of anti-homophobia education is predicated on particular imaginings of homophobia that rarely admit conflicting perspectives. The scales can only be ruled a success if there is a concomitant agreement about the discourses of heteronormativity. As Sullivan-Blum notes, evangelical Protestants perceive same-sex marriage as problematic for many reasons, one of which is that it disrupts the authority of scripture. 56 I do not perceive scripture in the same way as evangelical Protestants, nor do I support same-sex marriage - but for very different reasons to evangelical Protestants. My point here is that sometimes when homophobia is construed as irrational or uneducated or illiberal, it is worth interrogating further whether or not such claims can be sustained. Surely, sometimes homophobia may result from the above. But it is also worth considering that sometimes the tendency to construct particular events, people, places and/or religions as homophobic may be a maneuver that has the effect of constructing all objections to post-homophobic imaginings as necessarily pathological, ignorant and regressive. As a result, people who don't agree that heteronormativity is a problem may come to be seen as in need of re-education.

54 Sullivan-Blum, 2009, p. 51. 55 Monk, 2011.

Of course, the necessity of conforming to post-homophobic imaginings does not fall equally upon all people of different faiths. Discourses of homophobic bullying that are reproduced through the use of scales that measure homophobia may also operate to reify binaries between Islamic fundamentalism and secular freedoms. 57 So the problem of not conforming to particular readings of homophobia and post-homophobia is not limited to the sphere of religion; it may also become associated with homonationalism and terrorist assemblages. 58 Particular groups of people who are marked as homophobic according to these scales can also be construed as a danger to the secular state, and to the safety of the imagined nation.

56 Sullivan-Blum, 2009, p. 56. 57 Monk, 2011, p. 200. 58 Puar, 2007.

Conclusion

I do recognize that discrimination related to gender and sexual identifications does exist. At the same time, in this article I have been attempting to complicate the pedagogical power that is associated with taking up the position of challenging, and measuring, homophobia. Scales of homophobia may be difficult to speak back to precisely because their righteousness is affirmed through images of the vulnerability of gay youth. 59 Though as Monk illustrates, the cost of such righteousness is "the extent to which it effectively silences other voices and reduces the experience of lesbian and gay young people to one of passive victimhood". 60
In this article I have situated scales that measure homophobia as part of a broader political project that is embedded in emancipatory imaginings of a post-homophobic world. In order to do this I have tried to consider some of the logics that underpin the use of such scales. By way of a conclusion, I have sought to make a list of provocations that illustrate what I perceive to be troubling logics that support the use of scales that measure the homophobia of teachers and students. My hope is that such a list might provoke ongoing debate about the ways that homophobia is taken up in education about gender and sexuality.

Provocations

• That we can agree on what homophobia is
• That we can therefore measure homophobia
• That there is a "right way" to respond to homophobia
• That progressive teachers and students will challenge homophobia
• That affirming homophobia is inadmissible in the bounds of liberal, secular education
• That people who are homophobic can benefit from anti-homophobic education

59 Rasmussen, 2004; Puar, 2012. 60 Monk, 2011, p. 188; Rasmussen, 2004.

My hope is that, taken together, these provocations might be used to open up conversations in which homophobia becomes less familiar. It is only by making homophobia strange in the context of anti-homophobic education that it may become possible to think differently about the motivations and assumptions that underpin such pedagogical projects. Such provocations about homophobia are, as indicated in the epigraph to this article, also designed to provoke questions about the what and the how of homosexuality. If an aim of anti-homophobia education is to create spaces in which young people who are lesbian or gay identified may be safer - can we assume that taking homophobia's measure will necessarily have this outcome?

Queering animal sexual behavior in biology textbooks

Malin Ah-King

Biology is instrumental in establishing and perpetuating societal norms of gender and sexuality, owing to its afforded authoritative role in formulating beliefs about what is "natural". However, philosophers, historians, and sociologists of science have shown how conceptions of gender and sexuality pervade the supposedly objective knowledge produced by the natural sciences. 1 For example, in describing animal relationships, biologists sometimes use the metaphor of marriage, which brings with it conceptions of both cuckoldry and male ownership of female partners. 2 These conceptions have often led researchers to overlook female behavior and adaptations, such as female initiation of mating. Such social norms and ideologies influence both theories and research in biology. 3

Social norms of gender and sexuality also influence school cultures. 4 Although awareness of gender issues has had a major impact in Sweden during recent years, the interventions conducted have been based on a heteronormative understanding of sex; this has rendered sexual norms a non-prioritized issue and thereby rendered non-heterosexuals invisible in teaching and textbooks. 5 Since this research was published in 2007 and 2009, 6 norm-critical pedagogics 7 have been included in the Swedish National Agency for Education's guidelines for teaching. This inclusion represents one way to tackle the recurring problem of heterosexuality being described as a naturalized "normal" behavior and homosexuals, bisexuals and transsexuals being described from a heteronormative perspective.
In this paper, I employ gender and queer perspectives to scrutinize how animal sexual behavior is described and explained in Swedish biology textbooks. The analysis is based in gender and queer theory, feminist science studies, and evolutionary biology. The article begins with an outline of my theoretical framework, relating gender and queer perspectives on evolutionary biology to a discussion of queer methodology. I then scrutinize some empirical examples drawn from five contemporary biology textbooks used in secondary schools (by students aged 16-18 years old). Finally, I discuss the implications of the textbooks' representations of animal sexual behavior and the problems of and need for a "textbook version", and provide examples of what an inclusive approach to biology education might look like.

Gender and queer perspectives

Gender studies is mostly concerned with critical investigations of the cultural construction of gender as it occurs across various times and cultures. Although gender studies have largely adopted a constructionist framework, this does not imply a denial of material reality. Rather, gender studies problematizes how material reality is portrayed; for example, by questioning stereotypical portrayals of the sexes and reminding us that portrayals and descriptions of biological phenomena are themselves cultural conceptions. 8

Queer studies challenges "heteronormativity" - the ways in which heterosexuality, through everyday speech and behavior, is presented as the only natural and normal way of living, while other sexualities are simultaneously rendered abnormal. 9 Queer theories are critical theories for emancipating thought and action, while questioning both ways of knowing and indeed the very nature of being. 10 Queer theories also involve questioning binary categorizations. 11 Many researchers are engaged in applying queer theories to research and activism in school education systems. 12 Vicky Snyder and Francis Broadway argue that queer theory can have a number of implications for science teachers: it offers ways to foster critical thinking, to question categorizations and norms, and to challenge cultural practices that privilege heterosexuality as normal and natural. 13

These perspectives enable critical analysis of the ways in which knowledge is produced and represented. Therefore, what is rendered invisible by these norms, as they impact upon teaching in practice, is relevant to students' views of nature, of other human beings, and their self-image. To teach biology is to mediate knowledge that shapes the understanding that students create of themselves and of science. Snyder and Broadway suggest that:

Using the lens of queer theory, we can view the hegemonic matrix, interrupt heteronormative thinking, and broaden all students' potential for interpreting, representing, and perceiving experiences. 14

8 Thurén, 2003. 9 Kulick, 2004; Rosenberg, 2002. 10 Greene, 1996. 11 One critique of queer theories has been that they have been formed from a mainly white subject position and that sexuality is inextricably linked with racialized subjectivities (e.g. Barnard, 1999). 12 Bromseth and Darj, 2010; Bryson and de Castell, 1993; Kumashiro, 2002. 13 Snyder and Broadway, 2004.

Gender and queer perspectives have the potential to increase critical thinking about science among both teachers and students through elucidating the fact that scientific endeavors are always conducted within a social context.
Gender perspectives on evolutionary theories of sex differences In order to contextualize my analysis, I will begin with a brief overview of the development of evolutionary theories explaining sex differences, from a feminist science studies perspective. Sexual selection is the element of Charles Darwin's theory of natural selection most often used to explain sexual difference as evident in morphology and behavior, and it also provides the basis for the textbook descriptions analyzed here. 15 Darwin explained the evolution of sexual difference by sexual selection as mainly due to male-male competition (resulting in, for example, male horns) and female choice (resulting in, for example, male ornaments), but he also mentioned exceptions, such as instances in which females compete for males. It has been pointed out that a focus on male competition and female choice, which both consider how variation in male reproductive success is produced, has resulted in the assumption that sexual selection is always strongest in males and unimportant for females. 16 Darwin, although describing much variation among species, generalized his observations into a collective view of eager, competitive males and coy, choosy females. 17 This depiction has been criticized, especially from a gender studies perspective, 18 and numerous recent findings, such as those involving female multiple mating, have changed the theoretical framework within which sexual selection research is undertaken. 19 Anisogamy (a form of reproduction in which the sexes produce different sized sex cells) provides a biological definition of the sexes: individuals producing large sex cells are females, those producing small sex cells are males. This asymmetry of initial investment, in combination with parental investment, has been suggested as causing sex differences in sexual strategies, so that carriers of small gametes compete for access to females, and females are choosy about mates. 20 However, proponents of the dominant theoretical framework for studying sexual selection today continue to use their criticized basic assumptions, namely: 1) Male reproductive success is more variable than that of females, 2) Males gain more by increasing mate number than do females, and 3) Males are generally eager to mate and hence are indiscriminate in mate choice, while females are choosy and less eager. 21 Even though these notions might hold true in many cases, this framework has, until the last four decades, hindered research into, for example, female mating outside of a social pair, male choice, and the cost of sperm. 22 15 Darwin, 1871. 16 Gowaty, 1997a. 17 Darwin, 1871. 18 Gowaty, 1997b; Hrdy, 1981, 1986. 19 Knight, 2002. 20 Parker, Baker and Smith, 1972. 21 Dewsbury, 2005. 22 Tang-Martinez and Ryder, 2005. Current evolutionary biology Currently, as evidence for the variability and dynamics of sexual strategies accumulates (it is almost ubiquitous that females mate with multiple partners), sexual selection theory is itself transforming. Evolutionary biology has partly incorporated females' role in evolution, by (for example) highlighting other sexual selection mechanisms: male choice, female-female competition resulting in variation of female reproductive success, male coercion of female choice (males may aggressively condition female behavior) and interactions between the sexes other than mate choice which influence reproductive success. 23
The number of studies of male mate choice has increased relatively recently: discoveries of females in some species gaining as much as males in reproductive success by multiple mating, and females actively initiating mating, form part of an ongoing re-evaluation of traditional views of female and male reproduction. 24 Recent developments have also moved towards a more inclusive view of variation in sexual behavior, for example, same-sex sexual behavior. 25 Same-sex sexual behavior has been found in over 1500 species, among a wide variety of animals. 26 Anisogamy and parental investment may partly explain sexual difference in mating strategies, but the connection is not as simple as was first theorized, and a more complex view has emerged. 27 Traditional theories postulate that anisogamy and parental investment cause mate competition and mate choice (sexual selection), but the causal relationship may be reversed so that sexual selection may cause differences in parental investment, which has been shown to be the case in cichlid fishes. 28 Furthermore, alternative models now predict sexual behavior in ways that do not rely upon the assumption of anisogamy. 29 23 Gowaty, 1997a. 24 Tang-Martinez, 2010. 25 Bagemihl, 1999; Bailey and Zuk, 2009; Sommer and Vasey, 2006. 26 Bagemihl, 1999; Bailey and Zuk, 2009; Roughgarden, 2004. 27 Clutton-Brock, 2007. Science in context The life sciences emerged from a positivistic tradition of striving to make objective and value-neutral measurements of the world. Within this tradition it is unusual to consider the impact that politics and culture exert upon the "doing of science". Science is often envisioned as objective and thus as reflecting nature "as it really is"; as such, it may claim the ability to produce universalized facts. This understanding is probably prevalent among students reading biology textbooks in school. By contrast, feminist science studies have shown that science is a cultural process which is influenced by social ideologies. 30 Hence, another way of presenting science in context is to emphasize that science is itself context bound, value laden, and indeed a human endeavor in which human beings are critical in formulating the theoretical framework through which nature is observed, interpreted, and named. This is not to suggest that nature itself is a construction, but rather that our understandings and presentations of nature will always be influenced by the theoretical framework that we are using in order to access it. Alternatively, as some theoreticians have argued, we may say that knowledge about nature is co-constituted, so that nature is an active participant in knowledge-making. 31 Methods I have conducted a textual analysis of Swedish secondary school biology textbooks. I selected the five textbooks 32 available until recently for education in biology as a subject (there are also books available for education in nature-oriented subjects, which give a less comprehensive exposition of animal behavior) in order to ensure a substantive sample. I have selected those sections that describe and explain animal sexual behavior. 33 Various authors have chosen to discuss animal sexual behavior in slightly different sections. Inga-Lill Peinerud et al. have a focused section on "Sexual strategies" under the over-arching heading "Behavioral Ecology", while Gunnar Björndahl et al.
have two sections under the heading "Behavioral Ecology": "Reproduction" and "Different mating systems", and also refer to them in the Summary of that chapter. Anders Henriksson has one page on "Sexual selection" in a section on "life evolving"; under "Behaviors and life strategies" there are sections on "Birdsong", "Different kinds of territories", "Fight for a territory", "Partner choice and relations" and "Toad seeks partner". Lars Ljunggren et al. use the heading "Evolutionary ecology and ethology" to cover sections on "ornaments", "To invest in the offspring", "Polyandry", "Mate guarding", "Nuptial gifts" and "Polyandrous females". Janne Karlsson et al. have a section on "Sexual systems" under "Behavioral Ecology". 31 Barad, 2007; Latour, 1987. Guiding questions for the analysis have been: How is sexual difference in animal sexual behavior described and explained? What are the emerging, primary narratives, and are there counter-examples? Are anthropomorphic terms used? What is described as the norm and what is described as deviant? Which animal examples are selected, and what do they represent? Are there any examples of variation in sexuality, and if so, how are these described? I read the texts closely in order to identify common themes, then re-read the texts several times to ensure all themes were covered consistently. The emerging themes were: 1) Descriptions and explanations of sex differences, 2) Counterexamples, 3) Choice of animal examples and illustrations, 4) Criticism of anthropomorphism and value judgments, 5) Diversity in sexual behavior. Under the first theme, I have identified several sub-themes: Males compete, females choose and care; Active males/passive females; Anisogamy as a general explanation for sex differences in behavior; Parental investment as an explanation for sex differences in behavior; Mating system theory; Extra-pair paternity/Certainty of paternity as explanation for sexual behavior; and Alternative reproductive tactics. I extracted excerpts and described the coverage in accordance with the themes, both examples that illustrate the main narratives and counter-examples. Since my aim was to analyze not just whether these themes are covered, but how they are represented, I have focused on excerpts that are interesting from gender and queer perspectives. I noted the number of animal species, which animal groups were presented and whether the text was implicitly referring to any particular group of animals. The illustrations were scrutinized for which animal species were represented and what the illustrations were conveying. I also noted value judgments and whether there were instances of anthropomorphic terminology. Finally, I checked whether the books covered variation in sexuality, for example, examples of same-sex sexuality. I have decided not to privilege any particular textbook; if the reader wishes to compare them, table 1 (at the end of the article) gives an overview of how the various textbooks have covered the themes of the analysis. Analysis of textbooks from gender and queer perspectives The results of the analysis are summarized in table 1, where I provide examples of the emerging patterns and themes on which my analysis focuses. In the results section, I provide excerpts from the textbooks as well as my interpretations and reflections (an overview of the themes and additional excerpts are available in table 1).
Descriptions and explanations of sex differences Males compete, females choose and care Generally, among the textbooks, female and male sexual strategies are explained in dichotomous terms: "females choose and males compete", 34 "males have to show their competence" and if he "competes with other males" as well as "shows his competence as a father", he can "be accepted and be allowed to fertilize the female's eggs". 35 "Most often the most ostentatious, largest and strongest males win the struggle to get to mate" 36 and "females most often choose partners". 37 One of the five textbooks did not mention male competition. 34 All citations are translated from Swedish to English by the author. 35 Peinerud et al., 2006, for page numbers see appendix. 36 Ljunggren et al., 2007. 37 Henriksson, 2003. One might think that these two statements are contradictory, but they reflect two different mechanisms by which sexual selection may act to produce sex differences, such as horns and ornaments. While giving the same general picture, some accounts in the textbooks open the readers' minds to more diverse possibilities, such as "different species have different sexual systems" and "the pre-requisites are most often different for the two sexes". 38 There is also a difference between general claims such as "females that care and males that waste", 39 and making the same claim but adding "most often" 40 in front of it; doing so allows for a more variable understanding of sexual difference in behavior. In one of the textbooks, sexual difference in sexual motivation is described as follows: Males have high sexual motivation and react more easily than females on sexual signals. As mentioned a male turkey can try to mate with a briefcase, which would hardly be expected by a female. The female demands stronger signals to react and is more selective for which signals she reacts to. 41 This statement is in line with the dominant paradigm's criticized assumption of generally eager males and coy females, discussed previously. While it is often ascertained that females choose, there are very few descriptions of females actually choosing; one is an account of an experiment in which the tails of widow-birds were experimentally prolonged or shortened, which found that females preferred long tails. 42 This observation leads to the next theme, that of describing males as generally active and females as passive. 38 Karlsson et al., 2005. 39 Peinerud et al., 2006. 40 Henriksson, 2003; Karlsson et al., 2005; Ljunggren et al., 2007. 41 Karlsson et al., 2005. 42 Karlsson et al., 2005. Active males/passive females The portrayal of males as inherently active and females as inherently passive represents a deep cultural dichotomy, especially pronounced in Western societies. 43 Janne Karlsson et al. write, concerning birds: "Among species in which one partner has to guard the nest while the other makes flights to eat, the male often mates with the female when they return" 44 [my emphasis]. Concerning elephant seals: "It is almost only the dominant males that mate". Another example: "Since practically all females among both birds and mammals become fertilized, from an evolutionary perspective it is more beneficial for a weaker individual to be a female than a male" 45 [my emphasis]. Though in many species males do have larger variation in reproductive success among themselves than females, many species also show similar patterns for males and females. 46
Furthermore, there are mammal species in which dominant females suppress reproduction of sub-dominants in the group (e.g. wolves, primates 47 ), hence not all females get the chance to mate or reproduce. Similarly, Peinerud et al. describe female mating in passive terms: "The male that manages all this [fighting for a territory etc.] gets accepted and is allowed to fertilize the female's eggs" 48 [my emphasis]. 43 Haraway, 1986. 44 Karlsson et al., 2005. 45 Ljunggren et al., 2007. 46 Tang-Martinez, 2010. 47 e.g. Abbot, 1984. In line with this, females are generally described as passive in narratives of sexual selection: "Males fight intensively among each other [...] dominant males hold a harem of females. Almost only the dominant males mate". 49 However, one figure illustrates how females may influence mating: "An elephant seal female that mates with a male wobbles her body back and forth and screams loudly. A male with higher rank that hears the screams chases away the intruder and mates with the female himself". 50 Even when female choice is exemplified, the example illustrates a mating system with pronounced male domination. Anisogamy as a general explanation for sex differences in behavior Four of the textbooks refer to the sexual differences in the size of the sex cells (anisogamy) in order to explain behavior in more or less deterministic terms: "Because the sex cells among males and females differ the evolutionary strategies in the game has become different", and "the difference in size and amount of sex cells has through the course of evolution contributed to increase the differences between the sexes among many animals". 51 Again, a small inclusion of "at least partly" makes a considerable difference in how static sexual difference is perceived to be: "Much behavior can at least partly be explained by the male's sperm being much smaller and not as costly to produce as the female's egg cells". 52 "For a female it is a large cost in the form of energy to produce eggs. A male's sperm are "cheaper" to produce and therefore he can afford considerably more sex cells than the female". 53 48 Peinerud et al., 2006. 49 Karlsson et al., 2005. 50 Karlsson et al., 2005. 51 Peinerud et al., 2006. 52 Björndahl et al., 2007. Janne Karlsson et al. refer to the high cost of reproduction for females producing eggs, gestating and lactating, and to the importance of carefully choosing mates, compared to males who can mate with many at a small cost. 54 By relying heavily on mammalian examples in order to make generalizations about animal behavior (see choice of animal species below), the described pattern becomes biased toward female care and parental investment. In scientific discussions, however, the degree to which the initial investment in gametes affects subsequent sexual strategies remains contested. 55 Parental investment as an explanation for sex differences in behavior Several of the books refer to the large cost of care, either explicitly or implicitly, using mammalian examples as the basis of the argument. For example: "In order for a female to produce a large amount of surviving offspring the female's sexual strategy becomes to invest in quality of the care of offspring". "She shall also readily find a male, that can help her with this". "Since the male's production of sperm does not require much energy it is instead the number of females he can fertilize during a lifetime that determines how many offspring he can get. The male therefore invests in quantity". 56
Here the implicit assumption is that we are dealing with mammals, or birds. Among animal species overall, however, few undertake any care of their offspring. The (generalized) female is assumed to care, and the male to "help" with that caring, a description colored by cultural assumptions about the gendered responsibility to care. 53 Henriksson, 2003. 54 Karlsson et al., 2005. 55 e.g. Ellingsen and Robles, 2012. 56 Peinerud et al., 2006. In contrast, one textbook explains that: "Parents put a lot of energy into reproduction and care of the offspring" 57 - a gender-neutral description which does not reflect culturally specific gender stereotypes. Mating system theory in the textbooks Polygamy and monogamy are mentioned in all the textbooks, and all but one mention both polygyny (a male mating with several females) and polyandry (a female mating with several males). In one textbook, the term polygamy is described as, and only in the context of, "a male has several females". 58 Polygamous literally means "many marriage", and so is a gender-neutral term. Hence, while it is not strictly incorrect to use it in the way described above, the opposite pattern - of females having relationships with several males - is made invisible in this particular example. "Polygamy among mammals" is often contrasted with "monogamy among birds". 59 Recent decades of DNA-testing have revealed that few birds are mating monogamously, and although many birds live in social monogamy, the majority of them mate numerous times with several partners. 60 Examples illustrating mating system theory to be found in the textbooks include a description of bee-eaters (birds) in which males defend territories with resources upon which the females depend, and females who mate with territorial males. 61 Another example is the polygyny threshold model, describing how females may prefer to mate with an already mated male if his territory provides more resources than that of another, unmated male. 62 57 Ljunggren et al., 2007. 58 Peinerud et al., 2006. 59 Björndahl et al., 2007; Henriksson, 2003. 60 Griffiths, Owens and Thuman, 2002. 61 Karlsson et al., 2005. In accordance with the gender criticism of the scientific accounts, these descriptions depict females as passive resources for males, while many other examples show that active interactions between females and males result in the mating system. 63 Extra-pair paternity/Certainty of paternity as explanation for sexual behavior Several books mention how DNA-analysis has revealed both frequent female multiple mating and the ways in which males ensure their paternity, such as by guarding females. For example, "Eurasian sparrowhawk [pairs] mate several hundred times during one breeding season. In this way he ensures that he is the one to become father of the pair's young". 64 For perhaps obvious reasons, this category of explanations is rather male-biased, which is not necessarily wrong. However, while they are all described from a male perspective of guarding females or ensuring high levels of paternity by other means, there are other examples one might choose, such as female aggressive behavior to keep other females from laying eggs in their nests, i.e. strategies for maternity assurance. 65 Alternative reproductive tactics Alternative mating tactics are described in three of the five textbooks, for example: "There are also males, often younger, that choose to prowl around, court and fertilize females that have already formed a pair with a male". 66
This wording is rather negative and frames alternative reproductive tactics as a behavior outside of the norm. It also suggests the male plays the active part while females have no influence over mating. 62 Karlsson et al., 2005. 63 Gowaty, 1997a. 64 Peinerud et al., 2006. 65 Gowaty and Wagner, 1988. 66 Peinerud et al., 2006. Extra-pair matings and alternative reproductive tactics are often described in culturally loaded terms (see anthropomorphic terminology below) such as young males who "prowl around", 67 and are hence called "sneaky fuckers". 68 Similarly, female Great Reed warblers are described as having "casual relations", 69 which has a negative connotation, being suggestive of promiscuity. Other examples of how alternative reproductive tactics are described include: "Large frog males attract females more than small ones. But the latter have a trick [...] to keep themselves in the vicinity of the large male that attracts most females". "The 'sneaky fuckers' may then fertilize the eggs". 70 In the scientific literature, "sneakers" is the common terminology; I have never before seen "sneaky fuckers" employed in a scientific context, and indeed the term turns up no hits on Web of Science, but a search for "sneakers" resulted in 181 matches. Counter-examples That sexual behavior can be modified by environmental factors (for example, when male frogs adjust their song to predation pressure and female density 71 ) opens up for a more varied view of sexual strategies. Similarly, Anders Henriksson describes how male singing abilities differ between two toad species depending on female density in the area and length of the mating season. 72 Furthermore, Janne Karlsson et al. discuss the phenomenon of members other than a social pair providing care for young (so called "helpers") and mention that some insects reproduce through eggs developing into new individuals without fertilization. Gunnar Björndahl et al. give examples of caring males in some fishes and birds, and point out that, among many fishes, neither sex cares for the young. Lars Ljunggren et al. mention that polyandrous females are often larger than males, that female cuckoos perform egg dumping, and that in praying mantis and spider species, the male can be eaten by the female during mating and thereby provide resources for the offspring. Inga-Lill Peinerud et al. observe that both males and females may abandon a partner with a clutch of eggs in their nest. 73 Hence, all textbooks provide one or more counter-examples to the main narrative (table 1). 71 Karlsson et al., 2005. 72 Henriksson, 2003. 73 Peinerud et al., 2006. General questions of representation In this section I consider the choice of animal examples, illustrations, anthropomorphism and value judgments in the descriptions, as well as the lack of examples of sexualities other than heterosexuality. Choice of animal species Three of the five books take mammals as an implicit starting point for discussing sex differences in sexual strategies among animals. This leads to an emphasis on female caring, in contrast to the most common pattern in animals overall, namely not to care for the offspring. The diversity of species per textbook illustrates how the authors have attempted to present diversity in this particular context (see table 1). Clearly, the overrepresentation of mammals or pair-bonding birds, especially in two books, does not provide an accurate or even a thorough understanding of the diversity of animals' sexual strategies.
Choice of illustrations In Inga-Lill Peinerud et al.'s textbook, there are two illustrations for this section, both of pair-bonding birds, namely a pair of bullfinches accompanied by the caption "the female that chooses, the male that displays", and a pair of swans "that often live in a life-long relationship and therefore it has not been as important for the male to put extra resources on external attributes as bright colors". 74 In this book, the choice of examples mirrors a (human) cultural norm of opposite-sex pair-bonding species in which (by the descriptions in the textbook) females care by default, while males may or may not choose to care. All the other textbooks have illustrated both polygamous and monogamous examples, and various other examples, while one textbook is also illustrated with diagrams (for details see table 1). The choice of illustrations probably reflects whether the authors are aiming to illustrate diversity or offering a general portrayal of sexual strategies. 74 Peinerud et al., 2006. Anthropomorphic terminology Generally, within the sciences, it is considered erroneous to use anthropomorphic 75 terminology to describe animal behavior, since to do so allegedly departs from the objective ideal of scientific work. Scientific literature is not devoid of anthropomorphic terminology, however, so in many cases the textbook terminology follows scientific convention. As Eileen Crist has shown, the behavioral sciences have contained two contradictory traditions: the tradition of natural history, to which Darwin belonged, which often used anthropomorphic terminology to describe animal behavior, and the subsequent classical ethology tradition in which such terminology was regarded as unscientific. 76 Yet, others have argued that anthropomorphic terminology is related to the human capacity for feeling empathy with animals and hence should not be assumed to always be negative. 77 With the young audience in mind, it is especially important to reflect upon how anthropomorphizing affects their views of what is "natural" human behavior, such as common references to human forms of child care as observed in nonhuman animals: "father of the children", "carrying a fetus", "single father". 78 These wordings, combined with value judgments following societal expectations of females to care, and notions that male caring is optional (see above and below), have the effect of mirroring and reproducing societal norms in accounts of animal behavior. Other textbooks use "harem", "betray", "nuptial gifts", "childhood", "casual relation", and "prowl around", many of which have sexual connotations and give value-laden meanings to the descriptions, especially those of sexual relationships outside of a social pair. There is one textbook in which I did not find any anthropomorphic terminology, namely Henriksson's "Biologi Kurs A". 79 Yet another example of anthropomorphic language is the description that: "One might say that four different roles have crystallized among males/females: faithful and unfaithful males, faithful and unfaithful females". 80 Biologists use the same terminology of fidelity/faithfulness/cuckoldry, but this use has also been criticized within the behavioral sciences. 81 Moreover, the question is whether it is appropriate to simplify animal behavior by categorizing males and females into four roles depending on their fidelity to their partner. What does the term "role" imply here?
Value judgment of male and female behavior Deserting a partner with eggs in the nest is described in positive terms for males who "of course readily seek out another female as quickly as possible" and this "has been beneficial from a genetic point of view". The same behavior in females is described in negative terms involving the attribution of blame: "[when she leaves] the male has to choose between caring for the young himself or letting them perish", and "in this way even the female can increase the number of offspring somewhat". This is a notably extreme example of how cultural conceptions of male promiscuity and female caring are inscribed onto animals in the textbooks' accounts. From a scientific point of view, the male and the female increase their fitness equally, and their behavior is just as beneficial from a genetic standpoint. This is the only example in which these value judgments are so salient (but see the section on anthropomorphic terminology for more subtle examples). 79 Henriksson, 2003. 80 Peinerud et al., 2006. 81 Gowaty, 1982. Diversity of sexual behavior Only one of the textbooks mentions non-heterosexual sexual behavior, namely male frogs mounting both sexes. This same-sex interaction occurs because males are unable to distinguish the sex of other individuals until they emit sounds, which only males do. 82 I do not claim that this is untrue, but it is remarkable that there are no other accounts of same-sex sexual behavior in the textbooks. In the scientific literature, same-sex sexual behavior has often been described as abnormal, arising from mistakes, or renamed in order to avoid sexual implications - all reasons why it took a comparatively long period of time before the extent of such behavior became known among biologists in general. 83 Gunnar Björndahl et al. even write that: "Even if all behavior aims at increasing the survival ability and carrying the genes on [to the next generation] it is especially obvious when it comes to the animals' different mating behavior". Thus, they express the (criticized) assumption that every behavior is adaptive. 84 This expression is especially noteworthy as it ignores the diversity of mating behavior, such as same-sex sexual behavior. Another book states that "reproduction is among those urges that are totally governed by instincts". 85 This wording suggests that sexual strategies are genetically determined and hence fixed, which is greatly misleading. 86 82 Henriksson, 2003. 83 Bagemihl, 1999. 84 For a critical perspective see e.g. Gould and Lewontin, 1979. 85 Ljunggren et al., 2007. 86 See for example a chapter summarizing mate choice flexibility in relation to ecological and social circumstances: Ah-King, 2010. Discussion Current Swedish biology textbooks describe female and male sexual behavior as generally dichotomous and mutually exclusive: males compete, showing their ornaments and abilities, while females choose and care for the offspring. Although these generalizations may be in accordance with the scientific consensus on general patterns in nature, females caring for offspring is a generalization based on the behavior of certain species, especially mammals. The most common pattern among animals overall is to not take any care of offspring, and among fishes it is common for males to care (Gunnar Björndahl et al. do point out that among many fishes neither sex cares for their young).
Overall, the textbooks display a male-biased focus on male activity and male ornaments/weapons/strategies which, nevertheless, reflects the scientific literature. 87 All the textbooks provide one or more counter-examples to these descriptions, and open up a more varied view of sexual strategies as varying between species as well as depending on ecological circumstances. This approach is an effective way of providing insight into nature's diversity. The number of animal species used as examples gives a hint as to whether the authors have maintained this provision of insight as a goal in their descriptions. Relying on bird and mammal examples alone allows for only a very limited view of female and male sexual behavior. Excessive simplification gives the impression that there is a lawfulness to how females and males behave, when in fact scientists are trying to make sense of, and often making generalizing explanations for, an immense diversity. 87 Fausto-Sterling, Gowaty and Zuk, 1997. Furthermore, all descriptions of animal sexual behavior are focused on reproduction, and none of the textbooks mention the research of recent decades which shows enormous diversity in sexual behavior among animals. 88 This selective exclusion, combined with adaptationist claims such as: "Even if all behavior aims at increasing the survival ability and carrying the genes on [to the next generation] it is especially obvious when it comes to the animals' different mating behavior" 89 and "reproduction is among those urges that are totally governed by instincts", designates all non-reproductive sexual behavior as abnormal. These descriptions reflect the heteronormative assumptions built into the Darwinian evolutionary theoretical framework combined with reductionist, adaptationist claims. Textbooks are inherently oriented towards consensual understandings of current knowledge, since including the most recent and most controversial research findings could render editions redundant as new findings continue to be reported. It is perhaps not a coincidence, then, that there is such a thing as "the textbook version" - the simplified, conventional and perhaps outdated version. In this light, given the practicalities of textbook production and publication, it may seem unfair to criticize the textbook authors for simplifications and generalizations. However, writing textbooks involves the power of deciding what knowledge should be included and excluded. Furthermore, what is taught in most schools is guided by the content of the textbook. 90 At the same time, textbook authors have to relate to the Swedish curriculum goals of gender equity. 91 88 Bagemihl, 1999; Bailey and Zuk, 2009; Sommer and Vasey, 2006. 89 Björndahl et al., 2007. 90 Snyder and Broadway, 2004. In the preceding analysis I have sought to distinguish between what is normative within animal behavioral studies and what may be due to the popularization of animal behavior in the textbooks. I have also provided a feminist critique of conventional wisdom in the animal behavioral sciences, such as the over-representation of the evolution of male behavior and ornaments, and the underrepresentation of sexual selection in females. 92 It might seem unfair also to criticize the use of anthropomorphic terminology, which is commonly used within the scientific literature, but it is important to note that within the scientific literature such a term usually has a well-defined meaning that differs from its everyday meaning.
The use of terms such as nuptial gifts, casual relations, father, parents and harem is loaded with culturally-specific meanings and also encourages the drawing of parallels between animal and human behavior. Furthermore, there is ongoing criticism within the scientific community of the use of such terms. 93 92 Gowaty, 1997a; Hrdy, 1981. 93 e.g. Gowaty, 1982; Karlsson Green and Madjidian, 2011. Although this analysis reveals some problematic aspects from a gender and queer perspective, it also provides examples of solutions: showcasing diversity; avoiding stereotypes of female and male behavior; explaining how behavior varies in relation to ecological circumstances, and using gender-neutral language such as "parents invest in their offspring", and "different species have different sexual systems". When seeking to include examples of natural diversity across species within textbooks, there are pitfalls, one of which is that the diversity described may mirror normative understanding. For example, the description of one counter-example in particular, in which abandoning a nest is described in terms of completely different values depending on whether the subject is male or female, strengthens stereotypes instead of broadening perspectives. These portrayals may have a large impact on what students perceive to be "natural" male and female behavior. What does it mean to teenagers to read that males naturally have higher sexual motivation than females? Martha McCaughey has shown how projections of the cave man have been used by people to justify male sexual aggression against females and unruly, brutal, and asocial male behavior. 94 Additionally, scientific findings of sexual difference have been distorted and misappropriated, which has affected Western society's collective understanding of gender roles. 95 Furthermore, the dominant paradigm's contentions of eager, indiscriminate males and coy, choosy females are not in accordance with current evidence of females' active roles in sexual interactions. 96 Females mate multiply in many species and have been shown to overtly initiate and seek matings. 97 Indeed, a rather depressing picture of female sexuality emerges from reading recurring, male-focused descriptions, and in addition, there is one example of a female elephant seal screaming when a male mates with her, leading to a higher-ranked male chasing away the first male and mating with her instead. The text does not report whether females ever refrain from screaming during mating, or whether they may disapprove of any mating they are subjected to. Although animal examples are not meant to be taken as mirroring human behavior, it is nevertheless useful to ponder what picture emerges of female and male sexuality in nature. In contrast, it is generally known that it is impossible for male butterflies to mate with a female unless she is willing to mate. 94 McCaughey, 2009. 95 Eliot, 2011. 96 Tang-Martinez, 2010. 97 e.g. Hrdy, 1981; Lawton et al., 1997; Small, 1993; Tang-Martinez, 2010. In what sense does it matter that sexual behavior in animals is described almost only in a heterosexual context by secondary school textbooks? The silence about and omission of variation in non-reproductive and non-heterosexual sexual behavior does impact on students' understanding of biology. Our understanding of biology, in turn, affects our social identity-making and often shapes discussions about, for example, having children or not, and sexual orientation.
The belief that homosexuality "is unnatural" is one of the misconceptions many people have to deal with on a daily basis. Of course, morality should not be based on arguments of how things are in nature, because it is perfectly possible to argue for any stance depending on which natural examples one chooses and which perspective one adopts. For example, all four possible combinations of claims about the incidence of homosexuality among humans and animals have been used: homosexuality among humans is unnatural/refined because it does not occur among animals, or homosexuality among humans is natural/beastly because it does occur among animals. 98 However, teaching about sexual diversity among nonhuman animals is one way to counter claims of homosexuality's "unnaturalness." It is worthwhile here to recall that the term "heterosexuality" was coined only a little over one hundred years ago to describe sexual acts between a man and a woman that did not aim to result in reproduction, a practice which was considered by physicians at the time as a perversion that required a medical cure. 99 98 Sommer and Vasey, 2006. 99 Katz, 1995. A norm-critical perspective on sexual selection Biology still describes, explains and generalizes sexual behavior in stereotypic terms of what is the most common behavior for females and males. The language used expresses the norms of biological discourse by pointing out certain behavior or patterns as alternative or reversed. 100 Hence, such behavior is viewed as an exception to a general pattern while dividing several continua of behavior into conventional or reversed "sex-roles". 101 Recently, it has been suggested that sex should be viewed as a dynamic interaction between genetic sets and environments, as illustrated by multiple evolutionary examples of changes between genetic and environmental sex determination, as well as variability within individual development. 102 This is in line with recent developments in the field of ecological developmental biology. 103 Many animals change sex in relation to environmental or social circumstances. Mate choice strategies are flexible in relation to predation risk and density of potential partners (as pointed out in one of the textbooks), parasite load, age, and experience. 104 These findings should be incorporated in textbooks and teaching in order to provide a more contemporary and inclusive education for secondary school students. 103 Gilbert and Epel, 2009. 104 Jennions and Petrie, 1997. 105 Bagemihl, 1999; Bailey and Zuk, 2009; Small, 1993. Current textbooks describe female and male behavior as if they were distinctly different and mutually exclusive. It is important to give students knowledge of variation and overlapping distributions and to emphasize that an average represents a summary of data rather than what is "normal". 106 Recommendations Even if the textbooks at hand are lacking information about variations in sexuality, there is much information available elsewhere about variation in sex and sexual behavior in animals. These are topics that usually generate interest, so why not develop student exercises involving exploration of sexual diversity among animals? Several chapters in Bagemihl's Biological Exuberance: Animal Homosexuality and Natural Diversity, for example, can be used to provide historical accounts and reviews over evolutionary explanations of variation in sexual behavior. Some museums have produced exhibitions about variation in animal sexual behavior, such as "Against Nature?"
at the Natural History Museum in Oslo, 107 which has toured around Europe in the subsequent years. Sociologist Myra Hird describes how her social science students often take sex as an unchanging biological given and rely heavily on biological explanations of sex differences. She then describes how she problematizes their understandings of sex as static - through showing animal and human diversity (asexual reproduction, sex-changing and intersexuality), and introducing the perspective of science as a cultural system. 108 I urge textbook authors to deepen their awareness of how gender bias and heteronormativity shape the representation of animal behavior, and to describe such behavior with care: care for what knowledge about biology means for the identity-making of young people. These textbooks have power over how biology and what is "natural" come to be perceived in society at large. Feminist critiques of male bias in the natural sciences apply to science education too. Furthermore, as the analysis shows, simplifications do not have to be over-generalizations; variability and natural diversity are often more interesting than those examples sought out merely to mirror a human, pair-bonding, heterosexual, males-competing-and-females-caring norm. In addition, gaining knowledge about variability in sex, sexual behavior and sexual characteristics, such as genitalia, includes not only awareness of deviations from norms, but the realization that we are all included in these continua. In my own teaching practices I aim to destabilize dichotomous conceptions of sex, as illustrated by a student's take-home message from one of my lectures: "[I learnt] that sex is not two poles but a scale and that I cannot know my sex". This is not to imply that I deny sex differences or categorizations of women and men; rather, it should be seen as a result of a discussion of intersexuality 109 and the insight that some intersex people realize their condition rather late in life. Hence, my goal is to problematize understandings of biological sex and to encourage students to adopt a critical attitude to knowledge itself. Conclusions Overall, the textbooks offer dichotomous descriptions of females and males, and they are heteronormative in that they all describe sexual behavior in only the context of opposite-sex interactions and reproduction. However, there are also examples of openings for understanding biological (heterosexual) diversity and sexual strategies as also dependent on ecological circumstances. Much remains to be done before current textbooks will include recent developments in the understanding of sex and sexual behavior in animals. Changing stereotypical portrayals of animal sexual behavior into a more variable view of sex and sexuality will benefit students and provide a more accurate basis for understanding these issues. Two dads / two moms: Defying and affirming the mom-dad family. The case of same-gender families in Slovenia Ana Sobočan 'Family' remains a site of ideological struggles. What constitutes a family and who can become/have/define a family is a matter of ongoing political and other debates and discourses. These become evident in the programmes of political parties, for example, as well as in the agendas in family legislation and social welfare policies, even in the changes in sociological textbooks, and so forth.
Families where two male or female partners are parenting together are simultaneously gaining visibility in the public space (and legislation in certain countries) and their children are becoming central in different discursive practices, where their presumed interests are used in the argumentation of (mostly) the opponents and the advocates of equal rights for all family constellations. A vast body of research about lesbian and gay families (begun in the 1970s) contributes to the visibility and understanding of a variety of forms in which families are created. As Malmquist and Zetterqvist Nelson write, it is 'important to understand "family" as something that is continuously performed - "doing family" - rather than a specific structure - "the family".' 1 Weeks, Heaphy, and Donovan claim 2 that it is exactly non-heterosexuals who are at the forefront of wider changes to family life, and Haimes and Weiner, 3 for example, write how non-heteronormative family models present an important challenge to the heteronormative model. The times of transitions and transformations are usually the most interesting because the dynamics of resistance and empowerment in relation to change are most visible. In regard to families where both parents are of the same gender and are in a partnership relationship, 4 Slovenia is one of the countries in such transformative times. Between the commencement of the struggle for equal rights and, subsequently, the first explicit opposition to such equality, parents and children from same-gender families are developing strategies for survival in an environment where conflicting and deficient legislation 5 is set against a background of negative public opinion and often very positive interpersonal experiences. 1 Malmquist and Zetterqvist Nelson, 2013, p. 1. 2 Weeks, Heaphy, and Donovan, 2001. 3 Haimes and Weiner, 2000. 4 I will use the term same-gender families in this essay when referring to families where both parents identify with the same gender and are recognized as individuals with the same sex in their environment. Because of their gender identification, parents in these families are also recognized as homosexual (names such as gay, lesbian, rainbow, etc., families are also used elsewhere). Recognizing the vast array of human experience and identities, I will nevertheless in this essay not address, problematize, or discuss these different experiences and identities (and will hence not refer to queer, intersex, transgender, bisexual, etc., identifications), because I will not be interested primarily in the adults' sexuality practices, gender practices, or other practices and identities, but in the experiences and strategies of children whose families don't pass as 'normal' (mom-dad families), because the parents have a recognized same gender. 5 Parents from same-gender families by far do not have the same rights as different-gender families; nevertheless, there are some children in Slovenia who have two same-gender parents in a legal sense. This essay will present some of these strategies, drawing on research on the intersection of same-sex families, their children, and the school environment and
6 I will use this research, which aimed at elucidating the school experiences of children from same-gender families (denormalization, 7 homophobia, and the strategies to deal with it), to focus on how parents in same-gender families face and deal with their children's school environment, and I will present the wider context of the struggle for equality and responses to it in Slovenia. I will thus shed light on the current debates relevant for same-gender families in Slovenia and discuss the phenomenon of the moral homophobe, both of which will serve as a framework for understanding the parents' strategies to deal with their children's school environment. Another aim of this essay is to reflect on the research production in relation to children in samegender families. To frame these discussions, I will first refer to the existing research and research interest related to same-gender families, as well as try to bring attention to how the classic research actually frames the family debates with heteronormativity. Researching life in same-gender families A vast collection of research on non-heterosexual parenting has been growing since the 1980s. 8 Importantly, the majority of this research grounds in, reconfirms, or does not at all challenge the dominant ideas about gender, gender roles, and sexual identity. It is exactly by referring to the mainstream ideas about 6 Zaviršek and Sobočan, 2012. The research taking place in Slovenia was part of an EU (Daphne II) funded research study involving researchers from Germany, Sweden, and Slovenia who explored the intersections between society, school, raibow families, and children from these families (see Streib Brzič and Quadflieg, 2011). The complete reserch study involved interviews with 34 children from rainbow families, 63 parents from rainbow families, and 30 expert interviews. 7 Streib and Quadflieg, 2011;Sobočan and Streib, 2013. 8 For meta-analyses of the research, see, for example, Anderssen et al., 2002;Gartrell and Bos, 2010;Lesbian and Gay Parenting, 2005;Perrin, 2002;Parks, 1998;Stacey and Biblarz, 2001;Tasker, 1999. 'normality' that these studies aim to show that empirical data and findings do not confirm the general stereotypes, prejudices, or negative claims about life in families where both parents are of the same sex or/and are not heterosexual. Such research nevertheless has been valuable to an extent in securing more equality and 'acceptance' for same-gender families. The research has suggested that children in same-gender families are not experiencing more crises or emotional/mental health troubles than those who grow up in different-sex families, 9 that they are not experiencing more peer violence compared to other children, 10 that their sexual identity is not more often homosexual than in the general population, and that their gender roles (as adequate to the normative model) are clearly defined. 11 Some studies speak of more equal and quality relationships between parents and children in same-gender families in comparison to the 'average' different-sex family, 12 and of the quality of the relationship between children and non-biological parents as comparable to relationships between children and biological parents. 
The research has shown that sexual orientation or identity is not relevant to the benefits and interests of children in their development 14 and that the processes inside the family (for example, the quality of parenting and attachment) importantly influence the child's development, whereas the structure of the family (for example, the number of parents and their gender and sexual identity) does not. 9 For example Chan et al., 1998; Golombok et al., 1983; Patterson, 1994; Tasker and Golombok, 1997; Wainright et al., 2004. 10 For example Lindsay et al., 2006; Tasker and Golombok, 1997; Vanfraussen et al., 2002. 11 For example Golombok, 2000; Tasker and Golombok, 1997; Wainright et al., 2004. 12 For example Brewaeys et al., 1997; Chan et al., 1998a; Flaks et al., 1995; Golombok et al., 1997. 13 For example Bennett, 2003; Vanfraussen et al., 2002. 14 For example Ryan-Flood, 2009. This has been confirmed by various research approaches - research in families where the children and parents are biologically related and in families where children are adopted, as well as research in families where parents identify either as heterosexual or non-heterosexual. 15 One of the more recent research studies that compares families with adoptive and biological parents has shown that the processes in families are more important than the structure of the family: regardless of the sexual identity of parents, the children were thriving the most in families where parents were using effective parenting techniques and were happy in the relationship with their partner. 16 Hence, all this research production in the field of same-gender families demonstrates the irrelevance of sexual identity in regard to parenting competence and child development. At the same time, it also clearly exhibits a specific research interest in relation to children, childhood, and child development. A larger part of research on non-heteronormative families is focused on researching the anticipated risks for children and the psychosocial consequences for their development and childhoods. The main question that usually seeks to be answered is: is life with homosexual parents in any way deficient or risky for children? The research interest thus speaks mostly to how scientific epistemologies cannot avoid the demands of heteronormativity. 17 I agree with Hicks that the research interest should actually be distanced from 'proving the acceptability' of same-gender families towards exploring why certain family forms remain marginalized (socially, legally, etc.) and ostracized, as well as how the discourses of the 'otherness' and 'deficiency' of these family forms keep being reproduced. 18 15 For example Chan et al., 1998; Erich et al., 2005; Lansford et al., 2001. 16 Farr et al., 2010. 17 With heteronormativity I refer to a set of norms, beliefs, and attitudes that prescribe and frame reality in a way that people belong to either of two genders (male and female, in relation to their biological givens), which also involves 'natural' roles in life. In this frame, the appropriate/natural sexual orientation is heterosexuality, and hence sexual and marital relations are 'naturally' between a man and a woman. Heteronormativity thus prescribes alignment of biological sex, gender identity, gender roles, and sexuality.
In this sense, the most valuable research pays attention to the lived experiences of children (and parents), away from comparability and comparisons (and assessments of the behavioural, psychological, social, and sexual 'appropriateness') with the norm, and away from building arguments against the background of 'otherness'. Such research also holds the promise of stepping away from the victim/success narratives, which currently still dominate the research on non-normative families. Drawing on the available research on same-gender families (for example, the research I refer to in the previous paragraphs), (at least) two kinds of narratives can be observed: the victim narratives and the success narratives. The victim narratives speak of the 'inherent difference' of such families and children, which is potentially a cause for discrimination and violence; they call for political action, but can be used at the same time to strengthen the 'otherness' discourses. The success narratives speak of such families and children as 'absolutely the same as everyone else' and claim the right to equality against the background of 'sameness'; they potentially delegitimize positive discrimination and political action, and possibly contribute to heteronormative discourses. Nevertheless, even if these two narratives seem to oppose each other (which would hint at the 'authenticity' of one narrative and the 'falseness' of the other), they do not exclude each other, because different perspectives of the life-world and experiences of families and children can be legitimately and correctly observed and understood from different viewpoints - the difference in the viewpoint creates a different contextualization and does not necessarily reduce the veracity of the findings. The first narrative-set usually speaks of the attitudes in the society/environment (school, peers, etc.) as they affect the child's and family's reality; the second is focused on researching the child's development and achievements. Both narratives are relevant and important for understanding family life and social life; nevertheless, to answer some questions, the first narrative victimizes the children, and the second narrative unifies them - erases their specific experiences. Both narratives reinforce heteronormativity: by incorporating an anticipation and inscription of their 'sameness' or 'otherness' in the research instruments themselves. Families: Gender and sexual identity trouble The concepts of 'otherness' and 'sameness' speak foremost to how both narratives cannot escape heteronormativity and how they hence reinforce it. The norm of heterosexuality with its associated gender roles and the binary division between what is normative and non-normative are the grounds, a reference pool for the majority of all interactions. 19 Most research studies until now have measured the factors that influence child development and the childhood life-course 20 (social and family factors: the intertwining of interactions between the child, his/her family, and the environment); these studies are inevitably marked by the contextual viewpoint and normativity that frames both the researcher's view and the responses of the researched. The alignment of these expectations and offered responses is homosexuality.
The sexual identity of the parents (self-identified or prescribed) is the focus: many children have two carers of the same gender (mother and grandmother, biological father and mother's new male partner, etc.), and many parents do not practice only heterosexuality; nevertheless, concern is raised primarily in one of these combinations - parents of the same gender who practice homosexuality. Why is this combination particularly alarming and disturbing? Two issues seem to be especially provocative: (visible) homosexuality and the question of the gendered division of labour. Despite the fact that homosexuality, at least in some Western countries, seems to be less and less pathologized in interpersonal relationships and that homosexual individuals and groups may be less demonized and excluded than they used to be, this kind of 'acceptance' and 'tolerance' in most cultures often still necessitates a silencing of sexual identity and even 'way of life'. Smith (1995, in Richardson, 2000, p. 269), drawing on Britain, for example, wrote about how the 'homosexual citizen' is - in exchange for certain rights - coerced into keeping his or her sexuality confined by the socially and legally defined limits of privacy. Ward and Winstanley (2003), in their research on workplaces in the United Kingdom, use the term 'absent presence' to describe the dynamics of forced silencing among sexual minorities; Švab and Kuhar (2005) in Slovenia write about the transparent closet and intimate citizenship to explain consenting to invisibility and silencing of one's own (homo)sexual identity. As Švab and Kuhar claim, homosexuality, at least in Slovenia, is accepted, 'permitted' as long as the sexual activity and identity are limited to private spaces and non-heterosexual environments - that is, away from the public sphere (Švab and Kuhar, 2001). Such a tightly closed (even if transparent) bubble, which disables contamination (of the presumably sexually neutral) public space with homosexuality, becomes in the case of same-gender families very fragile and prone to bursting. Even if the majority of the same-gender families involved in the first research study in Slovenia (2006-2008) had positive post-coming out experiences in their interpersonal relationships, the generalized public response was negative (Sobočan, 2009). The fear of general visibility and presence of same-gender families, foremost in the legislation, has generated a considerable and loud public opposition against making these citizens/families more equal. The entry of these parents and children into the institution of family (legally and socially) is still unsupported and unwanted in Slovenia. (In Slovenia, a public referendum about new family legislation was held at the beginning of 2012, and the result was a rejection of the proposed legislation; I refer to this further along in the text.) This 'interdiction' is a consequence not only of the negative attitude towards (visible) homosexuality, but also of the negative attitude towards the destabilization of gender roles and the division of labour and power. Heimes and Weiner (2000) write about three main challenges to the existing social order posed by same-gender families: ideological (because they are seen to destabilize the fixed gender roles and phantasms about who/what is/can be a mother), structural (because they change the 'ordinary' and 'proper' family constellation), and biogenetic (reproduction, which used to be exclusively in the domain of the normative family, is no longer limited to heterosexual intercourse, nor to medical interventions).
Inclusion of different family forms as legitimate thus signifies foremost a destabilization of the role and the superiority of the image of the normative family - mother (who nourishes, cares), father (who disciplines, teaches), and their (biological) children. Despite the fact that such a family form is actually a novum - at the forefront only a bit longer than the last two centuries - is its exclusivity of grave importance for maintaining the structures and power relations in society (from the perspective of gender, national, economic, etc., interests)? (Coontz, 2000; Goody, 1983.) As can be observed in public reactions to it, when a minority breaches the forced silencing and thus destabilizes the prescribed gender roles, the initial response of the dominant group that we can most surely expect is a general opposition - with an attempt to strengthen and reinforce the power relations that it shook for a moment (Sobočan, 2013a). Hence, the response to the first wave of public visibility and demands for equal rights of same-gender parents in Slovenia was reactive. If I started this paper saying that lately, same-gender families and their children are becoming more visible in the public sphere, the newly acquired visibility nevertheless does not erase their absence from 'family' - this absence seems to be one of the central characteristics of the life of same-gender families in Slovenia. Namely, families build their legitimacy mostly on two pillars: biological and legal ties. In families where both parents are of the same gender, the children are usually biologically tied to only one parent, and Slovenian legislation does not provide the right to marriage or joint adoption to homosexual partners. (Currently, there are two families where both male partners are legal parents of the child - both adopted the child abroad and acquired parental rights there - and six families where the female partner of a biological mother adopted the (fatherless) child. The one-parent adoptions actually took place within a legal 'loophole', so it cannot be claimed that the rights of social parents are secured.) Legal non-recognition thus both creates and maintains the cultural attitudes towards non-heterosexual partnerships and families. The first research on same-gender families in Slovenia (Sobočan, 2009) demonstrated that lack of awareness about the existence of non-heteronormative family forms, along with a domination of biological ties, often leads to posing questions such as: 'Whose actually is this kid?' or 'Who is the kid's real mother?'. The second research study on same-gender families in Slovenia showed that the family life and visibility of same-gender families does pose a challenge to the social concepts about what/who is a family, as well as what/who is a parent, and with this addresses the limits that are set with heterosexuality as well as those that homosexuality seemingly delineates (Sobočan, 2011a).

Moral homophobes

When borders are shaken and fences are crossed, the keepers of the borders awaken.
The effects of protecting the (presumed) limits and borders of the family definitions were especially visible in Slovenia in early 2012, when there was a possibility for new family legislation to be passed - one where the marriage rights of heterosexual citizens would be extended also to homosexual citizens. As a result of a referendum, the legislation was not passed. The public debates about the possible legislative changes involved expressions of intolerance, hate speech, open homophobia, and violence against those who attempted to cross such borders - that is, against homosexual adults. In Slovenia, the topic of homophobia has been discussed (only) in the last decade (Kuhar et al., 2008; Kuhar et al., 2011; Magič, 2008; Magić and Janjevak, 2011; Maljevac and Magić, 2009; Švab and Kuhar, 2005; Tuš Špilak, 2010; Velikonja and Greif, 2001): the testimonies of young homosexual adults vividly portray the attitudes towards homosexuality in Slovenia. Such attitudes can be expected in all situations connected to homosexuality, because homophobia targets not only persons who openly identify as homosexual, but actually uses 'homosexualization' to legitimate intolerance, hostility, and violence. Homophobia is a mechanism which uses the label of homosexuality as a tool for hostility: homosexuality as a label is used to mark an individual or a group with 'otherness'. (See also 'new homophobia' - violence and discrimination against different social groups - in Kuhar, Humer and Maljevac, 2012, p. 53; the authors also refer to Rener, 2009; Švab and Kuhar, 2005; Ule, 2005.) A homophobe needs an individual, group, or phenomenon which he/she can label with homosexuality to justify his/her acts: this may be a person's self-identification with homosexuality or homosexuality 'externally' ascribed to a person. (Homophobia - and a homophobe - does not signify only a violent, discriminatory act or the ideas of an individual or a group. As Kuhar, Takacs and Kam-Tuck Yip (2012, p. 16) write, we can talk also of the 'social and cultural norms and values, which explicitly and implicitly construct homosexuality as "the other"'.) Therefore, homophobic responses can also be expected in the case of children from same-gender families, where the sexual identity of their parents is used to 'homosexualize' the children. What is important in this scenario is the way the main (moral, but not rational) argument against same-gender families or childrearing in same-gender families is formed. The moral homophobe does not expose himself or herself as violent and intolerant - he/she is someone who claims to defend the rights of the child, who advocates for the child's good and a healthy childhood for her/him, who calls for protecting the (innocent) child against the parents who will supposedly harm the child with their homosexuality - and who expose the child to the homophobic violence identified in society by such a moral homophobe. (The term 'moral homophobe' may sound like an oxymoron; nevertheless, it adequately describes individuals, groups, or ideas which can be identified as homophobic, but who present themselves and claim to be moral, against the background of certain societal, cultural, or religious values. I coined this term when I was describing and discussing the public debates around the suggested changes in the family legislation in 2010-2012.) The moral homophobe himself/herself generates the intolerance and hostility in the society to which he/she refers; nevertheless, his/her claims and behaviour are effective because they mobilize emotions through the victimization of children.
The mobilization of emotions is especially effective because the moral homophobe presents the children's rights as being opposed by the agendas of adults, who - according to the interpretation of the moral homophobe - fight for equal rights of all families exclusively to gain rights for themselves (and not the children) and to answer their own (and not the children's) needs. This perverse shift portrays the parents as violent, as those who sexualize their children with their sexual identity and hence are dangerous to the child. The moral homophobe identifies this sexualization in at least two ways: as symbolic-social sexualization, that is, contamination of the child with the homosexuality of the parents, which will evoke negative responses in the environment (in school, etc.); and as moral-identity sexualization, that is, involving the fear that such parents cannot 'teach' their children the right, normative sexuality - that is, heterosexuality.

Parents in same-gender families in Slovenia

Attitudes towards homosexuality in Slovenia, which are presented in various research studies (see above) and were confirmed in public debates around possible legislative changes, also provide a background for understanding that parents and children in same-gender families can expect intolerance, discrimination, and negative attitudes, which might be why they have difficulties speaking out about their family reality. Previous research studies about same-gender families in Slovenia (Sobočan, 2009; Sobočan, 2011a) have been explorative: they opened a space and gave voice to topics and meanings that the interviewees conceptualized as the most important and relevant to their family reality. Thus, the first research presented topics connected to the dynamics inside the family and issues that describe the position of same-gender families in the society (Sobočan, 2009). The next research identified a growing awareness about the unequal status and treatment, strategies for establishing the legitimacy of family life, and the potential effects on the conceptualizations of the 'family' and homosexuality (Sobočan, 2011a). The last major research study about same-gender families also involved the narratives of the young people living with two parents of the same gender (Zaviršek and Sobočan, 2012). The analysis showed that parents (and children) expect homophobic responses from their environments and identified the different behaviours or strategies that the parents developed with the aim of protecting their children from the negative attitudes of others (Sobočan, 2012). Even if every family story is specific, sixteen in-depth interviews with parents from same-gender families provided information on the basis of which an understanding of strategies for dealing with (expected) homophobia could be developed. In Slovenia, 16 parents from 11 families were interviewed: two men and 14 women, 29-54 years old, all except one from urban areas. In these families, 15 children are growing up (five aged up to 6 years, six aged 6-14, three aged 14-18, and one older than 18). (A detailed description of the methodology used in this research, along with ethical and other considerations, can be found in Streib and Quadflieg, 2011, as well as Zaviršek and Sobočan, 2012.) The composition of the families of the interviewed is quite diverse:
children in five families were born in heterosexual relationships (eight children), children in four families were born in homosexual relationships (five children), and in one family, one child was born in a heterosexual relationship and one in a homosexual relationship. Ten of these children have (more or less active) fathers, and five children were conceived either with assisted donor insemination or donor insemination at home, but the identity of the donors is anonymous. In relation to previous research in Slovenia (Sobočan, 2009; Sobočan, 2011a), which included families of two same-gender partners, families of two same-gender partners who share custody with a previous (different-gender) partner, and families of two same-gender partners who parent together with two other same-gender partners or a gay person, this sample includes families in which children have been conceived in a heterosexual relationship but, after the recognition of a parent's homosexual orientation, both parents still take care of the children on a daily basis (possibly also by still living together). In addition, three young persons who grew up in same-gender families were interviewed. Their ages were between 16 and 23 years; all of them were conceived in heterosexual relationships and have two active biological parents of different genders. A boy (17) and a girl (16) are living with two mothers; a young woman (23) has a gay father. All the interviewed parents expressed the expectation of homophobic responses, even violence, while at the same time they cannot fully control - or protect - the lives of their children; they address and deal with the expected homophobia in the ways they feel are best. The parents experience constant pressure to 'justify' and 'demonstrate the appropriateness' of their family life and fight for recognition of the parental status of both parents, symbolically as well as legally. 'Justifying' along with fighting for equal rights can be very demanding, and the pressures create feelings of uncertainty and fear and encourage silencing and invisibility. Being recognized as like 'all others' or as 'normal', according to the opinion of many parents, still guarantees the most safety for children from same-gender families, especially in an environment where there are no known or recognized models for how parents and children should behave or present their families at school or in a wider environment. The strategies of parents can be classified into three clusters, with different approaches, different levels of understanding of what would be best for their families in school, and different ways in which they themselves (re)construct 'normality'.

Family structures and passing strategies

Passing strategies are a response to societal expectations (in Slovenia) that every child needs to have a father and a mother, because this is how the 'real', 'natural' family is constructed. It can thus be expected that a child living with two mothers who has a father (i.e., a child born in a heterosexual relationship or a child with a known donor or father) will be perceived and accepted differently than a child who does not have a father or was conceived with anonymous donor cells.
Namely, the child whose biological mother and father are both involved in his/her life might more easily answer the pertaining questions (voiced by just about anyone in the heteronormative environment) - 'Don't you have a father?', 'Where/who is your father?' - and pass as an 'ordinary' child who has the 'proper' role models in his/her life. These strategies give the environment (teachers, etc.) a chance to relate to what they believe is 'normal' or 'right'. The ways in which the interviewed parents 'normalize' the situation and approximate their family to the normative pattern are through involving both biological parents and through the legitimation of family relationships through biological connections, such as presenting the mother's partner as the child's aunt (mother's sister). The last strategy was explained by one parent:

To make it easier for the child, we decided that in [primary] school, I would function as his aunt. They accepted this completely normally; they even found that we [the biological and the social mother] are visually very similar. (Ina)

As the mother explained, the role functioned well in a suburban school, where these two mothers felt it was too dangerous to disclose themselves as a lesbian couple. They felt this worked well, and it gave the social mother the opportunity to participate in the school life of the child (e.g., teachers' meetings, etc.). The child also has an identifiable (but not present) father, which probably cast aside any other 'suspicion' about the 'aunt' being in any other relationship to the mother. Another model identified in the interviews can be described as a family model where the parents were previously a heterosexual couple but now have new sexual partners, yet remain in a close familial relationship, functioning fully in the child's life on a daily basis without necessarily disclosing information about their sexuality. Thus, the family functions in a way recognizable as 'proper', as just a 'divorced' family, while the other carers of the child (the parents' same-gender partners) are not really involved in the child's life in the sense of being recognized, positioned, or (self-)identified as persons who hold a parental/carer role. In these parents' views, such passing strategies protect the family from 'sexualization' - that is, against being identified as homosexual parents, which produces the 'deficits' of one of the parents and the consequential 'illegitimacy' of such family forms and family relations. It needs to be noted also that the respondents have spoken about violence and discrimination against children who have disclosed in school in what kind of a family they live. Rigidity and fixation on the limits of the normative concept of family also constrain the parental status outside the nuclear matrix: legally and symbolically (but not on the level of everyday practices), two parents simultaneously mean the exclusion of the third parent (for example, in the mother-mother-father constellation). This is also demonstrated by the imperative of social services in cases of single-parent adoptions - for example, in cases where the non-biological, social mother wants to adopt the child, the father needs to be excluded from the relationship with the child, not only legally, but also physically and symbolically. (The praxis in this field is developing only now, because of the low number of cases the services are dealing with. As testified in the conversations with those who are in the process of second-parent adoption of the child, the absence of the other biological parent (father) is necessary for a successful adoption. See also Sobočan, 2011b.)
Not only does the strategy of passing protect the family against homophobic responses; the exclusion of the social parent also coerces the family into choosing which of the parents will be invisible in the public space - and the parents rarely choose the exclusion of the other biological parent (especially in cases when the child was born in a heterosexual relationship). Such a strategy simultaneously perpetuates the invisibility of same-gender families in society: invisibility is thus both an experience of same-gender families (invisibility in the legal and symbolic sense, invisibility in public representations - schoolbooks, advertising, and the like) as well as their strategy: the parents consent to invisibility or maintain it because of the expected negative attitudes and intolerance towards a non-normative family reality. The passing strategies, where the presence of both biological parents (sometimes or often at the cost of the social parent) is important, involve selecting who will get to know the family situation and when; the strategies of protecting are connected with a (full) invisibility of the partnership relationship between the adults, whereby the partners do not assume a visible parental relationship with the child.

Invisibility and strategies of protecting

Certain parents understand the invisibility of their sexual relationship as protecting the child from becoming himself or herself sexualized, which is a part of these strategies; that is, some parents do not even disclose their (same-)sexual relationship to the child - which they justify by their wish to protect the child. This invisibility seems to be restricted not only to the school (public) life, but sometimes or often overarches the family sphere. Many parents who were previously living in a heterosexual relationship felt reluctant to speak about their (new) sexuality to the children, even if they were, for example, already living with a partner of the same gender. One of the parents explained that she is reserved about coming out to her children (aged 10 and 13) because she believes she has to protect them from the burden of (their) coming out in a non-urban homophobic environment - if the children knew their mother was a lesbian, they would have to be open about it when someone asked them questions. This kind of behaviour is often connected to issues of custody: parents fear that the other biological parent (usually the former partner) would demand full custody of the child and would succeed. Some parents said that they believe that their children already 'suspect' their homosexuality, that they 'understand what is going on', but that they have not yet gathered enough courage to speak about it with them - again, not because of their personal relationship with the child, but because of the anticipated consequences for the child in his/her environment. In this way, parents perceive the secrecy of their sexuality as actually protecting the children from being part of it. One of the gay fathers spoke of the mother of his child confronting a schoolteacher when the pupils were supposed to speak about their families in school: she claimed these were personal issues which should not be addressed.
Such assertiveness protects the family by preventing an 'information leak'. Much effort is invested in the information not leaking - one of the mothers spoke about her daughter confiding in her best friend only after they had been friends for almost ten years (and the family had obviously managed to remain invisible). Nevertheless, parents recognize that there are two sides to the coin of invisibility. One of the mothers presented a case of abuse of her daughter in school after she told her class that she lives with two women: bad marking and bullying from teachers led to deteriorating health conditions, while the mother was constantly confronted by two teachers who claimed 'that the reason for that was that her daughter terribly misses her father'. The mother transferred her daughter to another school, but only after recognizing that the reasons for her daughter's bad school outcomes and hospitalizations actually lay in the attitudes of two homophobic teachers. Her family urged her to report to the police what was happening and to sue the school, but she decided against it, concluding that because they were not officially 'out' at school, she would not be able to claim discrimination on that basis. When signing out of this school, the mother said:

The headmaster agreed immediately, as she wanted to be out of this matter as soon as possible. All she was actually interested in was whether anyone would 'pay for it' - if we would report them; she was afraid of that. (Irina)

Activism and positioning strategies

The parents who are less reluctant to out themselves as a family in school or in public space are those who jointly planned the family and where the child was born into their (same-gender) relationship. It is more frequent in such cases that both the biological parent and the social parent present themselves as parents in school and elsewhere, partly because of the absence of the threat of custody issues. Nevertheless, social parents who are out to the child's teachers as 'parents - partners of biological parents' report that this is often a struggle: they have to be active in the relationship with the school, which they report is often cold and distanced. Some teachers have a hard time getting used to the equal parental role of the same-gender social parent, but in time and with persistence, they become used to it and accept it. Nevertheless, these parents often find the active role really important because, as one mother explained, it is likely that the teachers would 'discover' the family structure through the children's narratives, essays, and the like. Some parents report that they believe the teachers know they are a same-gender family, but do not feel like discussing it with them yet. On the other hand, one mother said:

My partner didn't agree that we tell them that the kids live with two women; she said, it's not their business who is sleeping with whom. But I told the teacher. She never said anything to me about it afterwards. But when they were drawing families in school, there were no comments anymore. With the first kid, when she drew two grown female figures, the teacher said: 'today we are drawing family, not friends.' Now, there were no more comments. (Ela)

Some parents also feel that it is important that they are out as a same-gender family in school, but would themselves not be out in some other spheres of life (such as their work environment and the like).
Recently, more and more families purposefully speak, or plan to speak, about their family to kindergarten and schoolteachers in what they conceive and describe as a truly activist manner. They see the importance of 'educating' teachers - so that the children would be able to talk about their family reality freely, without any confusion, secrecy, or doubts. Especially the very young families in the research sample, where children were born with the aid of donor insemination, feel that what is important is an immediate confrontation of the teacher with their family form and parental roles, as well as clear demands for the introduction of images of various family structures in the learning materials. These mothers would all agree that what is important is how one positions oneself: as a 'potential victim of homophobia' or as an 'equal parent, who just wants the best for his/her child, as most parents do'. They see this open position as an opportunity to demand equal recognition and participation. At the same time, it is of crucial importance for them to raise their child in a self-confident, empowered way and to equip her/him with the strength needed for an ongoing social battle.

Young people from same-gender families

The young people who were interviewed in the framework of the same research study have not yet developed such 'family pride' as the activist parents. For these young people, the main strategy was silence and secrecy about the family reality (Zaviršek and Bercht, 2012). The young people's experiences show that their environment (peers, teachers, extended families) often implicitly demands and rewards silencing (ibid.). The strategy of silencing partly protects the children and young people against violence, while at the same time it has consequences for the young people's perceptions of themselves and their relationships with others. The concept of 'normality' is very important for young people: their strategies of dealing with the environment and the expected homophobia are tightly connected to feelings of denormalization (Streib-Brzič and Quadflieg, 2011) and a desire to be accepted, to have their families recognized as 'normal'. Belonging is equally important in both cases - loyalty and belonging to one's family as well as to one's peer group and other non-family contexts - which creates a conflict. How heavy this conflict is depends on the severity of the expectations and pressures of the heteronormative environment.

Summary: Same-gender families in Slovenia

All the strategies that parents employ are directed towards protecting their children from anticipated homophobia in school and relate to the different approaches to and understandings of what might be beneficial for their families and school, the different levels of what the parents perceive as being open, and how they (re)construct 'normality'. These strategies were identified as: passing strategies (father figure strategy, biological relative strategy), protective strategies (strategy of invisibility in the family, strategy of the invisibility of the family), and positioning strategies (active parent strategy, activist parent strategy). All of the participating parents anticipate a danger of homophobic attitudes or even violence, but the school life of their children is to some extent uncontrollable, so they approach this anticipated danger in different ways.
What is characteristic is that there are no models - even, to some extent, no culture - of families where both parents are of the same sex, which surely is a consequence of the fact that same-gender partners in Slovenia have only recently really started embracing and claiming their right to become parents. Nevertheless, in the current social climate, the parents seem to have experienced pressures and demands connected to their family life, which result in insecurity, fear, and secrecy on many levels. The feeling and appearance of 'sameness' or 'normality' still seem to be the most promising and safe place for children in the view of their gay and lesbian parents, who are only now developing models of how to approach schools, talk with children, and deal with their environment. (As one of the reviewers of this paper remarked, 'it is a paradoxical tragedy that safe space means remaining in homophobic normality'.)

Concluding remarks

Children in same-gender families surely have some specific experience linked to their family reality. Gustavson and Schmitt, for example, use the expression by Stefen Lynch, 'culturally queer', to describe their particular situation: an experience of associative stigma, that is, stigma that is acquired on the basis of their parents' sexual orientation and at the same time through association with the LGBTQ community (Gustavson and Schmitt, 2011, p. 161). To better understand and give recognition to the role of their experiences, new research in the field of childhood and family life should be encouraged - research that conceptualizes children and childhood outside of the matrix of adaptability, success, and victimization. Critical research should address and present the experiences of children and youth through a perspective relevant to them. Children and youth are recognized today as social agents, who are not simple copies, victims, or rebels in relation to their environment or parents but actively co-create meanings in society. Such perspectives may hold a promise to defy the discourses of moral homophobes and the abuse of children that suits their different agendas. These approaches might also be important for trying to confront the heteronormative discourses in which the two-dad or two-mom families can present only a challenge (sometimes presented as threatening) to, or an affirmation (sometimes presented as heteronormative conformity) of, the mom-dad families.

Dr. Ana M. Sobočan is a researcher and lecturer at the University of Ljubljana, working on different topics in social justice and inclusion, and recently predominantly in ethics. Since 2006 she has been researching the family lives of non-heterosexual parents and their children in Slovenia. She has presented her findings in domestic and international journals and conferences, publications, seminars, and teaching (for social workers and educators), as well as in the media. Following from her research work, she is also developing recommendations for social workers working with LGBTQ families. ana-marija.sobocan@guest.arnes.si

Landscapes of safe(r) spaces

Mel Freitag

A whole history remains to be written of space - which at the same time would be a history of power - from the great strategies of geopolitics to the little tactics of the habitat.
- Michel Foucault, 1986

What does it mean to queer a schooled space? When queers are physically visible in schools, how does that change the power relations and relationships within it?
Researchers in the field of Human Geography have explored physical spaces that are "queered" - the gay ghettos - such as the gay bar, neighborhood, or city (Rushbrook, 2002). But while these gay spaces are celebrated, and while markers such as the safe space triangle sticker are used by allies in schools in the USA to mark their offices as places where LGBTQ (lesbian, gay, bisexual, transgender, queer/questioning) students can "go" to feel comfortable, or at least not bullied, that does not always mean that queers feel safe(r) in those spaces. Also, if one space is marked safe, what happens to the other, unsafe spaces? Do they stay intact, and if so, is that to the detriment of all students? Therefore, it is imperative also to define what a safe or safe(r) space is, and then why such spaces should exist at all. According to a recent nationwide survey conducted by Joseph Kosciw, Emily Greytak, and Elizabeth Diaz, nine out of ten LGBTQ-identified youth state they have been harassed and bullied in their schools. This is unacceptable. One option in particular for queer subjects is to construct, live in, and utilize these "queered" cities, neighborhoods, and schools. A physically separated "gay space" could be a countersite for other, more privileged landscapes and narratives. For example, geographer Dereka Rushbrook takes Michel Foucault's idea of "heterotopias" and defines them as "places that hold what has been displaced while serving as sites of stability for the displaced", which I will use as a framework in this article. Much of the literature on queer geography has been on isolated or commercialized spaces, neighborhoods, cities, workplaces, bath houses, media, drag shows, sex workers, and more recently on immigration, transnational politics, public health, and globalization. The school, for example, is traditionally a space that holds potential economic and social power for underrepresented students, including but not limited to queer-identified individuals.

Queering a school: Is it possible?

Safe schools are not and should not be limited exclusively to queer-identified students. Although queer-identified students are in these safe spaces and in fact do "feel" safer, it is because of the practices, strategies, curricula, and policy decisions that the schools make in and outside of the classroom. I argue that it is possible for a heterosexual-identified student to in fact feel "safer" in a queer space. There is a gap in the work on heterosexualities, and as long as queers are discriminated against, "queer spaces will remain something that," to borrow Spivak's phrase, "queers cannot not want." In this article, I would like to argue how and why schools should be queered, and not only with exclusively queer-identified subjects. For this purpose, I have done fieldwork at the Unity Charter School, as a space and opportunity for such a queered space to be produced. Unity Charter School produces a model for building a safe(r) space not only for queer-identified subjects, but for all students. Queering these architectural sites of power could also point to how even material spaces - or maybe especially material spaces that are more formal and institutional, such as schools - can and do become "queered". Without reproducing a sexual identity politics that singles out one student against another, I will analyze what practices and curricula are used to queer a school.
School context and data collection

For purposes of comparison, it is important to acknowledge that currently, there are two known schools that are queer-positive in the United States. I will later discuss how the policies and climate at Harvey Milk are similar to and different from those at Unity, where I conducted my fieldwork. Harvey Milk High School in New York City is one of the only two schools in the United States that explicitly state that all students, regardless of their sexual orientation, "deserve a safe and supportive environment." The second school, which I will also define as queer-positive, is Unity Charter School. The Unity Charter School is a public school located in Great Lake City, a large, urban metropolis with over 560,000 residents in the Midwest. Through Unity's definition of what constitutes safety in a school, its (dis)location from heteronormative schools will play a crucial role in re-defining its own queer geography - and will also complicate the idea of physical, psychological, and social safety within and outside of those walls and boundaries. Unity and Harvey Milk are the only two known schools whose mission is explicitly to address bullying and to serve students who have been bullied in their previous schools. Unity and the Harvey Milk High School are somewhat unique in the United States in that they are part of the larger public school districts in their cities, which means that they are able to enroll any student who wishes to attend, with no additional fees. According to the Great Lake City School District's website, the district is one of the largest in the region, with over 80,000 students and 29 high schools. Unity Charter School's demographics reflect much of the same racialized diversity as the district. The following statistics are racial and gendered categories that are pre-determined by the school district as a whole, and do not necessarily reflect how the Unity students self-identify. It is important to understand the demographic and academic context of the school to lend a broad perspective and to highlight how the intersections of race and socioeconomic status influence and interplay with sexual orientation. Of Unity's current 163 students, 52% are African American, 20% are Hispanic, 26% are White, and 1% are Native American. Although the racialized categories are linked to the United States census categories and race is a social construction, it is important to note the racialized diversity of Unity students. One reason may be to compare it to the White teachers at Unity, and to ask what factors have contributed to not hiring teachers and staff of color. Furthermore, 58% of the students identify as female and 42% as male. The irony is that these pre-conceived categories of race and gender are prescribed by the district, and currently there is no categorical box for transgender students, for instance. Although Unity is well aware that many of their students identify as transgender, there is no district-wide or school-specific statistic for that population. In addition, the state-wide reading, language, and math scores at Unity were comparable to the average of the school district at large. The school is racially and ethnically diverse and also has a high percentage of special education students and English Language Learners, but their numbers are very close to the school district's as a whole. In addition, many of the students are from economically disadvantaged neighborhoods, and the school itself is within one of these identified communities.
The mission of Unity school is to provide a safe space for students who have been bullied and harassed in their previous schools. Although the school's reputation in the outside community is that of the "gay school", the school does not explicitly state that it only enrolls LGBTQ or non-heterosexual students. It is important to note that factors such as the students' race, socioeconomic status, family backgrounds, and learning and physical disability status also put the students at risk for bullying. As discussed later, the reasons students were bullied were often that they were marked as "different", or outside of the norm of whiteness and heterosexuality. Although this study focused on the sexual identification, or lack of sexual identification, of its students, these other factors braid into the students' identities and communities as well. Charter schools are smaller schools that remain in the public school system but have more autonomy when it comes to decision-making in regards to school policies, procedures, hiring practices, curricular strategies, and discipline. Historically, charter schools have been formed around a specific theme or focus, such as science and technology, fine arts, or honors courses. Unity is unique in that its focus is not exclusively an academic subject, although the mission is clear that one of the school's goals is to be academically challenging. Based on interviews with the teachers and students and my own hallway and classroom observations, the curriculum and pedagogy, for instance, are not much different from those of other small schools in the area, both private and public. Unity's test scores, attendance rates, graduation rates, and many of the other indicators of what makes a "good school" according to many policymakers are similar to those of its educational counterparts. Using narrative inquiry, over a six-month period, I conducted 21 individualized life history interviews with twelve current Unity School students, six teachers, one "lead" teacher, one social worker, and one school psychologist. I also conducted numerous classroom and hallway observations. The students identified as female, male, and transgender, and their sexuality identifications were more diverse than the LGBTQ categorical box, as discussed previously. Of the twelve students interviewed, five students presented as White, two as African American, two were Latino, one was Native American, and two were multi-racial. Ten of the twelve students were seniors and two were juniors. By being in the hallways, offices, and classrooms of the school, I was able to build relationships with them and ask them to have a conversation with me during their lunch hour or free time before or after school. After transcribing every interview, I used both inductive and deductive analyses to find themes, patterns, phrases, and stories that cut across all of the interviews. All of the names of the participants, the school, and the city are pseudonyms to protect the confidentiality of the subjects, the institution, and their context. The school as a whole is well accustomed to media and research attention alike. In fact, I met two separate researchers from different states at Unity during my time there. During one of my full days at the school, one of the teachers pointed out: "what would [Unity] be without a resident researcher?"
This question illustrated not only the amount of local, state-wide, national, and international attention the school has received, but also that the staff, teachers, and community are probably aware of how different their school is from others around the U.S. and outside of it. As a queer researcher, my assumption was that I would gain leverage with the students because of my sexual orientation as an out lesbian. Although I mentioned my identification in a few interviews, it did not seem to matter. Before the study, I naively assumed that I would simply come out as a lesbian and we would proceed to have an in-depth conversation about all of the participants' experiences of being LGBTQ. Because of my identification alone, I assumed I could build more trust in the researcher-subject relationship. Since many students did not come out, or chose not to identify, I had to change my questions and adapt to this newly found, perhaps more uncomfortable space. Some of the strongest supporters of Unity Charter School, its students, and its teachers identify as straight. On the other hand, just because an individual identifies as queer does not mean that they automatically queer a space when they enter or reside in it. Many queer-identified individuals may even, intentionally or unintentionally, want to "fit in" to the heterosexual matrix. Queer spaces, then, are distinct from LGBTQ spaces. Imploding the binary between queer and non-queer subjects occupying a space, then, is crucial to understanding what it means to queer a space. Therefore, a queer space or geography transgresses binaries such as hetero/homo or man/woman in order to go beyond normativity.

A definition of queer

In order to use queer geography literature as a framework for how safety and community are defined in the Unity Charter School, it is important to define queer, and then queer geography, in these contexts. First, I use the term queer as both a subject-identifier and a politic, as defined by US-based education researcher Marla Morris. Queer identification leaves room for the in-between sexuality identifiers, including polyamorous, pan-sexual, or un-identified. Many of the students at Unity did not identify as LGBTQ, even though the school is labeled "the gay school" from outside their community and even in the media. When asked what their identification was, many of them chose not to identify at all. This lack of identification by many of the youth, regardless of age, social class, race, or other factors, was not simply because they wanted to resist the label of the "gay school." In fact, many of the students I talked to took pride in their school, and insisted that it was not just a "school for gay kids," but rather that those sexual identification markers did not matter. The teachers echoed the same sentiment when they argued that the school does not necessarily have students who are "unique" or who had different problems or stories from students at traditional schools. The difference, as I will discuss later, was how they responded and listened to these stories. In this way, the institutional policies and teacher practices specifically were perhaps more out of the norm, or queer, than the students themselves. The second definition of queer is the term used as a politic, a verb, a state of mind, an action, and a way of being. Queering is about re-defining the traditionally-held norms, binaries, beliefs, values, institutions, and structures (Morris, 1998).
Therefore, a queer-positive school can and does enroll queer-identified students, but the purpose, policies, and culture of a queer space can go well beyond the sexualities of its subjects. Recent work in the field of queer geography defines a queer space, then, as dissident, progressive, resistant, and claimed, but also challenges the very "privileging of sexuality [markers] above all processes of identity formation by considering queer subjects as simultaneously raced, classed, and gendered bodies" (Oswin, 2008, p. 91). Further, space is not naturally "straight" or heteronormative, but rather constructed, "actively produced and (hetero)sexualized" (Binnie, 1997, p. 223). According to Eve Kosofsky Sedgwick and Michael Warner, even people who identify as heterosexual may not be heteronormative (Sedgwick, 1990; Warner, 1993). When queer subjects enter into heterosexualized spaces, it reminds people that these streets, malls, motels, and schools have been "produced as heterosexual" (Bell and Valentine, 1995, p. 18). Phil Hubbard further explains in "Here, There, Everywhere: The Ubiquitous Geographies of Heteronormativity" that everyday, 'normal' space is "perceived, occupied, and represented as heterosexual" (Hubbard, 2008, p. 644) and that "non-heteronormative heterosexuality would be based on not privileging heterosexual identity over other categories" (Johnson, 2002). Non-heteronormative heterosexuality would have a place in queered spaces; that is, these types of allies can and do belong in queered spaces. This notion of heterosexuals "belonging" in queer(ed) spaces, which often seems contradictory, was a challenge throughout the study. That is, originally the proposal was to research a space where queered subjects resided, but the more students I interviewed, the more I came to the realization that I would have to re-frame one of my major questions: "are you LGBTQ, and then, was that identification the reason you were bullied?" The old question assumed that the student would identify within the LGBTQ categories, and since that was not the case, it changed not only my definition of their sexual orientations, but also my definition of what populations the school served and how it served them.

Notions of safety and pedagogies at Unity

We need to look seriously at what limitations we have placed in this "new world," on who we feel "close to," who we feel "comfortable with," who we feel "safe" with.
- Minnie Bruce Pratt, 1984

What makes Unity different from other initiatives such as Gay Straight Alliances (GSA's)? One of the distinctions between Unity and other schools is that its mission is to enable students to communicate, not judge, and to explore or "try on" their own identities, religious beliefs, and sexualities. The traditional solution in schools in the US and Canada to the question "what do we do with the gays?" has largely been to create GSA's, or Gay Straight Alliances, which are generally student-run groups within larger high schools (Macintosh, 2007; Mayo, 2004). These are intended to be a "safe place" for queer-identified youth to go, and they often sponsor various activities, social outings, and programs to support queer-identified students and their allies. However, even though Gay Straight Alliances have been supported and successful in many schools, some members of GSA's have struggled to gain respect from school administrators, parents, and other students.
American geographer Christopher Schroeder points out that GSA's run the risk of becoming "complicit with heteronormativity. With a fragmented and much more manageable queer youth population and with minimal influence from queer adults, the school becomes much more efficacious in its (re)production of docile bodies" (Schroeder, 2012). Vancouver-based Lori Macintosh further pushes this notion of teaching "antihomophobia curriculum" in schools, and argues that "we subsequently assume that it is homophobia that must be understood, leaving heteronormativity as a live incendiary device" (Macintosh, 2007, p. 36). If educators continue to create these "Band-Aid" solutions, or add on a day or a class to talk about the "Other" LGBTQ kids, we miss turning the table on teachers to examine their own positionalities and to learn how to engage with and facilitate conflict in the classroom. This argument reflects many of the queer practices inherent in Unity, and further contests the idea that queer theory, as it relates to education and schools, is just about learning about queer subjects. Since there is such a strong prevalence and recent surge of GSA's throughout many high schools (Schroeder, 2012), much of the outside community wonders why there needs to be a separate "school for the gays." However, this label, as the students and teachers informed me, does not accurately reflect Unity's mission. Although Unity engages with and creates queer programs, policies, and curricula, as stated before, not all the students or teachers are queer-identified. I argue that a school can be queered regardless of the sexual identifications of the teachers and students residing within it. The idea of safety for whomever enters the school's door, then, becomes a central theme, and it is a work in progress. Mary Louise Rasmussen examines the idea of safe spaces by calling on Foucault's definition of heterotopias. Rasmussen looked at Harvey Milk High School's policies, and argues that Harvey Milk High School, much like Unity, becomes a "heterotopia of deviation" (Rasmussen, 2006, p. 165). That is, in order to exist, these schools must create spaces that "illuminate the exclusions produced by wider social and educational relations of power. These relations of power continue to be simultaneously contested and reinscribed by the people who construct the heterotopic spaces." She names these "spatial dividing practices" and points out that many teachers and administrators would argue that these students have nowhere else to go, which many of the teachers and administrators echoed at Unity. In fact, simply by being students within Unity's walls, these students are marked as different. Laura, the school social worker at Unity, shared that a lot of people from outside the school think this is an "alternative school", that is, a school set apart for the "troublesome" students, i.e. the ones with multiple disciplinary problems, pregnant students, or students who have criminal records. Unity's mission is not to support students who are "troublesome," but rather students who are different and want a space to explore their identities, as any adolescent would. She also spoke about "individual choices" as they relate to physical and emotional safety, which is true for many teenagers, regardless of their queer identification. When asked to define what a "safe school" means, she replied:

Well, there's physical and emotional safety. Ideally, that's what we're striving for.
You know, I think it's always a work in progress. I think people's individual choices can make themselves unsafe - and we try to address that. Whether it be plugging them into resources outside of school or working with resource people in school. Our own work - I mean, everyone kind of wears a counselor hat. That doesn't happen in other schools. Are we perfect? Absolutely not. We try to be proactive, though. I think that makes a difference. We're a work in progress. Because everyone has "stuff".

Unity also provides social services and case management, or refers students to external community resources. This is reflected in the space of the school. When I first entered Unity, it felt more like a community centre: students were in the hallways and in the classrooms, and teachers were present. The space itself felt different from a school. Many of the students also agreed that Unity didn't "feel" like a school, but more like a home, a family, a comfortable place, and a place of belonging. Foucault echoes this by arguing that "space is fundamental in any form of communal life; space is fundamental in any exercise of power" (Foucault, 1984, p. 252). Unity reflects many of these inherent power struggles, and as the social worker pointed out, Unity itself is a work in progress. "We're not perfect" is a phrase I heard a lot during the interviews, even though many schools come to Unity to observe the practices and community building there, and even attend training sessions for restorative justice circles and other ways to create a safe(r) community. The space is intentionally created by its teachers, staff, and students, but people are aware that inter-school bullying still exists. That is not what makes Unity different. What makes Unity different from other schools is their response to bullying: their ability to listen, respond in a thoughtful way, facilitate conflict, and mentor their students to do the same. Much of the media focuses on physically separated spaces for students who are discriminated against in school, stating that this is an "extreme solution" to bullying and harassment in the regular public schools. When I asked Terri, the lead teacher and founder, about these comments that separating to support is a radical solution to the problem, she responded:

I don't think that's what it's about at all. Like I think that the bigger schools could do a lot of things that we're doing now. I mean one of the first things that I would do if I was an administrator of a bigger school would be go and start talking to students about what they wanted…not just student government students who are always part of everything, but really pulling in groups who are traditionally underserved or ignored…listening to them and trying to implement some of the things they say. Because their - their issues are real. And it makes a difference like when you see that they are part of that community, too, then they'll work to keep it strong.

As I interviewed the teachers and staff at Unity, I began to ask not only what their definition of a safe space and a safe school was, but also what other elements of this school were unique and could perhaps be transferred to other public schools to address bullying. I chose the following excerpts to discuss further because they begin to construct a definition of what it means to be a queered school.
Interestingly, the school does not have any explicit anti-bullying workshops for teachers or students, and does not say the word "bullying" or "LGBTQ" in its mission or even on the posters that state the school's objectives, posted on nearly every door of every classroom. 27 Instead, words like "community," "welcoming," and "safety" are used to describe the school. One of the first differences I noticed about the school was before I even entered into its doors. On the school's website, there was no principal listed. I was looking for someone to contact for my research study, but I wasn't sure who was in charge. Then, I noticed that Terri, one of the teachers, had "lead teacher" next to her name. I wasn't sure what that meant, and I remember thinking that maybe the principal just was not listed. However, when Terri returned my message and confirmed that I could visit, I suspected that she was in fact the leader of the school, but she chose not to have "principal" next to her name. Later I realized that this first encounter accurately represented the school's democratic culture and intentional community. When I asked Terri about it in the interview, she echoed the school's mission for democratic governance, and explained that she has always believed in shared decision-making:

I don't make decisions and give them out to people. I'm going to bring it to the community and we're going to vote on it. We're gonna discuss it and you know - if I'm making assumptions, people will call me on it right away.

This culture of trust, team-building and community is not just part of the mission statement; the teachers and staff live it every week during their 3-hour staff meeting. They participate in "circles," a ritual adapted from "restorative justice circles." Restorative justice is a concept that is often used in the criminal justice system in the United States for finding alternative methods for the criminal to repay or "restore" his or her debt to the community he or she hurt. For instance, for a minor crime, instead of serving time in prison, the convicted person may volunteer at the local homeless shelter or apologize to the families he or she hurt. The restorative justice circles in the school are used for alternative discipline measures, but also as a way for students and teachers to connect and dialogue with one another. The circles are one of the defining features of the school, and they are taught and used explicitly in a restorative justice class that many of the students take throughout their time at Unity. Even though many of the students are enrolled in the restorative justice class, other students can request a formal "circle" if they are having a conflict with each other.

I participated in one of these circles during one of my classroom observations. At first I thought I would just sit in the back of the room and observe, but I quickly realized that I was going to have to be an active participant. The lights were off, and about ten students were sitting around a circle, along with two teacher facilitators. There was a candle lit in the middle of the room, and there was a "talking stick" that the teacher facilitator had. When we opened, she said, "we're just going to start off today with a check in and go around and see how everyone is doing." She had given me a few materials to read before about these "circles," so I knew what to expect. Still, it was a little uncomfortable at first to be put in the position of "checking in" as the researcher.
How should I respond to this? What was I feeling? What was I doing here? I was surprised about the candor of many of the students in talking about their issues, their stories, and their feelings in the middle of the day at school as they passed around the talking stick one by one. If this had been a support group, for instance, it would not have seemed out of place, but for some reason it did in a "school" environment. When it came to my turn, I was honest. I talked about how excited I was to be here, but I was tired from the drive. Previous participants had talked about being upset about the recent unsuccessful recall of the governor, since it was the day after, and so I felt compelled to talk about my perspective on that issue. One of the students admitted that she did not know much about what was going on, and asked us to explain it to her. The nonverbal communication during the "circle" was just as critical as the person talking. The students made eye contact, asked follow-up questions, nodded, and genuinely cared about what each person was saying. I was glad that I participated in this circle because it is the foundation of Unity charter school. When I later interviewed Jennifer, the restorative justice teacher and facilitator, I asked her what one of the main differences was between Unity and other schools. "We listen to students' stories." This was echoed in many of the other teacher interviews as well.

The teachers also participate in their own "circles" during the staff meeting - sometimes they serve as a quick check-in, and sometimes they go for more than 45 minutes to address deeper issues and maybe even conflict within the teacher and staff community. In addition to the formal restorative justice circle class and the teachers "circling" during their staff meetings, Terri also has observed students "circling" on their own time, in the hallways and outside of class. "We do it for both community building…if the conversation starts out with people interrupting each other…somebody will go, OK, hold on, hold on, we need to pass a talking piece." The students circle "automatically."

Community

When Laura, the school social worker, first gave me a tour of the school, she said it was interesting that the students who were truant stayed in the building. In her 23 years of being in the school district, she had never seen students staying in the school - the bathrooms, the hallways, outside on the grounds - when they were supposedly "skipping." This was one of the first indicators that this school was different - not only in its mission and practices, but also in the students' behavior. My initial response was: why are these students skipping at all? But when looking back at the attendance and truancy rates, I remembered that this school was similar to many of the other district's schools, both small and large. The more compelling question, then, was why were the students staying in the same place where they were "supposed to" be, in school? Why would they want to stay there if they were not in class? Would they not want to go somewhere else? Somewhere like home? In her article regarding domestic labor practices in gay and lesbian homes in the United States specifically, Sue Kentlyn discusses how sacred the notion of "home" is for the gay and lesbian adults in her study, many of whom cannot and do not go back to their home of origin because of a very real fear of rejection.
For gays and lesbians, Kentlyn defines home as a "place of belonging, intimacy, security, relationship, and selfhood." 28 One of the most interesting pieces of that definition is the notion of home as a place to "be yourself." Most heterosexual-identified people, or more importantly people who present as traditionally male or female, most likely do not make the distinction between "being themselves" in public versus private places and everything in between, simply because they are accepted in many locations that queer subjects historically have not been. For queer subjects, however, the notion of performativity and where they can feel "safe" to be who they are hinges on where they are standing, many times quite literally. For instance, one of the transgender-identified, male-to-female students, who chose her pseudonym, "Exotic Barbie," shared that she "had to dress like a boy" at her previous school. This made her feel uncomfortable, and so she rarely went to that school. At Unity, however, Exotic Barbie presents and dresses as a woman, and even though she still chooses to "dress like a boy" at family barbeques and other spaces, at Unity, she feels safe enough to always accurately express her gender.

Lisa Weems discusses how many times school is imagined to feel like home, as many of the participants iterated during my conversations with them. She argues that instead of imagining school as home or even school as prison, perhaps school as camp is a better metaphor. 29 Camp is a retreat, a positive location, where students are separated from their traditional homes, but also a place where a new home, a new community can be formed. Perhaps this community, this camp or classroom, could be more comfortable and arguably safe(r) than some students' actual homes. Since the classroom is a contested space already, with historical, cultural, social, political, and psychological discursive practices, 30 it is important to conceptualize how schools and classroom spaces are reproducing heteronormativity and hegemony, or are places of resistance to these gendered, sex, and sexualized norms. Thinking of school as camp still conjures up collective positive memories of respite and support, but also keeps the institutional practices, some of them mandated by the local and state governments, in mind as a backdrop of the story. Because school and the classroom more specifically are contested spaces, this distinction is important.

29 Weems, 2010. 30 Lefèbvre, 1991.

Still, many of the students at Unity used the word "home" and not "retreat" or "second home" or even "camp" to literally describe how they felt in that location. In fact, instead of defining Unity as their "second home," some of them said that their relationships at Unity were closer than their home relationships. Some of the students who I interviewed were currently homeless or living in a group home, and so Unity was the first physical place they wanted to "go to." Further, Terri echoed this by talking about how excited students are in the days before school; they even post to Terri's Facebook page about how excited they are to come back, and how much they missed her and everyone. Bobby, a gay, African American student at Unity, knew about Unity during elementary school, and always knew that he would be going to Unity once he was in high school. When I asked many of the students what they would have done if Unity had not been an option or did not exist, they said that they would have dropped out, been homeless, or even been dead.
This space, then, becomes more than a school, although many of the teachers reminded me that this is in fact a school - a public school - which means there is the reality of grades, state test scores, funding, and renewal contracts for the teachers and the school itself. Although Unity looks like a school, it is much more than that. It is a community. Does a community have to "happen" or be created in a separate space? A separate school building? There may be another way to think about how these types of communities could infiltrate into larger schools and spaces. Marc Augé defines non-places as places where there are not necessarily just brick and mortar walls, but rather a discourse of belonging, and places to build community. Augé argues that we need to "relearn how we think about space," 31 perhaps creating a hybridity between places/non-places; instead of looking at them like binaries, they are more like "palimpsests on which the scrambled game of identity and relations is ceaselessly rewritten". 32 Augé not only argues that we need to rethink what it means to have space and place, but also how non-places function in/around "real-life" or "in real life" spaces like schools. Queer spaces would be both non-places and places simultaneously. If we define these spaces as non-places, it may mean that more meaning making and identity construction can "happen" here. Queer identities must have places and non-places to breathe, and these environments, as stated before, may be the place to do it.

31 Augé, 1995, p. 29.

"Who does that?" Terri said when I asked her if she ever expected to be "doing this" ten years ago. "Who starts a school?" This seemingly simplistic question resonated with all the other students' stories about Unity as home, Unity as family, and Unity as a welcoming, accepting, and very different place from their previous schools or experiences.

Creating an identity of solidarity

We must not see any person as an abstraction. Instead, we must see in every person a universe with its own secrets, with its own treasures, with its own sources of anguish, and with some measure of triumph. - Elie Wiesel, 1995

Unity School sits behind a parking lot in a low-income, high-crime neighborhood of Great Lake City. Most of the students take the bus from other areas of the city, and receive bus passes every day from the teachers. Directly next to the Unity school's building is a middle school for the arts, and some of the Unity students have had bullying issues with middle school students. In fact, Terri, the lead teacher at Unity, told a story of Unity solidarity. A few years ago, some of the middle school students from the arts school nearby ran up to the Unity building and said they wanted to "touch" the stairs of the gay school. They ran back to their school, laughing, and continued to shout, "gay school!" as they were running away. Terri noticed what they were doing, and walked outside up to the middle school students in the parking lot. She asked a few Unity students to come with her. Terri and the Unity students asked the middle school students what they were doing, and they responded that they were just messing around. She told them that they were not "the gay school," but rather a school that accepted everyone, including gay people. Terri and the Unity students also gave the middle school students a pamphlet about the school's mission and goals.
Unity school has also experienced picketers protesting the school itself, and Terri has used the same strategy as she had with the middle school students from the arts school. Terri has decided to make the reputation of the school and administrative policies not just her "problem" or decision, but rather has constructed a culture of school-wide responses and decision-making. For instance, many of the students decided what media could and could not be allowed in the school. Terri told a story of how CNN wanted to come and interview some of the students during the first week of the school's existence eight years ago, and the students said, "No, we're not ready." Terri had to call CNN and tell them that they could not do the interview. In the same vein, the students decided not to let MTV do a reality show in the school. Rick, one of the students I interviewed, reiterated his sentiments about MTV coming, which really spoke to Unity's mission. The solidarity was also echoed by the use of the pronoun "we" throughout numerous interviews. Rick's last line regarding the MTV invitation was that "that's not what we're about," and I began to notice throughout my interviews with the teachers as well that participants expressed a sense of community and that school was more about "we" than "I." This small pronoun really speaks to how the participants view one another and their community in this space.

Bodies in queer spaces

The definition of queer(ed) spaces goes beyond the physical and emotional manifestations of a shared community like a school, and infiltrates the body as well. When a school or any space has queer(ed) subjects moving through it, especially if they are predominantly queer(ed) subjects, it is necessary to define and grapple with the queer(ed) body, and its re-construction in these safe(r) spaces. The queered bodies at Unity, mostly students but even teachers, are a reflection of how the hybrid queer identity in/outside of schooled spaces could reside. The queered body is a walking contradiction; a student may feel safe to wear a wig or present as a different gender than when they go home for a family barbeque, for example. Marginalized bodies have always, already been re-constructed in these dynamic ways throughout time. How do queered bodies currently get constructed in these worlds? Queered bodies are defined both by how the individual subject identifies as nonheterosexual, and by the ascriptions of these identities by others. Many times students' bodies are (mis)read as different from the gendered norm, and that is the justification for bullying. This has nothing to do with their actual queer identifications or dis-identifications. How does queer corporeality complicate Judith Butler's notion of performativity, specifically for sexual minorities? According to Butler, performativity is not a one-time, single act, but rather the "reiterative and citational practice by which discourse produces the effects that it names." 33 Further, Butler goes on to argue that "heterosexuality shapes a bodily contour that vacillates between materiality and the imaginary." 34 This imagined, figured world then could reside and be in material spaces and places. Performing in a space "matters" to the body in that there are many of the same representational codes and embodied manifestations that take place.
The representation of emotions and identities, for instance, that are displayed in these queer, separate spaces has just as many real behavioral and social consequences as their similar counterparts in the mainstream and master narrative worlds.

33 Butler, 1993, p. 2. 34 Butler, 1993.

Students who were harassed and bullied in their previous schools were not necessarily discriminated against because of explicit sexuality identifications. In fact, Elizabeth Meyer reminds us that many times the reasons students bully have to do with clothing, behavior, and mannerisms outside of the gendered norms. Queer bodies are regulated and violated not because of the subject's identifications, but because of their perceived defiance of what it means to be traditionally male or female. Meyer argues that the "social constructs of ideal masculinity and femininity are at the core of much bullying behavior." 35 Karen Corteen agrees that sexual dissidents are only allowed to be gay in specific spaces and places - just like one of the participants', Exotic Barbie's, decision to "dress like a boy" depending on where she was - and that lesbians need to display the "signs of being lesbian" or possess "signifiers of lesbian-ness" in order for bullying and violence to happen. 36 Other students have echoed this by telling stories of how their bodies were interpreted to be anything from outside the norm of what it means to be a traditional male or female, and often had little or no correlation with their sexuality identifications. Elizabeth Grosz discusses how the body's surfaces already have "inscriptions…in three-dimensional space," and that materiality should "include and explain the operations of language, desire, and significance." 37 Grosz's definition of virtuality, then, could be used as a framework to ask questions about what it means to be virtually embodied, and as a framework for how queer students have both a spatial present and their "link" (figuratively) to a larger world space. 38 Grosz defines virtuality as "the spark of the new that the virtual has over the possible…the capacity of the actual to be more than itself, to become other than the way it has always functioned." 39 This new embodied virtuality may be a new embodied utopia, which, it could be argued, is paradoxical and an oxymoron. When cultural inscriptions are made on the body, these cultural inscriptions must be transformed because of their environment, including their school. Although we can agree that virtuality is permeable, these identities are not protected by the reality of the spatial worlds - these spaces could be initially safe(r) places (spaces?) than their rural communities, farms, families, schools, homes. These spaces where (queer) students, as well as their teachers, "try on" different gender expressions, for instance, may be utopic at first glance, because this very re-location, for queer youth, as Grosz would argue, can in fact change their memories of experiences, 40 or how those memories (both "good" and "bad" ones) are constructed, told, and re-told in these environments and communities.

35 Meyer, 2008, p. 39. 36 Corteen, 2002, p. 271. 37 Grosz, 2001, p. 210. 38 Grosz, 2001, p. 128.

One of the teachers discussed his wardrobe choices at Unity, and how his clothing may change if he worked at a different school. Augustine is a middle school Math teacher at Unity and presents as a fairly traditional-looking, White, heterosexual male. He shared a few stories about his clothing choices throughout the last year he has been a teacher there.
M: What would you miss if you had to leave?

C: Everything. My haircut. My outfit. I mean - this - this is me. I'm not joking. This is me - this is me before I started student teaching in college. This was me in high school. This is me in the summer time. It's just - it's me. Anywhere else - I'm not calling it a lack of respect or respect for any type of dress code or culture, but I would - I would respect another school's culture if that's what it was. And I would maintain a different type of professionalism. But like, I'm comfortable. And I don't lose any respect with my students because of the way I look.

39 Grosz, 2001, p. 130. 40 Grosz, 2001, p. 119.

Augustine's assertion that "this is me" and "I don't lose any respect with my students because of the way I look" reemphasizes how performativity of the body and clothing choices go beyond the student community. The way in which the teachers choose to express themselves in the material world also plays a role in how Unity is a safe(r) and perhaps more comfortable space than other schools or settings. During my first day as a researcher at Unity, I dressed more professionally with "business casual" attire, and the response from the students was not unkind, but it was not friendly either. I was unintentionally creating a separation and looked more like an observer than one of the teachers, staff, or students. After about a week there, I changed my attire to more casual - to a T-shirt sporting queer-of-colour idol Margaret Cho one day - and quickly realized that not only were the students more comfortable with me, but perhaps I was more comfortable in the space as well. Partly because of this, my interactions changed, and so did my research. I also became more accustomed to the space, the people within it, and their comfort with me in that space as well.

What will their memories of Unity be, and how are these memories, these stories, going to change how they quite literally walk through these spaces? Many of the student participants told me that they realized that their definitions of a school and a community began to shift. They were able to literally dress and express their gender in ways they had been intensely scrutinized for in their previous schools and homes. This new embodiment has shifted not only how they view and accept their own queer and nonqueer identities, but also how they view their relationships with their teachers. Because their spatial world changed, their expression through their bodies, which is vital to any youth's development, began to change as well. Even the students who were not transgender have expressed how surprised they were at their ability to "dress the way they wanted" at Unity. It could be something as simple as dying their hair blue, or wearing makeup, or having long hair. Butler argues that if these bodies are visually represented in these safe spaces, then perhaps the norms of heterosexuality will be repeatedly "subverted, parodied, or challenged, [and then] dominant 'scripts' might change…geographers argue that place is the stage on which such performances are played out." 41 The students are not the only ones subverting the gendered norms and boundaries at Unity. Augustine shared one of his favorite Unity stories with me. He challenged the students in his class to improve their Math test scores with an incentive: he would dress in drag with two of his biological brothers and play a game of basketball with them.
Trusting that Augustine would actually do it, many of the students' test scores improved drastically in the next few weeks. In true Unity form, Augustine and his brothers all dressed in drag and played a game of basketball with the students. Augustine's team won, and he still has the dresses they wore hanging up in his classroom. Many artifacts such as these from this newly constructed queer spatial world are evident at Unity. Augustine pointed to the dresses hung up on his wall with pride.

41 Butler, qtd in Valentine, 1993, p. 650-651.

Walking through the halls, it is evident that this place is truly the "island of the misfits," as one of the art teachers so eloquently named it. Many of the students are defiant of the gendered norms simply by how they walk, talk, dress, breathe, and present themselves in this school. The school psychologist is currently starting up a transgender student support group, and many of the students whom I interviewed talked about some of the transgender students having a "clique" and their own set of drama at this school. The transgender students are perhaps the new terrain and frontier of what it means to have a body that is well outside of the gendered scripts in schools. Still, there are grades of difference within Unity, specifically for the transgendered students, but they may not be as distinctive. Many of the non-transgendered students noted the transgender student clique, but instead of speaking about them as a marginalized group or a group that was not as popular as their own, they simply noted that the transgendered students felt they could "be themselves," which was their example of how Unity was different from other schools. These are just some examples of what it means to be materially represented at Unity, and how those queer manifestations shape how the community defines the school.

Beyond violence and safety: Problems and implications

Earlier, I argued with Foucault, Morris and Rushbrook that queer bodies can be part of queering a space, and went on to expand this view with Hubbard's assertion that a queer person might choose to disengage from this process to protect themselves. When queer subjects occupy a space, one could argue that they are also making new meaning for that place, but this visibility, this being or living in a space, has its limitations. According to Larry Knopp, this very visibility that placement brings can "make us vulnerable to violence as well as facilitate our marginalization and exclusion from the security and pleasures that placement usually brings members of dominant groups. Many queers find a certain amount of solace, safety, and pleasure being in motion or nowhere at all." 42 This transitory "feeling" is echoed in many of the students' literal homelessness, or sitting in between two different homes or families. This vacillation between these spaces and places provides a location to interrupt - specifically as it relates to not only social relations in and of these spaces, but also identity construction within them and through them. Kristie Fleckenstein emphasizes the reciprocity of space and relations, and explains that "places are created by actions and the interpretations of individuals as they wrestle with the problems posed by the place they create." 43 Further, places emerge as a result of social interactions and relationships, and these places are nonlinear, always shifting constellations of identity formations and re-formations.
"Space is often understood as interrelational, open, and multiplicitous" 44 and "not entirely synonymous with physical place." 45 What does it mean, then, to not just think of space as a "backdrop," 46 but rather multiple constructions of community, safety, and even visibility? One example of "Unity transference" was shared during my conversation with Jennifer, a teacher at Unity and the leader of the restorative justice program there. She and some of the other teachers planned a workshop for some teachers at another school to learn restorative justice circles. The outside teachers were interested in learning "how" to facilitate the circles so they could "bring them back" to their school. As Jennifer and the other Unity students moved through the circle process, Jennifer could tell that some of the teachers just were not "getting it" because they were not fully participating in the process. They still had the 42 Knopp, 2007, p. 23. 43 Fleckenstein, 2005, p. 165. 44 Massey, 1999, quoted in Chavez, 2010. 45 Chavez, 2010, p. 4. 46 Shome, 2003 mindset that they wanted to "fix" the students problems, instead of facilitate a discussion and conversation between the students, and ultimately set up a community of trust. Jennifer said she was disappointed, but pointed out that Unity's practices cannot necessarily always be simply transplanted into another school simply by taking a day-long workshop or retreat. Unity lives and breathes its foundations, and the teachers in particular are committed, above all else, to "listen to the students' stories." Yet, as Laura, the social worker, has pointed out, Unity is not a totally frictionless, un-problematic space. The intersections between race and queerness specifically should be addressed, and the fact that all the teachers are White, which was pointed out by the school social worker, is still an issue. How can many (queer) students of color, for instance, feel truly safe when all of their teachers are White? Zeus Leonardo tackles the idea of safe spaces in relation to race dialogue. His argument is that no space can really be safe when there are subjects present who are already in positions of power. In this case, one could make the case that since all of the teachers are White and many (but not all) are heterosexual-identified, how safe is Unity? Further, Leonardo suggests that the violence that Whites embody toward people of color is often "violence of the heart rather than the fist." 47 One of Leonardo's solutions to this is to create risk as the antidote to safety, 48 and perhaps a comfortable dialogue about race "belies the actual structures of race, which is full of tension. It is literally out of sync with its own topic." 49 I agree that safety is not always possible even within spaces where community is strong, and even in places that people define as home, as is the case with Unity. 47 Leonardo and Porter, 2010, p. 151. 48 Leonardo and Porter, 2010, p. 153. 49 Leonardo and Porter, 2010, p. 153. There are the realities of race and power relations embedded and seeping through all seemingly "safe" spaces. Unity is not immune. Queered spaces and Unity in particular provide a new space of occupation for marginalized groups, a new area of exploration for underrepresented populations, however limited, constrained, and reflective of the "real world" (former school, home, community) they may be. 
These situated identities within these imagined spatial worlds and spaces provide different avenues for expression, identification, and identity work to take place. What does it mean to queer a space, and to "make it safe"? Catherine Fox calls for a re-definition of safe spaces by changing the "safe" to "safer". She contends that adding an "r" to safe:

… calls attention to the tensions inherent in any discussion and action aimed to counteract multiple forms of terror and violence…it calls to 'unfix' our definition of safety, and, instead engage safety as a process through which we establish dialogues that create and re-create spaces where queer people are more free from physical and psychic violence…it calls us to consider the ways that safety has been too often equated with comfort around normative gender and race identities that reproduce a White male guy at the center of these spaces. 50

Through practices that range from the more formal restorative justice circles to conversations with picketers to a basketball game in drag, Unity has set a standard for a more transformative learning process for its students, regardless of their identifications. By committing to simply listen to students' stories, teachers have re-created and been integral players in this community as much as the students. Through taking risks that resist some of the norms of formal education, Unity is in a way creating different avenues of learning and being.

50 Fox, 2010, p. 643.

Dr. Mel Freitag is currently the Director of Diversity Initiatives in the University of Wisconsin-Madison's School of Nursing. She recently finished her PhD from the Department of Curriculum and Instruction, and her dissertation is titled "Safety in Spaces: A School's Story of Identity and Community." Using the insight she gained from her research, she plans to continue to serve historically underrepresented students in her new role through mentoring, student programs, curriculum initiatives and faculty/staff professional development. She hopes the students' and teachers' voices and stories will shape how and what it means to be a welcoming, supportive, and safe(r) school. She lives on the bustling east side of Madison, Wisconsin with her partner, two cats, and one adventurous dog.

Position Paper

Safety for K-12 students: United States policy concerning LGBT student safety must provide inclusion

April Sanders

Students who identify as lesbian, gay, bisexual, or transgender (LGBT) are at risk for harassment due to their sexual orientation or gender identification, with over % of LGBT students in the United States (US) reporting such harassment. 1 These statistics demonstrate one aspect of the significance of this issue, but the cost of human life in some instances has revealed another layer of importance related to a need for safety policies for LGBT students. Even though a need exists for such policies, the practice of heteronormativity found in US policymaking regarding bullying does not protect victims or curb the violence. This essay highlights several recent developments in anti-bullying policy in US schools that show the existence of heteronormativity, which is not helping to protect LGBT students. By understanding the discrimination encouraged by current policy, future policy can be better shaped to protect LGBT students.

1 Biegel and Kuehl, 2010.

Overview of heteronormativity

Heteronormativity is a theoretical concept that analyzes the difference between homosexual and heterosexual, and establishes heterosexuality as the norm.
Homosexuality is then judged as an alternative against the norm. Even though heteronormativity does not explicitly label homosexuality as deviant, the practice does encourage the inference that homosexuality is in opposition to what is considered normal. Silencing is one way to practice heteronormativity, and it can be done through the process of systematic exclusion. 2 Systematic exclusion can be defined as "ignoring or denying the presence of lesbian, gay, and bisexual people." 3 Such silence does not always have to come from heterosexual individuals. When LGBT people remain silent about their relationships and lives, they convey an LGBT identity as something of which to feel shame. 4 Additionally, when teachers and administrators are silent about anti-LGBT bullying, the same inference about shame is given to students. Along with silence, teachers and administrators imply negative connotations about LGBT identities when they demonstrate they are not comfortable saying words like gay and lesbian. 5 Yet, the way to oppose heteronormativity is to be open when discussing LGBT issues with students so that they can form their own truth. 6 Hoffman describes such absence of discussion and acknowledgement as a "conspiracy of silence we have all entered into" with a result that "can only damage their [students] chances of emerging whole from their school years." 7

2 Friend, 1993. 3 Friend, 1993, p. 210. 4 DePalma and Atkinson, 2009. 5 DePalma and Atkinson, 2009. 6 Nelson, 2009. 7 Hoffman, 1993.

US education and policy

All children in the United States have access to free public schools. Formal schooling in the US lasts 12-13 years, beginning at age 6 in kindergarten and lasting until around age 18 in the 12th grade. The requirement to attend school ends by age 16 in most states; the remaining states require students to attend school until they are 17 or 18. Education is primarily the responsibility of state and local government; the individual states have great control over their schools, and policy is largely created by each individual school district at a local level. 8 This brief explanation is included to demonstrate that school policy affects the life of US school children for the majority of their first two decades of life, thus shaping their perspectives.

LGBT students: An at-risk population

The National Mental Health Association (NMHA) has designated LGBT students as an at-risk population in US schools, and reports that their high level of risk is a result of the stress around them and "not because of their inherently gay or lesbian identity orientation." 9 The high rates of suicide as well as homelessness in this population of students could be connected to Tomsho's study showing LGBT students or those perceived to be LGBT were bullied twice as often as students who were not LGBT. 10 In a 2008 study conducted by the Gay, Lesbian and Straight Education Network (GLSEN), students said they did not report bullying due to their belief that no action would be taken by school officials, and 1/3 of the students surveyed said they had reported the mistreatment with no response from the school. The lack of response from school officials is another link in the chain
of harassment LGBT students experience, resulting in negative self-images and stunted emotional growth, which contributes to problems with social interaction. 11

8 United States Department of Education, http://www.ed.gov/ 9 National Mental Health Association, http://www.nmha.org/go/information/get-info/children-s-mental-health/bullyingand-gay-youth 10 Tomsho, 2003. 11 Ryan and Futterman, 1998.

LGBT students are developing an identity in a society that is telling them that homosexuality is deviant. Most of their credible sources of leadership, such as ministers or teachers or family members, are sending the message that homosexuality is not the accepted norm, and these young people then could begin to learn that hiding their identity when their adolescent years begin is one way to navigate when "social interaction and sexual strivings coincide with formulating an adult identity." 12 How LGBT students respond to developing their identity will vary, however, especially as various perspectives of inclusion are introduced.

12 Ryan and Futterman, 1998, p. 5.

Heteronormativity in policy

Local policies within school districts across the US vary in whether or not sexual orientation is specifically listed in the bullying policy observed by school administrators. One trend in policymaking is to avoid discussing LGBT issues as they are connected to the bullying. Tennessee State Senator Stacey Campfield is the sponsor of State Bill 049, which is also known as the "Don't Say Gay" bill. Campfield believes school officials should be banned from discussing LGBT issues at school even in relation to anti-gay bullying and harassment. The bill is described as a neutral bill since school officials would not be allowed to discuss LGBT topics through the ninth grade. 13 Far from neutral, the bill encourages discrimination against LGBT students through the silence mandated in this attempt at a neutrality policy. The message this bill teaches youth is that school officials cannot even talk about LGBT topics because of the associated shame: "Schools are always and already addressing oppression, often by reinforcing it or at least allowing it to continue playing out unchallenged, and often without realizing that they are doing so." 14 The silence mandated by this bill is a clear reinforcement of oppression against LGBT students through the practice of heteronormativity.

13 Humphrey, 2011.

Anoka-Hennepin School District in Minnesota has been debating this neutrality policy. This district is Minnesota's largest, serving over 40,000 students. The district had 6 suicides throughout the 2009-10 school year, and friends and parents of the students claimed that all were experiencing anti-gay bullying and harassment. One of the suicide victims was Justin Aaberg, who was 15 years old and hanged himself in his room in July of 2010. Justin's mother, Tammy Aaberg, believes the neutrality policy encouraged anti-gay bullying against her son, and she claims to have not even been notified of some instances of antigay bullying of which school officials were aware. The neutrality policy instructed administrators not to discuss that anti-gay sentiment was the root of the bullying. In August 2010, the district amended the policy to specifically include anti-gay bullying, but opponents of this policy contend that addressing specifics about the victim is not necessary and should not be discussed in the school setting. 15 The silence in schools when discussing anti-LGBT bullying is a clear example of how heteronormativity works to create an environment where only one sexual identity - heterosexuality - is considered normal and without shame.
The neutrality policy is in essence a silence policy, and silence leads to further prejudice.

Solutions for future policy

Even though school districts can choose whether or not to include sexual orientation in policy, one particular landmark court case in the US could begin to have great impact on local policies created by school districts. In Nabozny v. Podlesny, the ruling determined that a public school could be held accountable for not stopping antigay abuse. 16 Jamie Nabozny experienced repeated antigay harassment at his public school in Ashland, Wisconsin, eventually leading to his need for surgery from being kicked excessively in the stomach. When Nabozny reported the bullying, his middle school principal told him: "If you're going to be openly gay you have to expect this kind of stuff." 17 This case is important because it demonstrates that one possibility for providing protection for LGBT students in a heteronormative society is through the legal system. Since school districts and school officials can legally be held accountable for not intervening in antigay harassment, the legal system could motivate school officials to protect LGBT students. Such protection might be motivated only by fear of large settlements that could financially bankrupt the school district, but protection would still be provided. The Nabozny ruling was a historic decision and held public schools responsible for intervening in LGBT bullying in order to provide a safe school environment for all students - no matter the sexual orientation or sexual identity. Nabozny settled for just under $1 million in damages with the school district. 18 This significant case relates to local policy because school officials and districts can now be held responsible for not stopping anti-LGBT bullying, which means students and school officials must be allowed to discuss LGBT issues related to the bullying. Overcoming silence is one very effective way to combat heteronormativity.

14 Kumashiro, 2004, p. XXIV. 15 Crary, 2010. 16 Brief of Appellant, Nabozny v. Podlesny, No. 95-3634, 1995.

Legal action is not a fully effective solution for helping LGBT students targeted by bullying. In spite of the Nabozny ruling, most states only have a policy that prohibits bullying based on race, sex, religion, national origin, and disability. 19 Only 13 states prohibit sexual orientation discrimination against students who are victims of bullying: California, Colorado, Connecticut, District of Columbia, Illinois, Iowa, Massachusetts, Minnesota, New Jersey, New York, Vermont, Washington, and Wisconsin. 20 Additional measures must be taken to help overcome heteronormative policies. The Safe Schools Improvement Act (SSIA) would amend the Elementary and Secondary Education Act to require school districts that receive federal funds from the national government to create a policy addressing bullying based specifically on sexual orientation. The SSIA would also require states to report data on bullying and harassment to the Department of Education, and this report would be provided to Congress every two years. Senator Robert Casey (a Democratic Party member from Pennsylvania) and Senator Mark Kirk (a Republican Party member from Illinois) reintroduced the SSIA in the Senate on March 8, 2011; currently, the bill is being discussed in committee. 21 In the past two years, several significant changes have been made in policy at the district level in some areas across the country concerning the bullying and harassment of LGBT students.
In April of 2011, the San Diego Unified School District Board of Education unanimously approved an anti-bullying, harassment and intimidation policy including anti-LGBT bullying specifically as a cause. 22 The Minneapolis School Board voted unanimously in January of 2011 to add to the district's anti-LGBT bullying policy with a resolution requiring incidents of anti-LGBT bullying to be tracked. In addition to the policy change, the district will also add LGBT health issues to the sexual health curriculum and provide a yearly training for teachers on how to deal with anti-LGBT bullying. 23 By addressing anti-LGBT bullying, the silence can begin to be broken, because allowing policies that do not address anti-LGBT discrimination further justifies that the discrimination is acceptable and should be tolerated.

20 Biegel and Kuehl, 2010. 21 S. 506--112th Congress: Safe Schools Improvement Act of 2011. 22 Braatz, 2011. 23 Williams, 2011.

A model policy should be enacted within all school districts across the US to protect LGBT students as well as the school district. Clearly stating in policy that bullying and harassment of LGBT students will not be tolerated sends a message to teachers, administrators, and students that the school should be safe for all students and not just the socially favored ones. The NEA, the National PTA, the American Association of School Administrators, and the National Association of Secondary School Principals all endorse the specific listing of anti-gay bullying and harassment in public school policy as a way to help provide a safe school environment for LGBT students. 24 Policy alone will not solve the problem of violence and homophobia directed at LGBT students. The recognition of the problem in policy at all levels including local, state, and national is simply a starting point in an attempt to provide LGBT students a basic right of safety in school. By establishing a policy that is uniform across all US school districts, students will then be able to go beyond the silent tolerance of difference and instead be able to discuss, respect, and accept differences.

Conclusion

In spite of the heightened awareness of the bullying issue and the strong concern for students, the majority of states within the US do not have anti-bullying laws specifically focusing on anti-LGBT bullying. By avoiding the inclusion of anti-LGBT bullying measures in school and public policy, a silence related to homophobia is currently being allowed to exist around the issue of protecting LGBT youth. Such silence and avoidance of including anti-LGBT bullying in the policies demonstrates the practice of heteronormativity. Local school policy as well as state and national legislative measures should break the silence and very clearly include anti-LGBT bullying, and until such inclusion exists, public officials and school administrators in the US are encouraging a clear expression of discrimination.

24 school-policies-should-protect-all-students-including-lgbtstudents

April Sanders is a former English teacher and curriculum specialist for K-12. She received her Ph.D. in curriculum and instruction with a specialization in language and literacy from the University of North Texas. April is currently an assistant professor at Spring Hill College and teaches courses for pre-service teachers focusing on content-area literacy and language arts methods for the classroom.

Rachel Epstein, Becky Idems and Adinne Schwartz

This article is about the school experiences of young people with LGBTQ parents. 1
Based on 31 interviews with youth, ages 10-18, the article attempts to summarize what these young people had to say about the challenges they encounter in school, and the strategies they adopt in the face of them. There is a large and growing body of literature addressing the experiences of sexual minority youth. Many studies have documented the stresses of lesbian, gay, bisexual, trans and queer (LGBTQ) identities (disclosed or not) on young people. Schools, in particular, are identified as environments where LGBTQ-identified youth experience ongoing harassment and bullying. 2 Distressingly, the literature shows that little is done to address homophobic aggression. It appears that, while teachers are aware of homophobic bullying, they are "confused, unable or unwilling to address the needs of lesbian and gay pupils." 3 In recent years, this research on the impacts of homophobia on LGBTQ youth has been utilized, alongside the efforts of community activists, to support struggles for basic human rights with regards to sexual and gender diversity. One such hard-won victory is the legislated requirement that all publicly funded school boards in the province of Ontario, Canada must support students who want to establish a Gay-Straight Alliance (GSA). However, anti-homophobia initiatives in schools typically focus on queer youth, often excluding children and youth with LGBTQ parents, sometimes referred to as "culturally queer" or "queer spawn" (QS), terms coined by Stefan Lynch of COLAGE (Children of Lesbians and Gays Everywhere). 4

Many young people with LGBTQ parents are recognizing, as they grow older, that their experiences being raised in LGBTQ communities and cultures can have a bearing on their identities and sense of belonging. Many are challenging queer communities to create spaces that are welcoming to them, particularly to those who are, in Lynch's terms, erotically straight but culturally queer. The term "queer spawn," like "queer", is not embraced by all to whom it refers. Differential responses to these terms are embedded in history, in preference, and in identity. We choose to use the term "queer spawn" (QS) in this article to refer to children and young people with one or more LGBTQ parents. We recognize that not all the people for whom we are using the term would self-identify in this way. However, we do think that most young people with LGBTQ parents would agree that they often have a unique experience at school. The homophobic, transphobic and heterosexist teasing and harassment of which they may be targets are not necessarily due to their own sexual orientation or gender identity, but often stem from their parents' sexual and/or gender identities and their family structures. They may be straight-identified themselves, but find themselves identifying with and defending queer people and cultures. Abigail Garner, in her book Families Like Mine, refers to the "bicultural identity of heterosexual children who are linked to queerness through their heritage." While not all children of LGBTQ parents identify as straight, those that do sometimes find that it is not always clear where they fit, in relation to queer or straight culture. 5 Sometimes even in anti-homophobia initiatives and committees such as Gay/Straight Alliances (GSAs), queer spawn have to explain their presence, as reported by one of our participants:

There was one instance where I was at the lesbian/gay orientation week activity. And people were like 'why are you here?'
They were kind of confused and so I had to explain my history to them… (girl/16/lesbian moms)

This exclusion of queer spawn within LGBTQ communities is echoed in the relatively scant literature attending to their lives and concerns. Studies that do exist on culturally queer children and youth link their safety at school with strategic choices about whether, and how, to disclose the sexual and/or gender identities of their parents. 6 Elsewhere, queer spawn experiences of school are framed more theoretically, exploring how experiences of heterosexism and homophobia impact personal identity development. 7 For the most part, research on queer spawn experience provides broad accounts of queer spawn life, with school as one facet. Between 2007 and 2009, the Egale Canada Human Rights Trust 8 surveyed more than 3,700 students across Canada and found that more than a third of youth with LGBTQ parents reported being verbally harassed about their parents' sexual orientation, and 27 per cent reported being physically harassed. Those youth were also more likely to be harassed about their own gender expression, and their own perceived sexual orientation or gender identity. Just over 60 per cent of students with LGBTQ parents reported that they feel unsafe at school, and young people will sometimes avoid disclosing that their parents are LGBTQ in order to protect themselves.

This article foregrounds the voices of 31 queer spawn, as they share the day-to-day nuances of the challenges they face at school, the strategies they adopt in response to these challenges, and the supports they feel are important. Based on these accounts, we offer QS-centered recommendations to help parents, teachers, and administrators offer appropriate supports, while working towards transformative changes that will make schools safer for all members of LGBT communities, including queer spawn.

The study

The LGBTQ Parenting Network (PN), a community-based program located in Toronto, Canada, provides resources, information and support to lesbian, gay, bisexual, trans and queer (LGBTQ) parents, prospective parents and their families (see www.lgbtqparentingconnection.ca). The PN was initiated in 2001 by the Family Service Association of Toronto, and is currently a program of the Sherbourne Health Centre in downtown Toronto. At its inception in 2001, the PN held a series of focus groups asking LGBTQ parents about the kinds of programs they would find helpful. Across the board, the issue of biggest concern was schools: How will our children experience homophobia/heterosexism at school and how do we prepare them to respond? When and how do we intervene individually and/or collectively with other parents and community members?

In 2004, partially in response to these concerns, the PN initiated a research project designed to explore the experiences of young people with LGBTQ parents in relation to the ways that homophobia, transphobia, and heterosexism manifest in their daily lives, with particular emphasis on their school experiences. The project took place at a particular political moment in Canada: a nation-wide debate about same-sex marriage. While, in fact, the majority of Canadians supported same-sex marriage, the debate unleashed a torrent of homophobic outrage, based on arguments about the "natural connections between marriage, sex and procreation," on the immorality of homosexual relationships, and the risks to children living in lesbian/gay households.
Many LGBTQ parents were concerned about their children being subject to these debates; some were shielding their children from news sources, and others felt isolated in the face of this backlash and worried for the well-being of their children. In this context, and with funding from the Wellesley Central Health Corporation, the PN launched a research project designed to explore the impact of the same-sex marriage debate on children and youth with LGBTQ parents, with particular emphasis on what was happening in schools. Centered around the level of awareness of children and young people about the public debates on the marriage rights of parents like theirs, this study engaged 31 queer spawn, as well as 17 parents and 15 teachers, in discussion about the school experiences of culturally queer kids. These conversations were specifically focused on the impact of the public debate about whether or not it is good for children to live in LGBTQ households on queer spawn and their parents, while more generally exploring the experiences of culturally queer kids in urban, rural, and suburban Canadian classrooms. Our questions included: What have teachers who are committed to anti-homophobia work in their classrooms noticed in terms of the impact of the debate on what is happening in their classrooms? What kinds of experiences are kids and young people with LGBTQ parents having in schools, with extended family, in community? What factors help them to feel safe to talk about their families, experiences of discrimination, exclusion, bullying, name-calling or other forms of homophobic and transphobic harassment at school, in their families and in communities?

Our research methodology was guided by principles of community-based participatory research as synthesized by Israel et al. 9 These include the establishment of collaborative working partnerships between community members, organizational representatives and researchers in all aspects of the research process, with the aim of increasing understanding and knowledge of research priorities and questions that arise from community concerns. The knowledge generated is used to enhance the health and well-being of community members and to further social justice. The project was guided by a community advisory committee, consisting of partner organizations, academics, community activists, LGBTQ parents, teachers, and service providers to LGBTQ families.

9 Israel, Schulz, Parker and Becker, 1998.

Our triangulated research approach included documentation of the public discourse surrounding the same-sex marriage debate; interviews with key informants; and on-line surveys and group interviews with children/youth living in LGBTQ-led families, LGBTQ parents and teachers. In total we conducted group interviews with 31 young people with LGBTQ parents, 17 LGBTQ parents of teenagers, and 15 teachers. This article is based solely on the group interviews with 31 young people with LGBTQ parents. The interviews were conducted by Rachel Epstein, a long-time LGBTQ parenting activist, coordinator of the PN, and an LGBTQ parent herself. Interview groups consisted of 2-7 young people at a time, based on age group (10-11; 12-14; 15-18) and availability. Most were held at the Family Service Association offices, although one took place at a regular meeting of COLAGE (Children of Lesbians and Gays Everywhere), a support group for children/youth with LGBTQ parents.
Interviews were guided by a set of questions (see Appendix A), with room to follow up on areas of interest and themes generated by participants. We found that the interviews, in most cases, became primarily focused on school experiences. Young people spend an enormous amount of their time at school, and it appears to be at school that young people with LGBTQ parents are most confronted with negative ideas and behaviours based on the composition of their families and/or the sexual orientation/gender identity of their parents. We have focused in this article on young people's accounts of their school experiences. Below we have tried to capture some of the distinct and under-recognized school experiences of queer spawn, and to draw out some of the strategies they employ to deal with the homophobia and heterosexism they encounter.

Our interviewees range in age from 8 to 18; 18 are girls and 13 are boys. More than a third speak a language in addition to English, and they identify with a variety of cultures and ethnicities, including Canadian, WASP (White Anglo-Saxon Protestant), Jewish, Sri-Lankan, First Nations, Caucasian, Portuguese, Italian, Polish, African-Canadian, British, Chinese, and Armenian. They describe an array of family arrangements. About one quarter have at least one heterosexual parent. Others describe a gay, lesbian, and/or trans two-parent "nuclear family," or a "blended family," created when their birth parents separated and formed new families. Several are coparented by lesbians and gay men. Because the majority of the young people we interviewed have parents who identify as gay or lesbian, the workings of biphobia and transphobia are less addressed in this article. For an excellent resource for children of trans parents, see the Kids of Trans Resource Guide, 10 developed by COLAGE. 11 The main commonality amongst the QS interviewed here is that almost 90% have at least one lesbian parent. Another common feature is their urban location: 87% were living in a large Canadian city at the time of the interviews; 4 respondents describe living in a mid-size community.

This article is written by three queer activists, one of whom is also a parent; thus our use of the words "our" and "us" rather than "they" or "them" when talking about members of LGBTQ communities. Interspersed with our reflections, the voices of these 31 queer spawn offer insight into the questions: How do homophobia and heterosexism manifest at school? What helps? What doesn't help? More specifically, how do those who are involved with QS in school (their peers, parents of their peers, teachers and administrators) contribute to making the experiences of QS more or less challenging? This article is written for parents, teachers and school administrators, and we conclude with a summary of suggestions from QS about the factors that assist in creating positive experiences at school. These suggestions can help inform the practice of parents, teachers and administrators as well as others who are in a position to advocate for the well-being of QS.

What happens: Queer spawn at school

It is important to state at the outset that while the young people we talked to described profoundly heterosexist and homophobic school cultures, they do not have only negative experiences at school. Some have experienced very little homophobic harassment at school; others describe supportive actions and attitudes from teachers and peers.
This section will focus on QS's accounts of their experiences of homophobia and heterosexism within classrooms, and attempt to tease out their understandings of the links between institutional practices, and the attitudes and actions of teachers, parents and peers.

Everyday heterosexism: "Straight until proven otherwise"

Despite the positive experiences described by some respondents, the culture in most schools continues to be deeply homophobic and heterosexist. QS describe a range of ways this manifests in daily school life, from every-day put-downs, to direct teasing, to harassment and bullying from peers and their peers' parents, as well as from teachers. They are aware of heterosexism within day-to-day administrative practices and curriculum:

It's also about forms, when it says 'father' and 'mother' (a lot of agreement in the background) and we have to cross it out and write 'mother.' I hate that. It should be like parent or guardian one and parent or guardian two. It's really oppressive, every time having to cross it out…even at my school which was very progressive, a very awesome school, but even they had forms that said 'mother' and 'father.' It's just annoying…it's like straight until proven otherwise. (girl/18/lesbian mom)

Last year I was taking an introduction to sociology, anthropology and psychology and you had to make this chart and I couldn't do it -it didn't work with my family so I went up to my teacher and she's like "oh well, you can just do it on some other famous family." And I'm like, "No, I don't want to. I want to do it on my family, just like everyone else is doing". She was like, "No you can't." It's this scientific stupid thing. So I made one up and was like "You can fail me if you want because it's not real, but I don't care. I'm not doing it". She's like "do the Eatons." I was like "No, I want to do my family." She knew my parents were lesbians and didn't even think when she gave the assignment that it might be an issue, and it was just ridiculous. (girl/16/lesbian moms)

Identifying the exclusionary functions of ordinary classroom practices such as permission forms and classroom activities, respondents describe feelings that range from invisibility and not-belonging, to a sense of being deliberately ignored, uncared for, and/or excluded.

Harassment: "That's so gay! Who's your real mom?"

A sense of not belonging is heightened when QS become the target of teasing or harassment. QS describe harassment from peers that ranges from yelling "ewwww" at them in the playground, to taunting them for supposedly "gay" behaviours, to shutting them out of social circles. They recount many variations on the ubiquitous "that's so gay": many of their peers commonly use words like "Gaylord" and "Lesbo", and sing homophobic rhymes and songs.

The time I felt most awful… I was talking to one of my best friends and I told him my parents were gay….He kind of like sat there and looked at me and he's like 'are they Gaylord?' (boy/10/trans lesbian mom and bi mom)

Some interviewees distinguish between these more generic insults, which are often applied as random put-downs, devoid of understanding, and more deliberate teasing, name-calling and harassment.

They were just always teasing me…I'd be minding my own business in the playground or doing whatever at lunchtime and they'd just come up and start calling me names…I don't think they knew the word lesbian, they weren't smart enough, they were just like 'you're gay' or 'you're a fag'.
…always asking me questions about my mother, 'do you have two mothers…that's so weird, that's so stupid.' (girl/16/lesbian mom)

Name-calling, calling me stupid and saying that it was my fault that my mother was a lesbian and that it was a problem that she has a partner that was a woman…and that it was against every religion known to mankind and that it was the wrong way to be… He wasn't a Christian, but he used that as an excuse to pick on me. (girl/14/lesbian mom)

QS also describe questioning from both peers and adults, based on stereotypes and misinformation, framing it as unwanted and intrusive:

'So who's your real mom?' 'Where or who's your dad?' 'Do you know your dad?' 'How were you born?'…the worst I got that from was actually adults...a close family friend [of a friend] was there and she found out I had four moms and she just didn't get it, and I spent the whole TTC ride trying to explain. (girl/16/lesbian moms)

It is within this context of teasing and unwanted questions about the intimate details of their home lives that QS describe the emotional and social impact of negative messages and homophobic attitudes:

I kind of built a wall against myself like to shield myself from certain people. (girl/14/lesbian mom)

They would suddenly accuse that boy of being gay and say 'Oh, you're so nasty. Oh that's wrong.' It's kind of like a movement-sensored dynamite -you flick, you take one little move, the dynamite goes off. (boy/10/trans lesbian mom and bi mom)

I especially wanted to beat the crap out of one guy…but I knew that I'd be the one who'd be hurt, cause it was all of them who were saying it…I was like really sad and angry at the same time, but I didn't do anything. I didn't say anything, I just, I just stood there, and then I felt like, why am I gonna stand here with six bastards around me, so what I did was go back inside the school…they like, nobody knows, nobody except people I can actually trust. (boy/9/lesbian mom)

Faced with the ever-present possibility of a homophobic comment or unwanted question, QS describe their school experiences as sometimes involving constant vigilance, self-protective behaviour and a sense of helplessness.

The target of teasing: "They go for your weak spot"

Some kids note the constant presence of teasing in their lives, "every day, every week." Many come to understand that homophobic teasing, like most teasing, is designed to hit at your 'weak spot.' One young woman describes how information about her parents was used against her:

…once they found out about my parents they used it against me. I was harassed on MSN…they accused me of looking down girls' shirts, and because my parents were gay they suspected that I was gay. And everyone knew it and no one defended me and honestly it was terrible, and I'm thinking to myself 'you know that I'm not, and you're just making this up so you can get to me'. And then it really did. (girl/16/lesbian moms)

This account stands in stark contrast with that of another respondent, who describes mostly positive school experiences. Both of these accounts suggest the need to look more deeply at how classrooms address bullying and harassment more generally. They also suggest the need to examine individual supports for children and youth: the ways that teachers and parents might encourage comfort and confidence in QS, which the second respondent seems to suggest has the effect of inoculating her against potential teasing.
Attitudes from home: "Bad as poo"

While education of teachers, school administrators, and students is critical, these accounts from young people call for education on a much broader front, by reminding us that children's attitudes do not develop within a vacuum. Many QS suggest that many of their peers learn homophobic attitudes at home, from parents and other family members.

…there are the kids who are exposed to homophobic views from their parents or wherever…when I first started school they weren't knowledgeable enough to even verbalize what they thought, like they wouldn't even know what a lesbian was, because if your parents don't literally talk to you about the issues, you wouldn't be able to even approach it at all. (girl/18/lesbian moms)

…with the kids you kind of have to say 'look, this is what it is,' and then after they've learned a bit about it then often they're fairly supportive but often they don't even really know about it at all…and then they'll say something that they've learned at home or that they've heard somewhere and it will be something bad about gays or lesbians, like once somebody actually said he heard it at home that gay and lesbian people were as bad as poo. (girl/13/gay dads)

These accounts, and others, call for recognition of the complex and layered ways that the beliefs and prejudices of families of origin play out in the schoolyard and classroom behaviours of individual students. In particular, they suggest that lessons learned at home have an impact on what children and youth perceive as normal or deviant, and thus might view as a 'weak spot' in their QS peers.

Teachers' attitudes: "A child should be raised by a man and a woman"

Complicating matters is the reality that not all teachers are on side. Many lack the cultural competency necessary to fully support the QS in their classrooms, while still others inadvertently or intentionally perpetuate homophobia and heterosexism. This lack of knowledge, awareness, and sensitivity to the realities of LGBTQ families can lead to serious exclusions in curriculum and classroom activities:

When I handed [the family tree assignment] in to the substitute he was just utterly confused about how I could not have a father and how I could not have filled it out properly. So I just didn't fill it out and I sat at my desk the whole day, the whole day, because he said that until I finished my work I wasn't allowed to do anything. (girl/11/lesbian moms)

…my teacher was really great except my mom told me that when I was in senior kindergarten, we were making pots for Mother's Day, and they didn't buy me two, but just because they forgot…like, the teacher was really supportive and it wasn't because she didn't want me to have two pots… I guess they just weren't aware to buy the second one. It wasn't anything against me, it was just like they weren't thinking about it. (girl/17/lesbian moms)

These accounts, and others, uncover heterosexist ignorance and oversight by teachers, which respondents link with feelings of invisibility and not belonging, as previously discussed. While these actions seem to be perceived as unintentional by QS, some young people report blatantly homophobic attitudes from their teachers:

This teacher was completely and entirely horrible and when he said that a child should be raised by a man and a woman I completely ripped his head off. I'm like, "You know what, you're completely, totally wrong 'cause I've grown up all my life with a woman and a woman raising me and I've had no problems."
And he goes "Well, wouldn't you have liked a male role model in your life?" And I'm like "you're raised by who you need to be raised by." (girl/14/lesbian moms)

My Grade 5 teacher openly confronted me one day; he held me back from recess and he's like "Your parents are lesbian, and that's really wrong. You're like really screwed up"…I was really depressed for the next couple of days cause I didn't know anyone else with gay or lesbian parents, so I thought that I was the only person in the world who was royally screwed up like this… (girl/12/lesbian mom/FTM parent)

Respondents report feeling more or less able to respond to teacher homophobia, for a variety of reasons. The second young woman chose not to tell her parents about this incident, because:

I didn't want them to get all mad or something and get him in trouble or fired or anything like that. (girl/12/lesbian mom/FTM parent)

This participant's comments demonstrate the powerful effect that the attitudes of teachers and other authority figures can have on QS.

Lack of intervention: "There's so much homophobia and they never do anything!"

In the face of ongoing and pervasive use of homophobic language as insult, the young people we talked to were sometimes astonished at the lack of intervention on the part of teachers and administrators. Over and over, they relate how, even within equity-mandated boards, homophobia goes ignored and unchallenged:

…it's weird at my school cause there's so much homophobia and I know there are a few gay teachers, and they never do anything. They just see the kids doing it and they just sort of pretend like it didn't happen, like when kids say stuff they'll just look the other way, when it comes to the gay stuff they just brush it over. (boy/15/lesbian mom)

One participant explains that while certain types of teasing are off limits, homophobic teasing continues to be acceptable:

…there's hardly any kids who tease kids about fatness or anything else…cause they get in trouble more about the fatness and other things…this boy in my class came up to my friend and said 'oh you're gay, you're stupid' and everything like that, and the teacher didn't do anything. (girl/9/lesbian moms and gay dad)

Confronted with the pervasiveness and acceptance of heterosexist, homophobic, and transphobic attitudes, and the use of these prevalent societal attitudes as targeted weapons by their peers, it might be tempting to view QS experiences as overwhelmingly negative, consisting of constant harassment and bullying. However, as mentioned previously, not all respondents reported such experiences, and those who did experience homophobic bullying were not hapless victims.

What helps: Queer spawn fight back!

This section focuses on QS descriptions of resistance and support. It explores the complex strategies they deploy; the ways that they access support within their peer groups; and their perceptions of the impact of these strategies on themselves, their peers, and their families.

Strategies: "Confront, deflect, diffuse, poke back"

Many QS do carry a deep sense of confidence in themselves and in their families, and choose to directly confront homophobia as a problem that is external to them, and not a reflection of their worth. Sometimes they find themselves defending themselves, their LGBTQ friends, other kids with LGBTQ parents, and LGBTQ people generally:

…my friend whose dad is gay, they wouldn't stop bugging him and teasing him and all that, so I just went looking for the guys.
I said, 'You make my best friend cry one more time, you will have to deal with me, and trust me, I am shorter than you but I can beat your ass up.' And then they like just stopped bugging him after that cause I think they kind of got scared… (girl/15/gay dad)

Many expressed incredulity at the ridiculousness, ignorance and stupidity of some of the remarks and attitudes they encounter. One response strategy involves toying with this ignorance by reversing what are perceived to be silly questions, agreeing with or not responding to provocative statements, and generally using humour to diffuse and to poke back:

She walked up to me with four girls behind her and they kind of pushed her forward and she looked back and she's like, 'can I ask you a question?' And she stood there for like 20 seconds and I'm like, 'what do you want to know?' 'Are your parents lesbians?' After like 20 seconds and I'm like, 'yeah' and she's like 'oh.' So then I said, 'okay Nancy, let me just back up here. Just stand there for a second.' And I walked down to the other end of the hall and I walked up and I like looked behind me sort of to the side and stuff and I'm like 'Nancy, could I ask you a question?' She was totally confused. And I'm like 'Are, are your parents straight?' (laughter) She was so taken aback. It was hilarious. And then she asked, 'why did you do that?' And I'm like 'cause you ask the stupidest questions in the world. You know, just ask me, 'are your parents lesbians?' And I'd be like, 'yeah.' But no, you know, she had to make a big deal about it, be all like creeped out by it. So that was fun. (girl/14/lesbian mom)

We were talking and I was like, 'yeah, no, I come from a sperm bank' and she's like, 'what's that?' I was like, 'it's this place where you go if you don't have a male.' She was like 'oh, really.' So she asked me all these questions like, 'how did the sperm get into your body?' I was like, 'you breathe it, it like goes through your mouth,' and she's like 'really?' (laughter)…It took like 20 minutes to describe what a sperm bank is. And then she's like 'which mom do you like better?' She actually asked me that, like which one. Like uh, 'both,' and she's like 'no, but like which one do you like more?' Like, 'do you like your mom or your dad more?' and she was like 'neither' and I'm like 'there you go.' It was just really funny…I really enjoyed it. (girl/13/lesbian moms)

Although, elsewhere in their accounts, both respondents describe feeling annoyed and targeted by intrusive and ignorant questions, they have each developed sophisticated assertiveness techniques to deflect and diffuse these unwanted questions, while educating their peers. Moreover, their accounts suggest that when these strategies are successfully deployed, they feel a sense of enjoyment and pride.

Peer support: Queer and straight

In the face of the uncertainty of support from school staff, and because so much of young people's school experience is centred around their peers, QS often give prime importance to peer interactions. Decisions about whether, when, and how to disclose their family configurations can be big issues for QS, and their disclosure and coping strategies vary widely. Some embrace a strategy of coming out early and always, as a way of heading off homophobic reactions and establishing their family structures as "not their weak point". Others are more careful and selective about where and with whom they disclose.
Always involved is a process of safety assessment:

I don't really know, it's just sort of like you have a reluctance bringing it up with certain people, there's just something about them... (boy/13/lesbian mom and gay dad)

I went to a day camp and there would be two boys playing together and then kids would go, like 'ewww, that's nasty' and then later they were making rude jokes about gay people…Oh no! I never told them, the first time I heard those comments I zipped my lips, I did not want to get tormented. (boy/10/trans lesbian mom and bi mom)

In these, and many other, accounts, QS emerge as sensitized to clues about safety, and picky about choosing friends. Sometimes it is hard to describe what the clues are, but there is just "something about them" that inspires caution; in other cases, they listen for homophobic remarks and limit their disclosure accordingly. Youth describe the significance of a single bully in creating situations where QS are not safe to come out to their peers, for fear of being targeted:

He pretty much changed everyone's mindset to 'you have to pick on her because she has two moms.' (girl/14/lesbian moms)

The bully kid who had the anger management problem...if he saw two women walking down the street near my school he would be like 'oh my God, they're lesbians, oh my God everyone…' And then he would get everyone to point and laugh…there was no direct bullying but…it had an effect because…I knew that if I was…out like that…people would do that to me also. So now this person isn't in my class anymore but I still don't want to say anything… (girl/11/lesbian moms)

In both of these instances, QS demonstrate sensitivity to the complex dynamics of schoolyard interaction. In particular, they describe an awareness, bordering on hypervigilance, of the impact that one powerful person, whether an ally or an enemy, might have on the behaviours of the rest of the children or youth in their peer group. It is within this understanding of group dynamics that knowing other QS can be an important, sometimes crucial, source of support and comfort.

At my new school there is a girl and her dads invited me over and we really bonded and I found that having someone to talk to about these kinds of things, it kind of helped, because you know I didn't feel like I was the only person in living history to have parents like I do. (girl/17/lesbian moms)

…at the beginning of Grade 7, we were in equity studies class, and I said "my dad and his partner are gay, so please don't use gay as a general insult around me cause I could get very mad at you"…and then a number of other people stood up and said, 'yeah, my parents are gay or lesbian too…so we'll all get mad at you.'…I'm not sure if they would have said it if someone else hadn't said it already because there are other people in the school who have gay or lesbian parents, you can see it on the phone chart, but they don't say it…it's nice to have help, instead of being the only one. (girl/13/gay dads)

…(knowing other kids with LGBT parents)…I don't feel like E.T. or something. And they back me up in lots of situations. (boy/10/trans lesbian mom and bi mom)

These accounts speak to the powerful roles that both visibility and shared experience can play not only in lessening isolation, but in creating opportunities to challenge homophobic harassment and bullying. Similarly, support from straight peers, friends who will recognize and confront the homophobia of other kids, and who will put themselves on the line, is equally, if not more, significant.

…then one of the guys made a joke, I knew they were talking about me but they weren't saying my name, and then a girl goes, 'oh my god, gay people are so egghhh.' And one of the other guys says 'shut up and sit down, no one wants to hear you talk.' Everyone was just quiet then. (girl/17/lesbian moms)

…and then she's like 'your dad's gay. Oh my god, that is like so weird!' At first I kind of started crying a bit, and then my other friend she was like, 'what's wrong?' and I said '...is talking trash about my dad…' So then my friend, she's known my dad the whole entire time, for like seven years almost, we say like she's their adopted daughter, she just rolled up her sleeves, and she's a year younger than me, and she's like, 'that's it, where's that…' (she called her the 'b' word) and then she went looking for her. (girl/15/gay dad)

These accounts point to the importance of recognizing 'strength in numbers' approaches as powerful strategies for resistance and education within child and youth peer groups. Sometimes, given the expectations young people come to have, they describe a sense of surprise and relief when they are supported:

…one time this 11th grader girl came up to me and she's like, 'is it true that your dad is gay?' And I was like, 'what makes you think that?' and she's like 'I don't know, we saw him come and pick you up...' and I'm like, 'well, maybe he is, maybe he isn't,' right, so kind of like not your business, right? And then she's like, 'no, no, no it's just I wanted to ask you cause like a lot of kids when they're your age and they come here they're all worried about it,' and she's like, 'don't worry, here it's a good school, everybody's open about it. Like if your dad's gay, good for him…' I was like almost crying cause I was so happy… (girl/15/gay dad)

Teachers and parents: To tell or not to tell

While direct confrontation, peer support, and other forms of assertiveness can help, young people are often compelled to make complicated decisions about if and when to tell teachers or parents about painful incidents. As discussed above, QS describe teacher interventions as being rare and outside the norm. This, combined with experiences of homophobic attitudes from teachers, often makes asking for adult interventions a last resort. Moreover, these are not easy decisions when the consequences of teacher/parent interventions are not always straightforward, predictable, or helpful. Sometimes, despite good intentions, teacher and parent interventions backfire:

One day I couldn't handle it (harassment from other kids) and I went to talk to the teacher about it. She seemed pretty okay and stuff, so the next day she tells me to go next door and so I leave the class, I hear her slam the door and yelling…when I came back the girl next to me told me she had screamed at them because they were treating me different and if she heard anything they would be suspended…. She made it worse. Because I couldn't even go outside, I had to stay inside to help the teacher with something, because I couldn't handle it out there. You know it was ten minutes, but ten minutes of hell. "Oh, you need a teacher to defend you. Oh, you and your gay parents, why don't you just move out, go to the country man, no one wants you here. We're straight." Like, oh my god, it was terrible.
(girl/17/lesbian mom)

In this case, the ongoing harassment that this student experienced was exacerbated by a teacher's well-intentioned intervention, which failed to take into account how a punishing lecture might be received, and the impact of this on the child in question. In other accounts, parental attempts at support or intervention had similar results, further alienating the student and escalating the behaviour of their peers.

I was working in the office and the girls come in "oh look, that's the girl with the gay parents, neh, neh neh." So my mom, for Easter, she sent me a flower to school right, to make me feel better. And then people found out, "Oh my god, see, see, she is gay, her mom had to send her this, neh neh." The thing is I know my mom had good intentions but oh my god, it was terrible. I had such a bad experience, like honestly half the time I can't even talk about this stuff because it really hurts. [crying] (girl/17/lesbian moms)

In Grades 4, 5 and 6 I had a lot of problems, the students were making fun of me, calling me a fag, and I never told my mom and then one day I just got so upset and I called her and I just started bawling and she went and told my principal and then the principal suspended the two people who were doing the most. But then one of my best friends at the time was friends with them and she stopped talking to me because she said I got them suspended. (girl/16/lesbian moms)

Do you guys generally tell your parents when stuff happens at school?
You better believe this, never!
You never tell your parents?
Hell no!
How come?
Because once I told them and they told the principal and it made me really embarrassed in front of my friends. (boy/10/lesbian moms)

From these and other accounts, it emerges that zero tolerance approaches can have unforeseen negative impacts on the students who are targets of harassment. These accounts point to the need for sensitive, thoughtful and non-formulaic interventions from teachers and parents. In the instances above, the adult responses, while well-intentioned, are made without consultation with the student involved. This serves, in the end, to disempower them. We would advocate for approaches that are consultative and that leave targeted students with some sense of control.

Violence: "The build up just made me snap"

In the face of inaction from school staff, and the complexities involved in turning to parents or teachers for support, some young people respond to homophobic harassment from their peers with violence. Interesting, and potentially troubling, is the number of young people who respond with anger and with violence when they are harassed, and who describe it as the most effective strategy. Kids who do not perceive themselves generally as violent or angry people talked about how, when incidents and anger accumulate, they sometimes snap:

I wasn't the type of kid who would yell and get aggravated, but I guess the build up of these kids just constantly tormenting me…it was winter and I think they were throwing snow at me, and so the build up just made me snap and I threw him in a tree…It was really an odd action for me to take cause I'm not usually that physical with anybody, but I don't know what happened. I just got really aggravated. But he never did anything like that ever again. (boy/16/lesbian moms)

I've known six kids that have had lesbian and gay parents, or bi or trans.
And basically we would just hunt out the homophobic people and nail them down…Someone actually came up to me and said that they didn't like the fact that my parents were gay. Next thing they had a fist in their face. So yeah, that like went by pretty fast…I beat up a Grade 3 when I was in Grade 1.
Did you tell the teacher why you had punched the lights out of him?
Yeah. They said violence wasn't the answer. (boy/13/lesbian moms)

While these accounts speak to the effectiveness of violent responses in addressing the immediate problem, ending their experience of harassment, it is clear that violence has unwanted side effects. When QS respond with violence, they sometimes end up being punished, while the person perpetrating the original homophobic attack gets ignored. This can increase frustration, and reinforce the idea that the only way to achieve justice is to take matters into one's own hands. One young man explains how his teachers' lack of intervention led him to react violently, and often end up being the one punished:

I usually got in a lot of trouble 'cause I got mad at them [kids who initiated homophobic bullying] and started punching them.
Did you ever tell the teachers?
They didn't do anything.
At which school?
At every school. (boy/10/lesbian moms)

I got all pissed off at a kid 'cause he insulted me. He made fun of me 'cause I was adopted, so I got all mad at him. I sent him home with a black eye and a bloody mouth…I was sent to the principal's office. I was starting to be suspended.
And did you tell them what it was about?
Yeah, and then he didn't get in any trouble at all. So the next day he was still insulting me so he still went home with bruises. And then the next day he came to school with like a hidden stick… So when he insulted me, I wasn't going to do anything that day because I had gotten in enough trouble, he started smacking me with the stick. (boy/10/lesbian moms)

While we would not advocate for QS to react with violence, the above accounts illustrate how it sometimes seems like the only viable option. When harassment is incessant, when teachers ignore everyday homophobia, and when teacher or parent interventions can lead to negative reactions from peers, why not resort to violence, especially when it works?

"The key to change": Queering education

It is within the context of individualized actions and double-edged interventions that the following section turns to a broader discussion of the transformative potentials that arise from the accounts of QS experiences of bullying and harassment, and their strategies of resistance. We offer some recommendations for parents, teachers, and administrators that are rooted in the voices and reflections of queer spawn themselves. Starting with QS experiences, we argue for the importance of addressing how home life filters into the classroom, both for QS and for their peers from straight families. QS who express comfort and resilience point to the importance of feeling confident in themselves and their families. For LGBTQ parents this signals a profound need to reflect on ways to encourage and build confidence in our children. This might begin with a willingness to identify and confront the internalized shame we may still be carrying. If we convey to our children, in deep ways, that there is absolutely nothing wrong with their families, and that no shame is necessary, perhaps they will carry this confidence to school, and their family structure will not be their 'weak point,' the place they can be 'gotten.'
QS experiences of the ways that their peers' attitudes are rooted in their families of origin can similarly be translated into a plea to straight parents to educate themselves and their children about the existence of a diversity of sexual orientations, gender identities and family configurations. QS accounts remind us that, just as homophobia can be taught, so can acceptance:

…there's this girl across the street and she teased our other friend because she's fat and me cause I have gay parents…but then she realized what she was doing cause her parents talked to her…she had a friend who had told her gay people are bad, which is why she kept teasing me. Her parents told her it wasn't right and then she stopped…if everyone had parents and they would talk to their children… (girl/9/lesbian moms and gay dad)

Little attention has been given to this kind of community anti-homophobia education; that is, education that could touch and potentially change the beliefs and attitudes of QS's peers and their parents, who are often the source of the attitudes that get carried to school, and that become the basis of harassment. Our interviews suggest that young people who are educated in their families about the diversity of sexual orientations, gender identities and family configurations may be less likely to ask intrusive, uninformed questions, and less likely to harass. This shift in individual attitudes could eventually transform school climates.

Moving beyond individual interactions, QS accounts point to the ways in which homophobia and heterosexism are deeply embedded in the culture of most schools. Transforming school culture requires more than a desire to oppose homophobia. It requires an ongoing commitment to understand the day-to-day experiences of queer spawn (and queer youth), the thoughtful implementation of education programs for teachers, administrators, students and community members, and interventions and approaches that seriously prioritize the perspectives and recommendations of young people.

With regard to teachers, administrators and school practices, some of what these young people have to say is not surprising. Identified as helpful is the presence of both "out" and ally teachers and students.

[The teacher] had a meeting with all the kids in our class (after an incident of homophobic name-calling)…You know, we talked about what happened and how everyone felt, and we worked it out…in fact, I don't think I heard an anti-gay or lesbian comment for a year. (boy/10/trans lesbian mom and bi mom)

My (straight) teacher comes to school in like dresses and skirts and he's really cool and really supportive…He wears pink triangle shirts and he didn't want to support Canada so much because Canada doesn't really support everyone, so he hung up a rainbow flag in his classroom. (girl/13/lesbian moms)

High school's been the best, people don't care and our school is really good about that, you can say whatever you want and be really open. And people are really accepting, the teachers especially. (girl/16/lesbian mom)

From their teachers, QS express that a willingness to confront and challenge homophobia, acceptance of gender non-conforming attitudes and expressions, the display of LGBTQ-positive symbols, and a simple attitude of openness, respect and support can go a long way. Within the classroom, and in schools, QS point to the importance of visible supports and ongoing education and activism.
Some of the initiatives they identify as helpful include Gay-Straight Alliances (GSAs) and/or equity committees working on anti-homophobia; curriculum inclusion of LGBT issues, including books, films and discussions; and anti-homophobia workshops like those offered by TEACH (Teens Educating And Challenging Homophobia, Planned Parenthood of Toronto). QS particularly appreciate when LGBTQ issues are integrated into school curriculum in an everyday way:

I think the biggest problem is that the only time that LGBT issues are discussed is when something like same-sex marriage comes up, when it's a huge, big controversial thing…it creates a huge gap in the two views and people feel they have to take one or the other side, it separates people, whereas it should be an issue that gets discussed in everyday life, the more basic things, like growing up with gay parents or being gay, what is homophobia…these are things that should be discussed everyday in school and in our community, and they're not. (girl/16/lesbian moms)

This account asks us to think about how queer families might be integrated across subjects and activities, rather than pigeonholed into a one-time workshop or discussion. More importantly, it reminds us of the potential negative impacts of discussing queer families exclusively through the lens of controversial issues, such as same-sex marriage. As an overall strategy, the young people we interviewed stressed the need for education, on many levels, as the most effective challenge to homophobia and heterosexism in schools:

…the cliché answer, education. For every social issue everybody is always like 'education', it's all about education, but it's true. The thing is you can't start when you're in high school…if the first time you're hearing about it is when you're 16 and you're struggling to be cool, it's difficult to break a bad habit. So you have to start when they're really young and that's where it becomes complicated because when you're young you don't have the ability to stand back from your parents and form your own opinions and say 'I don't agree with my parents' opinions.' That's when it becomes really hard, you're going to have parents who don't want their kids to know about this. But it really is important that you have that in school, you have those books, you have discussions, especially when you do stuff like family trees because for a kid to not see their family represented or talked about and then they have to go and make this family tree, what do they put? They know they have two moms but if the teacher didn't say anything about it, 'is it okay if I put that I have two moms?' and then other kids are like, 'How do they have two moms? That doesn't make sense.' It's really up to the education system to kind of get on it… (girl/16/lesbian moms)

QS call for the education system to represent queer families in the early grades. Virtually all the young people we interviewed described the level of homophobia as much higher in elementary school than in high school. Many of the most painful incidents they described happened in Grades 1-6. For many, life got easier in high school. While this suggests an avenue for future research, we can conjecture that it may be due to the maturity of their peers, an increase in confidence on the part of queer spawn or the development of a stable, supportive peer group. Whatever the combination of reasons, it is clear that anti-homophobia education cannot begin too early.
Summary of suggestions from queer spawn about what helps at school

• Facilitate ways of queer spawn connecting with other queer spawn to share experience and strategies.
• Discourage shame in queer spawn.
• Develop strategies for community anti-homophobia education that recognize that homophobic attitudes are often learned in heterosexual families and communities.
• Establish anti-homophobia education for students from JK to high school, with special emphasis on elementary grades.
• Implement compulsory pre- and in-service teacher education on anti-homophobia and other equity issues, with explicit inclusion of queer spawn experience.
• Include LGBTQ-led families and recognition of the particular experiences of queer spawn in school curriculum, beginning in elementary school.
• Solicit commitment from school staff to intervene in the everyday use of homophobic language and insults in school environments.
• Consult and empower students who are the targets of homophobic harassment when intervening in youth peer-to-peer conflicts.
• Encourage the formation and work of gay/straight alliances and equity committees.
• Display LGBTQ-positive symbols in classrooms and schools.
• Create or modify school forms to recognize diverse family configurations.
• Promote a school environment which encourages teachers, administrators and students to be "out."
• Create a school environment of openness, respect and support.

To the queer spawn who so enthusiastically participated in this project: thank you! We also acknowledge the generous support of the Wellesley Central Health Corporation, which funded this project.

Appendix A: Interview questions

Have you heard other students in your school talking about same-sex marriage or about kids growing up with lesbian or gay parents?
Have any of your teachers brought up the subject of same-sex marriage or lesbian/gay parenting in their classes?
Has the subject of same-sex marriage or lesbian/gay parenting come up in your church, synagogue, temple or religious school?
Overall, do you think that the same-sex marriage debate and the media attention on lesbian/gay parenting has created a safer or a less safe environment for you and your family?
Do you have any other comments about how the same-sex marriage debate and the arguments about lesbians and gays raising children have impacted you or LGBT families generally?
What do you think would really make a difference in terms of making things easier for kids growing up in LGBT families?
Any other comments generally about the discussion we've had or any of the things that have come up?

Rachel Epstein (MA, PhD (c)) has been an LGBTQ parenting activist, educator and researcher for over 20 years and coordinates the LGBTQ Parenting Network at the Sherbourne Health Centre in Toronto, Ontario. She has published on a wide range of issues, including assisted human reproduction, queer spawn in schools, butch pregnancy, and the tensions between queer sexuality, radicalism and parenting. She is currently completing a doctoral dissertation on LGBTQ people and fertility clinics.

Becky Idems is a doctoral student in the School of Social Work at McMaster University in Hamilton, Ontario, Canada. Her dissertation explores critical pedagogies with potential to challenge the normative tensions of undergraduate social work education, in the context of neoliberal shifts within academia and the profession. Her research is grounded in her experiences in feminist anti-violence work, front line services and queer community building.

Adinne Schwartz is a sexual health educator in Toronto.
She has been working in anti-homophobia education for 15 years and continues to be involved in feminist LGBTQ research, education and advocacy. She has a master's degree in Women's Studies and Education from the University of Ottawa. Her master's thesis focused on sex-role stereotyping in sexual health education. Her current initiatives are aimed at promoting inclusive sexual health education in schools and building the capacity of public health organizations to serve LGBTQ populations.
Socioeconomic disparities in Rwanda's under-5 population's growth tracking and nutrition promotion: findings from the 2019–2020 demographic and health survey

Background: Regular growth monitoring can be used to evaluate young children's nutritional and physical health. While adequate evaluation of the scope and quality of nutrition interventions is necessary to increase their effectiveness, there is little research on growth monitoring coverage measurement. The purpose of this study was to investigate socioeconomic disparities in under-5 Rwandan children who participate in growth monitoring and nutrition promotion.

Methods: We used data from the 2019–2020 Rwanda Demographic and Health Survey (RDHS), which included 8,092 under-5 children. Percentages were used in univariate analysis. Concentration indices and Lorenz curves were used to examine socioeconomic inequalities in growth monitoring and nutrition promotion among under-5 children.

Results: A weighted prevalence of 33.0% (95% CI: 30.6-35.6%) of under-5 children's growth monitoring and nutrition promotion was estimated. Growth monitoring and nutrition promotion among under-5 children had higher uptake in the most disadvantaged cohort, as the concentration curve sags below the diagonal line of equality in the Lorenz curve. Overall, there was pro-poor growth monitoring and nutrition promotion among under-5 children in Rwanda (Conc. Index = 0.0994; SE = 0.0111). Across the levels of child and mother's characteristics, the results show higher coverage of under-5 growth monitoring and nutrition promotion in the most socioeconomically disadvantaged cohort.

Conclusion: The study found a pro-poor disparity in growth monitoring and nutrition promotion among under-5 children in Rwanda. By implication, the most disadvantaged children had a higher uptake of growth monitoring and nutrition promotion. The Rwandan government should develop policies and programmes to achieve universal health coverage for both the well-off and the underserved population.

Background

The United Nations International Children's Emergency Fund (UNICEF) defines growth monitoring as a monthly assessment of a child's development in terms of growth with reference to the World Health Organization (WHO) benchmark, using anthropometric indicators to detect growth dysfunction and malnutrition thresholds [1,2]. Child growth monitoring is a useful practice to evaluate the health and nutritional status of children [3]. Several indicators, such as stunting, underweight, wasting, undernutrition and overweight, can be measured during child growth monitoring [4]. However, none of these indicators are exactly the same. For example, stunting is not always the same as undernutrition [5]. A child's nutrition in the first year of life builds the foundational elements for healthy growth, a strong immune system and the development of the brain. It also helps prevent noncommunicable diseases (NCDs) linked to obesity in the future [6,7]. Despite significant recent progress in reducing child mortality, over five million children die before age five every year, mainly as a result of inadequate infant and young child nutrition (IYCN) [8]. An alarming level of food insecurity has resulted in 144 million stunted children and anaemia in nearly half of all under-5 children worldwide [9].
Undernutrition accounts for approximately 45% of deaths in under-5 children globally [10]. The majority of children's suboptimal feeding occurs in resource-constrained settings. Furthermore, the prevalence of childhood malnutrition is rising in resource-constrained countries. For example, approximately 45.4 million children were estimated to be wasted, 38.9 million children were overweight and 149.2 million under-5 children had stunted growth [11]. Stunted children are becoming less common across all WHO regions, except in Africa [11]. However, with regard to obesity, roughly half of all countries have either seen no improvement or a worsening of the situation [11].

Growth monitoring and nutrition promotion (GMNP) is a preventive strategy that advocates for appropriate and proper feeding practices for under-5 children and monitors, measures, interprets and analyses potential causes of adequate or insufficient child growth. Additionally, it encourages interaction and communication, promotes appropriate health-seeking behaviour and the child's nutritional status, and reduces child morbidity and mortality [1,3,12]. Several countries around the world have very low rates of attendance and promotion of GMNP, and many caregivers have a poor understanding of the growth charts. GMNP programme implementation and subsequent changes in care practices have not been extensively studied in many countries [13,14]. The challenges of carrying out effective growth monitoring activities, and the role of community involvement, are frequently ignored when determining whether to include growth monitoring in national surveillance programmes [14].

The Sustainable Development Goals (SDGs), specifically those targeting to eradicate poverty in all of its forms globally (SDG 1), eradicate hunger, achieve food security, improve nutrition and promote sustainable agriculture (SDG 2), as well as ensure healthy lives and promote health and quality of life for all at all ages (SDG 3), cannot be attained without adequately reducing childhood malnutrition [15,16]. Several countries have agreed to the global targets to reduce stunting (chronic undernutrition) by 40% by 2025 and to keep the prevalence of wasting (acute undernourishment) in children under the age of five below 5% [17]. GMNP practices among key populations such as under-5 children are central to achieving these SDGs. Global efforts to improve infant and young child feeding practices, including the International Code of Marketing of Breast-milk Substitutes and the promotion of proper nutrition including breastfeeding [18], the Global Strategy for Infant and Young Child Feeding [19] and the Baby-Friendly Hospital Initiative (BFHI) [20], are an essential part of supporting children's growth.
The UNICEF conceptual framework on nutrition posits that psychosocial stimulation, nutrition and health are the critical components for improving and enhancing children's quality of life. This implies that appropriate feeding practices must be accelerated to achieve better growth and development [17]. Due to the extreme financial devastation that resulted from Rwanda's genocide nearly three decades ago, malnutrition has been reported to be more prevalent [21]. Several progressive policies outlined in Rwanda's Vision 2020 plan have reportedly been put into practice to support the country's economic recovery [22]. In turn, this has led to significant improvements in the health of the populace across a range of population health metrics [23]. For instance, between 2000 and 2015, the rates of newborn and under-5 mortality decreased, while the rate of vaccinations significantly increased [24]. This progress might be attributed to citizens' participation in improving the healthcare system, such as the implementation of the neighbourhood health insurance policy to enhance economic access to care, and the development of a strong healthcare workforce.

To the best of our knowledge, research investigating socioeconomic inequalities in GMNP among Rwanda's under-5 population has received little or no attention, in spite of several studies conducted thus far on the subject matter [23-25]. The dearth of under-5 GMNP data in Rwanda is a critical gap that our study sets out to fill. Therefore, we would like to answer the question of who, between the disadvantaged and the well-off, is more likely to take up under-5 GMNP. The findings from this study add to the knowledge base and are useful for stakeholders in the healthcare system to develop viable interventions and adopt relevant policies. The magnitude of these inequalities is also investigated, as quantifying it supports efforts to reduce inequalities in service uptake. The objective of this study was to evaluate socioeconomic inequalities in under-5 GMNP in Rwanda.

Data source

Data from the children's survey questionnaire from the 2019-20 Rwanda Demographic and Health Survey (RDHS) were analysed in this study. A total of 8,092 under-5 children were included in the sample. The 1992, 2000, 2005, 2010, and 2014-15 surveys were followed by the 2019-20 RDHS, which was the sixth round. The survey was conducted by the Rwandan National Institute of Statistics with funding from the Inner-City Fund (ICF) and the Ministry of Health. The survey was conducted from November 2019 to July 2020. Data collection was suspended for about three months (March-June) due to the effects of the lockdown that followed the coronavirus pandemic in 2020 [26]. Information relevant to monitoring population health, on topics including nutrition among others, was gathered by the RDHS [26]. A previous study has reported the methodology of the RDHS [27].

Sampling design

An entire nation-wide sampling frame of enumeration areas (EAs) was provided by the National Institute of Statistics, the RDHS's implementing organization. The first step in the 2-stage stratified cluster sampling approach was to select clusters made up of EAs. There were 500 clusters, with 388 in rural and 112 in urban areas. In the second phase, systematic household sampling was carried out. A household listing was done in each of the selected EAs from June to August 2019, and the households that were surveyed were selected at random. With an average of 26 households per cluster across the nation, there were 13,000 households.
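For readers less familiar with this design, the short Python sketch below mimics the two-stage selection just described. It is an illustration only, not the survey's actual procedure: the listing size per EA and the use of simple random rather than systematic sampling at the second stage are our assumptions.

```python
import random

random.seed(1)  # reproducible illustration only

# Hypothetical frame: 500 enumeration areas (EAs), stratified by residence,
# matching the 388 rural / 112 urban clusters reported for the 2019-20 RDHS.
frame = [{"ea": i, "stratum": "rural" if i < 388 else "urban"} for i in range(500)]

# Stage 1: cluster selection. Here every EA in the frame is retained,
# standing in for the survey's probability-based cluster draw.
clusters = frame

# Stage 2: household selection within each cluster. The RDHS drew households
# systematically from a fresh listing; we approximate that with simple random
# sampling from a hypothetical listing of 120 dwellings per EA.
sample = []
for c in clusters:
    listing = [f"EA{c['ea']}-HH{h}" for h in range(120)]
    sample.extend(random.sample(listing, 26))  # ~26 households per cluster

print(len(sample))  # 13,000 households, as in the survey
```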
Selection and measurement of variables

Outcome

Participation in growth monitoring and nutrition promotion services was estimated in this study and measured dichotomously as "1" if "yes" and "0" otherwise. This outcome variable has also been measured in a previous study [25].

Explanatory variables

Several variables were included in this study: age of mother, family mobility, mother's education, mother's marital status, currently pregnant, currently breastfeeding, mother's employment status, child's age (months), sex of child, preceding birth interval, place of childbirth, and geographical region. In addition: low birthweight (<2.5 kg) compared to normal birthweight (≥2.5 kg); male versus female household head; household wealth divided into five quintiles, from poorest to richest; urban versus rural status of residence; and households with 1-4, 5-6, and 7+ members. Furthermore, items such as rural residence, lowest household wealth level, and mothers with no formal education and not working were used to compute the socioeconomic disadvantage level. To separate the overall assigned scores into low, moderate, and high, the standardized z-scores were subjected to principal component analysis (PCA).

Concentration curves and indices

The concentration index is a widely used method for analyzing health inequities. Concentration curves reveal the presence of health inequalities; they do not, however, by themselves quantify the degree of health disparity. The Erreygers normalised concentration indices [28] were used in this study to assess the degree of socioeconomic disparities in growth monitoring and nutrition promotion. Among the several indices that may have been employed, the Erreygers was chosen because of its simplicity and its capacity to be decomposed.

The concentration index can be computed making use of the 'convenient covariance', as shown below:

C = (2/ŷ) · COV(y_i, R_i)

Where: y_i is the health variable; ŷ is the mean of y_i; R_i is the fractional rank of the ith individual; and COV symbolizes the covariance. The concentration index equals twice the area between the concentration curve and the line of equality (the 45-degree line) [29]. A concentration curve on the 45° line indicates that there is no health inequity. The concentration curve's distance from the line of equality (45° line) indicates the magnitude of the health inequality: the wider the distance between the concentration curve and the line of equality, the higher the level of health inequity. This study chose to employ the normalized formula because normalizing the health concentration index formula ensures that the bounds issue for a binary cardinal health variable is resolved. The Erreygers normalized index (E(c)) is denoted as:

E(c) = 4ŷ · C / (y_max - y_min)

In the case of binary variables, y_max - y_min represents the range of the health variable, which is 'one'. As both corrected concentration indices are extensively used in the health literature, the current investigation concentrated on the Erreygers normalised index.
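To make these formulas concrete, here is a minimal Python sketch (the study itself used Stata) that computes the concentration index from the convenient covariance and applies the Erreygers normalisation. The simulated disadvantage scores and uptake probabilities are hypothetical.

```python
import numpy as np

def erreygers_index(y, ses, weights=None):
    """Concentration index via the 'convenient covariance',
    C = (2 / y_bar) * cov_w(y, R), followed by the Erreygers
    normalisation E = 4 * y_bar * C / (y_max - y_min)."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if weights is None else np.asarray(weights, dtype=float)
    order = np.argsort(ses)              # rank children by the ranking variable
    y, w = y[order], w[order] / w[order].sum()
    R = np.cumsum(w) - 0.5 * w           # weighted fractional rank (mean 0.5)
    y_bar = np.sum(w * y)
    C = 2.0 * np.sum(w * (y - y_bar) * (R - 0.5)) / y_bar
    return 4.0 * y_bar * C / (y.max() - y.min())  # range is 1 for a binary outcome

# Toy data ranked by a hypothetical disadvantage score: uptake of GMNP
# rises with disadvantage, so the index should come out positive (pro-poor),
# mirroring the sign convention used in this study.
rng = np.random.default_rng(0)
dis = rng.normal(size=5000)
gmnp = rng.binomial(1, 1 / (1 + np.exp(1.0 - 0.8 * dis)))
print(round(erreygers_index(gmnp, dis), 3))
```

Note that the sign depends on the ranking variable: ranking by disadvantage (as here) makes a pro-poor index positive, whereas the more common ranking by wealth would make it negative.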
Decomposing the Erreygers normalised concentration index
The Erreygers normalised concentration index can be decomposed to calculate the contributions of the determinants of the health indicator [30,31]. Health inequality is decomposed into the contributions of several explanatory factors, with each contribution being the product of the health elasticity of that factor and its concentration index. Given a linear relationship between individual health $y_i$ and a collection of k explanatory variables $x_k$,

$y_i = \alpha + \sum_k \beta_k x_{ki} + \varepsilon_i$

Wagstaff et al. [31] demonstrated that the concentration index for any health measure that has a linear relationship with a set of k explanatory variables may be decomposed as follows:

$C = \sum_k \left(\frac{\beta_k \bar{x}_k}{\bar{y}}\right) C_k + \frac{GC_{\varepsilon}}{\bar{y}}$

where $\beta_k$ is the partial regression coefficient of $x_k$, $\bar{y}$ is the mean of the health variable, $\bar{x}_k$ is the mean of $x_k$, $C_k$ denotes the concentration index of $x_k$ against the socioeconomic ranking variable, and $GC_{\varepsilon}$ is the generalised concentration index for the error term.

Statistical analysis
The survey module ('svy') command was used to adjust for the sampling design. Percentages were used in the univariate analysis. To examine socioeconomic inequalities in growth monitoring and nutrition promotion, concentration indices and curves were used. Because children are ranked by socioeconomic disadvantage, the concentration index is positive when growth monitoring and nutrition promotion are concentrated among the more socioeconomically disadvantaged children; the converse holds when the concentration index is negative [4,33]. The level of statistical significance was set at p < 0.05. For data analysis, Stata version 14 (StataCorp., College Station, TX, USA) was utilized.

Ethical consideration
For the purposes of this study, identifier information was removed from a secondary dataset that was publicly accessible. The RDHS adhered to a recognised ethical procedure to obtain respondents' informed consent. Since the authors were granted approval to use this dataset, no additional participant consent was required. Information about DHS ethical standards is available here: http://goo.gl/ny8T6X.

Results
A weighted prevalence of under-5 GMNP of 33.0% (95% CI: 30.6-35.6%) was estimated. It follows that in 2019-20, approximately two-thirds of Rwandan under-5 children did not utilize growth monitoring and nutrition promotion services.

Table 1 shows the distribution of under-5 GMNP across child and mother characteristics. Based on the results, the most disadvantaged children had a higher prevalence of uptake of under-5 GMNP in Rwanda. The prevalence of under-5 GMNP increased as children got older. Similarly, higher prevalences of under-5 GMNP were reported among children with normal birthweight (≥2.5 kg), females, those delivered at a health facility, native residents, those whose mothers had no formal education, listened to the radio or were currently in union, those covered by health insurance or from male-headed households, and residents of the South or West region or rural areas.

The socioeconomic inequalities in under-5 GMNP in Rwanda are depicted in Fig. 1. How far the curves deviate from the line of equality indicates whether there are greater inequalities and to what extent. Figure 1 demonstrates that the most disadvantaged cohort had higher uptake of under-5 GMNP, as the concentration curve sags below the line of equality (the diagonal).

Table 2 shows the results for socioeconomic disadvantage inequalities in under-5 GMNP. Overall, under-5 GMNP was pro-poor (Conc. Index = 0.0994; SE = 0.0111). Across the levels of child and mother's characteristics, the results show higher coverage of under-5 GMNP in the most socioeconomically disadvantaged cohort. In addition, the concentration indices differed across the levels of the following variables: family mobility (p = 0.005), mothers who watch TV (p = 0.001), sex of household head (p = 0.021) and geographical region (p = 0.005).
Discussion
This is among the foremost studies in Rwanda to examine socioeconomic inequalities in under-5 GMNP. Similar to previous research, the uptake of under-5 GMNP was low [25]. In addition, we found pro-poor GMNP among the under-5 population. The key finding indicated that the most disadvantaged children had higher uptake of under-5 GMNP in Rwanda. This could be a result of social and economic changes in resource-constrained settings that have experienced long-standing developmental, epidemiological and demographic catastrophes. The socioeconomic distribution of health outcomes in several countries has shifted in ways that have led to global health inequalities [34]. Notably, children aged 48-59 months who were most socioeconomically disadvantaged had greater uptake of under-5 GMNP compared with other age groups. This is in line with a recent study [25]. On the other hand, previous studies conducted in Ethiopia found children between 12 and 24 months to be more likely to utilize childhood GMNP services [12,35]. Those studies, however, covered only children under 24 months old. It is well known that children from families with low socioeconomic status have a greater need for healthcare services. The poor health conditions of under-privileged and vulnerable children require improved health management and maintenance [36]. The higher uptake of under-5 GMNP reported among the socioeconomically disadvantaged population reflects the higher probability that older children in these households experience greater food insecurity and may be malnourished.

This paper is the first to explore socioeconomic inequalities in the uptake of under-5 GMNP using concentration indices and Lorenz curves. We found differences in concentration indices across the levels of certain variables, namely family mobility, mothers who watch TV, sex of household head and geographical region. We found considerable inequalities related to years lived in the area of residence. The degree of inequality in under-5 GMNP was wider for children who had lived in an area for less than five years, compared with native residents. This corroborates a recent study from Rwanda which found that children from families who are native residents have higher uptake of under-5 GMNP [25]. A possible explanation could be that native residents are likely to be more aware of the availability of under-5 GMNP services than non-natives. It is also possible that indigenous residents have better geographic access to these services.
In addition, the differences in regional coverage could be attributed to the diverse interventions related to under-5 GMNP that may have been executed in various regions at varied capacity or scale. A recent study found that the uptake of under-5 GMNP was higher in the southern, western and northern regions compared with children from Kigali [25]. Another study conducted in Rwanda reported similar findings [37]. These disparities in the uptake of under-5 GMNP across regions could arise because children who reside in the geographical regions with lower uptake may be unaware of the services available or unable to attend the sessions due to economic or transportation challenges. Since this survey was conducted during the coronavirus pandemic, it is also possible that uptake was disrupted by the pandemic, especially as some regions may have served as epicenters of COVID-19. This indicates that more of this intervention is needed in these regions, by promoting coverage and supporting caregivers to present their children as scheduled. We found that the uptake of under-5 GMNP was significantly higher among children of mothers who watch TV compared with children of mothers who do not. It could be that mothers who watch TV are more aware of and enlightened about under-5 GMNP than mothers who do not watch TV. It is known that mothers' exposure to mass media plays an important role in enhancing health service uptake. Mothers who are exposed to mass media are better informed about programmes that promote the health of children and know about healthcare initiatives [34,38].

The sex of the household head influenced the uptake of under-5 GMNP. We found that children from male-headed households had higher uptake of under-5 GMNP. Conversely, the degree of inequality in under-5 GMNP was wider among children from female-headed households than among those from male-headed households. This contrasts with a recent study conducted in Rwanda that did not find any association between under-5 GMNP and sex of the household head [25]. Women's empowerment is still required to improve health service uptake in a patriarchal society. Women may have lower levels of education and less access to employment, and consequently become socioeconomically disadvantaged [39]. Policies and strategies need to be designed and implemented to empower women and increase their socioeconomic development.

We conducted further analysis to decompose selected child and mother's characteristics related to under-5 GMNP. Based on our findings, place of residence was the largest contributor to inequality, contributing about 82.4% of the inequality in the uptake of under-5 GMNP. Other important contributors to inequality included mother's internet use, household wealth, family mobility, and mother's employment and education. Certainly, these variables have been identified as significant factors to consider in designing policies to increase under-5 GMNP in resource-constrained settings such as Rwanda.
The findings from our study can play a vital role in shaping nutrition policies for under-5 children and can be used in evidence-based policy formulation and implementation. We identified socioeconomic inequalities in the uptake of under-5 GMNP. Hence, policies can be tailored to address socioeconomic inequalities directly by promoting universal health coverage to reach all under-5 children in Rwanda, irrespective of their socioeconomic status. In addition, the findings can be used to design and implement effective nutrition education programmes for caregivers, parents and communities, and to empower local leaders to promote nutrition within their communities. These programmes can help raise awareness about proper nutrition and child growth, as well as promote healthy feeding practices. Moreover, stakeholders in the healthcare system can use the findings of our study to implement nutrition policies with built-in evaluation mechanisms to regularly assess their effectiveness. Our findings can also guide a comprehensive approach to addressing the nutritional and health needs of under-5 children, including the uptake of growth monitoring, as it is possible to make significant improvements in their nutritional status and overall well-being by enhancing the socioeconomic development of the country at large. Furthermore, our study has brought to light, using population-based data, the coverage of and inequalities in under-5 GMNP in Rwanda. It is our hope that the findings of this study will be useful to stakeholders in healthcare for designing and implementing viable programmes that help the socioeconomically disadvantaged cohort recover from child undernutrition in the near future and that address the disparities in nutritional status relative to advantaged children.

Strength and limitations
The use of recent nationally representative household survey data is a major strength of this study, and the findings are generalizable to under-5 children in Rwanda. The main outcome variable was, however, measured using self-reported data, which may be subject to recall bias. Consequently, the uptake of under-5 GMNP may have been over- or underestimated. The DHS did not obtain information on household income and expenditure; therefore, an asset-based wealth index was used in this study. In addition, variables on caregivers' attitudes toward children's health were not available because the study was a secondary data analysis. Furthermore, because the availability of under-5 GMNP sessions can affect attendance, we were unable to conduct an exhaustive assessment of this factor: the secondary data contained no information on the availability of growth monitoring and nutrition promotion sessions. Moreover, we found inadequate coverage of growth monitoring and nutrition promotion among under-fives in Rwanda, but, as this was a secondary data analysis, there was no information regarding the growth monitoring system itself (whether it has sufficient manpower, infrastructure, standard operating procedures or demand creation strategies), which could influence the level of uptake. Finally, we relied on self-reported uptake of growth monitoring and nutrition promotion.
Conclusion
This study demonstrated a pro-poor inequality in under-5 GMNP: the most socioeconomically disadvantaged children had a higher prevalence of under-5 GMNP. The findings show that individual socioeconomic characteristics, such as place of residence, wealth status and maternal education, are contributors to this inequality. Therefore, intervention policies should be centred on these elements to reduce the disparity in the uptake of under-5 GMNP. Collaboration between the healthcare system and other social and development sectors could be a further effective policy strategy for reducing socioeconomic inequalities in the practice of growth monitoring and the promotion of optimal nutrition.

Table 1. Distribution of under-5 GMNP in Rwanda across socioeconomic disadvantage level. * Significant at p < 0.05.
Phillygenin Suppresses Glutamate Exocytosis in Rat Cerebrocortical Nerve Terminals (Synaptosomes) through the Inhibition of Cav2.2 Calcium Channels

Glutamate is a major excitatory neurotransmitter that mediates neuronal damage in acute and chronic brain disorders. The effect and mechanism of phillygenin, a natural compound with neuroprotective potential, on glutamate release in isolated nerve terminals (synaptosomes) prepared from the rat cerebral cortex were examined. In this study, 4-aminopyridine (4-AP), a potassium channel blocker, was utilized to induce the release of glutamate, which was subsequently quantified via a fluorometric assay. Our findings revealed that phillygenin reduced 4-AP-induced glutamate release, and this inhibitory effect was reversed by removing extracellular Ca2+ or inhibiting vesicular transport with bafilomycin A1. However, exposure to the glutamate transporter inhibitor DL-threo-β-benzyloxyaspartate (DL-TBOA) did not influence the inhibitory effect. Moreover, phillygenin did not change the synaptosomal membrane potential but lowered the 4-AP-triggered increase in intrasynaptosomal Ca2+ concentration ([Ca2+]i). Antagonizing Cav2.2 (N-type) calcium channels blocked the inhibition of glutamate release by phillygenin, whereas pretreatment with the mitochondrial Na+/Ca2+ exchanger inhibitor CGP37157 or the ryanodine receptor inhibitor dantrolene, both of which block intracellular Ca2+ release, had no effect. The effect of phillygenin on glutamate release triggered by 4-AP was completely abolished when MAPK/ERK inhibitors were applied. Furthermore, phillygenin attenuated the phosphorylation of ERK1/2 and of its major presynaptic target, synapsin I, a protein associated with synaptic vesicles. These data collectively suggest that phillygenin mediates the inhibition of evoked glutamate release from synaptosomes primarily by reducing the influx of Ca2+ through Cav2.2 calcium channels, thereby suppressing the MAPK/ERK/synapsin I signaling cascade.

Introduction
Glutamate is a major excitatory neurotransmitter of the central nervous system and the most abundant neurotransmitter in the brain. Glutamate plays a crucial role in synaptic plasticity, learning, and memory [1,2]. Maintaining optimal glutamate levels is essential, as excitotoxicity caused by excessive glutamate release induces an increase in intracellular Ca2+ levels. This event, in turn, initiates a cascade of reactions inside the cell, including increased oxygen free radical formation, impaired mitochondrial function, and protease activation, ultimately leading to cell death [3,4]. Such pathology can occur in numerous neurological disorders, including ischemia, traumatic brain injury, epileptic seizures, Alzheimer's disease, Parkinson's disease, and amyotrophic lateral sclerosis [4-7]. Therefore, reducing the release of glutamate from nerve terminals is a promising strategy to protect against neurological disorders linked to excitotoxicity-related pathologies.
An increasing number of studies suggest that medicinal plants are attractive sources of molecules for the development of novel pharmaceuticals, with promising results in the prevention and treatment of brain disorders [8,9]. Phillygenin is a lignan compound extracted from the medicinal herb Forsythia suspensa, which is traditionally used to treat inflammation, pain, fever, nausea, vomiting, and abscesses [10]. The pharmacokinetics of phillygenin in rats exhibits first-order kinetics, with rapid distribution and elimination, while in mice it also shows high oral bioavailability, peaking within 30 min [11,12]. Previous studies have shown that phillygenin has diverse biological activities, including anti-inflammatory, antioxidant, antitumor, antibacterial, antiviral, analgesic, and hepatoprotective effects [13-20]. Since free radical-induced oxidative damage to the brain is recognized as a primary cause of neuronal death in various neurodegenerative disorders, compounds with antioxidative properties, such as phillygenin, may be effective at preventing or delaying these central nervous system (CNS) disorders [21,22]. Moreover, phillygenin possesses anti-inflammatory properties [23], which may provide neuroprotective benefits by potentially reversing cellular damage and slowing the progression of neuronal cell loss in individuals with neurodegenerative disorders [24]. Therefore, we predicted that phillygenin may protect against glutamate-induced neuronal excitotoxicity.

Since the excessive release of glutamate constitutes a pivotal factor in the pathogenesis of neurological diseases, in this study we aimed to investigate the impact of phillygenin on glutamate release. Isolated rat cerebral cortex nerve terminals (synaptosomes), a well-established model for studying synaptic transmission, were utilized in this study. In particular, synaptosome preparations can accumulate, store, and release neurotransmitters without any postsynaptic interactions. Using this model, we further explored the synaptosomal plasma membrane potential, the activation of voltage-dependent Ca2+ channels (VDCCs), the intrasynaptosomal Ca2+ concentration ([Ca2+]i), and the potential mechanisms underlying the effect of phillygenin on evoked glutamate release.

Animals and Ethics
All animal work was performed in accordance with the Guide for the Care and Use of Laboratory Animals of the National Research Council (8th edition, 2011) and approved by the Animal Care and Utilization Committee of Far Eastern Memorial Hospital (approval numbers IACUC-2022-FEMH-03 and IACUC-2023-FEMH-02). Efforts were made to minimize animal pain and distress and to reduce the number of animals utilized. Adult male Sprague-Dawley rats (150-200 g) were used in these studies (BioLASCO Taiwan Co., Ltd., Taipei, Taiwan). All rats were maintained in environmentally controlled rooms (22 ± 1 °C; 50% humidity) with diurnal lighting on a 12 h light/dark cycle and with free access to fresh, clean drinking water and food.
Synaptosome Isolation
The synaptosomes used in this study were purified from the rat cerebral cortex using a discontinuous Percoll gradient procedure [25,26]. Briefly, animals were sacrificed by decapitation, after which the brain was removed and placed on a chilled Petri dish. The cerebral cortex was then dissected, placed in cold isotonic sucrose homogenization buffer (0.32 M sucrose and 4 mM HEPES-NaOH, pH 7.5), and homogenized with a Potter-Elvehjem tissue homogenizer (capacity: 55 mL). The following procedures were carried out at 4 °C. The homogenate was centrifuged (3,000× g, 10 min) to remove debris. Following centrifugation at 15,000× g for 10 min, the crude synaptosomal pellet was resuspended in ice-cold sucrose buffer. The resulting suspension was gently layered on top of a discontinuous Percoll gradient, consisting of layers of 3%, 10%, and 23% Percoll, and then centrifuged at 33,000× g for 7 min. The purified synaptosomal fraction was obtained from the interface between the 10% and 23% Percoll layers. After washing with 30 mL of HEPES-buffered medium (HBM, containing 20 mM HEPES, 140 mM NaCl, 5 mM NaHCO3, 1.2 mM Na2HPO4, 5 mM KCl, 1 mM MgCl2, and 10 mM glucose; pH 7.4), the synaptosomes were further centrifuged at 27,000× g for 10 min to remove the Percoll. The synaptosomal pellets were then resuspended in HBM, and the protein concentration was determined using a Pierce™ BCA protein assay kit (Thermo Fisher Scientific, Waltham, MA, USA).

Measurement of Glutamate Release
A continuous fluorometric assay was used to assess glutamate release [26,27]. The synaptosomal pellets (0.5 mg protein) were resuspended in 2 mL of HBM containing 16 µM bovine serum albumin (BSA). The resulting suspension was placed in a temperature-controlled cuvette with continuous stirring at a constant 37 °C. After 5 min of stirring, 2 mM NADP, 50 U of glutamate dehydrogenase (GDH), and either 1 mM CaCl2 or 0.3 mM EGTA were added. After an additional 5 min of incubation, 1 mM 4-AP or 15 mM KCl was added to stimulate glutamate release. The oxidative deamination of the released glutamate, which results in NADP reduction, was monitored by measuring NADPH fluorescence at excitation and emission wavelengths of 340 and 460 nm, respectively. Data were collected at 2-s intervals. A standard of exogenous glutamate (5 nmol) was added at the conclusion of each experiment, and the resulting change in fluorescence was used to calibrate the amount of released glutamate, expressed as nmol mg−1 protein. Glutamate release was calculated up to the point at which the fluorescence reached equilibrium (approximately 5 min). Cumulative data were analyzed using GraphPad Prism (version 8.4.3; Boston, MA, USA).
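To illustrate the calibration step just described, here is a minimal Python sketch (not from the paper) that converts a NADPH fluorescence trace into released glutamate per mg of protein using the 5 nmol standard added at the end of the run. The function name, file name and numbers are illustrative.

```python
import numpy as np

def glutamate_release(trace, delta_f_standard, standard_nmol=5.0, protein_mg=0.5,
                      baseline_points=150):
    """Convert a NADPH fluorescence trace (sampled every 2 s) into cumulative
    glutamate release in nmol per mg synaptosomal protein.
    delta_f_standard: fluorescence jump produced by the 5 nmol standard."""
    baseline = trace[:baseline_points].mean()          # pre-stimulation signal
    nmol = (trace - baseline) * standard_nmol / delta_f_standard
    return nmol / protein_mg

# trace = np.loadtxt("release_run.csv")                # hypothetical recording
# release = glutamate_release(trace, delta_f_standard=1200.0)
# release_5min = release[int(5 * 60 / 2)]              # value 5 min after 4-AP
```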
Measurement of Synaptosomal Plasma Membrane Potential
DiSC3(5), a carbocyanine dye that responds to voltage changes, was used to measure the electric potential of the nerve terminal membrane [28]. This positively charged dye accumulates on hyperpolarized membranes and relocates into the lipid bilayer of the synaptosomal plasma membrane. Upon membrane depolarization, DiSC3(5) is released from the membrane bilayer, resulting in a rapid increase in fluorescence [29]. Synaptosomes were resuspended in 2 mL of HBM and incubated in a stirred, thermostatted cuvette at 37 °C in a Perkin-Elmer FL-6500 fluorescence spectrophotometer (PerkinElmer, Inc., Waltham, MA, USA). After 5 min, 5 µM DiSC3(5) was added to allow maximal dye uptake. Following an additional 3 min of incubation, 1.2 mM CaCl2 was introduced into the cuvette before depolarization was initiated with 1 mM 4-AP for 10 min. DiSC3(5) fluorescence was monitored with the FL-6500 spectrofluorometer at an excitation wavelength of 646 nm and an emission wavelength of 674 nm. The results are presented as arbitrary fluorescence units, with data accumulated at 2-s intervals; the data were analyzed using GraphPad Prism (version 8.4.3; Boston, MA, USA).

Measurement of Intrasynaptosomal Ca2+ Concentration ([Ca2+]i)
Fura-2-AM, a calcium chelator and fluorescent probe, was used to monitor dynamic changes in cytosolic free calcium in synaptosomes [27]. In brief, synaptosomal pellets were resuspended in 2 mL of HBM, and 5 µM Fura-2-AM, 100 µM CaCl2, and 16 µM BSA were added. Following a 30-min incubation at 37 °C, the synaptosomes were centrifuged for 1 min at 10,000× g to eliminate excess Fura-2-AM. Following preincubation with phillygenin and 1.2 mM CaCl2 for 10 min, the synaptosomes were depolarized using 1 mM 4-AP. Using the FL-6500 spectrofluorometer, Fura-2 fluorescence was assessed through dual-wavelength measurements at an emission wavelength of 505 nm and excitation wavelengths of 340 nm and 380 nm. Data were collected at 4-s intervals, and the calcium concentration was subsequently determined from the fluorescence ratio [28].

Data Analysis
The data are presented as the mean ± standard error of the mean (SEM). Statistical analyses were performed using two-tailed Student's t tests or one-way repeated measures analysis of variance (ANOVA) for comparisons among three or more distinct groups. Post hoc analysis was conducted using Tukey's honest significant difference (HSD) test following one-way ANOVA. The analysis was carried out using GraphPad Prism (version 8.4.3). p < 0.05 was considered to indicate statistical significance.
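For readers reproducing the statistics, the following is a minimal Python sketch of the scheme described above (one-way ANOVA followed by Tukey's HSD). The release values are made up for illustration, and the paper itself used GraphPad Prism.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up glutamate release values (nmol/mg protein per 5 min) for three groups.
control     = np.array([7.1, 7.5, 7.4, 7.6, 7.2])
phillygenin = np.array([3.4, 3.1, 3.6, 3.3, 3.5])
combo       = np.array([4.4, 4.6, 4.3, 4.7, 4.5])   # e.g., blocker + phillygenin

F, p = stats.f_oneway(control, phillygenin, combo)
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.3g}")

values = np.concatenate([control, phillygenin, combo])
labels = ["control"] * 5 + ["phillygenin"] * 5 + ["combo"] * 5
print(pairwise_tukeyhsd(values, labels, alpha=0.05))   # all pairwise comparisons
```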
Effects of Calcium Ion Chelators, Glutamate Transporter Inhibitors or Vesicular Transporter Inhibitors on the Inhibition of 4-AP-Evoked Glutamate Release by Phillygenin
The combination of Ca2+-dependent and Ca2+-independent release mechanisms contributes to the overall release of glutamate [30]. We therefore explored whether the impact of phillygenin on release reflected an influence on Ca2+-dependent exocytotic vesicular release or on the Ca2+-independent release of glutamate, which is attributable to reversal of glutamate efflux by the glutamate transporter. Figure 2A shows that 1 mM 4-AP stimulated glutamate release of 1.35 ± 0.15 nmol mg−1 over 5 min in calcium-free medium containing 300 µM EGTA. Under these conditions, 20 µM phillygenin did not alter the Ca2+-independent release of glutamate induced by 4-AP. These findings suggest that phillygenin selectively targets and modulates the Ca2+-dependent, exocytotic component of glutamate release. To validate this hypothesis, bafilomycin A1, an inhibitor of the vacuolar H+-ATPase that prevents the filling of synaptic vesicles with glutamate, or DL-TBOA, a competitive, non-transportable blocker of all excitatory amino acid transporter subtypes, was utilized. As shown in Figure 2B, bafilomycin A1, which depletes synaptic vesicles of glutamate, reduced the amount of 1 mM 4-AP-evoked glutamate release to 1.75 ± 0.13 nmol mg−1 protein per 5 min (F(3, 168) = 13.14, p < 0.0001), and in its presence the phillygenin-induced inhibition of 4-AP-evoked glutamate release was absent. Moreover, DL-TBOA suppressed glutamate uptake, leading to a significant increase in 4-AP-induced glutamate release (from 7.36 ± 0.29 to 13.49 ± 0.48 nmol mg−1 protein per 5 min). Figure 2C shows, however, that the addition of DL-TBOA did not block the phillygenin-mediated inhibition of 4-AP-induced glutamate release (F(3, 168) = 6.733, p < 0.0003). Taken together, the phillygenin-mediated inhibition of 4-AP-induced glutamate release is suppressed in the presence of the calcium chelator EGTA or the vesicular transporter inhibitor bafilomycin A1, but is not affected by the glutamate transporter inhibitor DL-TBOA (Figure 2D, F(7, 32) = 53.01, p < 0.0001). These findings indicate that the phillygenin-mediated inhibition of 4-AP-evoked glutamate release is attributable to a decrease in the Ca2+-dependent exocytotic release of glutamate.
Effect of Phillygenin on Nerve Terminal Excitability and 4-AP-Induced Ca2+ Influx
To further understand the mechanism underlying the phillygenin-mediated inhibition of glutamate release, we observed the synaptosomal plasma membrane potential and monitored Ca2+ influx under depolarizing conditions. The synaptosomal plasma membrane potential was assessed with the voltage-sensitive fluorescent probe DiSC3(5). Table 1 shows that 4-AP administration resulted in an increase in DiSC3(5) fluorescence (25.63 ± 0.86 units per 5 min). The application of phillygenin (20 µM) for 10 min prior to the addition of 4-AP had no effect on the resting plasma membrane potential and did not significantly affect the 4-AP-induced increase in DiSC3(5) fluorescence (27.08 ± 1.47 units per 5 min). Furthermore, we validated the phillygenin-mediated inhibition of glutamate release using a high external KCl concentration as an alternative secretagogue. Elevated extracellular KCl levels depolarize the plasma membrane, leading to Ca2+ influx through VDCCs into the presynaptic terminal and subsequent neurotransmitter release from synaptic vesicles in a Na+ channel-independent manner. The addition of 15 mM KCl resulted in glutamate release at a rate of 5.18 ± 0.4 nmol mg−1 protein per 5 min, which decreased to 2.65 ± 0.63 nmol mg−1 protein per 5 min in the presence of 20 µM phillygenin (Table 1). Furthermore, the calcium-sensitive fluorescent dye Fura-2-AM was used to determine the effect of phillygenin on [Ca2+]i. After the application of 20 µM phillygenin, the 4-AP-induced elevation in [Ca2+]i was attenuated (Table 1). These findings suggest that the observed phillygenin-mediated inhibition of glutamate release is due to a direct reduction in Ca2+ entry through VDCCs rather than modulation of the plasma membrane potential.
Table 1. Phillygenin attenuates the 4-AP-induced elevation in [Ca2+]i but does not affect the synaptosomal membrane potential. The synaptosomal membrane potential was monitored using DiSC3(5) (5 µM), both in the absence (control) and in the presence of 20 µM phillygenin added 10 min prior to depolarization induced by 1 mM 4-AP. The effect of phillygenin on 15 mM KCl-induced glutamate release was examined as previously described, with the exception that 15 mM KCl was used as the secretagogue instead of 4-AP. The intrasynaptosomal Ca2+ level (nM) was monitored using Fura-2 (5 µM), both in the absence (control) and in the presence of 20 µM phillygenin added 10 min before depolarization with 1 mM 4-AP. ** p < 0.01 versus the KCl control group, * p < 0.001 versus the 4-AP control group.
[Table 1 columns: Membrane Potential (fluorescence units); Glutamate Release (nmol mg−1 protein); [Ca2+]i (nM)]

The release of glutamate induced by depolarization is attributed to the entry of Ca2+ through different types of Ca2+ channels in the plasma membrane, as well as the release of Ca2+ into the cytoplasm from intracellular storage compartments such as the endoplasmic reticulum (ER) and mitochondria [31,32]. At central excitatory synapses, presynaptic glutamate release is primarily controlled by N-type (Cav2.2) and P/Q-type (Cav2.1) calcium channels, which can be blocked by ω-conotoxin GVIA and ω-agatoxin IVA, respectively. We therefore evaluated the specific Ca2+ sources involved in the phillygenin-mediated inhibition of 4-AP-induced glutamate release. As shown in Figure 3A,B,E, 4-AP (1 mM)-evoked glutamate release was significantly reduced in the presence of ω-conotoxin GVIA (1 µM) or ω-agatoxin IVA (0.1 µM), to 4.5 ± 0.28 and 3.82 ± 0.27 nmol mg−1 protein per 5 min, respectively. In the presence of ω-agatoxin IVA, phillygenin continued to inhibit the glutamate release induced by 4-AP (1.4 ± 0.24 nmol mg−1 protein per 5 min; Figure 3B,E; F(3, 164) = 13.17, p < 0.0001). In contrast, there was no significant difference in glutamate release following treatment with ω-conotoxin GVIA alone compared with combined treatment with ω-conotoxin GVIA and phillygenin (p = 0.97), suggesting that the inhibitory effect of phillygenin on 4-AP-induced glutamate release is linked to a reduction in Ca2+ influx through Cav2.2 but not Cav2.1 calcium channels.

Using the ER Ca2+ release blocker dantrolene and the mitochondrial Na+/Ca2+ exchanger inhibitor CGP37157, we further demonstrated that the inhibitory effect of phillygenin on glutamate release is not mediated by reductions in Ca2+ release from intracellular stores. The administration of dantrolene (10 µM) reduced 4-AP (1 mM)-evoked glutamate release (to 4.77 ± 0.42 nmol mg−1 protein per 5 min; Figure 3C,E; F(3, 160) = 5.737, p < 0.0009); however, in the presence of 20 µM phillygenin, a significant further decrease in 4-AP-evoked glutamate release was still observed (p < 0.05). Similar results were obtained with 10 µM CGP37157, which inhibits Ca2+ efflux from the mitochondria (Figure 3D,E; F(3, 164) = 11.29, p < 0.0001).

Phillygenin Inhibits 4-AP-Evoked Glutamate Release through Extracellular Signal-Regulated Kinase Signaling
Extracellular signal-regulated kinase (ERK), protein kinase C (PKC), and protein kinase A (PKA) are present at the presynaptic level and play crucial roles in neurotransmitter release [33,34]. In this study, we examined the protein kinase cascade implicated in the phillygenin-mediated inhibition of glutamate release.
Pretreatment with the MEK/ERK inhibitor PD98059 (20 µM) abolished the inhibitory effect of phillygenin on 4-AP-evoked glutamate release (Figure 4A,E). A similar result was obtained when the synaptosomes were treated with FR180204 (10 µM), a potent, selective, cell-permeable inhibitor of ERK1 and ERK2 (p = 0.76; Figure 4B,E). In contrast, the PKA inhibitor H89 (10 µM) (Figure 4C,E; F(3, 168) = 4.800, p < 0.0031) and the PKC inhibitor GF109203X (5 µM) (Figure 4D,E; F(3, 168) = 6.793, p < 0.0002), each of which suppressed 4-AP-induced glutamate release on its own, had no observable effect on the phillygenin-mediated inhibition of 4-AP-evoked glutamate release. These findings suggest that the inhibition of glutamate release by phillygenin is associated with the MAPK/ERK signaling pathway.

To further confirm the role of the MAPK/ERK signaling cascade in the phillygenin-mediated inhibition of glutamate release, western blotting was used to assess the phosphorylation of ERK1/2. Figure 5 shows that depolarization of purified synaptosomes with 1 mM 4-AP led to a notable increase in ERK1/2 phosphorylation, and this effect was effectively inhibited by phillygenin. As a substrate of ERK protein kinases, synapsin I is a vesicle-associated phosphoprotein localized at presynaptic terminals whose crucial function is the regulation of vesicle dynamics and neurotransmitter release. Similar results were obtained from the analysis of synapsin I phosphorylation: 1 mM 4-AP increased the phosphorylation of synapsin I (135.91% ± 6.07%; p < 0.01), and this effect was diminished following treatment with phillygenin (89.53% ± 16.02%; p < 0.01).

Discussion
In this study, we present a novel observation, using nerve terminals isolated from the rat cerebral cortex, of the inhibitory effect of phillygenin on 4-AP-induced glutamate release. The ability of phillygenin to inhibit 4-AP-induced glutamate release suggests that it could be used to limit excessive glutamate release, a major pathogenetic mechanism in several neurological disease states, including ischemic brain damage and neurodegeneration [35,36].
The release of neurotransmitters at synapses is a complex process linked to membrane depolarization and involves the regulation of ion channels, including Na+, K+, and Ca2+ channels [37,38]. Inhibiting Na+ channels or activating K+ channels shortens the duration of action potentials and stabilizes membrane excitability, ultimately causing a reduction in Ca2+ entry and neurotransmitter release [39,40]. Therefore, the observed reduction in glutamate release suggests that phillygenin activates a significant protective mechanism against excitotoxic insults. Our objective was to investigate the mechanisms underlying the phillygenin-mediated inhibition of glutamate release, focusing on the intrasynaptosomal Ca2+ concentration, the synaptosomal plasma membrane potential, VDCCs, and the activation of protein kinases in rat brain synaptosomes. The following discussion outlines the potential mechanisms involved.

We examined the phillygenin-mediated inhibition of glutamate release by exploring two potential mechanisms: first, modification of the synaptosomal plasma membrane potential, and second, direct regulation of Ca2+ entry through VDCCs. The first scenario seems unlikely for three reasons. (1) Under both resting conditions and during depolarization with 4-AP, phillygenin did not significantly affect the synaptosomal plasma membrane potential, suggesting a limited influence on K+ conductance. (2) Phillygenin had no effect on the Ca2+-independent release of glutamate triggered by 4-AP, a process that is solely dependent on the membrane potential [41]. Consistent with these findings, the inhibitory effect of phillygenin on 4-AP-induced glutamate release was prevented by bafilomycin A1, which blocks the vesicular filling of glutamate, but remained unaffected by the presence of DL-TBOA, an inhibitor of excitatory amino acid transporters (EAATs). This observation suggests that phillygenin does not influence glutamate release by altering the direction of the plasma membrane glutamate transporter. (3) The release of glutamate triggered by 4-AP involves both Na+ and Ca2+ channels, whereas KCl-induced release involves only Ca2+ channels [42]. Phillygenin significantly inhibited both 4-AP- and KCl-evoked glutamate release, suggesting that its effect involves Ca2+ channels rather than Na+ channels. These findings strongly indicate that the phillygenin-mediated inhibition of 4-AP-evoked glutamate release is associated with a reduction in Ca2+-dependent exocytosis.
At synaptic terminals, cytoplasmic Ca2+ results from the combined contribution of extracellular Ca2+ influx through plasma membrane VDCCs and intracellular Ca2+ release [31,43], and an increase in cytoplasmic Ca2+ levels drives the release of glutamate. Using the ER ryanodine receptor inhibitor dantrolene and the mitochondrial Na+/Ca2+ exchange inhibitor CGP37157, we demonstrated that phillygenin has no effect on intracellular Ca2+ release. These data suggest that intracellular Ca2+ stored in the ER and mitochondria is not involved in the phillygenin-mediated inhibition of 4-AP-evoked glutamate release. When the calcium indicator Fura-2-AM was utilized, phillygenin suppressed the 4-AP-induced increase in [Ca2+]i. These findings suggest that phillygenin inhibits glutamate release by reducing presynaptic Ca2+ influx through VDCCs. Moreover, the inhibitory effect of phillygenin on 4-AP-induced glutamate release from synaptosomes was abolished in the presence of the Cav2.2 (N-type) calcium channel blocker ω-conotoxin GVIA, whereas the action of phillygenin was not affected when Cav2.1 (P/Q-type) calcium channels were blocked by ω-agatoxin IVA. These findings indicate that the modulation of glutamate release by phillygenin is linked to the inhibition of Ca2+ influx through presynaptic Cav2.2 calcium channels. The mechanism by which phillygenin modulates Cav2.2 calcium channels remains unclear: its effect may occur directly on presynaptic Cav2.2 channels or indirectly, for example through the modulation of protein kinase activity that alters VDCC phosphorylation. Additional research is therefore needed to fully understand phillygenin's impact on Cav2.2 calcium channels.

The effect of phillygenin on Cav2.2 calcium channels may be attributed to indirect modulation through a series of protein interactions between the membranes of synaptic vesicles and presynaptic terminals [44,45]. The activity of presynaptic VDCCs and glutamate release are known to be regulated by second messenger-activated protein kinases, including PKA, PKC, and MAPK/ERK [46,47]. In this investigation, the effect of phillygenin on 4-AP-induced glutamate release was effectively blocked by the ERK inhibitors PD98059 and FR180204. Conversely, the decrease in 4-AP-induced glutamate release caused by phillygenin remained unaffected when the synaptosomes were incubated with the PKA inhibitor H89 or the PKC inhibitor GF109203X. These results suggest that phillygenin inhibits glutamate release from rat cerebral cortex nerve terminals through the MAPK/ERK kinase cascade. Furthermore, phillygenin significantly reduced the 4-AP-induced phosphorylation of ERK1/2 and of synapsin I at the ERK1/2-dependent phosphorylation sites 4/5. The MAPK/ERK cascades are key signaling pathways that regulate neurotransmitter exocytosis by transmitting extracellular signals to intracellular targets. Depolarization-induced Ca2+ influx activates MAPK/ERK, which phosphorylates synapsin I at sites 4/5; this phosphorylation releases synaptic vesicles from the actin cytoskeleton in response to specific stimuli. As a result, more vesicles are available near the active zone for neurotransmitter exocytosis, leading to the release of glutamate from storage vesicles into the synaptic cleft [48]. These findings suggest that phillygenin inhibits glutamate release by suppressing MAPK/ERK-dependent synapsin I phosphorylation and thereby reducing the availability of synaptic vesicles.
Elevated levels of extracellular glutamate in the brain can lead to neuronal damage via excitotoxicity, which is a critical process in neurodegeneration, and compounds capable of modulating the release of glutamate show promise as therapeutic agents for neuroprotection. Phillygenin, a lignan compound from a medicinal herb, has shown therapeutic potential through its effects on inflammation, oxidation, tumors, bacteria, viruses, pain, and liver damage [13,16,20,49]. Oxidative stress can trigger free radical attack on neuronal cells, which contributes to the development of neurodegenerative disorders. As a natural antioxidant, phillygenin has the potential to protect the brain against the damaging effects of free radicals and inflammation and to offer neuroprotective benefits. Most experiments have demonstrated phillygenin's lack of toxicity toward cells and experimental animals; additionally, acute toxicity tests in mice showed no adverse effects even at high doses, indicating its safety [50,51]. In the present study, phillygenin inhibited the Ca2+-dependent exocytosis of glutamate in a dose-dependent manner over a concentration range of 5-50 µM, with an IC50 value of 17 µM. Lin et al. demonstrated that phillygenin at 30-100 µM was effective in suppressing the inflammatory response and inhibiting apoptosis in vitro [23]. In addition, in vitro studies have shown that phillygenin exerts antitumor effects at concentrations ranging from 10 to 100 µM [14,18,52]. The results of the present study are consistent with these reports; however, the antioxidant effects of phillygenin have been observed at relatively high IC50 values (approximately 140 µM) [12,53].

While the precise mechanisms underlying the neuroprotective effect of phillygenin are yet to be fully understood, reports have suggested the potential involvement of free radical scavenging and antioxidant properties [16,53]. In this study, the ability of phillygenin to reduce glutamate release from nerve terminals may partially elucidate its neuroprotective mechanism. In addition, phillygenin exhibits significant analgesic activity and may interact with glutamatergic receptors or signaling pathways to attenuate pain perception [49]. Excessive glutamate release and excitotoxicity are linked to pain conditions and neuropathic pain; phillygenin's ability to regulate glutamate levels or transmission may contribute to its analgesic effects and offer neuroprotection against glutamate-induced neuronal damage. However, the limited research on phillygenin's neuroprotection necessitates further investigation in future studies.

In summary, the findings from this study suggest that phillygenin reduces glutamate release from rat brain nerve endings by blocking presynaptic Ca2+ entry through Cav2.2 calcium channels. These findings highlight a crucial mechanism by which phillygenin may protect neurons against excitotoxic damage induced by calcium overload. Moreover, the observed suppression may, at least partially, be mediated by inhibition of the MAPK/ERK/synapsin I pathway (Figure 6).
Figure 1. Phillygenin suppresses the release of glutamate induced by 4-AP in a concentration-dependent manner. Rat synaptosomes were resuspended in HBM buffer at a final protein concentration of 0.5 mg/mL and incubated for 3 min, followed by the addition of 1 mM CaCl2. After an additional 10 min, 1 mM 4-AP was introduced to induce depolarization (indicated by an arrow). The release of glutamate was assessed using a continuous fluorometric assay. (A) Chemical structure of phillygenin. (B) Glutamate release evaluated under control conditions and in the presence of 5-30 µM phillygenin, administered 10 min prior to the introduction of 4-AP. (C) A concentration-dependent reduction in 4-AP-stimulated glutamate release in the presence of phillygenin. The results represent the mean ± standard error of the mean (S.E.M.) of independent experiments using synaptosomal preparations from six animals. Means and S.E.M. were calculated at 2-s intervals, with error bars depicted at 10-s intervals for clarity. ** p < 0.01, *** p < 0.001 compared with the control group.
Figure 2. The effect of external calcium omission, the glutamate transporter blocker DL-TBOA, and the vesicular transporter inhibitor bafilomycin A1 on the inhibition of 4-AP-evoked glutamate release mediated by phillygenin. (A) Ca2+-independent release was assessed by excluding CaCl2 and introducing 300 µM EGTA 10 min before depolarization. Release was triggered by 1 mM 4-AP, both under control conditions and in the presence of 20 µM phillygenin administered 10 min prior to the introduction of 4-AP. The black arrow indicates the moment when 4-AP was added. The effect of phillygenin on 4-AP-evoked glutamate release was assessed in the absence (control) and presence of DL-TBOA (10 µM) (B) and bafilomycin A1 (0.1 µM) (C). DL-TBOA, bafilomycin A1 and phillygenin were added 10 min before depolarization. (D) Quantitative comparison of the amount of glutamate release induced by 1 mM 4-AP in the presence and absence of EGTA, DL-TBOA or bafilomycin A1. Results are the mean ± S.E.M. of five independent experiments. ** p < 0.01 versus the control group, * p < 0.05 versus the DL-TBOA-treated group.

Figure 4. The inhibition of 4-AP-induced glutamate release by phillygenin is entirely blocked in the presence of the ERK inhibitors PD98059 and FR180204. Glutamate release was induced by 1 mM 4-AP in the absence (control) or presence of 20 µM PD98059 (A), 10 µM FR180204 (B), 10 µM H89 (C), or 5 µM GF109203X (D), administered 10 min before the addition of 20 µM phillygenin. (E) Quantitative evaluation of the released glutamate levels under the different conditions. Results are the mean ± S.E.M. of five independent experiments. ** p < 0.01 versus the control or GF109203X-treated group. # p < 0.05 versus the H89-treated group.

Figure 5. The effect of phillygenin on the 4-AP-evoked phosphorylation of ERK1/2 and of its substrate synapsin I. Phillygenin was added 10 min before depolarization with 4-AP. The phosphorylation levels of ERK1/2 (A) and synapsin I (B) in synaptosomes were assessed and expressed as percentages relative to the measurements obtained from the control group without 4-AP. Each bar represents the mean ± S.E.M. of the results obtained in 3 experiments (n = 3 per group). *** p < 0.001 versus the control group. # p < 0.05 versus the 4-AP-treated group.
Figure 6. Schematic representation of the main mechanism involved in the phillygenin-mediated inhibition of glutamate release from cerebral synaptosomes. Phillygenin suppresses Cav2.2 calcium channels, which in turn inhibits the MAPK/ERK/synapsin I pathway, thus decreasing the amount of glutamate released. Red downward arrows indicate a decrease. Graph created with BioRender.com.
Holographic complexity of rotating black holes

Within the framework of the "complexity equals action" and "complexity equals volume" conjectures, we study the properties of holographic complexity for rotating black holes. We focus on a class of odd-dimensional equal-spinning black holes for which considerable simplification occurs. We study the complexity of formation, uncovering a direct connection between the complexity of formation and the thermodynamic volume for large black holes. We also consider the growth rate of complexity, finding that at late times the rate of growth approaches a constant, but that Lloyd's bound is generically violated.

Introduction
Holographic duality within the framework of the anti-de Sitter/conformal field theory correspondence (AdS/CFT) [1] continues to be the basis of many interesting connections between quantum information and gravity. Geometric quantities in the bulk AdS spacetime can be precisely related to entanglement properties of the boundary CFT, most notably through the Ryu-Takayanagi construction [2,3]. Studies of the growth of the Einstein-Rosen (ER) bridge in AdS black holes have led to speculations of its duality to the growth of complexity of the dual boundary state [4]. This was refined into new conjectured entries in the AdS/CFT dictionary: the complexity-volume (CV) conjecture [5,6] and the complexity-action (CA) conjecture [7,8].

Complexity of quantum states is a measure of how hard it is to prepare a particular target state $|\psi_T\rangle$ from a given reference state $|\psi_R\rangle$ using an initial set of elementary gates G. A circuit built from these gates implements a unitary

$$V_n \equiv g_n \cdots g_1 g_0 , \qquad g_0, \ldots, g_n \in G . \qquad (1.1)$$

The complexity of a state $|\psi_T\rangle$ is then defined as the minimum number n of elementary gates that can approximate it, according to some norm, starting from a fixed reference state $|\psi_R\rangle$:

$$\mathcal{C}(|\psi_T\rangle) = \underset{n}{\arg\min}\, \big\| \, |\psi_T\rangle - V_n |\psi_R\rangle \, \big\|^2 . \qquad (1.2)$$

In addition to discrete circuit models, complexity can also be defined for systems with continuous Hamiltonian evolution generated by

$$U(t) = \overleftarrow{\mathcal{T}} \exp\left( -i \int_0^t dt' \sum_k Y^k(t')\, M_k \right) , \qquad (1.3)$$

with boundary conditions $U(0) = I$ and $U(1) = V_n$, where the $M_k$ are the basis Hermitian generators of the Hamiltonian and the $Y^k(t)$ are time-dependent control functions specifying the tangent vector $Y(t)$ of a trajectory in the space of unitaries [9]. The time-ordering operator $\overleftarrow{\mathcal{T}}$ ensures that earlier terms in the expansion of the evolution operator $U(t)$ act on the state before later terms, i.e., going from right to left. Thus, continuous Hamiltonian evolution defines a path in the space of unitaries of the circuit whose length is [10,11]

$$D(U) = \int_0^1 F\big(U(t), \dot{U}(t)\big)\, dt , \qquad (1.4)$$

where the cost function $F(U(t), \dot{U}(t))$ is a local functional of position along $U(t)$ in the space of unitaries, with the overdot denoting a t derivative. The complexity of the target state is then identified with the length of the minimal such path,

$$\mathcal{C}(|\psi_T\rangle) = \min_{\{Y^k\}} D(U) . \qquad (1.5)$$

An ongoing topic of active research is the extension of the concept of complexity to quantum field theories using the above geometric formulation of complexity (for example, see [12-17]). This definition of complexity clearly has many ambiguities [13,18] associated with the choice of reference state, basis operators, and cost function, which are expected to be related to the ambiguities associated with calculating the action in the CA proposal [19].

Complexity was originally discussed in the context of holography as the dual to the volume of the ER bridge in eternal black holes [4]. The eternal Schwarzschild-AdS black hole is dual to two copies of the CFT prepared in the thermofield double state [20].
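For orientation, the thermofield double state referred to here has the standard form (a textbook definition quoted for completeness; it is not spelled out in the text above):

```latex
|{\rm TFD}(t_L, t_R)\rangle
  = \frac{1}{\sqrt{Z(\beta)}} \sum_n e^{-\beta E_n/2}\,
    e^{-iE_n (t_L + t_R)}\, |E_n\rangle_L \otimes |E_n\rangle_R .
```

Tracing out either copy leaves a thermal density matrix at inverse temperature β, which is why the eternal black hole, with its two asymptotic boundaries, provides the bulk dual. The rotating generalization of this state appears in Eq. (1.10) below.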
The volume of the ER bridge continues to grow in time even after the system thermalizes, suggesting that any putative CFT dual to this quantity must be something that continues to evolve after equilibrium is reached [21,22]. It was proposed that this growth captures some notion of complexity for the CFT state. The idea that the growth of the black hole interior is connected to computational complexity has come to be refined into a number of concrete proposals, the most studied of which are the CV and CA conjectures.

The CV conjecture proposed that the complexity of the TFD state at the boundary time slice Υ is equal to the volume of the extremal/maximal spacelike slice B anchored at t_L and t_R at the boundaries [6],

    \mathcal{C}_V = \max_{\partial \mathcal{B} = Υ} \frac{V(\mathcal{B})}{G_N R},    (1.6)

where R is a length scale associated with the bulk geometry (usually taken to be the AdS length ℓ) chosen to make the complexity dimensionless. This was generalized to the CA conjecture, where complexity depends on the whole domain of dependence of B, a region called the Wheeler-DeWitt (WDW) patch [7]. Explicitly, the CA conjecture asserts that the complexity of the CFT state is given by the numerical value of the gravitational action evaluated on the WDW patch:

    \mathcal{C}_A = \frac{I_{WDW}}{π ℏ}.    (1.7)

Both the CV and CA conjectures have received considerable attention and the basic properties of each are now well-established. Initially, attention was given to the idea that, within the CA proposal, the late-time growth of complexity for the Schwarzschild-AdS black hole is π\dot{\mathcal{C}}_A = 2M [7,8]. This was a suggestive connection with Lloyd's bound and was argued to support the idea that black holes are the fastest computers in nature [23]. However, subsequent careful analysis revealed that this late-time value is actually approached from above rather than from below, as Lloyd's bound would require [18]. It is now believed that the assumptions required for Lloyd's bound may be incompatible with holography [24,25].

Nonetheless, there have been several rather interesting connections uncovered between complexity and black hole thermodynamics in both proposals, but the situation is especially clear in the CA proposal. For example, in the CA proposal the late-time growth rate of complexity for two-horizon geometries reduces to the difference in internal energies (or enthalpies) between the inner and outer horizons:

    π \frac{d\mathcal{C}_A}{dt}\bigg|_{t→∞} = (F_+ + T_+ S_+) − (F_− + T_− S_−),    (1.8)

where F is the free energy, S the entropy, and T the Hawking temperature, while the +/− corresponds to the outer/inner horizon, respectively. This relationship was first observed in Einstein gravity in [8], then argued to hold for general theories of gravity in [26], and established rigorously for the full Lovelock family of gravitational theories in [27] (see also [28]).

Many other properties have been explored, e.g., the effects of topology [14,29-31]. If there are topological identifications in the spacetime then the complexity is rescaled by a factor dependent on the identifications [14]. In many instances, the properties of complexity are qualitatively similar in both the CV and CA proposals. For example, both proposals account for the expected linear time dependence at late times [6,8] and both exhibit the switchback effect, which is the expected response of complexity to perturbations of the state at early times [6,32,33]. However, there are some situations in which the two proposals differ in their behaviour [31,33-38].
Understanding universal and divergent aspects of the two proposals is useful, as there does not yet exist a first-principles derivation for complexity in the holographic dictionary.

Besides the time-dependent rate of growth of complexity, another quantity of interest is the complexity of formation [39] of a black hole, which measures the additional complexity present in preparing the thermofield double state in two copies of the CFT compared to two copies of the vacuum alone. The complexity of formation was first defined and discussed in [39] for Schwarzschild-AdS black holes in various dimensions, where it was found that it grows linearly with entropy in the high-temperature (equivalently, large black hole) limit; that is, ∆C_A ∼ k_d S for a constant k_d that depends on the (boundary) dimension d > 3. These considerations were extended to charged black holes in [18], where it was found that the functional dependence of the complexity of formation is more complicated, but its dependence on the size of the black hole was still found to be controlled by the entropy in the limit of large black holes.

Our purpose here is to study various aspects of the holographic complexity conjectures for rotating black holes. The study of rotating black holes in the context of AdS/CFT was initiated in [40-44], where the thermodynamic properties of the black holes were compared with those of the boundary CFT. This holographic picture was further developed for astrophysical black holes with the "Kerr/CFT correspondence" [45], which conjectures that quantum gravity near the horizon of an extremal Kerr black hole is dual to a two-dimensional CFT (for reviews see [46,47]). Rotating black holes are dual to thermofield double states with an additional chemical potential,

    |rTFD⟩ = \frac{1}{\sqrt{Z(β, \{µ_i\})}} \sum_n e^{−βE_n/2}\, e^{−βµJ_n/2}\, |E_n, J_n⟩_L ⊗ |E_n, J_n⟩_R,    (1.10)

associated with the rotation, where µ ≡ µ_1 + · · · + µ_{(D−1)/2}, and µ_i is the chemical potential associated with the angular momentum J_i along the φ_i circle, with Z(β, {µ_i}) the grand canonical partition function. The time evolution of the state is modified by the chemical potentials:

    |rTFD(t_L, t_R)⟩ = e^{−i(H_L + µJ_L)t_L − i(H_R + µJ_R)t_R} |rTFD⟩,    (1.11)

where (H_L, J_L) and (H_R, J_R) are the Hamiltonians and angular momentum operators for the left and right boundaries, respectively.

To date, there have been only a few studies focussing on the effects of rotation in the context of complexity, and these studies are further limited to a derivation of the late-time rate of growth. The late-time complexity growth of Kerr-AdS black holes in the CA conjecture was calculated in [48]. The effect of a probe string attached to a rotating black hole on its complexity was studied in [49]. One reason that a more detailed analysis is not straightforward is the more complicated causal structure of rotating black holes. In the case of rotating spacetimes, carrying out a computation of the action for a WDW patch (or of the volume of a spacelike slice) is a technically formidable task. The description of null hypersurfaces is somewhat complicated even in four spacetime dimensions [50], and no generalization to higher-dimensional cases presently exists. Fortunately there is a special case that renders the computations tractable: Myers-Perry-AdS spacetimes in odd dimensions with equal angular momenta in each orthogonal rotation plane.
Compared to the most general Myers-Perry-AdS black holes, these solutions enjoy enhanced symmetry that considerably simplifies the analysis of the causal structure. This particular configuration has some similarities with the charged case [17,51]; however, we shall see that there are interesting differences.

One of our main motivations for considering rotating black holes is to help develop an understanding of how the CV and CA proposals behave for less symmetric spacetimes. In the context of the AdS/CFT correspondence, understanding how a quantity responds to deformations of the state or the theory itself has been a fruitful approach in understanding which relationships may be universal and which may be specific to the state or theory. For example, this approach has been used with some success in the context of higher-curvature theories of gravity. Those theories introduce additional parameters into the action, which can then be used to discern between the various possible CFT charges. This method has also been used to understand the limitations of the Kovtun-Son-Starinets bound [52], argue for the existence of c-theorems in arbitrary dimensions [53,54], and generate conjectures for the universal behaviour of terms in entanglement entropy or the partition function [55-57]. Similarly, our hope here is that the more complicated metric structure of rotating black holes will help to discern both universal features of and particular distinctions between the CV and CA proposals.

Along these lines, one of the main results of this paper concerns a connection between the thermodynamic volume of the black hole and the complexity of formation in both the CV and CA proposals. The thermodynamic volume is a quantity that arises naturally when one extends the definition of Komar mass from the asymptotically flat to the asymptotically AdS setting [58,59]. It also appears in the first law of black hole mechanics, governing the response of the mass to variations in the cosmological constant which, in this case, is interpreted as a pressure. In general, the thermodynamic volume is an independent thermodynamic potential. However, in certain cases (such as those involving spherical symmetry) the thermodynamic volume and entropy are simply related via S ∝ V^{(D−2)/(D−1)}. In some instances, the thermodynamic volume can be related to the spacetime volume inside the black hole [59,60]. This fact has motivated some authors to consider its relevance in the context of holographic complexity. However, the results so obtained have either involved new proposals for complexity [61,62], or have used thermodynamic identities to understand results in terms of the thermodynamic volume for interpretational reasons [26,63,64]. Our result is, to the best of our knowledge, the first to draw a clear connection between thermodynamic volume and the original CV and CA conjectures. We have reported on this result elsewhere [65], and here provide additional details and context. While the meaning of thermodynamic volume in the holographic context is understood (it controls the response of the dual field theory to changes in the number of colours and changes in the volume of the space on which the theory is defined [68]), its utility in holography remains rather undeveloped (though see [66,67,69-73] for progress in this direction). Our result may be viewed as an initial step toward developing the utility of thermodynamic volume in holography. The paper is organized as follows.
In section 2, the geometry and causal structure of the Myers-Perry-AdS spacetimes is given. Section 3 describes the terms of the action that need to be evaluated to calculate the complexity according to the CA conjecture, as well as the framework for calculating the extremal volume in the CV conjecture. In section 4, we calculate the complexity of formation of the state (1.10) relative to the vacuum AdS state, according to both the CA and CV conjectures. In section 5, we present the full time evolution of the rate of growth of complexity in both the CA and CV conjectures. We discuss the implications of our results and point toward possible future directions in section 6. A number of technical details and supporting calculations are left to the appendices. Unless explicitly stated otherwise, we will use natural units ℏ = c = k_B = 1 below.

Solution and global properties

The Myers-Perry-AdS solution in odd dimension D = 2N + 3 is a cohomogeneity-(N + 1) metric with isometry group R × U(1)^{N+1}, described by its mass M and N + 1 independent angular momenta J_i [74]. In the special case in which all angular momenta J_i, i = 1, . . . , N + 1, are equal, there are considerable simplifications and the metric depends only on a single radial coordinate and on the parameters (m, a) [75]:

    ds^2 = −f(r)^2\, dt^2 + g(r)^2\, dr^2 + h(r)^2 \big[dψ + A − Ω(r)\, dt\big]^2 + r^2\, \hat{g}_{ab}\, dx^a dx^b,    (2.1)

with

    g(r)^{−2} = 1 + \frac{r^2}{ℓ^2} − \frac{2mΞ}{r^{2N}} + \frac{2ma^2}{r^{2N+2}}, \quad h(r)^2 = r^2\left(1 + \frac{2ma^2}{r^{2N+2}}\right), \quad Ω(r) = \frac{2ma}{r^{2N} h(r)^2}, \quad f(r) = \frac{r}{g(r)\, h(r)},

and Ξ ≡ 1 − a^2/ℓ^2. We take m > 0 and, by sending t → −t, we can without loss of generality always choose a ≥ 0. The metric ĝ is the Fubini-Study metric on CP^N with curvature normalized so that Ric(ĝ) = 2(N + 1)ĝ, and A is a 1-form on CP^N that satisfies dA = 2J, where J is the Kähler form on CP^N. The isometry of the spacetime is enhanced to R × U(1) × SU(N + 1). The metric g satisfies the Einstein equations G_{ab} + Λ g_{ab} = 0 with a negative cosmological constant, normalized such that Λ = −(D − 1)(D − 2)/2ℓ^2, where ℓ is the AdS length scale. The field equations can then be simply expressed as R_{ab} = −(D − 1)ℓ^{−2} g_{ab}.

The solution above describes the exterior region of a stationary, multiply rotating, asymptotically AdS black hole. The basic example is in D = 5, in which case N = 1 and the base space (CP^1, ĝ) is a round 2-sphere. The asymptotic region is obtained in the limit r → ∞, where we recover the usual AdS_{2N+3} metric provided we periodically identify ψ ∼ ψ + 2π. The line element above is valid in the exterior region of the spacetime; that is, we also take t ∈ R and r_+ < r < ∞, where r_+ is the largest positive root of g(r)^{−2}. We will discuss below how the metric can be extended beyond r_+ to all r > 0. As we will review below, the hypersurface r = r_+ is in fact a smooth Killing horizon with null generator ξ = ∂_t + Ω_H ∂_ψ, where Ω_H ≡ Ω(r_+).

Horizons are located at the positive roots of g(r)^{−2}. They can be more easily studied via the polynomial P(r^2), where (writing x = r^2)

    P(x) = \frac{x^{N+2}}{ℓ^2} + x^{N+1} − 2mΞ\, x + 2ma^2,

so that g(r)^{−2} = P(r^2)/r^{2N+2}. Since there are only two sign changes between adjacent coefficients, we can apply Descartes' rule of signs to argue there can be at most two real positive roots x_+ > x_− > 0, assuming m > 0. Thus we expect the causal structure to be qualitatively similar to that of a charged black hole, consisting of an outer (event) horizon and an inner Cauchy horizon. We will show this explicitly below. We can eliminate m in terms of (r_+, a):

    m = \frac{r_+^{2N+2}\, (1 + r_+^2/ℓ^2)}{2(Ξ r_+^2 − a^2)}.    (2.8)

A similar formula holds for m with r_− replacing r_+. Note that regularity of the event horizon places an upper bound on the rotation parameter a, with the bound saturated when the black hole is extremal. When a = 0 the solution is just Schwarzschild-AdS. Then there is one horizon, and beyond it the metric functions satisfy g_{rr} < 0 and g_{tt} > 0. The set r = 0, which is a spacelike hypersurface, is then a curvature singularity.
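Since several of the computations below are numerical, it is convenient to record how the horizon structure can be obtained in practice. The following is a minimal Python sketch, assuming the metric-function conventions written in (2.1) above (reconstructed here from context, so they should be checked against [75]); the function names are our own.

```python
import numpy as np

def horizon_radii(m, a, ell, N):
    """Positive roots of P(x) = x^{N+2}/ell^2 + x^{N+1} - 2 m Xi x + 2 m a^2,
    with x = r^2, for the equal-spinning Myers-Perry-AdS metric in D = 2N+3.
    Returns (r_minus, r_plus), or None if the parameters are over-extremal."""
    Xi = 1.0 - a**2 / ell**2
    coeffs = np.zeros(N + 3)          # descending powers of x
    coeffs[0] = 1.0 / ell**2          # x^{N+2}
    coeffs[1] = 1.0                   # x^{N+1}
    coeffs[-2] = -2.0 * m * Xi        # x^1
    coeffs[-1] = 2.0 * m * a**2       # x^0
    roots = np.roots(coeffs)
    real_pos = sorted(z.real for z in roots
                      if abs(z.imag) <= 1e-9 * (1 + abs(z.real)) and z.real > 0)
    if len(real_pos) < 2:
        return None                   # no horizons: over-extremal
    return np.sqrt(real_pos[0]), np.sqrt(real_pos[-1])

def mass_parameter(r_plus, a, ell, N):
    """Invert eq. (2.8): m(r_+, a)."""
    Xi = 1.0 - a**2 / ell**2
    return r_plus**(2*N + 2) * (1.0 + r_plus**2 / ell**2) / (2.0 * (Xi * r_plus**2 - a**2))

if __name__ == "__main__":
    # D = 5 (N = 1) example: recover r_+ = 2 and find the Cauchy horizon
    ell, N, r_plus, a = 1.0, 1, 2.0, 0.3
    m = mass_parameter(r_plus, a, ell, N)
    print(horizon_radii(m, a, ell, N))   # approx (0.315, 2.0)
```

With these helpers, conversions between the parameter pairs (m, a) and (r_+, r_−) reduce to one-dimensional root-finding, which is how the later sketches are parameterized.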
We will focus on the case a > 0, for which the set r = 0 is still a curvature singularity but is now timelike (i.e. |dr|^2 → +∞). As r → 0, the geometry of the base CP^N collapses. However, h(r)^2 ∼ r^{−2N} as r → 0, so the S^1 grows to an infinite size. Meanwhile g_{tt} ∼ 2m r^{−2N} is also diverging (and ∂_t is spacelike). The metric still has to be Lorentzian, however, since det g = −r^{4N+2} < 0. Thus instead of the singularity being a timelike worldline, it is a timelike cylinder (i.e. at constant t it has S^1 topology).

The conserved charges corresponding to mass and angular momentum are [74,76]

    M = \frac{Ω_{2N+1}\, m}{8πG\, Ξ^{N+2}}\, (2N + 2 − Ξ), \qquad J = \frac{(N+1)\, Ω_{2N+1}\, m a}{4πG\, Ξ^{N+2}},    (2.10)

where Ω_{2N+1} = 2π^{N+1}/N! is the area of a unit (2N+1)-sphere. Note that M > 0 imposes the constraint Ξ r_+^2 − a^2 > 0 from (2.8). We emphasize that the single angular momentum J corresponds to equal angular momenta J_i = J/(N + 1) in each of the N + 1 orthogonal planes of rotation. Next, since the volume associated with (CP^N, ĝ) is Ω_{2N+1}/2π, we can read off the area of a spatial cross-section of the event horizon at r = r_+:

    A = Ω_{2N+1}\, r_+^{2N}\, h(r_+).    (2.13)

It is easy to check that h(r_+) = r_+^2/\sqrt{Ξ r_+^2 − a^2}. Furthermore, the event horizon has surface gravity

    κ_+ = \frac{r_+\, (g^{−2})'(r_+)}{2\, h(r_+)}.    (2.14)

Finally, since g_{tt} = −f^2 + h^2 Ω^2, one finds that there is an ergoregion: g_{tt} > 0 in a region exterior to the horizon, although for sufficiently large r, g_{tt} < 0. Note that the ergosurface is never tangent to the event horizon.

Extended thermodynamics

In addition to the mass M, angular momentum J, and angular velocity Ω_H given above, the black hole's entropy and temperature are given by

    S_± = \frac{A_±}{4G}, \qquad T_± = \frac{κ_±}{2π},    (2.16)

with the physical S and T evaluated at the outer horizon, and the inner-horizon quantities defined analogously. Within the framework of extended thermodynamics (see, e.g., the review [77]) one associates a thermodynamic pressure with the cosmological constant via

    P = −\frac{Λ}{8πG} = \frac{(D−1)(D−2)}{16πG ℓ^2},    (2.17)

with V = (∂M/∂P)_{S,J} its conjugate thermodynamic volume. One can then check that the following first law of extended thermodynamics holds for the Myers-Perry-AdS family [59],

    dM = T dS + Ω_H dJ + V dP,    (2.19)

along with the Smarr relation

    (D − 3)\, M = (D − 2)\, T S + (D − 2)\, Ω_H J − 2 P V.    (2.20)

In what follows, it will often be convenient to work in terms of the parameters (r_+, r_−) rather than (m, a).
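As a concreteness check on these relations, the quantities above can be assembled numerically and tested against the Smarr relation (2.20). The sketch below assumes the Gibbons-Perry-Pope expressions for M and J written in (2.10) and the standard rotating-black-hole thermodynamic volume V = r_+ A/(D−1) + 8πG aJ/((D−1)(D−2)); both are imported from the extended-thermodynamics literature [59,77] rather than derived here, so treat them as assumptions that the Smarr check validates.

```python
import numpy as np
from math import pi, factorial

G = 1.0  # Newton's constant

def thermo(r_plus, a, ell, N):
    """Thermodynamic data for the equal-spinning MP-AdS black hole in D = 2N+3."""
    D = 2*N + 3
    Xi = 1 - a**2/ell**2
    m = mass_parameter(r_plus, a, ell, N)
    Om = 2*pi**(N+1)/factorial(N)                     # area of unit S^{2N+1}
    h_p = r_plus**2/np.sqrt(Xi*r_plus**2 - a**2)      # h(r_+)
    A = Om * r_plus**(2*N) * h_p                      # horizon area, eq. (2.13)
    S = A/(4*G)
    ginv2 = lambda r: 1 + r**2/ell**2 - 2*m*Xi/r**(2*N) + 2*m*a**2/r**(2*N+2)
    eps = 1e-6                                        # numerical derivative for kappa
    kappa = r_plus*(ginv2(r_plus+eps) - ginv2(r_plus-eps))/(2*eps)/(2*h_p)
    T = kappa/(2*pi)                                  # eq. (2.14), (2.16)
    Omega_H = 2*m*a/(r_plus**(2*N)*h_p**2)            # horizon angular velocity
    M = m*Om*(2*N + 2 - Xi)/(8*pi*G*Xi**(N+2))        # eq. (2.10), assumed GPP form
    J = (N+1)*m*a*Om/(4*pi*G*Xi**(N+2))
    P = (D-1)*(D-2)/(16*pi*G*ell**2)                  # pressure, eq. (2.17)
    V = r_plus*A/(D-1) + 8*pi*G*a*J/((D-1)*(D-2))     # assumed thermodynamic volume
    return dict(M=M, J=J, S=S, T=T, Omega_H=Omega_H, P=P, V=V)

if __name__ == "__main__":
    t = thermo(r_plus=2.0, a=0.3, ell=1.0, N=1)
    D = 5
    smarr = (D-3)*t['M'] - (D-2)*(t['T']*t['S'] + t['Omega_H']*t['J']) + 2*t['P']*t['V']
    print("Smarr residual:", smarr)   # should vanish up to rounding error
```

A vanishing Smarr residual ties together (2.8), (2.10), (2.13), (2.14), (2.17), and (2.20) in one consistency check.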
To make the connection between these quantities and the physical parameters of the black hole more explicit, in figure 1 we plot the mass and angular momentum as functions of r_+/ℓ for different values of the ratio r_−/r_+. [Figure 1, right panel: a plot of the angular momentum as a function of the horizon radius for several values of r_−/r_+. In each case, the lower dark blue curve corresponds to r_−/r_+ = 1/100, and this value increases in increments of 1/8 as one moves vertically in the plot (lines of decreasing opacity).] The basic conclusion is that, for large black holes, both the mass and angular momentum grow with increasing r_+/ℓ; for black holes closer to extremality, the growth is stronger. Although we show this pictorially only for five dimensions, the plots are qualitatively similar in higher dimensions.

We show also in figure 2 the angular velocity of the horizon as a function of r_+/ℓ, again for different values of the ratio r_−/r_+. In the left plot, the dashed black line corresponds to the case where the black hole rotates at the speed of light with respect to an observer situated at infinity. For a ratio r_−/r_+ sufficiently below unity, the angular velocity exhibits a minimum for some intermediate value of r_+/ℓ and then increases. When this minimum coincides with the critical angular velocity Ω_H^c = 1/ℓ, the minimum disappears and the angular velocity becomes a monotonically decreasing function of the horizon radius, asymptoting to Ω_H^c = 1/ℓ from above. The minimum of the angular velocity coincides with the critical value when the condition (2.21) is satisfied. Although it is not possible to obtain a simple closed form, for five dimensions this occurs when r_−/r_+ = (\sqrt{5} − 1)/\sqrt{2}, and the threshold decreases with increasing spacetime dimension, asymptoting to r_−/r_+ = 1/\sqrt{2} in the limit N → ∞. All black holes with r_−/r_+ above this threshold rotate faster than light. Provided that r_−/r_+ is less than the value corresponding to the solution of (2.21), the location of the minimum of the angular velocity occurs at the radius determined by (2.22).

The equally-rotating Myers-Perry-AdS black holes considered here are unstable to linearized gravitational perturbations when they rotate faster than light [75]. The instability is 'superradiant' in the sense that certain perturbations are trapped by the AdS potential barrier and are reflected back to the black hole, creating an amplification process [42]. Note that extreme black holes in this class always rotate faster than the speed of light and are hence unstable. The endpoints of these instabilities are expected to be stationary, non-axisymmetric black holes. Although it will not be particularly important for the considerations we are interested in here, it would be interesting to investigate the relation of our findings to known results on the dynamical stability of rotating, asymptotically AdS black holes.
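The threshold ratio can also be located numerically by scanning: at fixed r_−/r_+, minimize Ω_H over r_+ and find where the minimum crosses 1/ℓ. A sketch follows, reusing the helpers above; `a_for_ratio` is our own hypothetical utility and the bracketing intervals are ad hoc.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar
# reuses horizon_radii, mass_parameter, thermo from the sketches above

ell, N = 1.0, 1  # D = 5

def a_for_ratio(r_plus, rho):
    """Rotation parameter a such that r_-/r_+ = rho at fixed r_+."""
    a_max = r_plus/np.sqrt(1 + r_plus**2/ell**2)   # from Xi r_+^2 - a^2 > 0
    def residual(a):
        m = mass_parameter(r_plus, a, ell, N)
        r_minus, _ = horizon_radii(m, a, ell, N)
        return r_minus/r_plus - rho
    return brentq(residual, 1e-10, a_max*(1 - 1e-10))

def min_Omega_H(rho):
    """Minimum over r_+ of the horizon angular velocity at fixed r_-/r_+."""
    f = lambda rp: thermo(rp, a_for_ratio(rp, rho), ell, N)['Omega_H']
    return minimize_scalar(f, bounds=(0.05, 50.0), method='bounded').fun

# critical ratio where the minimum of Omega_H equals 1/ell; in D = 5 the text
# quotes (sqrt(5) - 1)/sqrt(2) ~ 0.874
rho_c = brentq(lambda rho: min_Omega_H(rho) - 1.0/ell, 0.3, 0.99)
print(rho_c)
```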
Causal structure

Next let us discuss the global structure of the spacetime. In general, the causal structure of spacetimes with nontrivial rotation is far more complicated than that of their static counterparts. The reason for this, at least in part, is that in general rotating spacetimes the null hypersurfaces are no longer effectively two-dimensional, as they are in the static case. However, for the special case of odd-dimensional rotating black holes with equal angular momenta some of these difficulties can be circumvented, as first emphasized in [31]. Let us illustrate this, following the methods of [50,78,79]. For convenience we will focus on the non-extreme case r_+ ≠ r_−. Our task is to construct a suitable family of null hypersurfaces. We start with the ansatz

    v = t + r_*(r, ψ_i),    (2.23)

where ψ_i stands for the various angular coordinates and r_* denotes a suitable 'tortoise' coordinate. We then demand that dv, the one-form normal to surfaces of constant v, is null, i.e. g^{−1}(dv, dv) = 0. A direct computation reveals that this condition admits an additively separable solution. Using an appropriate choice of integration constants, the dependence on the angular coordinates can be eliminated, leaving

    \frac{dr_*}{dr} = \frac{g(r)}{f(r)} = \frac{g(r)^2\, h(r)}{r};    (2.25)

in other words, r_* is a function only of the radial variable, somewhat akin to the static case. These rotating black holes possess the "simplest" causal structure, and are therefore natural candidates for a first foray into the properties of complexity in rotating backgrounds. Unfortunately, the tortoise coordinate cannot be obtained in a useful closed form, and numerical techniques are required for its evaluation. However, for later convenience, we note both the asymptotic form of the tortoise coordinate and that the integral can be massaged into a form much more amenable to numerical evaluation. Working to the leading order at which the tortoise coordinate for the black holes differs from that for global AdS, we find the leading large-r behaviour explicitly. Of course, the tortoise coordinate will exhibit logarithmic singularities at the event and inner horizons.

To better understand the behaviour of the tortoise coordinate, it is useful to factor out the horizon zeros of the radial integrand, writing it in terms of a function G(r) > 0 that is completely regular at both horizons. Series expanding the integrand in the vicinity of each horizon gives the behaviour near the poles. Noting this behaviour, we can perform a splitting of the integral, subtracting the pole contributions from the integrand to leave a completely convergent integral, and then handle the poles separately. In the subtraction we keep factors of (r^2 − r_±^2) in the denominators to ensure that, when integrated, these terms converge also as r → ∞; the remaining piece is then completely regular at r = r_±. The integrals involving the divergent parts can be evaluated directly, and we obtain the decomposition (2.31) of r_*(r), whose final, integral term has an integrand that is completely regular at both horizons. In so doing we have also fixed the integration constant at infinity, choosing r_* → 0 as r → ∞. This form of the tortoise coordinate is much more amenable to numerical evaluation. By expressing the surface gravities κ_± of the outer and inner horizons in terms of G(r), we can write the tortoise function in the simple form

    r_*(r) = \frac{1}{2κ_+} \ln\left|\frac{r − r_+}{r + r_+}\right| − \frac{1}{2κ_−} \ln\left|\frac{r − r_−}{r + r_−}\right| + R(r),    (2.33)

where R(r) is a smooth function defined by the integral term in (2.31).
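For the numerical work below, the following sketch implements the pole-subtraction strategy just described, assuming dr_*/dr = g^2 h/r as in (2.25) and computing the surface gravities by numerical differentiation. It is a minimal illustration, not the authors' actual implementation.

```python
import numpy as np
from scipy.integrate import quad
# reuses horizon_radii and mass_parameter from the sketches above

def tortoise(r, r_plus, a, ell, N):
    """Tortoise coordinate r_*(r), normalized so r_* -> 0 as r -> infinity,
    via pole subtraction at both horizons (cf. the decomposition (2.31))."""
    m  = mass_parameter(r_plus, a, ell, N)
    Xi = 1 - a**2/ell**2
    ginv2 = lambda s: 1 + s**2/ell**2 - 2*m*Xi/s**(2*N) + 2*m*a**2/s**(2*N+2)
    h     = lambda s: s*np.sqrt(1 + 2*m*a**2/s**(2*N+2))
    drstar = lambda s: h(s)/(s*ginv2(s))                 # g^2 h / r
    r_minus, _ = horizon_radii(m, a, ell, N)
    eps = 1e-7                                           # signed surface gravities:
    kap = lambda rh: rh*(ginv2(rh+eps) - ginv2(rh-eps))/(2*eps)/(2*h(rh))
    kp, km = kap(r_plus), kap(r_minus)                   # kp > 0, km < 0 (2.33 uses |km|)
    pole = lambda s: (r_plus/(kp*(s**2 - r_plus**2))
                      + r_minus/(km*(s**2 - r_minus**2)))
    regular = lambda s: drstar(s) - pole(s)              # finite at both horizons
    # integrate the regular remainder from r to infinity via s = 1/x
    pts = sorted(x for x in (1/r_plus, 1/r_minus) if 0 < x < 1/r)
    tail, _ = quad(lambda x: regular(1/x)/x**2, 0.0, 1.0/r,
                   points=pts or None, limit=200)
    logs = (np.log(abs((r - r_plus)/(r + r_plus)))/(2*kp)
            + np.log(abs((r - r_minus)/(r + r_minus)))/(2*km))
    return logs - tail
```

Evaluated just inside the event horizon the function diverges to −∞, and just outside the inner horizon it diverges to +∞, reproducing the limiting behaviour quoted below (2.33).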
So far we have shown that the null sheets v = constant in the equal-angular-momenta Myers-Perry-AdS solution have a particularly simple form compared to the general situation. We next turn to investigating the causal structure of the solution. To begin, we construct horizon-penetrating ingoing coordinates adapted to these light sheets. We first pass to corotating coordinates, in which the null generator of the event horizon is ξ = ∂_T. Next we introduce new coordinates (v, r, Ψ_+) by setting v = T + r_*; in these coordinates the metric is clearly smooth and non-degenerate at both horizons (i.e. at the poles of g(r)). The coordinates cover one exterior region, and can be continued through the event horizon, beyond the inner horizon, and finally to the timelike singularity at r = 0. However, as in the well-known Reissner-Nordström case, the ingoing coordinates are not sufficient to determine the maximal analytic extension.

To construct the required Kruskal-like coordinates, we first define a new chart (v, u, Ψ_+), where u = v − 2r_*, to obtain the metric in 'double null' coordinates (2.37), in which r_* = (v − u)/2. The metric (2.37) is clearly degenerate at both the event and inner horizons. As r → r_+ we see from (2.33) that r_* → −∞, whereas as r → r_−, r_* → ∞. Therefore, in a neighbourhood of the event horizon r_* behaves logarithmically, which implies that |r − r_+| → 2r_+ e^{κ_+(v−u)} as r → r_+. We next define Kruskal-type coordinates U_+ = −e^{−κ_+ u} and V_+ = e^{κ_+ v}; as we approach the event horizon the metric components in these coordinates remain finite, and it is easily checked that (Ω_H − Ω(r)) dv is smooth as r → r_+. This demonstrates that the metric is smooth and non-degenerate at the event horizon in the (U_+, V_+, Ψ_+) chart, and we can analytically continue the chart through the event horizon (U_+ = 0 or V_+ = 0) to a new region U_+ > 0, V_+ < 0, so that the metric (2.41) is regular for r_− < r < ∞. The chart covers four regions (quadrants in the (U_+, V_+)-plane), with a bifurcation S^3 at (U_+, V_+) = (0, 0). The coordinate system breaks down near the inner horizon as r → r_−, and there are radial null geodesics that reach this null hypersurface in finite affine parameter.

We can extend beyond this coordinate singularity by reversing the above coordinate transformations to return to the ingoing coordinates (v, r, Ψ_+), which are regular at both horizons. Define Ψ_− such that, in the (v, r, Ψ_−) chart, the Killing field ∂_v is corotating with the inner horizon r = r_−. Introduce a second double-null coordinate system (v, û), with û chosen so that in particular r_* = (v − û)/2. The metric in the (v, û, Ψ_−) coordinate chart will resemble (2.37) with the obvious replacements, and hence will be degenerate at r = r_−. We then introduce a second pair of Kruskal-like coordinates (U_−, V_−), adapted to the inner horizon. By repeating the above computations we find that the metric in the (V_−, U_−, Ψ_−) chart is indeed smooth and non-degenerate at r = r_−, using the fact that (r − r_−) e^{−2κ_− r_*} → 2r_− as r → r_− and (Ω(r_−) − Ω(r))/V_− = O(1). In this coordinate system, the inner horizon corresponds to either U_− = 0 or V_− = 0, and we may analytically continue the metric in this chart to allow U_− ≥ 0 and V_− ≥ 0, corresponding to 0 < r < r_−. This region contains the timelike singularity at r = 0, located at U_− V_− = e^{2κ_− R(0)}. Since this region is actually isometric to a region for which the event horizon lies to the future, we can introduce new coordinates (Û_+, V̂_+) and analytically continue the metric into new exterior regions r > r_+ that are isometric to the original asymptotically AdS regions described by the (U_+, V_+) coordinate chart. We can repeat this procedure indefinitely, both to the future and past, to produce a maximal analytic extension with infinitely many regions, qualitatively similar to the familiar maximal analytic extension of the non-extreme rotating BTZ black hole [80]. Note that, in contrast to the Kerr black hole and generic members of the Myers-Perry(-AdS) family, one cannot continue into a region of spacetime for which r^2 < 0.

Framework for Action calculations

Given a D-dimensional bulk region M, the gravitational action over this region, including all the various terms for boundary surfaces and joints [81], is given by (3.1) [19]. The first term is the Einstein-Hilbert bulk action with cosmological constant, which from (2.17) is integrated over M. The second term is the Gibbons-Hawking-York boundary term [82,83] that contributes at spacelike/timelike boundaries B. The convention adopted here for the extrinsic curvature is that the normal one-form is directed outward from the region of interest. The third term is the contribution of the null boundary surfaces B of M. For a null boundary segment with normal k^α, the parameter κ is defined in the usual way, k^β ∇_β k^α = κ k^α, while γ is the determinant of the induced metric on the (D − 2)-dimensional cross-sections of the null boundary, and the parameter λ is defined according to k^α = ∂x^α/∂λ. The fourth term is the Hayward term [84,85] for joints J between non-null boundary surfaces; these terms will play no role in our construction. The last term is the contribution of joints J arising from the intersection of at least one null boundary surface [81]. The parameter ã is defined according to (3.5), where k_i is a null normal, t_i is a timelike unit normal, and n_i is a spacelike unit normal. Additionally, depending on the intersecting boundary segments, auxiliary vectors, indicated with a hat, are required. These unit vectors are defined by the conditions of living in the tangent space of the appropriate boundary segment and pointing outward as vectors from the joint of interest.
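For orientation, the action referenced in (3.1) takes the standard form of [19,81]; we transcribe it here from those references (our transcription, with conventions assumed to match those described above):

```latex
I = \frac{1}{16\pi G}\int_{\mathcal{M}} d^{D}x\,\sqrt{-g}\,\left(R-2\Lambda\right)
  + \frac{1}{8\pi G}\int_{\mathcal{B}} d^{D-1}x\,\sqrt{|h|}\,K
  + \frac{1}{8\pi G}\int_{\mathcal{B}'} d\lambda\, d^{D-2}\theta\,\sqrt{\gamma}\,\kappa
  + \frac{1}{8\pi G}\int_{\mathcal{J}} d^{D-2}x\,\sqrt{\sigma}\,\eta
  + \frac{1}{8\pi G}\int_{\mathcal{J}'} d^{D-2}x\,\sqrt{\sigma}\,\tilde{a}
```

Here the null-joint rule of [81] gives \tilde{a} = \ln\left|\tfrac{1}{2}\, k_1 \cdot k_2\right| at the intersection of two null segments, and \tilde{a} = \ln\left|k \cdot n\right| (respectively \ln\left|k \cdot t\right|) when a null segment meets a spacelike (timelike) one, with overall signs fixed by the outward-pointing conventions just described.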
The action as presented above is ambiguous when the spacetime region of interest contains null boundaries; namely, it is not invariant under reparameterizations of the normals to the null boundary segments. To ensure this invariance we add to the above the following counterterm [19]:

    I_{ct} = \frac{1}{8πG} \int dλ\, d^{D−2}θ\, \sqrt{γ}\, Θ \ln(ℓ_{ct} Θ), \qquad Θ = ∂_λ \ln \sqrt{γ},

where ℓ_{ct} is an arbitrary length scale and Θ is the expansion scalar of the null boundary generators, which depends only on the intrinsic geometry of the null boundary surfaces. While this term is not required for a well-defined variational principle, it is known to have important implications for holographic complexity; for example, it is crucial for reproducing the switchback effect in the complexity equals action conjecture [32,33,86].

A further difficulty is that the gravitational action is divergent. To control these divergences (and allow for appropriate regularization in the complexity of formation calculations) we introduce a UV cut-off δ at the boundary CFT and integrate the radial dimension in the bulk up to r = r_max(δ) [87,88]. When calculating the complexity of formation, the choice of r_max(δ) for the black hole spacetime should be consistent with that in vacuum AdS. This subtlety can be resolved [39] by expanding the metrics of both geometries in the Fefferman-Graham canonical form [89] and setting in both cases the radial cut-off surface at z = δ. We discuss the Fefferman-Graham form of the rotating metrics in Appendix A.

To evaluate the complexity within the CA conjecture, we must evaluate the gravitational action and counterterm on the Wheeler-DeWitt patch of the spacetime. Using the boost invariance of the spacetime, it is always possible to shift the WDW patch so that it intersects the left and right boundaries at the same times: t_L = t_R ≡ τ/2. We show the structure of the WDW patch in Figure 3; it has the same structure for all the rotating black holes considered here. Of particular importance are the joints where the future/past boundaries of the WDW patch meet. We denote the future meeting point as r_{m_1} and the past meeting point as r_{m_2}.

Consider first the past meeting point, and denote its coordinates inside the horizon as (t_{m_2}, r_{m_2}). From the right side of the Penrose diagram, this point lies along a u = constant surface, while from the left it lies along a v = constant surface. These facts translate into two equations:

    t_{m_2} − r_*(r_{m_2}) = t_R − r_*(∞), \qquad t_{m_2} + r_*(r_{m_2}) = t_L + r_*(∞),

where t_L and t_R denote the timeslices at which the lightsheets intersect the left and right boundaries, respectively. Note that t_{m_2} is the same in both equations, as those points lie in a common patch of the diagram. Eliminating t_{m_2} from these equations, noting that t_L = −t_R (which implies t_{m_2} = 0) and setting t_R = τ/2, we obtain

    r_*(∞) − r_*(r_{m_2}) = \frac{τ}{2}.    (3.10)

An analogous derivation holds for r_{m_1}, the only difference being a sign in the last two terms:

    r_*(∞) − r_*(r_{m_1}) = −\frac{τ}{2}.    (3.11)

Note that here we have chosen to use the time τ instead of t to avoid possible confusion of this quantity with the t appearing in the metric (which, when considering the patches outside of the horizon, would be either t_L or t_R). In general the values of r_{m_{1,2}} must be obtained numerically. However, it is possible, starting from eq. (2.31), to obtain an asymptotic form of these quantities valid for early times in the limit r_−/r_+ → 0, by evaluating the integral appearing in (2.31) perturbatively in r_−/r_+.
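Numerically, (3.10) and (3.11) are one-dimensional root-finding problems once r_*(r) is available; a sketch using the `tortoise` helper above (which builds in the normalization r_*(∞) = 0):

```python
from scipy.optimize import brentq
# reuses horizon_radii, mass_parameter, tortoise from the sketches above

def meeting_points(tau, r_plus, a, ell, N):
    """Solve r_*(r_m1) = +tau/2 and r_*(r_m2) = -tau/2 for the future/past
    meeting points of the WDW patch. Both roots lie between the horizons,
    where r_* decreases monotonically from +infinity (at r_-) to -infinity
    (at r_+). At tau = 0 both reduce to the r_m0 used in section 4."""
    m = mass_parameter(r_plus, a, ell, N)
    r_minus, _ = horizon_radii(m, a, ell, N)
    rs = lambda r: tortoise(r, r_plus, a, ell, N)
    lo, hi = r_minus*(1 + 1e-9), r_plus*(1 - 1e-9)
    r_m1 = brentq(lambda r: rs(r) - tau/2.0, lo, hi)
    r_m2 = brentq(lambda r: rs(r) + tau/2.0, lo, hi)
    return r_m1, r_m2
```

As the text notes, this becomes numerically delicate for small r_−/r_+, where r_{m_{1,2}} approach r_− exponentially closely and extended precision is required.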
Here we note the result only in five dimensions, eq. (3.12), where the dots denote subleading terms and ε = +1 for r_{m_1} and −1 for r_{m_2}. This expression reveals that, as r_−/r_+ → 0, the value of r_m tends exponentially towards the inner horizon, consistent with the discussion of charged black holes in [18], albeit with a slightly different rate of approach.

Bulk Action

The bulk contributions to the action are very simple in this case, since the black holes are vacuum solutions. In particular, we have R = −D(D − 1)/ℓ^2, and thus

    I_{bulk} = \frac{1}{16πG} \int d^D x\, \sqrt{−g}\, (R − 2Λ) = −\frac{D − 1}{8πG ℓ^2} \int_{WDW} d^D x\, \sqrt{−g};    (3.14)

the bulk action is then simply the spacetime volume of the WDW patch weighted by this dimension-dependent prefactor. To evaluate the bulk contribution, we recall that the determinant of the metric gives \sqrt{−g} = r^{2N+1} \sqrt{\hat{g}}. We then split the integration domain into three regions where the (t, r) coordinates are valid, as shown in Figure 3. In region I the integration over t_R runs between 0 (i.e. t_{m_1}) and the future null boundary, in region II between the two null boundaries, and in region III between the past null boundary and 0. The total bulk action is then twice the sum of these three terms.

Surface contributions

There are two cut-off surfaces at r = r_max, each of which contributes a Gibbons-Hawking-York term. The normal to the timelike surface r = r_max is

    n_µ = (0, \sqrt{g_{rr}}, 0, . . . , 0),    (3.24)

and from the determinant of the induced metric on the surface of constant r = r_max and the trace of the extrinsic curvature of that boundary surface, one obtains the total contribution of the two boundary surfaces at r = r_max. Note that this term is time-independent, so it does not contribute to the rate of change of complexity dC_A/dτ. Furthermore, it does not contribute to the complexity of formation ∆C_A, because it is cancelled by the contribution made by the AdS_D vacuum, as shown explicitly in appendix B.

Joint contributions

There are two different types of joint contributions that arise here. First, there are the intersections of the null boundaries of the WDW patch with the regulator surface at r = r_max; there are four of these joints in total. Second, there are the intersections of the null sheets of the WDW patch in the future and in the past.

Let us begin with the first case. Considering the future right boundary of the WDW patch near the regulator surface r = r_max, the relevant null normal is proportional to dt + dr_*, while the outward-pointing normal to the surface r = r_max is n_µ above. We need also a vector t̂ that is a future-pointing unit timelike vector directed outward from the region; we write it as a one-form, with the sign chosen so that the corresponding vector is outward directed. The relevant dot products are easily computed, and since f(r) > 0 near the boundary, the overall sign factor evaluates to −1. We then obtain the joint contribution, making use of the defining relations evaluated on the joint. Note that by dΩ_{2N+1} we mean the volume form on the usual round (2N+1)-sphere; when integrated over the angles this gives Ω_{2N+1}. An analogous computation for the remaining three joints can be shown to yield the same answer as that presented here.

Next let us consider the joints at the future and past meeting points of the WDW patch. The determinant σ of the induced metric at the intersection of the lightsheets is given by \sqrt{σ} = r_m^{2N}\, h(r_m)\, \sqrt{\hat{g}}, where r_m is the value of r at the point of intersection. At the future meeting point the relevant null normals are proportional to dt + dr_* and dt − dr_*; in determining the relevant signs, it is important to recognize that t increases from left to right inside the future horizon and that dr_* points in the negative dr direction inside the horizon. Note also that the dt appearing in these normals is the t that appears in the metric, not the boundary time. We need also k̄, a null vector living in the tangent space of the right sheet of the WDW patch, orthogonal to k_{FR} and outward pointing as a vector; a one-form that points in the negative k_{FR} direction yields a vector with the correct properties. Computing the dot products yields the overall sign +1 and, from (3.5), the parameter ã. Putting this all together, we obtain the joint contribution at the future meeting point. A completely analogous calculation gives an identical form for the joint contribution at the past meeting point, with r_{m_1} → r_{m_2}.
Note also that the dt appearing in these normals is the t that appears in the metric, not the boundary time. We need alsok -a null vector, living in the tangent space of the right sheet of the WDW patch that is orthogonal to k F R and outward pointing as a vector. In this case, a one-form that points in the negative k F R direction yields a vector with the correct properties. We takek We then find for the dot products yielding = +1 and from (3.5)ã Putting this all together we obtain for the joint contribution at the future meeting point A completely analogous calculation gives an identical form for the joint contribution at the past meeting point, with r m 1 → r m 2 . Null boundaries Since the normals to the lightlike boundaries of the WDW patch are affinely parameterized, the boundary term on these surfaces makes no contribution. Nonetheless, we consider here the contribution from the counterterm for null boundaries that ensures the total action does not depend on the parameterization used for the null generators. Considering the future segment on the right of the Penrose diagram, we have We therefore have for the counterterm We can use integration by parts to express this object in terms of two contributions at the joints and an integral independent of α and ct : where here we have used the shorthand Θ = dΘ/dr. It can easily be confirmed that the counterterm evaluates to the same result for the future left segment. Additionally, the result for the past segments is equivalent with the substitution r m 1 → r m 2 . Framework for Complexity equals Volume calculations We will compare our results obtained for the action with the results within the "Complexity equals Volume" framework [5,6]. 4 According to the CV proposal, the complexity of a holographic state at the boundary time slice Υ is related to the volume of an extremal codimension-one slice B by The fact that the CV conjecture requires an (arbitrary) length scale R was originally used as an argument in favour of CA over CV. However, there is as yet no universally accepted prescription for computing the bulk complexity, and useful information can be gleaned by comparing different proposals. 5 To find the volume of the extremal codimension-one slice B, write the metric (2.1) in ingoing coordinates x µ = v, r, Ω , and parameterize the surface with coordinates y a = λ, Ω , where Ω are the angular coordinates. 6 Below, we choose the symmetric case of boundary times t R = t L ≡ τ /2. The induced metric on the codimension-one slice is then where g µν is the MP-AdS metric (2.1). The volume functional of this slice can be shown to be where v = v(λ) and r = r(λ). We assume 7 a parametrization where This Lagrangian is independent of v and hence there is a conserved quantity (analogous to energy) given by The volume of this extremal surface is obtained by integrating (3.48) on-shell: where we included a factor of 2 to include the left half of the surface. Here we wish to take r max to be infinity, but this will yield a divergent result in general. A finite result can be obtained by studying the time derivative of the volume (as relevant for the growth rate), or by performing a carefully matched subtraction of the AdS vacuum (as relevant for the complexity of formation). 
Here r_min is the turning point of the surface, determined by the condition ṙ = 0, which yields eq. (3.53). A simple calculation shows that r_min will lie on or inside the (outer) horizon, so that f(r_min)^2 < 0; using (3.50), together with v̇(λ_min) > 0, this implies E < 0. We recall that f(r)^2 < 0 in the region between the inner and event horizons.

Complexity of Formation

In this section, we study the complexity of formation for rotating black holes in both the CA and CV conjectures. In both cases, we verify convergence to the static limit and study the dependence of the complexity of formation on thermodynamic parameters near the extremal limit.

Complexity Equals Action

Within the CA conjecture, the complexity of formation is given by the difference between the action of the WDW patch and the action of the global AdS vacuum, both evaluated at the τ = 0 timeslice. Let us now put together the various pieces accumulated so far. First, consider the sum of the joint and counterterm contributions. As we know from the general arguments in [19], this result must be independent of the parameterization of the null generators, i.e. independent of α. We find that this is indeed the case: Θ is proportional to α, and thus all α dependence precisely cancels out. This is, of course, necessary, but it nonetheless provides a consistency check of our computations. It can further be shown, assuming the scale ℓ_{ct} is the same for both the AdS vacuum and the black hole solutions, that the first term evaluated at r_max cancels precisely with the corresponding one occurring in the global AdS vacuum. A completely analogous computation holds for the past sheets of the WDW patch, yielding the same result as above with the substitution r_{m_1} → r_{m_2}. However, in this case we can further simplify matters by noting that, since τ = 0 for the complexity of formation, r_{m_1} = r_{m_2} ≡ r_{m_0}. Noting the corresponding quantities for the AdS vacuum and combining the above with the relevant background subtraction, we obtain the joint and counterterm contributions, where we have extended the range of integration to infinity in the last term, since the subtraction has made the integral convergent. Note also that r_{m_0} is obtained by solving the equation r_*(r_{m_0}) = r_*(∞), i.e. eq. (3.10) at τ = 0.

For the case of the complexity of formation, additional simplifications occur for the bulk integral (which includes the necessary factor of two). Since r_* must be computed numerically, followed by a numerical evaluation of this integral, it is actually more convenient to use integration by parts to eliminate the appearance of r_*(r) inside this expression, leaving only a single integral to evaluate numerically. Doing so, we find the form (4.6). Note that the evaluation of the first term at r_{m_0} vanishes by virtue of the equation defining r_{m_0}. It can further be shown, using the asymptotic form of the tortoise coordinate, that the evaluation at r_max cancels with the analogous one coming from the global AdS vacuum. Taking this into account and performing the background subtraction, we obtain the result for the bulk contribution, in which the range of the first integral has been extended to r = ∞ since the subtraction has made it convergent.

The most complicated aspect of determining the complexity of formation within the action framework is computing the value of r_{m_0} numerically. We show in figure 5 the resulting curves for several values of r_+/ℓ. The difficulty arises in determining accurate results in the limit where r_−/r_+ becomes small.
As discussed previously, in this limit the value of r_{m_0} can be worked out perturbatively; in five dimensions the result shows that, as r_−/r_+ → 0, the difference between r_{m_0} and r_− tends to zero like exp(−1/r_−^2), and so increasing numerical precision is required in this limit. For sufficiently small r_−/r_+ the problem effectively becomes numerically intractable, and we are forced to resort to perturbative techniques.

In figure 6, we show the complexity of formation ∆C_A for five-dimensional rotating black holes with different values of r_+/ℓ. There are a few noteworthy things here. The basic structure of the curves is qualitatively similar for different values of r_+/ℓ. A somewhat strange feature is that there is a range of parameter values over which the complexity of formation actually becomes negative. While strange, it must be kept in mind that the complexity of formation is a relative quantity: it is computed by subtracting one (infinite) result from another. Moreover, in some cases, namely those involving gravitational solitons, a negative complexity of formation has been previously observed [29,31], and so this result in and of itself is not new. While there is an intermediate regime in which the complexity of formation is negative, it is always positive at the two extremes of the plot: in the extremal and nonrotating limits. That the former is true is obvious from the plot, but the static limit is subtle and requires additional scrutiny.

The static limit is examined in detail in appendix C. Here, for conciseness, we present a discussion relevant to the five-dimensional case. In the static limit r_−/r_+ → 0, all contributions to the corner/joint term vanish except for the term involving the logarithm. Using the perturbative expansion for r_{m_0} shown above, we can work out that this term yields a finite limit; we reiterate that here we are considering the case of five dimensions (N = 1). This result is exactly half the contribution arising from the GHY terms on the future/past singularity in the Schwarzschild-AdS geometry. A similar analysis can be carried out for the bulk term, which in the static limit (see appendix for details) yields

    \lim_{r_−/r_+ → 0} ∆I_{bulk} = ∆I^{Schw}_{bulk}.    (4.11)

That is, the bulk contribution of the rotating black hole limits to exactly the bulk contribution for the non-rotating black hole. As a result, there is an order of limits problem for the action computation: taking the static limit of the action result gives an answer that does not agree with the direct computation done for the Schwarzschild-AdS black hole.

It is insightful here to consider how this limit compares with the analogous neutral limit for charged black holes. Again, we consider this in full detail and in all dimensions in appendix C. For the charged black hole, the joint term reduces to a fraction of the Schwarzschild-AdS GHY term in the neutral limit, while the bulk action for charged black holes reproduces the full Schwarzschild-AdS bulk action along with the remaining fraction of the GHY term. Thus, for charged black holes, there is no order of limits problem. However, the manner in which the various terms conspire to give the neutral limit is rather nontrivial. The main difference in the rotating case is that the limit of the bulk term does not include an additional fraction of the GHY term. This can be traced, mathematically, to the behaviour of the metric function h(r) in this limit.
It should be noted that while, when a = 0, the metric is simply the usual static AdS black hole, the limit considered here is different, and this is the mathematical reason behind the order of limits issue. Effectively, here we are simultaneously zooming in on the inner horizon while taking the limit r_− → 0. In this limit the metric function h is not simply r (as it would be for the static black hole); instead it limits to a constant value. As discussed in Appendix C, this behaviour is the source of the order of limits issue: in general dimensions the static limit of ∆C_A differs from ∆C^{Schw}_{form}, where the complexity of formation of the static black hole ∆C^{Schw}_{form} is the sum of the bulk ∆I^{Schw}_{Bulk} and surface I^{Schw}_{GHY} contributions.

[Figure 7. The complexity of formation is shown as a function of r_−/r_+, normalized by a power of the thermodynamic volume (left) and the entropy (right). The curves, in order from bottom to top, correspond to r_+/ℓ = 10^3, 10^4, 10^5, 10^6, 10^7 and 10^8.]

There are (at least) two perspectives one could take on this issue. First, it could be viewed as simply a genuine feature of the CA proposal. The CA proposal is highly sensitive to the detailed causal structure of spacetime, and the order of limits issue found here is not the first of its kind. For example, the rate of growth of complexity for dilaton black holes was found to be highly sensitive to the details of the causal structure [90]. Moreover, in the usual framework the complexity growth rate for magnetic black holes is precisely zero [90,91], leading to an obvious order of limits problem (though it is possible to remedy this case through the addition of an electromagnetic counterterm). Furthermore, the growth rate of complexity for charged black holes in higher-curvature theories exhibits an order of limits problem in the neutral limit [27,48,92]. Thus there is precedent for subtle behaviour of the CA conjecture, and it would be interesting to better understand whether this is consistent with CFT expectations.

An alternative perspective is that this order of limits issue is a problem that must be resolved. One means to do so is to consider an alternative regularization scheme for the WDW patch, which we explain in detail in appendix D. The basic idea is to introduce spacelike regulator surfaces cutting off the future and past tips of the WDW patch at r = r_{m_0} + ∆r. This could be motivated from the perspective that the inner Cauchy horizon is expected to be unstable to generic perturbations [93-95], and therefore this cutoff would encode some level of agnosticism about what happens precisely at the inner horizon. This leads to a well-defined static limit of the complexity, but it must be noted that the limits do not commute. Moreover, for sufficiently small ∆r there is no appreciable effect of this term on the results when both r_− and r_+ are sufficiently large, but it becomes important in the limit r_−/r_+ → 0.

Let us now leave aside this issue of limits and consider in more detail some further interesting properties of the complexity of formation. Our focus here is primarily on the scaling behaviour of complexity in the limit of large (r_+/ℓ ≫ 1) black holes. For neutral and charged static black holes this behaviour is governed by the entropy [18,39], leading to the idea that the complexity of formation is effectively controlled by the number of degrees of freedom possessed by the system. We can schematically write this relationship for charged black holes as ∆C ∼ S f(µ/T), up to terms logarithmic in the temperature, where µ is the chemical potential.
The function f(µ/T) has a smooth, non-vanishing limit as µ → 0. The relationship above is schematic, and so neglects possible constant terms in the coefficients and so on. However, it conveys the important features: the complexity of formation exhibits a logarithmic singularity near extremality, and the general form is controlled by the entropy.

We consider the analogous problem in detail for rotating black holes in appendix E. Again, there is a logarithmic singularity in the extremal limit that is controlled by the entropy. However, the general behaviour is markedly different. The schematic form of the complexity of formation for large rotating black holes is given in (4.15), where Ω_H is the angular velocity of the horizon, V is the thermodynamic volume, and again f is some function of the ratio r_−/r_+ (which can, of course, be expressed as a function of Ω_H/T). Examining the curves in figure 7, we see that the second term in (4.15) dominates over a larger range of temperatures. For smaller values of r_+/ℓ, the logarithmic divergence becomes manifest in the limit of extremality. The implication of the above relationship is that at a given fixed temperature, and for sufficiently large black holes, the complexity of formation is always controlled by the thermodynamic volume rather than the entropy. The validity of this conclusion can be seen clearly in the plots shown in figure 7 for five dimensions; see also figure 20. (It is also worth noting that there appears to be no simple modification of the action proposal itself that would account for the order of limits problem. For example, if one considers only the bulk action as the relevant term, then there would be no order of limits issue for rotating black holes, but it would introduce one for charged black holes; see appendix C.) We emphasize that this observation is possible due to the independence of the thermodynamic volume and the entropy for rotating black holes. In the case of static (charged or neutral) black holes, these quantities are not independent, and one is free to write the final result in terms of either S or V, as the two quantities are related by S ∝ V^{(D−2)/(D−1)}. We will return to discuss the implications of this result in the discussion.

Comparison with Complexity=Volume Conjecture

The complexity of formation in the CV proposal is straightforward to calculate. The volume of the maximal slice in vacuum AdS_D is readily computed. In the black hole geometry, we are interested in the maximal slice at τ = 0. In this case we have r_min = r_+, which gives E = 0 from (3.50). The complexity of formation is then given by the subtracted volume integral (4.18). The integral can be evaluated numerically in a straightforward manner, and we show some representative examples in figure 8. The qualitative structure of the curves is independent of the value of r_+/ℓ, though, since ∆C_V is not a homogeneous function of r_+/ℓ, there is no simple factor that collapses the different curves to a single line for all values of r_+/ℓ. When r_−/r_+ → 0, the complexity of formation tends to a constant value, whereas it diverges in the extremal limit. This divergence is consistent with results obtained previously for charged black holes [18]. In the CA framework we encountered an order of limits issue when taking r_−/r_+ → 0; here there is no such issue, because the CV proposal is less sensitive to the detailed properties of the causal structure than the CA proposal.
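Equation (4.18) is simple enough to evaluate directly. The sketch below does so at τ = 0, where the maximal slice is the t = 0 surface, so that the slice volume element reduces to g(r) h(r) r^{2N} dr times the angular volume; we ignore here the Fefferman-Graham cutoff-matching subtleties of Appendix A, which can shift the answer at subleading order in the cutoff.

```python
import numpy as np
from math import pi, factorial
from scipy.integrate import quad
# reuses mass_parameter from the sketches above

def delta_CV(r_plus, a, ell, N, G=1.0, R=None):
    """Complexity of formation in the CV proposal at tau = 0, cf. eq. (4.18)."""
    R = R if R is not None else ell
    Om = 2*pi**(N+1)/factorial(N)
    m  = mass_parameter(r_plus, a, ell, N)
    Xi = 1 - a**2/ell**2
    ginv2 = lambda r: 1 + r**2/ell**2 - 2*m*Xi/r**(2*N) + 2*m*a**2/r**(2*N+2)
    h     = lambda r: r*np.sqrt(1 + 2*m*a**2/r**(2*N+2))
    bh  = lambda r: h(r)*r**(2*N)/np.sqrt(ginv2(r))      # t = 0 slice, black hole
    vac = lambda r: r**(2*N+1)/np.sqrt(1 + r**2/ell**2)  # t = 0 slice, global AdS
    # subtract inside a common cutoff; the difference converges as r -> infinity
    diff,  _ = quad(lambda r: bh(r) - vac(r), r_plus, np.inf, limit=200)
    inner, _ = quad(vac, 0.0, r_plus)
    return Om*(diff - inner)/(G*R)
```

As r_−/r_+ → 1 the factor 1/\sqrt{g^{-2}} develops a near-double zero at r_+ and the integral diverges, matching the extremal divergence seen in figure 8.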
In the static limit, the complexity of formation (4.18) reduces directly to that of the static black hole ∆C^{Schw}_V, with f_{Schw}(r) the metric function of the Schwarzschild-AdS spacetime appearing in the corresponding integral. It is interesting to further compare the general behaviour of the complexity of formation of large black holes within the CV proposal to the CA proposal. The details of this analysis are presented in appendix E, but the conclusion is the same. The complexity of formation exhibits a logarithmic singularity near extremality that is controlled by the entropy, while the non-logarithmic terms are controlled by the thermodynamic volume. Thus we once again arrive at the result that, for sufficiently large black holes, the complexity of formation is controlled by the thermodynamic volume. The validity of this can be seen directly in figure 9; see also figure 19.

Growth Rate of Holographic Complexity

In this section, we use the CA and CV proposals to study the full time evolution of the holographic complexity of the boundary state (1.11) dual to the MP-AdS black hole geometry. Our interest here is in understanding the growth rate of complexity, and how this quantity evolves in time.

Complexity Equals Action

As before, we begin our considerations with the action conjecture. The various terms appearing in the computation were assembled in section 3, and here we proceed to use them directly. Taking the time derivative of all action terms, we see that only the bulk and joint terms contribute, giving (5.1). The first line of (5.1) is the time derivative of the bulk action, while the second and third lines correspond to the time derivatives of the combined joint and counterterm contributions at the future and past tips of the WDW patch. We recall that since Θ ∝ α, this result is actually independent of the parameterization of the null vectors normal to the WDW patch, as it must be. From (3.10) and (3.11), the derivatives dr_{m_1}/dτ and dr_{m_2}/dτ follow directly, and so once the values of r_{m_1} and r_{m_2} are known, it is possible to evaluate the growth rate of complexity directly. Just as in the case of the complexity of formation, the difficulty here lies in determining the values of r_{m_i}, which is a numerically subtle problem.

We show some representative results in figure 10. While we show the results here for a particular value of r_+/ℓ, this is unimportant for understanding the general behaviour, which depends much more strongly on the value of r_−/r_+. We see from the top-left figure that, when r_−/r_+ is small, r_{m_1} and r_{m_2} exhibit a phase where they are effectively constant. The implication of this is a period in the growth rate where the complexity effectively stalls and does not exhibit significant dependence on time. As r_−/r_+ increases, the r_{m_i} exhibit stronger time dependence, but generally become "squished" into a smaller interval (since they must lie between r_− and r_+). In all cases, r_{m_1} and r_{m_2} asymptote to the inner and outer horizons, respectively.

Once the values of r_{m_i} have been determined, it is straightforward to determine the growth rate as a function of time. We show representative results in figure 11 for the same cases for which we displayed the r_{m_i} in figure 10. The results are qualitatively similar to what has been previously observed for charged black holes (c.f. figure 10 of [18]). There are some general features that can be remarked on. First, we note that in the limit of small rotation (equivalently, small r_−/r_+) the growth rate develops a minimum. As the rotation is decreased, the minimum becomes sharper and deeper.
Moreover, in the same case, the growth rate exhibits a phase where it is close to zero before this oscillatory behaviour manifests. These observations are consistent with the growth rate limiting to that of the static black holes [18]. As the rotation is increased, the late-time limit of the growth rate decreases and the transient oscillations become less significant. The ultimate limiting case is the extremal limit, where the late-time growth actually goes identically to zero (this will be justified below). While we have shown the growth rate for the particular choice ℓ_{ct} = 1, the precise value of this parameter significantly affects only the early-time behaviour; we show an example of this in figure 12.

Perturbative expansion at late times

Having presented numerical computations for the full time-dependent growth rate of complexity, let us now turn to discuss some general features at late times.

[Figure 11. Plots of the growth rate of complexity as a function of time. In each plot we have set r_+/ℓ = 10, while the different plots correspond to r_−/r_+ = 1/20, 1/10, 3/4 (left to right). We have set ℓ_{ct} = 1. The dotted black line shows the growth rate of complexity in the limit τ → ∞.]

At large τ, using (2.31), we can solve (3.10) and (3.11) perturbatively for r_{m_1} and r_{m_2}, where the dots in the resulting expressions indicate subleading terms in the large-τ expansion and H(r) is the integrand of R(r) defined in (2.33). In the limit τ → ∞, it can be shown that the ratio g_{tt,r}(r)/g_{tt}(r) evaluated near the horizons reduces to simple expressions in terms of the surface gravities (5.3), where we have introduced the notation G(r) ≡ g(r)^{−2}, and T_± is the temperature of the black hole at the horizon r_±, given in (2.16). Expanding (5.1) in this limit using (5.3) gives the late-time growth rate (5.7), where the dots indicate subleading terms in τ. It is easiest to see the equality of the second and third lines of this expression by writing the parameters (m, a) appearing in the bracket of the third line in terms of (r_+, r_−), which yields the second line. Furthermore, the bulk contribution agrees with the difference in thermodynamic free energies between the outer and inner horizons, while the joint contribution dI_{jnt}/dτ supplies the T_± S_± pieces, with S_± given by (2.16). Therefore, the late-time rate of growth of complexity is simply the difference in internal energy between the outer and inner horizons,

    π \frac{dC_A}{dτ}\bigg|_{τ→∞} = U_+ − U_− = (F_+ + T_+ S_+) − (F_− + T_− S_−),    (5.11)

where F_± and U_± are the free and internal energies, respectively, of the outer and inner horizons. The second term in (5.7) was checked for various dimensions and found to be always positive and less than 1. This strongly suggests that the late-time limit of the action growth rate (5.8) is always approached from above. Using the fact that the Smarr relation (2.20) holds for both outer and inner horizons, we can rewrite (5.11) as

    π \frac{dC_A}{dτ}\bigg|_{τ→∞} = T_+ S_+ − T_− S_− − \frac{2}{D−2}\, P ∆V,    (5.12)

where ∆V = V_+ − V_− is the difference between the thermodynamic volumes of the outer and inner horizons. Interestingly, in the limit of large black holes, the T_± S_± factors and the P ∆V term become proportional to each other, and one can show that

    \lim_{r_+/ℓ → ∞} π \frac{dC_A}{dτ}\bigg|_{τ→∞} = \frac{2N + 2}{2N + 1}\, P ∆V.    (5.13)

As will be shown below, a similar result also holds for the rate of growth of complexity in the CV conjecture.

Comparison with Complexity=Volume Conjecture

We will compare the rate of growth of complexity according to the CV conjecture, dC_V/dτ, with the results found according to the CA conjecture. The volume of the extremal codimension-one slice was found in (3.48).
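Numerically, (5.11) requires only thermodynamic data. Using F_± = M − T_± S_± − Ω_± J (the grand-canonical free energy evaluated at each horizon, so that U_± = M − Ω_± J), a quick check of the late-time rate against the Lloyd-like value 2M looks as follows; as above, the M and J expressions are the assumed Gibbons-Perry-Pope ones.

```python
import numpy as np
# reuses horizon_radii, mass_parameter, thermo from the sketches above

def late_time_dCA(r_plus, a, ell, N, G=1.0):
    """Late-time CA growth rate dC_A/dtau -> (U_+ - U_-)/pi, eq. (5.11)."""
    m = mass_parameter(r_plus, a, ell, N)
    r_minus, _ = horizon_radii(m, a, ell, N)
    th = thermo(r_plus, a, ell, N)
    M, J = th['M'], th['J']
    h2 = lambda r: r**2*(1 + 2*m*a**2/r**(2*N+2))
    Omega = lambda r: 2*m*a/(r**(2*N)*h2(r))       # Omega(r_pm) at each horizon
    U_plus, U_minus = M - Omega(r_plus)*J, M - Omega(r_minus)*J
    return (U_plus - U_minus)/np.pi

# Lloyd-like comparison: pi * late_time_dCA(...) versus 2M (hbar = 1). In the
# extremal limit Omega(r_+) -> Omega(r_-), so U_+ - U_- and the late-time rate
# vanish, consistent with the behaviour described above.
```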
To relate to boundary time, note first that

v_max = t_R + r*(∞), v_min = t_min + r*(r_min), (5.14)

where t_min = 0 by left-right symmetry (we have left-right symmetry because the functional V is invariant under t → −t and a → −a), and r_min is defined by (3.53). Therefore, using (3.50), we obtain an expression whose integrand is convergent at r = r_±. Finally, it is easy to see (5.16). Choosing the symmetric case t_R = t_L ≡ τ/2, it is straightforward to show using (3.53), where G(r) ≡ g(r)^{−2}, the form of the complexity rate of change. To find its total dependence on time, one first notes that equation (5.15) can be rewritten so that, as τ → ∞, |E| increases until the two roots meet at the extremum of W(r_min). Therefore, the late-time limit of dC_V/dτ follows.

[Figure 13: The late-time rate of complexity growth dC_V/dτ is shown as a function of r_+/ℓ for spacetime dimensions D = 3, 5, 7 (solid blue, dashed red, and dot-dashed green, respectively). It is shown that the limit (5.23) is always approached from below.]

The fact that the late-time dC_V/dτ can be expressed in this way in terms of the thermodynamic quantities of the black hole only in the large r_+/ℓ limit shows one of its shortcomings compared to dC_A/dτ, which can be expressed at late times in terms of thermodynamic quantities of the black hole for all r_+/ℓ. This reduces to the result [18] found for Schwarzschild-AdS black holes, which was 8πM_sch/(D − 2),[11] since it is straightforward to show that the two expressions agree. For illustration, we will prove (5.23) in spacetime dimensions D = 3 and D = 5 below, where the generalization to other spacetime dimensions follows the same methods.

[11] M_sch is the thermodynamic mass of the Schwarzschild-AdS black hole, given by taking the a → 0 limit of M in (2.10). Note that for the BTZ black hole with D = 3, we have M_sch = r_+²/(8G_N ℓ²), which is different from the one naively obtained from the D → 3 limit of the blackening factor of the Schwarzschild-AdS black hole. This is because for Schwarzschild-AdS black holes with D > 3 we implicitly assume that the r_+ → 0 limit corresponds to the Neveu-Schwarz vacuum of AdS_D with blackening factor f(r) = 1 + r²/ℓ² of the metric in Schwarzschild coordinates, whereas the r_+ → 0 limit of the BTZ black hole corresponds to the Ramond vacuum of AdS_3 with blackening factor f(r) = r²/ℓ² of the metric in Schwarzschild coordinates. For more details on this, see [96].

Late-time complexity growth in D = 3

In this case, we can explicitly solve (5.20) and obtain the roots in closed form. Using this, it is straightforward to show the late-time rate, from which we get (5.23) by setting R = ℓ. In fact, as shown in figure 13, the late-time rate of complexity growth dC_V/dτ is independent of r_+/ℓ.

Late-time complexity growth in D = 5

In this case, the expression for r_min is considerably more complicated. However, we can use it to expand the two sides of (5.23) as a series in large r_+. Expanding the ratio of these two expressions in the large r_+ limit yields (5.23) as r_+/ℓ → ∞. Interestingly, it also shows that the limit is always approached from below, which agrees with the behaviour of dC_V/dτ found for Schwarzschild-AdS black holes [18].

Discussion

We have considered several aspects of the CA and CV proposals for holographic complexity in the context of rotating black holes. While the behaviour of these proposals for numerous static and/or spherically symmetric spacetimes has been thoroughly studied, their extension to rotating black holes is a somewhat nontrivial task.
In large part, the difficulty arises due to the comparative lack of symmetry in rotating solutions, and therefore a more complicated causal structure. Here we have partly side-stepped this issue by considering equal-spinning odd-dimensional rotating black holes, which enjoy enough additional symmetry to make the computations tractable, while still revealing a number of non-trivial features. Here our focus has been devoted to understanding the complexity of formation and also the time-dependent growth rate of complexity.

First, we introduced the Myers-Perry-AdS spacetimes with equal angular momenta in odd dimensions and discussed the enhancement of symmetry and the associated causal structure and thermodynamic properties. In studying holographic complexity, especially within the action proposal, it is necessary to have a thorough understanding of the causal structure of the spacetime of interest. We have done this here by analysing the structure of light cones in this geometry. The enhanced symmetry of the equal-spinning case allows us to choose SU(N + 1) × U(1) invariant hypersurfaces, effectively making the causal structure two-dimensional, as is the case for static, spherically symmetric black holes. This represents a significant technical simplification over the most general case.[12] Despite this simplification, the solutions maintain the classical features associated with rotating black holes (such as ergoregions, for example), which allow us to rigorously study holographic complexity for rotating black holes for the first time.

Second, we studied the complexity of formation for rotating black holes in both the CA and CV conjectures. As shown in detail in appendix C, there is an order of limits problem when taking the static limit of ∆C_A. We note that there have been previous investigations where such order of limits problems have been observed for the growth rate in the CA conjecture [27, 48, 90–92]; however, we believe this is the first observation of this for the complexity of formation. This issue can be resolved by an alternative regularization scheme where the future and past tips of the WDW patch are ignored near the singularity and at the static limit (see appendix D). It would be interesting to explore more deeply the implications of this alternative regularization, in particular the mechanism and/or interpretation of the regulator itself.

Perhaps the most intriguing result of our analysis concerns the scaling of the complexity of formation for large black holes. In both the CV and CA proposals we found that this behaviour is governed by a function f that is dimensionless and independent of the size of the black hole. This result stands in contrast to what was previously understood about the complexity of formation for static black holes. Previous work [18, 39] that analysed the complexity of formation for static black holes found that in both the charged and uncharged cases the complexity of formation depends on the black hole size exclusively through the entropy. Here, due to the more complicated nature of the metrics involved, we have been able to deduce that there are in fact two scaling regimes. When viewed as a function of temperature at fixed black hole size, there exists a logarithmic singularity in the complexity of formation that is governed by the entropy.

[12] For the most general rotating black holes the light cones can be defined using PDEs as discussed in [50, 78, 79], though they must be solved numerically.
This term will dominate at sufficiently low temperatures for a given fixed black hole size. An alternative case is the behaviour of the complexity of formation at fixed temperature, viewed as a function of the black hole size. In this case, the complexity of formation will be controlled by the thermodynamic volume when the size becomes sufficiently large. In this regime, the relationship above implies that, at fixed temperature, the complexity of formation of sufficiently large black holes is controlled by the thermodynamic volume, where V_AdS = ℓ^{D−1}, Σ_g is a factor that depends on the specific metric, dimension, etc. (but not on the size of the black hole), and C_T is the central charge of the CFT as computed from Newton's constant G_N.

The interpretation of the thermodynamic volume in the holographic context remains to be completely understood, but some concrete statements can be made. From the perspective of the dual theory, variations of ℓ correspond to variations in the central charge C_T ∝ ℓ^{D−2}/G_N along with variations in the volume of the space where the field theory lives, V_CFT ∝ ℓ^{D−2}; the thermodynamic volume is the chemical potential associated to variations in these quantities [66, 68–70, 97]. Despite this identification, it is not obvious (at least to us) why such a quantity would naturally be connected to the idea of complexity of formation in the field theory. However, heuristic motivation for this connection is more transparent from the gravitational picture. It should be recalled that the original motivation for holographic complexity was to provide a holographic interpretation for the time-dependent growth of the Einstein-Rosen bridge after thermalization had occurred. In this sense, the thermodynamic volume is a contender because, at least in simple scenarios, it can be related to the spacetime volume contained within the black hole horizon [58, 61].

Another motivation for our proposal is simplicity. Of course, it is possible to use the Smarr relation to replace the thermodynamic volume with a combination of other thermodynamic potentials. However, none of the resulting expressions appear to have a more direct holographic interpretation. In the present case of rotating black holes, use of the Smarr formula would allow the volume to be replaced by a combination of other potentials. While the holographic interpretation of each of the terms in the numerator on the right has been clear and well established for a long time, the factor of P appearing in the denominator, which is required on dimensional grounds, spoils any simpler interpretation that could be obtained. Moreover, the expression in terms of V is far more economical from the gravitational perspective, involving only a single term to capture the correct scaling and dimensionful factors.

The expression in terms of the thermodynamic volume also allows a more direct comparison with what is understood about the complexity of formation in the static case, where the result can be written in terms of the entropy. In those cases, our result reduces to the previously known expressions. This is because for static (charged) black holes V^{(D−2)/(D−1)} ∝ S. The thermodynamic volume has been conjectured [59] to obey a 'reverse' isoperimetric inequality, which is saturated by (charged) Schwarzschild-AdS spacetimes. Assuming the relationship (6.2) is general, the reverse isoperimetric inequality becomes a statement in which β_D is a positive constant that can be easily worked out from the above.
This means that the complexity of formation for large black holes is bounded from below by the entropy (equivalently, the number of degrees of freedom). The above appears to be suggestive of a rather robust connection between the complexity of formation and extended thermodynamics. The expression as we have presented it covers static black holes, rotating black holes, as well as gravitational solitons [31]. While evidence from the field theory side remains lacking, the fact that the behaviour is observed in both the CV and CA dualities is nontrivial. It is our view that the relationship (6.2) merits further exploration, both from the field theory perspective and from the gravitational perspective, where it could be further tested through analysis of other black hole geometries that have S and V independent.

Finally, we examined the time-dependent rate of complexity growth using both the CA and CV conjectures. Previous studies have shown that the late-time limit of complexity growth in black holes with two horizons is bounded by the difference in internal energy between the outer and inner horizons.[13] This surprising result seems to be of near universal scope and it suggests a deep connection between complexity and black hole thermodynamics [37, 38, 99]. In the CV conjecture, we have shown that the complexity is a positive function of time whose late-time rate of growth saturates the bound (6.6) in the r_+/ℓ → ∞ limit, up to a constant that depends on the spacetime dimension. We have also explicitly shown that the bound is always approached from below as r_+/ℓ is varied. In the CA conjecture, the bound (6.6) is always saturated, and we have shown that it is always approached from above as time is varied. Both of these results agree with the behaviour found for the charged black hole [18]. Furthermore, we found that the arbitrary length scale ℓ_ct does not affect the late-time rate of complexity growth but does affect its early behaviour, as shown in figure 12.

Going forward, there are a number of directions worth exploring. Perhaps the most interesting one concerns the result (6.2). While we have not offered a definitive proof of this relationship, it reduces to known results for static black holes, holds also for large gravitational solitons [31], and we have provided robust evidence that it is obeyed in general for large rotating black holes. It would be interesting to test the full range of validity of this relationship, which could be done most effectively by studying other black hole solutions for which the entropy and thermodynamic volume are independent and scale differently. Such explorations could provide useful insight from which a general proof of the relationship could be deduced, or a counter-example from which its limitations could be assessed. It would also be interesting to explore this feature in light of the recently proposed first law of complexity [37]. While the holographic interpretation of the thermodynamic volume has been understood for some time, its utility in this realm has remained comparatively undeveloped (though see [66–73] for progress in this direction). Our results provide one concrete setting where the thermodynamic volume appears to play a natural role in holography, and it is our view that this result provides further impetus to investigate in greater detail the role of thermodynamic volume in the holographic context and its relation to complexity.
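As a small concrete check of the saturation statement invoked above, the sketch below evaluates the standard isoperimetric ratio for Schwarzschild-AdS, using the textbook expressions A = Ω_{D−2} r_+^{D−2} and V = Ω_{D−2} r_+^{D−1}/(D − 1) for the horizon area and thermodynamic volume. These formulas are standard results quoted here as assumptions; they are not taken from the equations elided above.

```python
import math

def omega(d):
    # Volume of the unit d-sphere: Omega_d = 2 pi^{(d+1)/2} / Gamma((d+1)/2).
    return 2.0 * math.pi ** ((d + 1) / 2.0) / math.gamma((d + 1) / 2.0)

def isoperimetric_ratio(r_plus, D):
    # R = [(D-1) V / Omega_{D-2}]^{1/(D-1)} * [Omega_{D-2} / A]^{1/(D-2)};
    # the reverse isoperimetric inequality is the statement R >= 1.
    A = omega(D - 2) * r_plus ** (D - 2)                # horizon area
    V = omega(D - 2) * r_plus ** (D - 1) / (D - 1)      # thermodynamic volume
    return (((D - 1) * V / omega(D - 2)) ** (1.0 / (D - 1))
            * (omega(D - 2) / A) ** (1.0 / (D - 2)))

for D in (4, 5, 7, 11):
    # Prints 1.0 in each case: Schwarzschild-AdS saturates the inequality.
    print(D, isoperimetric_ratio(2.7, D))
```

For rotating black holes the ratio exceeds unity, which, combined with the volumetric scaling discussed above, is what turns the inequality into a lower bound on the complexity of formation by the entropy.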
It would be worthwhile to extend the analysis here to the most general class of rotating black holes, though this may be a formidable task. Exploring the implications of the known instabilities (e.g. superradiance) of rotating black holes for complexity would also be of interest. Although our complexity calculations were done in the case of odd-dimensional black holes with equal angular momenta in each independent plane of rotation, we expect that the general family of Myers-Perry-AdS black holes should possess similar qualitative behaviour.

Acknowledgments

This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. The work of RAH is supported by the Natural Sciences and Engineering Research Council of Canada through the Banting postdoctoral fellowship program. HK acknowledges the support of NSERC grant RGPIN-2018-04887.

A Fefferman-Graham form of the metric

In computing the complexity of formation, it is important to justify equating the cutoffs at large distance r_max in both AdS and the black hole spacetimes. To see that this is the case, here we cast the metric into the Fefferman-Graham form, which will then allow us to directly compare the differences in the fall-off of the metric components. We define a new coordinate ρ and directly solve the defining relation to obtain r as a function of ρ. In terms of the coordinate ρ, the metric takes a form in which the metric γ_µν approaches the metric on the boundary as ρ → ∞, along with the relevant corrections to this from the bulk. The specific form of this metric can be easily worked out, but its exact form is not necessary here. With this expansion at hand it is now possible to directly compare the behaviour of r for the global AdS metric with that for the black hole metric. The result, placing a UV cutoff at ρ = ℓ²/δ, shows that for all positive N the difference in the cutoffs tends to zero in the limit where δ → 0. This justifies working directly with a cutoff r_max in both the AdS and black hole geometries.

B Vanishing contribution of the GHY term

We will show that the GHY term (3.27) in the action does not contribute to the complexity of formation ∆C_A and is canceled by the contribution from vacuum AdS_D. First, note that the GHY term for vacuum AdS_D is given by replacing g(r)^{−2} → f_0(r) in (3.27), where f_0(r) is the blackening factor of vacuum AdS_D. At r → ∞, the difference I_GHY − I_GHY^{AdS} depends only on the tortoise coordinates. Using (A.4), it is straightforward to bound this difference; furthermore, the factor multiplying this term is of order O(1/δ^{2N+2}). Therefore, the combination vanishes in the limit δ → 0.

C Complexity of formation in the static limit

Here we consider, in arbitrary dimensions, the behaviour of the complexity of formation in the limit where r_−/r_+ → 0. We compare the result with the analogous limit for charged black holes, and compare both with the results for the Schwarzschild-AdS black hole.

C.1 Complexity of formation for Schwarzschild-AdS

The Schwarzschild-AdS metric in D spacetime dimensions takes the standard form. (In the remainder of this section we will drop the "Schw" subscript, but will re-introduce it in later sections when confusion could arise.) Here we will consider the complexity of formation for this geometry focusing on the k = 0, +1 cases, essentially reviewing the discussion of [39] but with a slightly different emphasis to allow straightforward comparison with our results for the rotating black holes.[14]
The calculation of the action on the WDW patch consists of a bulk term and a GHY term at the past/future singularities. Additional contributions vanish when the result is regularized by subtracting the contribution of two copies of global AdS. The calculation is carried out by focusing on a single quadrant of the WDW patch, then multiplying by a factor of four to obtain the full answer. Let us consider each of these contributions in turn.

Consider first the GHY term on the future singularity. It is straightforward to work out the form taken by the extrinsic curvature in this case. The spacetime has a four-fold reflection symmetry along the lines t = 0, and so the computation can be performed by focusing on one quadrant of the diagram and then multiplying by four. Focusing on the top-right quadrant of the Penrose diagram, the integration for t is carried out between t = 0 and t = r*_{Schw,∞} − r*_{Schw}(ε), where the latter corresponds to the future right boundary of the WDW patch. The idea is to send ε to zero at the end of the computation, yielding the GHY term, where in the second equality we replaced the mass in terms of r_+. This term must be multiplied by a factor of 4 to account for the GHY contributions in each quadrant. Generally we will set r*_{Schw,∞} = 0 by a suitable choice of integration constant.

Next consider the bulk term in the upper right quadrant. Note that, just as in the main text, we have cut the integration off at r = r_max since the integral diverges otherwise. We will send r_max → ∞ after subtracting the contributions of the AdS vacuum, which will render the integral convergent. It is generally hard to evaluate the tortoise coordinate, and so a simpler form for the bulk integral is obtained using integration by parts. This can be further simplified by isolating and separately dealing with the pole contribution at the black hole horizon. Doing this, writing the metric function in a form that isolates the pole, we obtain (C.9). In the first term involving the logarithm, we have extended the integration to infinity since that term is convergent. The remaining integral is completely well-behaved at the horizon and can easily be evaluated numerically. (It can be evaluated analytically in certain dimensions, or in the case of planar k = 0 black holes [39].)

The complexity of formation is then written as four times the sum of the GHY and bulk terms studied above, along with a subtraction of two copies of global AdS. The final result is (C.10). Here we have explicitly set r*_{Schw,∞} = 0, which we will also do throughout the remainder of this appendix.

C.2 Charged black holes & the neutral limit

Let us consider here the complexity of formation for charged black holes, as it will be insightful to compare the results for charged solutions with the results for the rotating solutions studied in this work. The charged solutions are given by the metrics in (C.12). We will be concerned here with the planar and spherical solutions, i.e. the k = 0, 1 ones. Our objective is to understand how the complexity of formation for these solutions behaves in the limit q → 0. The causal structure of the charged black holes is qualitatively identical to the equal-spinning rotating holes considered in this work (see [18] for a full discussion). Since here we are only interested in the neutral limit, we will not consider the counterterm for null boundaries, as its contribution is subleading and vanishing in that limit.
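The pole-isolation manipulation used above for the tortoise coordinate and the bulk integral can be illustrated numerically. The sketch below (a toy implementation, not the paper's code) takes a D = 5 Schwarzschild-AdS metric function f(r) = 1 + r²/L² − m/r²; the simple pole of 1/f at r = r_+ is subtracted and integrated analytically as a logarithm, leaving a remainder that is regular across the horizon and safe for standard quadrature.

```python
import numpy as np
from scipy.integrate import quad

L = 1.0
r_plus = 2.0                                  # hypothetical horizon radius
m = r_plus**2 * (1.0 + r_plus**2 / L**2)      # fixes f(r_plus) = 0

def f(r):
    return 1.0 + r**2 / L**2 - m / r**2

def fp(r):                                    # f'(r)
    return 2.0 * r / L**2 + 2.0 * m / r**3

def regular_part(r):
    # 1/f(r) with the simple pole at r = r_plus subtracted; the value at
    # the horizon is the finite limit -f''(r_+)/(2 f'(r_+)^2).
    if abs(r - r_plus) < 1e-7:
        fpp = 2.0 / L**2 - 6.0 * m / r_plus**4
        return -fpp / (2.0 * fp(r_plus) ** 2)
    return 1.0 / f(r) - 1.0 / (fp(r_plus) * (r - r_plus))

def rstar(r, r_ref=50.0):
    # Tortoise coordinate relative to the convention r*(r_ref) = 0: the pole
    # is integrated analytically (log term), the remainder numerically.
    log_piece = (np.log(abs(r_ref - r_plus))
                 - np.log(abs(r - r_plus))) / fp(r_plus)
    rest, _ = quad(regular_part, r, r_ref, limit=200)
    return -(log_piece + rest)

for r in (0.5, 1.0, 1.99, 2.01, 5.0):
    # The quadrature stays smooth even though r* itself blows up at r_+.
    print(r, rstar(r))
```

The same split is what makes the bulk integrals above tractable: the logarithmic pieces are handled in closed form, while the regular remainder is evaluated by routine quadrature.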
Moreover, just as for the rotating solutions, a GHY term at large distances is unimportant as it cancels when the subtraction relative to global AdS is performed. Therefore the complexity of formation consists of two ingredients: the bulk action and two corner terms where the past/future sheets of the WDW patch meet.

Let us consider first the corner terms. The analysis is qualitatively similar to that performed already in the rotating case (and we refer the reader to [18] for a full discussion of these terms in the charged case), leading to a final result in which we have included a constant α that keeps track of the parameterization of the null geodesics normal to the sheets of the WDW patch. This accounts for the contribution of the future joint; the joint term at the past meeting point is identical, and so the above should be multiplied by two when including it in the complexity of formation. The parameter r_m0 appearing in the above is the value of the radial coordinate where the sheets of the WDW patch meet. It is obtained by solving a condition involving r*, the tortoise coordinate for the charged black hole. Here, introducing functions that allow the problematic pieces at the horizons to be isolated and treated separately, we find it has the form (C.16), where we have chosen an integration constant such that r*_∞ = 0 and have introduced the remaining regular integral (C.17).

Consider next the bulk contribution. After some massaging, the bulk action for charged black holes can be written in a convenient form,[15] where the subscript "0" denotes this quantity and the metric for the AdS vacuum. Note that since the AdS contribution has been subtracted here, making the integral convergent, we have taken the limit of integration to infinity. The bulk term can be massaged in a manner similar to the tortoise coordinate we considered earlier in the manuscript. We first write the metric function in factorized form, as before. Then, the integrand of the bulk term can be split up according to (C.21). This decomposition of the integral allows us to isolate the contributions at the horizons, which require special care. We can integrate these terms explicitly, and then arrive at an expression for the bulk in terms of a quantity we define there. This term is convergent and completely regular, requiring no special treatment at the horizons. It can be straightforwardly integrated numerically (or analytically in certain special cases). The complexity of formation then takes its final form.

We want to understand how this quantity behaves in the limit r_−/r_+ → 0. For this we must first understand the asymptotic behaviour of r_m0 in this limit. In general dimensions, writing r_m0 = y r_+(1 + ε), we find the leading behaviour, where y = r_−/r_+ and r*_{Schw}(0) is the value of the tortoise coordinate for the static solution at the origin (recall that we have set the integration constant so that r*_∞ = 0). Explicitly, this term takes the form (C.26). We then deduce the asymptotic form of the meeting location.[16] Using this asymptotic result, along with the fact that near the inner horizon we have |f_Q(r)| ≈ |f′_Q(r_−)|(r − r_−), it is rather straightforward to obtain the limit of the joint term. Comparing with the results for the neutral case (C.4), we see the correspondence directly. Note that this limit is independent of the parametrization of the null normals to the WDW patch, as indicated by the absence of α in the final expression.[17]

[16] The factor of 2 in front of the exponential differs from [18], where this factor is unity.
The difference comes from the fact that we defined f(r) = F(r)(r² − r_+²)(r² − r_−²), whereas those authors defined f(r) = F(r)(r − r_+)(r − r_−). The prefactor of the exponential is completely unimportant for the y → 0 limit, and the same results are obtained for r_m0 = y r_+(1 + A ε) for any choice of parameter A. It is the argument of the exponential that is important.

[17] As we mentioned earlier, inclusion of the counterterm for null boundaries changes the structure of the joint term, but this addition has no effect on the y → 0 limit. For this reason, to keep the complexity of the expressions at a minimum, we did not include that term in the analysis presented here.

The limit of the bulk term is more difficult. It is easy to deal with the logarithm terms in this limit: one of them simply vanishes, while the other yields a finite result. Determining the value of I_Q(0) is the tricky part. However, after careful examination of (C.23) it can be shown that this term can be expressed as in (C.31). Thus, we can conclude the limit of the bulk action. It can further be shown that, when k = 0, the limit of the bulk part of the action ∆I^Q_Bulk vanishes in all dimensions. This is consistent with the analysis of [18], where the D = 5 case was studied. However, the bulk term ∆I^Q_Bulk does not vanish when k = 1, as the equation just above does not hold in that case. Nevertheless, the way in which the particular terms combine yields a general result: in the charged case the y → 0 limit of the complexity of formation matches the complexity of formation for the Schwarzschild-AdS solution, irrespective of the horizon topology. However, note the non-trivial way in which this limit is achieved, with the corner term producing one fraction of the GHY term and the bulk action for the charged solution producing the other fraction of the GHY term, while at the same time giving the full Schwarzschild-AdS bulk contribution.

C.3 Rotating black holes & the static limit

Let us finally consider in detail the static limit of the rotating black holes that have been our focus in this work. We are interested once again in determining the limit of the bulk and joint terms in the action in the limit y ≡ r_−/r_+ → 0. We work in general (odd) dimensions.

Consider first the joint term. In its relevant part we have neglected the term ℓ_ct²Θ²/α² inside the logarithm for simplicity of presentation, as it is subleading and will have no effect on our discussion. Note also that here we have included the overall factor of 2 to account for both the past and future joints. Our objective is to understand the behaviour of this term as r_−/r_+ → 0. In order to understand the behaviour of this corner term as y → 0, we need to understand the behaviour of r_m0. Working in the limit of small y, and writing r = y r_+(1 + ε), it is easy to show how the tortoise coordinate (2.31) behaves, where r*_{Schw}(0) is the value of the Schwarzschild-AdS tortoise coordinate at the origin (see eq. (C.26)). In deriving this expression it is useful to note the simplification that occurs as y → 0. We can then deduce the behaviour of the meeting point in the limit y → 0. Near the inner horizon we can expand the relevant functions. Substituting this expansion into (C.35) and taking the limit y → 0, we obtain the result. Noting that D = 2N + 3, we see that this limit is different in structure from the limit in the charged case.[18]
The reason partly has to do with the behaviour of h(r_m0) in the limit y → 0, which approaches a constant (or blows up) rather than behaving ∼ y in this limit, as it would for the charged solution.

Next let us consider the behaviour of the bulk. Again, it is useful to split the bulk into pieces, isolating the parts that are divergent at the horizon. Doing this we can write the bulk term in a form where the last integral is convergent and its argument completely regular. As in the charged case, we can now easily study the limit of the logarithmic terms and then carefully consider the remaining integral. As before, the logarithmic term involving r_+ vanishes in this limit, and we must only consider the contribution from the logarithmic term involving r_−. However, here a crucial difference from the charged case arises. In the rotating case, the limiting behaviour of G(r_−) and h(r_−) presented in eq. (C.37) above applies, while the behaviour of the logarithm follows from the form of r_m0 presented in eq. (C.38). We therefore see that the logarithmic contributions to the bulk vanish in the limit y → 0! We then must only consider the remaining integral in the bulk. However, this term behaves just as it did in the charged case, producing the final limit (C.45) for the bulk term. The combined joint and bulk terms then exhibit the order of limits problem in the rotating case.

D Alternate regularization of the WDW patch

Here we consider an alternate regularization of the WDW patch to examine the limiting behaviour of the complexity of formation as y = r_−/r_+ → 0. We do so by cutting off the future and past tips of the WDW patch at r = r_m0 + ∆r and introducing the appropriate GHY and joint terms to accommodate this (see figure 14). This amounts to introducing two corner terms and one GHY term at the future tip of the WDW patch, and likewise at the past tip.

Consider first the GHY term on the right side of the future cutoff surface. This can be worked out explicitly, where we denote r_∆ = r_m0 + ∆r. There are four contributions, all identical to this one, and these combine into the final result for the GHY contribution. Consider next the corner terms that occur where the boundaries of the WDW patch intersect the cutoff surface at r_∆. Focusing on the contribution on the right side of the future boundary of the WDW patch, the relevant null normal can be written down directly. To determine the relevant dot products appearing in the joint term we need the form of the auxiliary future/outward-pointing unit vector ŝ. In the present case ŝ = |f_2(r_∆)| dt is the appropriate choice. We can then work out the sign ε appearing in the definition of the joint term (see eq. (3.4)). We find here that ε = +1. We then find the corresponding result for the joint term; there are four joints of this kind, which give the total contribution.

The idea, then, is to replace the corner term appearing in section 4 with the combination of joint and GHY terms shown above. Note that for our purposes here we will not consider the contribution of the null boundary counterterm. This is because we are interested in the limit y = r_−/r_+ → 0, and the null boundary counterterm vanishes in this limit. We now examine this limit keeping ∆r small but finite until after the limit y → 0 has been performed. The GHY term limits to precisely the GHY term in the static case. It must be emphasized that the order of limits here is important: the y → 0 limit must be taken prior to taking the ∆r → 0 limit.
The entire issue associated with the order of limits problem is that these two limits do not commute. Said another way, what this conclusion effectively means is that the future and past 'tips' of the WDW patch contain a finite amount of action in a vanishing amount of volume. Interestingly, this is exactly the limit of the corner term in the charged case. Thus, in this alternate regularization of the WDW patch the limit agrees with the Schwarzschild-AdS result. Note that for any finite y the two approaches will agree, as in that case the limits considered above will commute.

E Behaviour of complexity of formation for large black holes

Here we present additional details for the behaviour of the complexity of formation in the limit of large black holes. For the cases of charged black holes and also the rotating black holes considered here, there are two independent limits that are of interest. The first involves holding fixed the size of the black hole, r_+/ℓ, while exploring the extremal limit r_−/r_+ → 1. The second is to hold fixed r_−/r_+ while examining the behaviour of the complexity of formation for r_+/ℓ → ∞. In previous work that focused on five-dimensional charged black holes [18], it was demonstrated that the entropy controls the behaviour of the complexity of formation in either limit when the black holes are large enough. In particular, those authors found that the complexity of formation diverges logarithmically as extremality is approached, with a prefactor proportional to the entropy when the black holes are large. Moreover, the subleading terms in a near-extremal expansion were also found to be related to the entropy. Here we wish to examine those conclusions in more detail and extend them to higher dimensions. We will then contrast them with the rotating case, where it is found that different thermodynamic potentials control the different limits.

E.1 Charged black holes: complexity equals volume

To understand our results in the rotating case, it will be important to have an understanding of how the relevant computations play out for charged black holes. In this case, the complexity of formation is given by an integral expression. To illustrate a particular example, we consider the five-dimensional case, in which the integrals can be worked out explicitly in terms of quantities we define there. Our main objective here will be to try to understand how the resulting integral scales with α. While this is not so hard for these charged black holes, it will be considerably more involved for the rotating ones. So we will use the simpler setting of charged black holes to illustrate our ideas.

Although it is not our main focus, let us mention here the case of planar charged black holes. For these solutions, the dependence of the complexity of formation on the quantity α = r_+/ℓ completely factors out of the integral, leaving a result dependent only on ε = 1 − r_−/r_+. In five dimensions the remaining integral can be evaluated explicitly, giving a final result in which S is the black hole entropy, while E(X) refers to the elliptic integral of the first kind. We see clearly here that, for planar black holes, the only dependence on the black hole size is through the entropy. This property extends directly to all higher dimensions, though the resulting integrals no longer yield such a simple final result.

[Figure 15: A plot of the complexity of formation within the CV conjecture for five-dimensional, spherical (k = +1) black holes.
We have normalized the complexity of formation by the entropy, and the curves shown correspond to r_+/ℓ = 1/2, 1, 10, 50, 100 in order from top to bottom. The last three curves are visually indistinguishable. Imposed on the plot in a black curve is the complexity of formation for the planar k = 0 charged black hole. This curve coincides with the last three plots for the spherical black holes.]

From a heuristic examination of the integrals above, it is not too hard to become convinced that as α → ∞ the behaviour of the spherical (k = +1) black holes will match that of the planar black holes. We illustrate this with a numerical evaluation of the complexity of formation in figure 15. In this figure we have normalized the complexity of formation by dividing by the entropy and have shown the result as a function of r_−/r_+ for several values of r_+/ℓ. The plot illustrates that when r_+/ℓ is small the curves can be distinguished. However, as r_+/ℓ becomes large, the results all converge to the planar case (shown here as the black curve). This illustrates that, for large black holes at fixed ε = 1 − r_−/r_+, the entropy completely controls the complexity of formation. For charged black holes it is also not too difficult to confirm this conclusion analytically, by expanding (E.2) in the large-α limit for five-dimensional spherical (k = +1) black holes.

While an analytic study is possible in the charged case, it will turn out to be much more difficult in the rotating case. For this reason we will discuss a numerical approach to determine the dependence of the complexity of formation on the horizon radius for large black holes. Suppose that

∆C_V ∼ (r_+/ℓ)^γ (E.7)

for some power γ. A convenient way to determine the value of γ is the following. We consider the ratio R(β) and take its logarithm, treated as a function of both r_+/ℓ and β. For each value of β, we compute R(β) for several (large) values of r_+/ℓ, fit the resulting data to a linear model, and extract the slope of the fit. We explore the β parameter range until the slope determined in this way is zero. The value of β for which the slope vanishes corresponds to the case β = γ, allowing us to extract how the complexity of formation depends on the size of the black holes. This scheme is illustrated in figure 16 for five, seven, and nine dimensions. In each case it is clear from the plot that the slopes vanish for β = 3, 5, 7, respectively (but this can be confirmed to much higher precision numerically). This numerical finding is consistent with the results discussed above: in general dimensions, the complexity of formation for large charged black holes is controlled by the entropy and nothing more.

E.2 Rotating black holes: complexity equals volume

Let us now consider the rotating black holes, which are the main topic of our interest here. Once again, for ease of presentation we will present detailed equations only in the five-dimensional case and will comment on how the situation plays out in higher (odd) dimensions. The complexity of formation for rotating black holes according to the CV conjecture takes an analogous integral form. As in the charged case, there are two limits that are interesting to consider here. We can hold the size of the black hole r_+/ℓ fixed and examine the extremal limit ε = (1 − r_−/r_+) → 0, or vice versa. Let us first consider the former.
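The slope-scanning scheme described above is simple to set up. The sketch below is self-contained but illustrative only: the true ΔC_V integral is replaced by a hypothetical toy function with known leading scaling (r_+/ℓ)³ plus a subleading correction, and the procedure recovers the exponent as the β at which the fitted slope of log R(β) versus log(r_+/ℓ) crosses zero.

```python
import numpy as np

def delta_CV(x):
    # Hypothetical stand-in for the complexity of formation at fixed epsilon:
    # leading scaling x^3 with a subleading 1/x correction.
    return 7.3 * x**3 * (1.0 + 2.1 / x)

def fitted_slope(beta, xs):
    # Slope of log R(beta) against log x from a linear least-squares fit,
    # where R(beta) = delta_CV / x^beta.
    logR = np.log(delta_CV(xs)) - beta * np.log(xs)
    return np.polyfit(np.log(xs), logR, 1)[0]

xs = np.linspace(1.0e4, 2.0e4, 500)          # 500 points, as in figure 18
betas = np.linspace(2.5, 3.5, 101)
slopes = np.array([fitted_slope(b, xs) for b in betas])

# The exponent gamma is the beta at which the slope crosses zero; the slope
# decreases linearly in beta, so interpolate on the sign-reversed values.
gamma = np.interp(0.0, -slopes, betas)
print("recovered exponent:", gamma)          # close to 3, up to the 1/x tail
```

In the actual computation, delta_CV would be the numerically evaluated formation integral; the residual offset of the recovered exponent from the true value shrinks as the sampled r_+/ℓ window is pushed to larger values, exactly as described in the text.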
To understand the leading behaviour in the extremal limit we split the integrand for the black hole into two parts. In the first term we have isolated a part of the integral that will behave like ∼ 1/(r − r_+) in the extremal limit, and so we expect a logarithmic singularity for this term. The second term does not exhibit such behaviour in the extremal limit: the behaviour of the numerator near r = r_+ will cancel the blow-up due to the denominator. Therefore, near ε = 0, it is the asymptotics of the first integral that we must understand. The first integral converges when integrated between r_+ and ∞, and so we extend the integration domain r_max → ∞. The result can then be expressed in terms of elliptic integrals, where E is the elliptic integral of the first kind. The remaining integrals cannot be evaluated in a simple closed form, but luckily this will not trouble us here (yet). Expanding this expression near ε = 0, and noting that this will be the dominant contribution to the complexity of formation in this limit, we find the leading behaviour in all dimensions.

It is tempting to expand the prefactor appearing here to understand how it behaves for large black holes. The behaviour is given in (E.13), where S is the black hole entropy. So it is tempting to conclude that the complexity of formation (at least near extremality) is controlled by the entropy. However, the situation is more subtle. First, while the expansion just presented above holds provided ε → 0, it does not follow that the subleading terms in the expansion will always be subleading for sufficiently large r_+/ℓ. What is true is that, for fixed r_+/ℓ, one can find an ε that is small enough such that the entropy will control the behaviour near extremality. However, in the general situation the entropy does not control the complexity of formation, as we will now explain.

The process of understanding the behaviour of the complexity of formation for large black holes involves extracting the leading r_+/ℓ dependence of the integrals presented above. Despite a number of attempts, we have been unable to understand this problem from an analytical perspective, and therefore we resort to numerics. In figure 17 we show the complexity of formation normalized by the entropy for several large values of r_+/ℓ. It becomes clear that the entropy does not control the complexity of formation for large rotating black holes. This figure should be compared with figure 15 to see the stark difference relative to the charged case. Note that the entropy can be written in a form where P(ε) is a polynomial in ε that becomes rather complicated in higher dimensions; its general form is not important. This means that the entropy interpolates between two different scaling regimes. In the limit of slow rotation (ε → 1) the entropy scales in one way for large black holes, while in the near-extremal limit the entropy scales like

S ∼_{ε→0} r_+^{2N+2} = r_+^{D−1} (E.16)

for large rotating black holes.

[Figure 18: The slope of the logarithm of the ratio R(β) for rotating black holes in several dimensions. The curves correspond to 5, 7, 9, 11 dimensions from left to right, respectively. For each value of β the integrals have been evaluated for 500 points lying between r_+/ℓ = 10,000 and r_+/ℓ = 20,000. The slope is extracted by performing a linear fit to this data. In all cases we have set ε = 10^{−10} to probe close to extremality. Vertical dashed lines have been added to aid in seeing where the slopes cross the horizontal axis.]
Although it is not immediately clear from figure 17, the entropy does match the scaling decently near r_−/r_+ ≈ 0, which is expected since this scaling holds for the Schwarzschild-AdS black hole [39], but it fails miserably closer to extremality. Using the same numerical scheme described in the previous section for charged black holes, we can understand how the complexity of formation behaves as a function of r_+/ℓ for large black holes. The objective is to understand this scaling close to extremality, where the departure from entropic scaling is most severe. To briefly recap, the process involves studying the ratio

R(β) = ∆C_V / (r_+/ℓ)^β (E.17)

and numerically determining the value of β such that R(β) exhibits no dependence on r_+/ℓ (when r_+/ℓ is large). We show a sample of this numerical scheme in figure 18, and tabulate the results up to 27 dimensions in table 1. The conclusion is that in spacetime dimension D the complexity of formation scales like

∆C_V ∼_{ε→0} r_+^{(D+1)(D−2)/(D−1)} (E.18)

for large black holes near extremality.

[Table 1: Numerically calculated values of β compared with the scaling of the thermodynamic volume V^{(D−2)/(D−1)} for large r_+/ℓ. Here we have computed numerically the values of β according to the method outlined in the text. The data is obtained by evaluating the complexity of formation between r_+/ℓ = 10^{10} and r_+/ℓ = 10^{20}, and we have fixed ε = 10^{−10}, so we are considering the situation very close to extremality. The numerical values agree with the scaling of the thermodynamic volume to at least five decimal places in all cases. By pushing the domain of r_+/ℓ to larger values, the agreement becomes even better. Note that in all cases the scaling differs from the scaling of the entropy, which behaves like (r_+/ℓ)^{D−1} for large r_+/ℓ at fixed ε near extremality.]

It is obvious from table 1 that the scaling of ∆C_V is different from the scaling of the entropy. The question then becomes whether or not there is a thermodynamic parameter that does have this scaling. As already hinted in table 1, the answer is that the thermodynamic volume possesses this scaling for large black holes. Isolating the dependence on r_+, the thermodynamic volume can be written schematically in terms of H(ε) and K(ε), which are messy polynomials in ε whose form does not matter for the information we need here. These polynomials vanish nowhere on the range ε ∈ [0, 1]. We therefore see that the thermodynamic volume also has two scaling regimes. The scaling of the thermodynamic volume to the power (D − 2)/(D − 1) interpolates precisely between the two scaling regimes of the complexity of formation. We show this graphically for five dimensions in figure 19. There are a few important things to note here:

• The power of the thermodynamic volume is natural. Recall that the thermodynamic volume has dimensionality [length]^{D−1}; therefore, to obtain a quantity that has the correct dimensions of [length]^{D−2} requires precisely this power.

• The scaling with the thermodynamic volume is consistent with the entropic scaling observed for charged black holes and the Schwarzschild black hole [18, 39]. This is because those solutions satisfy S ∼ V^{(D−2)/(D−1)} (E.23). In other words, for those solutions the thermodynamic volume and the entropy are not independent, and so the results can be written in terms of either quantity. For the rotating black holes these quantities are truly independent, and we observe that it is actually the expression written in terms of the thermodynamic volume that prevails.
• The convergence to "volumetric scaling" is slower for rotating black holes than it is for charged black holes: in the charged case the subleading terms die off at least as fast as (ℓ/r_+)², while in the rotating case they die off like ℓ/r_+.

• To the best of our knowledge there is no a priori reason to expect that the thermodynamic volume should be related to an extremal volume in a black hole spacetime. However, deriving such a relationship could contribute to a proof of our relationship for the complexity of formation in general situations.

• The conjectured reverse isoperimetric inequality [59] bounds the entropy in terms of the thermodynamic volume. If our result is general, i.e. the complexity of formation generally scales with the volume for large black holes, then the reverse isoperimetric inequality can be interpreted as the statement that the entropy provides a lower bound for the complexity of formation. This bound is saturated for static black holes, but more complicated black holes have a larger complexity of formation than naively suggested by their degrees of freedom (entropy).

E.3 Rotating black holes: complexity equals action

It is now natural to ask whether this scaling with the thermodynamic volume is universal to both complexity proposals, or if it is a peculiar behaviour associated with the CV proposal. Recall the expression for the complexity of formation in the CA conjecture, as shown in section 4.1. The most difficult part of the CA computation is the determination of r_m0. In some instances, particularly in the limit r_−/r_+ → 0, accurate determination of this parameter requires hundreds of digits of precision in the numerics. This technicality has limited our ability to probe the behaviour of the complexity of formation within the CA conjecture as broadly as within the CV conjecture. However, we show the corresponding results in five dimensions in figure 20.

[Figure 20: A plot showing the CA complexity of formation normalized by the thermodynamic volume as a function of the ratio r_−/r_+ in five dimensions. The plot shows curves for fixed r_+/ℓ = 10, 10², 10³, 10⁴, 10⁵, 10⁶ and 10⁷; however, after r_+/ℓ = 1000 the curves are visually indistinguishable. Here we have set ℓ_ct = ℓ.]

The plot makes clear that the thermodynamic volume controls the scaling of ∆C_A for large black holes, just as in the CV conjecture. While it was possible to compute the behaviour in various higher dimensions for the CV case, this is more difficult in the CA scenario. Nonetheless, we have confirmed the scaling with thermodynamic volume in seven dimensions, which suggests that the same trend holds in general for CA.
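To give a flavour of the precision issue just mentioned: near the static limit the meeting point sits exponentially close to the inner horizon (cf. the behaviour in eq. (C.38)), far below double-precision resolution. The sketch below uses mpmath with a toy tortoise coordinate (an assumption for illustration; the actual MP-AdS r*(r) and meeting condition would replace it) to show the kind of arbitrary-precision root finding this requires.

```python
from mpmath import mp, mpf, log, findroot

mp.dps = 300                       # hundreds of digits, as needed near y -> 0

L = mpf(1)
r_p = mpf(10)
r_m = mpf('1e-5') * r_p            # small y = r_-/r_+ regime (toy values)

def rstar(r):
    # Toy tortoise coordinate with logarithmic divergences at both horizons.
    return L**2 / (r_p - r_m) * log((r_p - r) / (r - r_m))

T = mpf(50)                        # stand-in for the boundary-data condition
g = lambda r: rstar(r) - T

# Bracket between a point exponentially close to r_- and the midpoint; the
# root lies at r - r_m ~ (r_p - r_m) exp(-T (r_p - r_m)/L^2), i.e. ~1e-217.
a = r_m * (1 + mpf('1e-280'))
b = (r_p + r_m) / 2
r_m0 = findroot(g, (a, b), solver='anderson')
print((r_m0 - r_m) / (r_p - r_m))  # ~1e-217: invisible in double precision
```

A 64-bit float cannot even represent r_m0 as distinct from r_m here, which is why the working precision has to grow with the separation of scales between the two horizons.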
Inter-database validation of a deep learning approach for automatic sleep scoring

Study objectives: Development of inter-database generalizable sleep staging algorithms represents a challenge due to increased data variability across different datasets. Sharing data between different centers is also a problem, due to potential restrictions aimed at protecting patient privacy. In this work, we describe a new deep learning approach for automatic sleep staging and address its generalization capabilities on a wide range of public sleep staging databases. We also examine the suitability of a novel approach that uses an ensemble of individual local models, and evaluate its impact on the resulting inter-database generalization performance.

Methods: A general deep learning network architecture for automatic sleep staging is presented. Different preprocessing and architectural variant options are tested. The resulting prediction capabilities are evaluated and compared on a heterogeneous collection of six public sleep staging datasets. Validation is carried out in the context of independent local and external dataset generalization scenarios.

Results: Best results were achieved using the CNN_LSTM_5 neural network variant. Average prediction capabilities on independent local testing sets achieved a 0.80 kappa score. When individual local models predict data from external datasets, the average kappa score decreases to 0.54. Using the proposed ensemble-based approach, average kappa performance on the external dataset prediction scenario increases to 0.62. To our knowledge, this is the largest study to date, in terms of the number of datasets, to validate the generalization capabilities of an automatic sleep staging algorithm using external databases.

Conclusions: Validation results show good general performance of our method, as compared with the expected levels of human agreement, as well as with state-of-the-art automatic sleep staging methods. The proposed ensemble-based approach enables a flexible and scalable design, allowing dynamic integration of local models into the final ensemble, preserving data locality, and increasing the generalization capabilities of the resulting system at the same time.

Introduction

Sleep staging is one of the most important tasks during the clinical examination of polysomnographic sleep recordings (PSGs). A PSG records the relevant biomedical signals of a patient in the context of Sleep Medicine studies, representing the basic tool for the diagnosis of many sleep disorders. Sleep staging characterizes the patient's sleep macrostructure, leading to the so-called hypnogram. The hypnogram also plays a fundamental role in the interpretation of several other biosignal activities of interest, such as the evaluation of the respiratory function, or the identification of different body and limb movements [1, 2]. Current standard guidelines for sleep scoring carry out segmentation of the subject's neurophysiological activity following a discrete 30-s epoch time basis. Each epoch can be classified into five possible states (wakefulness, stages N1, N2, N3, and R) according to the observed signal pattern activity in the reference PSG interval. Specifically, for sleep staging, the neurophysiological activity of interest involves monitoring of different traces of electroencephalographic (EEG), electromyographic (EMG), and electrooculographic (EOG) activity [1]. A typical PSG examination comprises 8 to 24 hours of continuous signal recording, and its analysis is usually carried out manually by an expert clinician.
The scoring process is consequently expensive and highly demanding, due to the clinician's time involved and the complexity of the analysis itself. Moreover, the demand for PSG investigations is growing along with general public awareness, motivated by clinical findings over the last years uncovering the negative impact that sleep disorders exert on health. This represents a challenge for the already congested sleep centers, with steadily increasing waiting lists. Automatic analysis of the sleep macrostructure is thus of interest, given the potential great savings in terms of time and human resources. An additional advantage is the possibility of providing deterministic (repeatable) diagnostic outcomes, hence contributing to standardization and quality improvement in the diagnosis. The topic, in fact, is not new, and the first related approaches can be traced back to the 1970s [3, 4]. Numerous attempts have followed since then and up to now [5–14], evidencing that the task still represents a challenge and an open area of research interest. More recently, several approaches based on the use of deep learning have been appearing, claiming advantages over previous realizations which include improved performance and the possibility to skip handcrafted feature engineering processes [15–23].

However, despite the promising results reported in some of these works, practical acceptance of these systems among the clinical community remains low. Effectively, an unsolved problem remains the inability of these systems to sustain their results beyond the research lab, failing to extend them to the practical clinical environment. The problem is closely related to the so-called database variability problem, whereby the automatic scoring algorithm is not able to hold its performance beyond a specific testing dataset or the original experimental conditions. More specifically, estimation of the algorithm's performance is commonly approached using a subset of independent (testing) data, taken from the whole set available in a specific reference database. This testing subset, while independent of the training data, remains effectively "local" to the reference database, meaning training and testing data share characteristics bound to their common data generation process. However, when considering a multiple-database validation scenario, heterogeneity associated with the various external data sources adds an extra component of variability. In the case of sleep staging, sources of data variability are multiple and include, for example, differences among the subjects' conditions or physiology, the signal acquisition and digitalization methods (e.g. sampling rates, electrode positions, amplification factors, or noise-to-signal ratios), and, also important, disagreement among experts' interpretations due to inherent human subjectivity or different training backgrounds. A detailed discussion of the topic can be found in a previous work of the authors [24], in which a general trend of performance degradation has been reported among the few works that have attempted validation procedures involving multiple independent external databases. In this work we describe a new deep learning approach for automatic sleep staging.
Given the scarcity of comprehensive validation studies in the literature, one of the major contributions of this work involves addressing the real generalization capabilities of the learning model on a wide range of public sleep staging databases. For this purpose, the prediction performance of the proposed approach is evaluated, for each database, in the context of both independent local and external generalization scenarios. In the first case, part of each dataset is set aside to be used as an independent testing set, while the rest of the data are used for training and parameterization of the machine learning model. In the second scenario (external database validation), the whole dataset is presented as brand-new to the model, which was derived based on data from external and completely independent database(s). Effectively, by comparing both procedures it is possible to extrapolate the expected performance of the method regardless of the specific local database used for the development of the model; hence, a better estimation of the real generalization capabilities of the algorithm on the general reference task of sleep staging can be achieved.

The architecture of the proposed deep learning approach uses a novel flexible design combining different layers of Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks. The new design adds the capacity for learning transition rules among sleep stages, i.e. epoch sequence learning, resulting in improved performance of the approach. We investigate the use of different architectural variants and epoch sequence lengths to analyze their impact on the generalization of the resulting models. In addition, we also examine the suitability of a novel approach introduced in a previous work [24], based on the use of an ensemble of individual local models. This approach has potential advantages in terms of modelling and learning scalability, and at the same time it reduces the necessity of exchanging data between centers for the development of generalizable machine learning models. The impact of this novel ensemble approach on the resulting inter-database generalization performance is also evaluated using the new deep learning approach introduced in this work. Validation results are contextualized with respect to the expected levels of human agreement, and the performance of current state-of-the-art automatic scoring solutions on the sleep staging task. Our approach shows robust behavior in comparison with the available references.

Datasets

A set of heterogeneous and independent clinical sleep scoring datasets was used as a testing benchmark during the course of our experiments. In order to enhance reproducibility, all datasets were gathered from public online repositories, and recordings were digitally encoded using the open EDF(+) format [25, 26]. An overview of the general characteristics of each integrating dataset is given next. An extended description, including specifications of the corresponding signal montages, can be found in S1 Table.

Haaglanden Medisch Centrum Sleep Center Database (HMC). This dataset includes a total of 154 PSG recordings gathered retrospectively from the sleep center database of the Haaglanden Medisch Centrum (The Netherlands). Recordings were randomly selected from a heterogeneous population which was referred for PSG examination in the context of different sleep disorders during the year 2018.
Data were acquired in the course of common clinical practice, and thus did not subject patients to any treatment nor prescribe any additional behavior outside of the usual clinical procedures. PSGs were anonymized, avoiding any possibility of individual patient identification. Explicit participant consent was not required by the ethics committee due to the retrospective nature of the study and the fact that data were deidentified. The study was approved under identification code METC-19-065. The dataset has been made publicly available online [27].

St. Vincent's Hospital/University College Dublin Sleep Apnea Database (Dublin). This dataset contains 25 full overnight PSGs from adult subjects with suspected sleep-disordered breathing. Subjects were originally randomly selected over a 6-month period (September 2002 to February 2003) from patients referred to the Sleep Disorders Clinic at St Vincent's University Hospital, Dublin, for possible diagnosis of obstructive sleep apnea, central sleep apnea or primary snoring. The 2011 revised version of the dataset was used, which is available online on the PhysioNet website [28].

Sleep Heart Health Study (SHHS). The Sleep Heart Health Study (SHHS) is a multi-center cohort study implemented by the National Heart, Lung, and Blood Institute to determine the cardiovascular and other consequences of sleep-disordered breathing. The database is available online upon permission at the National Sleep Research Resource (NSRR) [29,30]. More information about the rationale, design, and protocol of the SHHS study can be found in the dedicated NSRR section [30] and in the literature [31,32]. For this study a random subset of 100 PSG recordings was selected from the SHHS-2 study. A list of the recording numbers included in the selection is provided as supplementary information for reproducibility purposes (S1 File).

Sleep Telemetry Study (Telemetry). This dataset contains 44 whole-night PSGs obtained in a 1994 study of temazepam effects on sleep in 22 Caucasian males and females without other medication. Subjects had mild difficulty falling asleep but were otherwise healthy. The PSGs were recorded in the hospital during two nights, one of which was after temazepam intake, and the other after placebo intake. More details on the subjects and the recording conditions are described in the works of Kemp et al. [33,34]. The dataset is fully available at the PhysioNet website as part of the more extensive Sleep-EDF database [35].

DREAMS subject database (DREAMS). The DREAMS dataset is composed of 20 whole-night PSG recordings from healthy subjects. It was collected during the DREAMS project to tune, train, and test automatic sleep staging algorithms [36]. The dataset is available online, granted by the University of MONS-TCTS Laboratory (Stéphanie Devuyst, Thierry Dutoit) and Université Libre de Bruxelles-CHU de Charleroi Sleep Laboratory (Myriam Kerkhofs) under the terms of the Attribution-NonCommercial-NoDerivs 3.0 Unported license (CC BY-NC-ND 3.0) [37].

ISRUC-SLEEP dataset (ISRUC). This dataset is composed of 100 PSGs from adult subjects with evidence of sleep disorders. PSG recordings were originally selected from the Sleep Medicine Centre of the Hospital of Coimbra University (CHUC) database during the period 2009-2013. More details about the rationale and the design of the database can be found in Khalighi et al. [38]. The database is publicly accessible online [39].
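Since all six benchmarks distribute their recordings in the EDF(+) format, a single loading path can be shared across databases. The following is a minimal sketch of that step using the MNE library; the file name and channel labels are hypothetical, as the actual montages and channel naming differ per database (see S1 Table).

```python
import mne

# Hypothetical file name; every benchmark recording is stored as EDF(+).
raw = mne.io.read_raw_edf("recording.edf", preload=True)
print(raw.ch_names)  # montage and channel labels differ per database (S1 Table)

# Keep the four derivations assumed by the model: 2x EEG, 1x chin EMG, 1x EOG.
# These labels are illustrative only and must be mapped per dataset.
raw.pick(["EEG C4-M1", "EEG C3-M2", "EMG Chin", "EOG E1-M2"])
signals = raw.get_data()   # shape (4, n_samples), amplitudes in volts
fs = raw.info["sfreq"]     # native sampling rate, prior to resampling
```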
Pre-processing

The preprocessing block is in charge of processing the PSG signals for input homogenization and, optionally, for artifact cancellation. Input signal homogenization is necessary to give the model the capacity to handle inter-database differences due to the use of different montages and digitalization procedures. Specifically, the model receives as input two EEG derivations, the chin EMG, and one EOG channel, which are resampled at 100 Hz, representing a compromise between limiting the size of the input dimensionality and preserving the signal properties necessary for carrying out the sleep scoring task. Resampling at 100 Hz allows a working frequency of up to 50 Hz, which captures most of the meaningful EEG, EMG and EOG frequencies. Signals are then segmented using a 30 s window following the standard epoch-based scoring procedures [1], resulting in input patterns of size 4x3000 that are fed into the subsequent CNN processing block. Each of these input patterns is subsequently normalized in amplitude using a Gaussian standardization procedure [40]. Input signal filtering is left as an optional pre-processing step. The main purpose of this module is the removal of noise and signal artifacts, which are patient- and database-specific, and thus can interfere with the generalization capabilities of the resulting model. The optional filtering step is applied to the original raw signals, i.e. at the original signal frequencies, before resampling them at 100 Hz. Experimentation is carried out in this work to study the effects of applying this optional pre-processing step on the different tested datasets. The filtering step is composed of the following filters (a sketch of the whole input pipeline is given after the list):

• Notch filtering: It is meant to remove the interference caused by the power grid. Notice that the AC frequency differs per country (e.g. 50 Hz in Europe, and 60 Hz in North America) and therefore, depending on the source dataset, mains interference will affect signals at different frequency ranges. The design and implementation of the digital filter used have been described in previous works [41,42].

• High-pass filter: It is applied to the chin EMG only, and its purpose is to get rid of the DC and low-frequency components unrelated to the baseline muscle activity. A first-order implementation has been described elsewhere [42]. In this work a cut-off value of 15 Hz has been used for the filter.

• ECG filtering: Applied only in the case that an additional ECG derivation is included in the corresponding montage (see S1 Table), this filter is used to remove possible spurious twitches caused by the ECG affecting the input signals. An adaptive filtering algorithm has been used, which has been described in detail in a previous work [41].
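As a concrete illustration of the pipeline just described, the following is a minimal sketch under stated assumptions: the notch and high-pass filters are realized with standard SciPy IIR designs (the paper's actual filter implementations follow [41,42] and may differ), signals are then resampled to 100 Hz and segmented into 30 s epochs, and each resulting 4x3000 pattern is standardized. The per-channel granularity of the z-scoring is an assumption, as the exact procedure is specified in [40].

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt, resample_poly

def optional_filtering(x, fs, mains_hz=50.0, is_emg=False):
    """Optional artifact filtering on the raw signal (before resampling).
    mains_hz is dataset dependent: 50 Hz in Europe, 60 Hz in North America."""
    b, a = iirnotch(w0=mains_hz, Q=30.0, fs=fs)          # power-grid notch
    x = filtfilt(b, a, x)
    if is_emg:                                            # chin EMG only:
        b, a = butter(1, 15.0, btype="highpass", fs=fs)   # remove DC and
        x = filtfilt(b, a, x)                             # low frequencies
    return x

def epochs_from_signals(signals, fs, target_fs=100, epoch_s=30):
    """Resample to 100 Hz, segment into 30 s windows, z-score each pattern."""
    signals = np.stack([resample_poly(ch, int(target_fs), int(fs))
                        for ch in signals])               # (4, n @ 100 Hz)
    spe = target_fs * epoch_s                             # 3000 samples/epoch
    n = signals.shape[1] // spe
    epochs = signals[:, :n * spe].reshape(4, n, spe).transpose(1, 0, 2)
    mu = epochs.mean(axis=-1, keepdims=True)              # per-channel z-score
    sd = epochs.std(axis=-1, keepdims=True) + 1e-8        # of each pattern
    return (epochs - mu) / sd                             # (n_epochs, 4, 3000)
```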
CNN block

The CNN block design is an updated version of previous CNN models developed by the authors [19,24]. As stated before, this block receives input patterns of size 4x3000, representing a 30 s epoch window of PSG signals (2x EEG, 1x EMG, and 1x EOG). The block can produce a valid sleep staging output for each input pattern (CNN-only), or act as an intermediate processing layer feeding a subsequent LSTM block (CNN-LSTM configuration). Experimentation will be carried out in this work to compare the two possible neural network configurations. The CNN design is composed of the concatenation of N operational blocks. Each operational block B(k), k = 1..N, is in turn composed of four layers, namely (i) a 1D convolutional step (kernel size 1x100, preserving the input size with zero padding at the edges, stride = 1), followed by (ii) ReLU activation [43], (iii) batch normalization [44], and (iv) an average pooling layer (pool dimension 1x2, stride = 2). While the kernel size (1x100) of the convolutional step is maintained through all N operational blocks, the number of filters in B(k) is doubled with respect to B(k-1). Based on previous experiments [19], the initial number of filters in B(1) was set for this work to 8, while the number of operational blocks was fixed at N = 3. The output of the last operational block is fed into a subsequent CNN output block. The first processing layer in the output block is a fully-connected step which takes the output from the last operational block and reduces the feature space to an output size of 50. This is used as the input for the subsequent LSTM processing block when the network works under the CNN-LSTM configuration. When the network is configured as CNN-only, four additional processing steps follow. Specifically, the 50-length feature vector is passed through an additional ReLU activation, and then a dropout step with probability 0.5 is applied to improve regularization. Finally, a dense fully-connected layer with softmax activation is used at the output, with size 5, each unit representing a possible sleep stage assignment (W, N1, N2, N3, or R). The output of the softmax is interpreted as the corresponding posterior class probability, with the highest probability determining the final classification decision.

LSTM block

When the network follows the CNN-LSTM configuration, the 50-length feature vector is fed into a subsequent LSTM processing block. The inclusion of an additional LSTM layer in the design is meant to provide the resulting network with the capacity to model the effect of the epoch sequence on the final scoring. Indeed, the medical expert's decision on the classification of the current PSG epoch is partially influenced by the sleep state of the preceding and subsequent epochs [1]. The LSTM block is composed of a first sequence configuration layer, a unidirectional LSTM layer [45], and finally a fully-connected layer followed by softmax activation for producing the final output. The sequence configuration step composes the corresponding epoch feature sequence relative to the epoch k under evaluation. Specifically, given a PSG recording containing M epoch intervals, for a given epoch k, k = 1...M, the sequence S(k) is composed as

S(k) = { F(k − ⌈L/2⌉ + 1), ..., F(k), ..., F(k + ⌊L/2⌋) },

where ⌈·⌉ and ⌊·⌋ respectively represent the ceil and the floor operations, L is the length of the sequence, and F stands for the corresponding input feature vector, in this case coming out of the preceding CNN node. For example, if L = 3, then the sequence would result as S(k) = { F(k−1), F(k), F(k+1) }, and so on. The number of hidden neurons for the LSTM layer was set to 100 in this study.
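To make the architecture concrete, the following is a minimal PyTorch sketch of the described design; it is an illustrative reconstruction rather than the authors' original implementation. In particular, the use of padding='same', the flattening of the channel axis before the 50-unit dense layer, and taking the last LSTM output as the decision for the epoch under evaluation are assumptions.

```python
import torch
import torch.nn as nn

class CNNFeatures(nn.Module):
    """Per-epoch feature extractor: N=3 operational blocks + 50-unit dense output."""
    def __init__(self, n_blocks=3, base_filters=8):
        super().__init__()
        layers, in_ch = [], 1
        for k in range(n_blocks):
            out_ch = base_filters * 2 ** k           # filters double: 8, 16, 32
            layers += [nn.Conv2d(in_ch, out_ch, (1, 100), stride=1, padding="same"),
                       nn.ReLU(),
                       nn.BatchNorm2d(out_ch),
                       nn.AvgPool2d((1, 2), stride=(1, 2))]
            in_ch = out_ch
        self.blocks = nn.Sequential(*layers)
        # after 3 poolings the 3000-sample time axis is reduced to 375
        self.fc = nn.Linear(in_ch * 4 * (3000 // 2 ** n_blocks), 50)

    def forward(self, x):                            # x: (batch, 1, 4, 3000)
        return self.fc(self.blocks(x).flatten(1))    # (batch, 50)

class CNNLSTM(nn.Module):
    """CNN-LSTM configuration: a sequence of L epochs -> stage logits for epoch k."""
    def __init__(self, n_stages=5, hidden=100):
        super().__init__()
        self.cnn = CNNFeatures()
        self.lstm = nn.LSTM(input_size=50, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_stages)      # softmax applied in the loss

    def forward(self, seq):                          # seq: (batch, L, 1, 4, 3000)
        b, L = seq.shape[:2]
        feats = self.cnn(seq.flatten(0, 1)).view(b, L, 50)
        out, _ = self.lstm(feats)                    # unidirectional pass over S(k)
        return self.head(out[:, -1])                 # decision after the full sequence
```

The CNN-only configuration would correspond to replacing the LSTM head with a ReLU activation, dropout (p = 0.5) and a dense softmax layer of size 5, applied directly to the 50-length feature vector.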
Ensemble of local models

The intuitive approach to achieving better generalization of a machine learning model is to increase the amount and heterogeneity of the input training data. In the scenario where data from different sources (in our case, different databases) are involved, this would translate into using data from all the available datasets. Thereby the amount of training data increases, as well as its heterogeneity, boosting the chances of ending up with a better generalist model and minimizing the dataset overfitting risk. This approach, however, has its own drawbacks. First, from a computational perspective, higher memory and computational resources are needed, the resulting model becomes inflexible to data evolving dynamically in time, and a combinatorial explosion occurs when searching for the best input dataset partition combination [24]. In addition, from a regulatory perspective, collecting data from different centers can be a problem due to potential privacy-protection restrictions on the exchange of patient data. In this respect, a proposal was outlined in a previous work [24] based on the use of an ensemble of local models. Under this approach an independent "local" model is developed for each dataset using exclusively its data. For this purpose each dataset is split, whereby part of the data are used for training and parameterization of the machine learning model, and the remainder are set aside to be used as an independent local testing set. The resulting individual local models can then be combined using an ensemble. Specifically, in this work we assume that the ensemble output is produced by majority vote [46,47]; a sketch of this combination is given below. The proposed approach shows advantages in the scalability of the design, making it flexible to dynamic evolution of the input datasets, i.e. the ensemble can be easily expanded by adding new local models when new training data, or new datasets, are available. This, in addition, allows each individual model to be developed locally, meaning each center can develop its own model based on its data without the need to share and/or collect data from other centers. This minimizes potential issues due to patient privacy protection regulations. Eventually only the resulting local model would need to be shared for its integration into the final ensemble. In this study we want to check the working hypothesis that by combining "local expert models" by means of an ensemble we can also increase the overall generalization capabilities of the resulting model when predicting external datasets.
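A minimal sketch of the majority-vote combination follows; the per-model predict interface is hypothetical, and the tie-breaking rule (the lowest stage index wins) is an assumption, as [46,47] leave such conventions open.

```python
import numpy as np

N_STAGES = 5  # W, N1, N2, N3, R

def ensemble_majority_vote(epoch_batch, local_models):
    """Combine hard per-epoch stage labels from the local models by majority vote."""
    # votes: (n_models, n_epochs) integer stage labels in {0..4}
    votes = np.stack([m.predict(epoch_batch) for m in local_models])
    counts = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=N_STAGES), 0, votes)  # (5, n_epochs)
    return counts.argmax(axis=0)  # ties resolved toward the lowest stage index
```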
Experimental design

An experimental design was set up aimed at testing the prediction and generalization capabilities of the deep learning architecture for automatic sleep staging described in the preceding sections. In order to characterize the effects on generalization performance due to varying characteristics of the target database, validation was carried out on a multiple-database setup. Experiments were designed to assess and compare both the independent local and the external database prediction scenarios separately. No a posteriori exclusion criteria were applied to any of the benchmark datasets used for this study. Thus, all the recordings integrating the datasets described above were included in the validation. The underlying motivation is to assess the reliability of the resulting models in the most realistic situation, including the most general and heterogeneous patient phenotype possible. Remarkably, signal montages, recording methods, and manual scoring references can differ across the different source databases. That represents an extra challenge for testing the generalization capabilities of the sleep scoring algorithm. As stated before, our deep learning model assumes as input two channels of EEG, one submental EMG, and one EOG derivation. When more than two EEG derivations were available in the corresponding montage, the general rationale was to select the traditional central derivations (C4/M1 and C3/M2) as input. If central derivations were not available, then frontal electrodes were used as backup. In some cases, no choice was possible according to this rationale, and therefore the only available derivations had to be used (e.g. for Telemetry, Pz-Oz and Fpz-Cz). In the case of the EOG, horizontal derivations were preferred as they are less sensitive to EEG and movement artifacts. S1 Table describes the specific selected derivations according to the available set of channels, as well as the main characteristics of each dataset. The current AASM scoring standard [1] was set as the reference for labelling the output classes for validation. Hence, when the reference dataset was originally scored using the R&K method (see S1 Table), NREM stages 3 and 4 were merged into one unique N3. For each dataset k, k = 1...K, the following experiments are carried out (a schematic sketch of the protocol is given after this description):

Experiment 1: Each dataset k is split following an independent training TR(k) and testing TS(k) partition. Let us denote the whole original dataset by W(k) = TR(k) ∪ TS(k). Notice that a subset of TR(k), namely the validation subset VAL(k), is used to implement the early stopping criterion during the network's learning process. The "local" generalization performance of the resulting model M(k) is evaluated by assessing the predictability of the data contained in TS(k). This is the performance usually reported in the literature when data from only one database are used for experimentation.

Experiment 2: Each resulting local model M(k) is used to predict the data of each complete external dataset W(j), j ≠ k, hence assessing its inter-database external generalization.

Experiment 3: An ensemble ENS(k) is composed of the individual local models M(j), j ≠ k, and is used to predict the complete dataset W(k). The exclusion of M(k) from ENS(k) aims to keep W(k) completely independent and external to ENS(k). By comparing the results of Experiment 3 with those of Experiment 1 and Experiment 2, it is possible to assess the effects of the proposed ensemble approach on the resulting inter-database generalization.

Each of the previously described experiments is repeated using different variations of the general network architecture described in the preceding sections. The purpose is to analyze the impact of each configuration variation on the generalization capabilities of the resulting models. Specifically, the following variants are tested:

• Using the CNN-only configuration, first the default segments of 30 s (1 epoch, input size 4x3000) configure the input to the network's CNN block. The input segments are afterwards expanded to form sequences of consecutive epochs with the aim of implementing the effect of epoch sequence learning. Different sequence lengths L = {3,5,7} are investigated in this respect. Gaussian normalization takes place in this case over the whole resulting 4x(3000·L) input patterns. This approach to implementing epoch sequence learning using a CNN-only configuration will later be compared with the results achieved using the full CNN-LSTM design.

• Using the CNN-LSTM configuration, the sequence length parameter is similarly tested on different values L = {3,5,7}, using as input reference the 50-length feature vector of the preceding CNN output block. As stated before, the resulting models will be compared against the respective sequence learning implementations using the CNN-only configuration.

• Finally, in order to test the effects of the optional signal preprocessing filtering step, each of the previously described experiments is repeated, respectively, with and without applying the filtering pipeline.

Thus, for each of the datasets included in our experimentation, a total of 14 different individual local models are developed, based on the data contained in each respective dataset. For identification, the following nomenclature is used: CNN_1, CNN_3, CNN_5, CNN_7, CNN_F_1, CNN_F_3, CNN_F_5, CNN_F_7, CNN_LSTM_3, CNN_LSTM_5, CNN_LSTM_7, CNN_LSTM_F_3, CNN_LSTM_F_5, CNN_LSTM_F_7, where the subscript F denotes the use of the pre-processing filtering step, and the suffix number indicates the corresponding number of sequence epochs used (the value of the L parameter).
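The following sketch summarizes the three experiments for one network variant. The helpers split, train_model, evaluate and evaluate_ensemble are hypothetical stand-ins for the pipeline described above and are passed in explicitly.

```python
def run_experiments(datasets, split, train_model, evaluate, evaluate_ensemble):
    """datasets: dict mapping name -> whole dataset W(k) of labelled epochs."""
    models, local_kappa = {}, {}
    for name, W in datasets.items():
        TR, TS = split(W, train_frac=0.80)        # Experiment 1 partition
        TR, VAL = split(TR, train_frac=0.80)      # VAL drives early stopping
        models[name] = train_model(TR, VAL)       # local model M(k)
        local_kappa[name] = evaluate(models[name], TS)   # "local" generalization
    external_kappa = {                            # Experiment 2: M(k) -> W(j), j != k
        (src, tgt): evaluate(models[src], datasets[tgt])
        for src in datasets for tgt in datasets if src != tgt}
    ensemble_kappa = {                            # Experiment 3: ENS(k) -> W(k)
        tgt: evaluate_ensemble([m for n, m in models.items() if n != tgt],
                               datasets[tgt])
        for tgt in datasets}
    return local_kappa, external_kappa, ensemble_kappa
```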
For homogenization purposes, the same training configuration is applied in the development of the above-mentioned learning models for each dataset. In this respect the stochastic gradient descent approach is used to guide the weight updates, with the cross-entropy loss as the target cost function [40]. Each dataset is partitioned using 80% of the data for training (TR) and the remaining 20% as the independent local testing set (TS). A validation subset (VAL) is arranged by subsequently splitting apart 20% of the available training data. The validation set is used as the reference to implement the early stopping mechanism that avoids overfitting to the training data. The stopping criterion takes as reference the validation loss, which is evaluated 5 times per training epoch. A patience of 10 evaluations is established, thereby stopping training when the validation loss has not improved further after the whole training dataset has been presented twice. The internal training batch size is set to 100 patterns, imposed by the available hardware resources relative to the size of the tested networks. The maximum number of training epochs is set to 30, and the initial learning rate to 0.001. The learning rate is decreased by a factor of 10 every 10 training epochs (thus 10^-4, 10^-5, down to a minimum of 10^-6). The same random initialization seed is used in each experiment to exclude variability due to initialization conditions, hence enabling deterministic training processes. This is important to assess the influence of the different tested architecture variants, as described before, and to make fair comparisons among the different resulting models and datasets. Performance evaluation of each experiment is carried out by taking Cohen's kappa index (κ) as the reference score (a reference sketch of the computation is given below). Cohen's kappa is preferred over other widespread validation metrics (e.g. accuracy, sensitivity/specificity, or F1-score) because it corrects for agreement due to chance, showing robustness in the presence of various class distributions [48]. This is an important property for allowing performance comparison among differently distributed datasets, or when some classes are underrepresented in proportion to the rest (e.g. N1 vs N2 or W), as is the case here (see S1 Table for details on the different class distributions among the benchmark datasets). Remarkably, Cohen's kappa is the standard metric reported among studies analyzing human inter-rater agreement in the context of sleep scoring [49-54].
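For reference, a minimal sketch of the kappa computation over two label sequences is given below; equivalent functionality exists in common libraries (e.g. sklearn.metrics.cohen_kappa_score), which would be the usual choice in practice.

```python
import numpy as np

def cohens_kappa(y_ref, y_pred, n_classes=5):
    """Chance-corrected agreement between reference and predicted stage labels."""
    cm = np.zeros((n_classes, n_classes))
    for r, p in zip(y_ref, y_pred):
        cm[r, p] += 1                                  # confusion matrix
    n = cm.sum()
    p_o = np.trace(cm) / n                             # observed agreement
    p_e = (cm.sum(axis=1) @ cm.sum(axis=0)) / n ** 2   # chance agreement
    return (p_o - p_e) / (1 - p_e)
```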
Results

The following tables contain the results of the experiments described in the previous sections. Table 1 shows the results of Experiment 1, where each of the learning models is trained and evaluated using data from its respective local dataset. Table 2 shows the results of the second experiment, in which the resulting individual local models have been used to predict the reference scorings on each of the complete datasets. Results in Table 2 therefore involve performance evaluations of the models in an external validation setting, with the only exception of the main diagonal. The main diagonal in Table 2 represents the situation in which M(k) is used to predict W(k), resulting in a biased prediction, since M(k) was trained on TR(k) and TR(k) ⊂ W(k). Regardless, these results have been kept in Table 2 for reference. Results regarding the third experiment (ensemble predictions) are shown in Table 3. These are compared with the reference predictions of the individual local models, both in the local and external validation scenarios.

[Table 2. Results report agreement in terms of the kappa index with respect to the corresponding human clinical scorings for each dataset. The notation M(X) indicates that the model was trained on data from dataset X. Rows within each dataset correspond to the different tested neural network configurations as described in the experimental design. The main diagonal (in greyed background) shows the results when the model is predicting its own complete local dataset (biased prediction). https://doi.org/10.1371/journal.pone.0256111.t002]

The third column in Table 3 shows the reference local predictions achieved by the models in their respective testing sets (last column of Table 1). Subsequently, the fourth column shows the corresponding ranges of the inter-database external predictions as derived from the data in Table 2. These ranges exclude data from the main diagonal of Table 2, i.e. for dataset k, the performance of M(k) is excluded, hence reflecting performance exclusively when the individual models are presented with the target dataset in an external prediction scenario. The resulting average performance is shown in the fifth column. Finally, the last column of Table 3 shows the corresponding performance when the ensemble model is used to predict the corresponding complete dataset. Table 4 shows the global results obtained by aggregating the performance of the respective models across all the tested datasets. Specifically, each row in the second, third, and fourth columns of Table 4 is calculated by averaging the corresponding rows of columns three, five, and six in Table 3, i.e. across all six datasets. Columns five, six, and seven in Table 4 respectively represent the averaged inter-database performance differences between (i) the individual models in their respective local testing datasets and their averaged external dataset predictions, (ii) the individual models in their respective local testing datasets and the prediction of the ensemble model, and (iii) the averaged external dataset predictions of the individual models and the corresponding ensemble model prediction.

Best model (CNN vs CNN-LSTM)

According to Table 4, the proposed deep neural network approach achieves its best generalization performance across all the tested datasets with its CNN_LSTM_5 architectural variant. This configuration achieved the best overall performance in both the local and the external dataset prediction scenarios. The implementation of epoch sequence learning by concatenating the LSTM processing block to the output of the preceding CNN feature output layer results in an overall improvement of the model's performance. In general the CNN-LSTM configuration outperforms the respective CNN-only counterpart for the same sequence length in both the local and external generalization scenarios. Performance improves with increasing L, reaching a saturation value around L = 5, after which generalization of the model decreases again below the validation indices obtained for L = 3.
When using the CNN-only configuration, on the other hand, increasing the epoch sequence length does not translate into any improvement of the network's predictions. This result for the CNN-only configuration seems to be a consequence of learning overfitting, as Table 1 shows that performance on the respective training sets nevertheless keeps improving with higher values of L. For the CNN-LSTM configuration, however, the trend seems to be consistent between the respective training and generalization performances.

Signal prefiltering

Data from Table 4 seem rather to advise against the use of the optional filtering pre-processing step. A closer look at the results of Tables 1-3, however, does show an inconsistent effect across the individual tested datasets. In fact, the data could be regarded as inconclusive, or even favorable to the use of filters, with the notable exception of the results achieved for the Dublin dataset. As evidenced by the data in Tables 2 and 3, the filtering step seems to have a totally different effect on the predictability of this dataset as compared with the rest. Remarkably, however, notice that the difficulties of the models in predicting Dublin's data are only evidenced when the validation is carried out in an external prediction scenario. When using Dublin as the independent local testing dataset, the corresponding data in Table 1 do not show the pronounced performance decay seen in the previous setting. This result evidences the database variability problem, and thus the importance of expanding validation procedures beyond the usual local testing scenario, including a sufficiently heterogeneous and independent data sample from a variety of external sources.

Database generalization performance

With the expanded validation scenario in mind, and attending to the experimental data contained in Tables 1-4, the following general statements might be formulated:

1. The individual models' local-dataset generalization performance overestimates the actual inter-dataset external generalization. This is a consistent result across all the tested datasets and network configurations (see Table 3). The trend is globally evidenced in Table 4 as well, as the I vs II differences in the fifth column consistently show negative values. The downgrade in performance when evaluating external data is considerable, with the associated kappa indices decreasing in the range of 0.21 up to 0.34 for the tested architectural variants.

2. The proposed ensemble method improves external inter-dataset generalization performance. This result is also consistent across all experimental simulations, as evidenced in Tables 3 and 4. The improvement with respect to the performance of the individual models' estimations ranges between 0.08 and 0.10 in the related kappa indices (see the II vs III differences in column 7 of Table 4).

3. The individual models' local-dataset generalization estimation still represents an upper bound for the external inter-dataset generalization achieved by the ensemble approach. Similarly, the evidence is consistent across the data of Tables 3 and 4, with absolute kappa differences ranging between 0.12 and 0.26 in this case (see the I vs III differences in column six of Table 4).

Table 5 summarizes literature results reporting on the expected human inter-scorer variability for the sleep staging task. Only works reporting agreement in terms of the kappa index are included.
Results in Table 5 are structured depending on whether the experimentation implements a local or an external validation scenario, enabling a corresponding comparison with our results. In this regard, it is interpreted that a local validation was carried out when agreement among different human scorers belonging to the same center is compared. Usually this also involves the use of their own local database as the source for comparing their scorings. External inter-rater validations, on the other hand, refer to the cases in which experts compare their scorings using an independent dataset external to their center of origin. As the reference for our results the CNN_LSTM_5 architectural variant is used, which achieved the best overall performance in both the local and the external dataset prediction scenarios throughout our experimentation. According to the data in Table 5, our results in the local database generalization scenario are in the range of the expected human agreement under similar conditions (Table 5, κ = 0.78-0.83 ours vs 0.73-0.87 reference). Per dataset, the trend holds for HMC (Table 5, κ = 0.79 vs 0.74 reference) and SHHS (Table 5, κ = 0.82 vs 0.81-0.83 reference), while for ISRUC the automatic system performs somewhat below the expected expert levels (Table 5, κ = 0.78 vs 0.87 reference). For HMC, human reference agreement levels were estimated using a subset of five recordings that were rescored by a total of 12 clinical experts from our sleep lab. The resulting pair-wise kappa agreements between all the combinations of experts were then averaged. To minimize the possibility of a biased case representation, the five recordings were selected, out of the 154 available, using a structured approach based on their relative positioning in the human-computer kappa performance distribution (the 12.5th, 37.5th, 50th, 62.5th and 87.5th percentiles), where the original clinical expert scorings were used as reference. A similar selection approach was used in a previous study of the authors for the validation of an EEG arousal detection algorithm [58].

[Table 5. Indices of human inter-rater agreement reported in the literature compared with the performance achieved by our proposed deep-learning approach. Reported same-center agreements include κ = 0.73 [11], 0.77-0.80 [55], 0.84-0.86 [56] and 0.86 [54]; reported external agreements include κ = 0.46-0.89 [55], 0.72-0.75 [50], 0.62 [57], 0.76 [51], 0.68 [49], 0.63 [53], 0.58 [21], 0.75 [54] and 0.66 [23].]

No other studies reporting on human kappa agreement were found in the literature for the rest of the datasets used in this work. With respect to the external inter-database scenario, analysis of the literature shows a general decrease in human performance when compared to the respective local variability references. Specifically, two works, [54] and [55], allow comparison between local and external inter-scorer variability on the same dataset. In general, the results in these works follow the previously mentioned downgrading trend. In [55], however, an exception to this trend is reported in one of the two tested subgroups: 23 recordings scored using the R&K standard, and 21 recordings scored using the AASM rules. Specifically, for the first subgroup of 23 recordings, inter-scorer agreement seems to actually increase among scorers coming from different centers (from κ = 0.77, when scorers belong to the same center, up to κ = 0.85-0.89 [55]).
This result seems to represent an outlier, and for the second subgroup the results seem to support again the general downgrading trend reported in the literature (from κ = 0.80, when scorers belong to the same center, down to κ = 0.46-0.49 [55]). Unfortunately, baseline levels of human agreement for the external prediction scenario cannot be determined from the currently available literature for any of the databases used in this work. With that in mind, the external generalization performance of our automatic scoring approach still seems to fall within the range of the expected human agreement reported for other databases (Table 5, κ = 0.59-0.69 ours vs 0.46-0.89 in general).

Analysis in the context of other automatic approaches

Table 6 summarizes validation results of other automatic approaches reported in the literature. As in the previous case, results are structured according to whether the performance metrics were obtained in a local or an external validation scenario. Only studies reporting agreement in terms of the kappa index were considered. As the reference for our results the CNN_LSTM_5 architectural variant is used. According to Table 6, when comparing local generalization performance on the datasets used in this work, our approach falls within the upper range of the corresponding state-of-the-art results (Table 6, κ = 0.78-0.83 in this work vs 0.44-0.84 overall). In particular, the architecture presented in this work clearly outperforms the previous results reported by the authors using the exact same datasets (κ = 0.44-0.68 in [24]). Other works have reported results in the case of the Dublin, SHHS, and ISRUC datasets. On the Dublin dataset our approach (κ = 0.79) outperforms the results in the existing literature [59,60] (κ = 0.66-0.74), except in the case of [18] (κ = 0.84). Notice that [18] does not report results regarding external independent validation, and therefore overfitting to the local database cannot be ruled out. In the case of SHHS, our results (κ = 0.82) outperform those reported in [62] and [64] (κ = 0.73 and 0.81, respectively) and match those in [61]. In another previous work of the authors [63], slightly better results were reported for SHHS (κ = 0.83); however, the results in [63] share the limitation that validation was only carried out in the local dataset prediction scenario. No local performance reference has been found in the literature for the HMC, Telemetry, and DREAMS datasets. When considering performance in the local dataset scenario globally, including results reported on other benchmarks, the performance of our approach still holds in the upper range (Table 6, κ = 0.44-0.86 globally vs 0.78-0.83 in this work). Notice that the highest performance reported in [65] (κ = 0.86) was obtained using 50% of the data from a small dataset of only 8 recordings, and also did not include validation on external datasets. When considering the data on the external dataset validation, Table 6 shows a general global decrease in the performance of the automatic methods with respect to the corresponding indices in the local database validation scenario. Specifically, in all the works that allow comparison between local and external database generalization using the same algorithm [20,23,24,62,64], the decrease in performance is noticeable when tested using external independent datasets. This trend is consistent with the results of our experimentation, as well as with the data regarding human inter-rater agreement analyzed in Table 5.
Overall, the highest external database generalization performance reported in the literature has been described in [21] (κ = 0.72-0.77). These results correspond to the best model evaluated against a leave-one-out consensus of experts using one independent external dataset (IS-RC; see Table 1 in [21]). Unfortunately, local generalization performance in terms of kappa was not reported for the same model in that work. Therefore, it is not possible to evaluate possible differences between local and external database generalization using kappa as the reference. Very recently, however, generalization of the same algorithm was evaluated on two additional external datasets, in this case reporting a combined average performance of κ = 0.61, almost in line with the reference human levels in the corresponding cohort (κ = 0.66) [23], but underperforming with respect to the original values reported in [21] (κ = 0.72-0.77).

[Table 6. Indices of automatic scoring agreement reported in the literature in comparison with the results achieved by the proposed deep-learning approach, structured per dataset into local and external dataset prediction scenarios.]

Discussion

This study has addressed the extensive validation of a deep-learning based solution for the automatic scoring of sleep stages in polysomnographic recordings. Proper handling of the different sources of variability associated with the task has been one of the major traditional problems in the development of automated sleep staging systems. While clinical standard guidelines, such as those contained in the R&K [69] or AASM [1] manuals, aim for a certain level of homogenization in the recording and analysis process, inter-database differences are inevitable in practice. Data variability includes differences in the targeted patient populations, recording methods, or human-related interpretability; see [24] for a detailed discussion. Validation procedures reported in the literature have so far been limited. Performance of the reported methods is often extrapolated using small or non-independent datasets, mostly involving data limited to one particular database. Consequently, the performance is usually bounded to a particular data source, risking overfitting bias. Validation studies usually lack enough data heterogeneity to allow the establishment of valid generalizations. Our experimentation, together with the analysis of the existing literature, has shown the non-triviality of translating a model's estimated local generalization capabilities into the predictability of independent external datasets. When a system trained on some particular data is presented with similar examples gathered from an external database, performance tends to decrease. This result further motivates the necessity of considering external multi-database prediction as a fundamental, mandatory step in the validation of this class of systems. It also calls for a critical revision of the related existing literature in this regard. In this work we wanted to address this issue and challenge our design by evaluating its performance beyond data from a local database testing set (local generalization validation). For this purpose we have expanded our tests to include a wide selection of previously unseen external databases (external generalization validation). Effectively, by comparing both procedures it is possible to better extrapolate the real generalization performance of the method.
For that purpose we have intentionally aimed at selecting databases freely available online, in order to enhance the reproducibility of the experiments. In total, six independent public databases have been included in this study. To our knowledge this is the largest number of datasets to have ever been included in a study of this kind. In this challenging validation scenario, the deep learning architecture proposed in this work has shown good general performance as compared with both the human and the automatic references available throughout the literature. We refer to the respective analyses carried out around the data collected in Tables 5 and 6. Still, direct comparison of the results with other works has to be performed with caution. Effectively, even when referencing the same database source, studies might differ in the specific validation approach used, the number of involved recordings, or the particular patient conditions in their respective data selections or training partitions. The specific protocols and subject selection details of each particular study can be found in the referenced publications in Tables 5 and 6. Only the results provided in an earlier study of the authors [24] can be directly compared, as they address the exact same database benchmark. In this regard, the new architecture proposed in the current study outperforms the overall generalization capabilities previously achieved, both in the local (κ = 0.60 in [24] vs 0.80 in this work) and in the external (κ = 0.50 in [24] vs 0.63 in this work) validation scenarios. Results from our experimentation have shown that the new CNN+LSTM architecture design introduced in this work translates into considerably improved generalization performance. This improvement has been noticeable in both the local and the external database validation scenarios, and across all the tested configuration variants of the proposed neural network architecture. Experimental data have also pointed toward the convenience of adding epoch sequence learning mechanisms using an additional LSTM output block, as compared with the approach of increasing the length of the input pattern in the CNN-only configuration mode. Moreover, as the dimensionality of the CNN input space (4x3000xL) is much bigger than the dimensionality of the LSTM input feature space (50xL), the scalability of the solution also improves. Overall, the best performance achieved throughout our experimentation has corresponded to the CNN_LSTM_5 configuration. No further benefits from increasing the length of the sequence beyond five epochs have been noticed. On the other hand, our global results have cast doubt on the convenience of using the proposed optional signal pre-filtering step. This result seems counterintuitive at first sight, as filtering was hypothesized to contribute to the homogenization of the input data by cancelling out patient- and database-specific artifacts unrelated to the relevant neurophysiological activity, which could otherwise hinder generalization of the resulting models. However, the data have not shown a consistent effect across all the tested datasets. More research is hence needed to fully understand the underlying causes of the high inter-dataset variability when using the proposed filtering pipeline.
The same variability, on the other hand, evidences once again the importance of using a sufficiently heterogeneous and independent data sample, from a variety of external sources, to allow the establishment of valid and generalizable conclusions about the performance of an automatic scoring algorithm. Last but not least, our experimentation has shown that the use of an ensemble of local models leads to better generalization performance in comparison with the use of individual local models alone, hence confirming our preliminary results [24]. In this regard, it is a well-known result that increasing the amount and heterogeneity of the input training data is an effective approach to achieving better generalization in machine learning. The proposed ensemble approach, however, provides additional advantages in terms of scalability and flexibility of the design. That means the ensemble can be easily expanded by adding new local models when new training data, or new datasets, are available. Moreover, the possibility of developing models based on local datasets reduces the necessity to exchange patient data between different centers, which would otherwise be needed to increase the heterogeneity of one big learning dataset. This addresses potential issues in relation to the preservation of patient privacy. Altogether, our results thus motivate further exploration of the proposed ensemble-based design in future investigations. Some possible limitations of our study should be mentioned as well. Specifically, although the proposed ensemble strategy suggests a quantitative improvement in the generalization capabilities among independent databases, there is still notable degradation in the generalization performance with reference to the corresponding local testing datasets. The origin of this degradation must be studied in more detail, investigating alternative approaches to possibly reduce these differences. On the other hand, analysis of the literature regarding human inter-scorer variability has suggested that differences between local and external validation scenarios are likely to affect human experts in a similar manner. As the goal for an automatic scoring algorithm (in which the reference gold standard is based on subjective human scorings) is to achieve comparable agreement with respect to the human inter-scorer levels, it remains to be investigated how much of this degradation can actually be explained by the same intrinsic effect in human scoring. For this purpose, the reference levels of expected human agreement, and the corresponding local-external validation differences, need to be assessed for each particular database subject to validation. However, among the databases used in this study, reference levels of human scoring variability were only available for the HMC, SHHS, and ISRUC datasets, all of them constrained to a local validation scenario. Further investigation is therefore needed involving databases for which the reference levels of human agreement are available, including the external validation scenario. In the case of the SHHS dataset a random selection of 100 PSG recordings was performed; however, more data are available for this cohort, and the generalization effects of local models derived from larger samples could be studied. Some recent studies have suggested that the diversity of data plays a more important role in generalization than the amount of data itself [64]; however, our study did not include a specific protocol to test this hypothesis.
Future work might explore increasing the local sample size and adding additional datasets to the testing benchmark. Future research will also include the exploration of alternative ensemble combination strategies. The Naive-Bayes combiner [70], for example, might be an appealing approach to take advantage of the different output probability distributions associated with each individual model in the ensemble (a simple variant is sketched below). Better hyper-parameterization and data pre-processing methods must also be investigated. In particular, the variability of the results for the Dublin dataset with respect to the proposed filtering pipeline remains unclear, and needs to be studied in more detail. Finally, future work will be conducted toward addressing the effects of the input sampling rate (in this study signals were resampled to 100 Hz) and toward studying the contribution of the selected input signal derivations to the resulting model's generalization capabilities.
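As an illustration of that direction, the following sketches one simple soft-combination variant in the Naive-Bayes spirit, fusing per-model class posteriors under a conditional-independence assumption. The combiner in [70] is classically estimated from per-model confusion matrices, so this formulation and its prior handling are assumptions.

```python
import numpy as np

def naive_bayes_combine(posteriors, prior=None):
    """Fuse per-model class posteriors assuming conditional independence.

    posteriors: (n_models, n_classes) array of softmax outputs, one row per model.
    prior: optional (n_classes,) class prior; treated as uniform if omitted.
    """
    n_models, _ = posteriors.shape
    log_p = np.log(np.clip(posteriors, 1e-12, 1.0)).sum(axis=0)
    if prior is not None:
        # each posterior already carries one factor of the prior; correct for it
        log_p += (1 - n_models) * np.log(np.clip(prior, 1e-12, 1.0))
    p = np.exp(log_p - log_p.max())
    return p / p.sum()   # fused posterior over the 5 sleep stages
```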
Laparoscopic-assisted Soave operation for the treatment of Hirschsprung disease in children: 5 years of experience

Purpose: The purpose of this study was to summarize the clinical experience of the laparoscopic-assisted Soave operation for the treatment of Hirschsprung disease in children.

Methods: In total, 186 children with Hirschsprung disease participated in this study from January 2014 to January 2019. The Soave operation was used to treat Hirschsprung disease with laparoscopic assistance. Symptoms and signs were followed up at one week, one month, three months, six months, one year and every 1-2 years after the first year.

Results: All 186 children underwent laparoscopic surgery successfully, and none progressed to open surgery. During hospitalization and follow-up, there were 49 patients with complications, including 1 patient with an anastomotic leakage, 1 with an anal stricture, 5 with constipation recurrence, 5 with dirty defecation, 22 with enterocolitis, and 15 with perianal erosion. There were no complications such as abdominal bleeding, abdominal infection, ureter injury, adhesive intestinal obstruction, anastomotic stricture, or incontinence.

Conclusion: The laparoscopic-assisted Soave operation is a safe and feasible method for the treatment of Hirschsprung disease in children. This method has the advantages of less trauma and good cosmetic effects.

Introduction

Hirschsprung disease is one of the most common congenital digestive tract malformations in paediatric surgery, with an incidence of approximately 1/5000 [1-3]. The main symptoms are delayed defecation, vomiting, progressive abdominal distension and constipation. If timely and effective treatment is not carried out, poor digestive system function can result, which affects growth and can even lead to death [4]. Surgical treatment is often needed in children with megacolon, and the traditional surgical methods include the open Duhamel, Swenson and Soave operations, which involve considerable trauma [5,6]. With the development and application of laparoscopic surgery in paediatric surgery and the continuous improvement of megacolon surgery, the operation for Hirschsprung disease has gradually evolved to laparoscopy-assisted approaches combined with transanal pull-through. As early as 1994 [7] and 1995 [8], Smith and Georgeson reported on the Duhamel and Soave procedures assisted by laparoscopy. Subsequently, the Soave operation with laparoscopic assistance has been widely used in the clinic, and remarkable results have been achieved [9-11]. We retrospectively analysed clinical data for 186 children with Hirschsprung disease who underwent the Soave operation assisted by laparoscopy in our hospital over 5 years to summarize the experience and clinical efficacy of this approach.

Patients

We retrospectively analysed the clinical data of 186 children with Hirschsprung disease in our hospital from January 2014 to January 2019, including preoperative, intraoperative, postoperative and follow-up data. Patients met the inclusion criteria if they underwent the Soave operation assisted by laparoscopy. Patients were excluded from this study if they 1) underwent other surgical methods, 2) had severe liver and kidney dysfunction or complex congenital heart disease, or 3) refused to sign the consent form for surgery or refused to comply with the follow-up schedule. All the children had symptoms of delayed defecation and long-term or repeated constipation.
Hirschsprung disease was clearly diagnosed by transanorectal manometry, barium colonography, intraoperative frozen-section pathology and postoperative paraffin pathology examination. Among the patients, 135 had the common type, 43 the long segment type, and 8 the total colon type. There were 122 males and 64 females. Age ranged from 2.3 months to 6.3 years, and weight ranged from 3.5 kg to 26.6 kg (Table 1). A routine clinical examination was performed before the operation, including an electrocardiogram, chest radiography, cardiac colour Doppler ultrasound and blood examination.

Preoperative preparation

Normal saline was used to clean and wash the intestines for 7-10 days before the procedure.

Surgical method

After successful anaesthesia, each patient was placed in a flat position; routine surgical field disinfection was performed, and the lower extremities were wrapped in sterile drapes. A pneumoperitoneum needle was inserted into the abdominal cavity at the lower edge of the umbilicus, and an artificial pneumoperitoneum was established by slowly injecting CO2 gas. The pneumoperitoneum pressure was generally maintained at 1.2-1.6 kPa. After the pneumoperitoneum needle was removed, a 5-mm inner-diameter laparoscope was placed at the puncture point, and operating forceps were placed at the intersection of the outer edge of the left rectus abdominis muscle and the umbilical horizontal line, and in the right lower abdomen. The extent of the intestinal lesions was examined by laparoscopy to determine the spasmodic segment and the distal and proximal transitional segments. The seromuscular tissue of the upper rectum was cut and sent for intraoperative frozen biopsy, and the pathological report on the submucosal and intermuscular ganglion cells confirmed the diagnosis of congenital megacolon. The bowel was mobilized from the lesioned segment distally along the upper rectum to 1-2 cm below the peritoneal reflection; at the proximal end the bowel appeared soft externally, and the calibre of the intestinal tube was close to that of normal intestine. Proximal biopsies were taken from the proximal normal intestine: the seromuscular tissue of this segment was cut and sent for intraoperative frozen biopsy, and the presence of submucosal and intermuscular ganglion cells was reported pathologically. After fully releasing the splenic flexure of the colon, the left side of the peritoneum, the lateral ligament of the descending colon and the lateral ligament of the sigmoid colon, the proximal bowel could be brought down to the pelvic floor without tension. A length of 5-10 cm of "healthy" colon with ganglion cells was resected. Then, we drained the gas from the abdominal cavity and began the anal surgery. Starting from the dentate line of the rectum, the rectal mucosa was incised in an oblique ring, 0.8 cm above the dentate line on the posterior wall and 1.5 cm on the anterior wall, and the rectal mucosa was dissected free proximally up to the abdominal cavity. We circularly incised the muscle sheath and resected part of the posterior wall of the rectal muscle sheath in a V shape. The intestine was pulled out of the abdominal cavity with no tension and no torsion. At the proximal end, the serous layer of the pulled-down segment of the colon was sutured to the stump of the rectal muscle sheath with 4-0 absorbable sutures. The colon was cut off 0.5 cm distal to this anastomosis, and interrupted sutures were placed between the cut end of the colon and the rectal mucosa.
The artificial pneumoperitoneum was re-established; no active bleeding was found in the abdominal cavity, and the pulled-through colon was not twisted. The pneumoperitoneum gas was discharged, and the abdominal incision was closed.

Postoperative management

After the recovery of intestinal function, passage of flatus and defecation, the children began to take fluids and gradually returned to a normal diet after 5 days. Anal dilatation began 14 days after the operation, once a day, with each session fixed at 15 to 20 min, increasing by 1 calibre (1 mm) every week and lasting for 3 months.

Postoperative follow-up

The children were followed up by telephone and outpatient service. The follow-up times were one week, one month, three months, six months, and one year after the operation, and then every 1-2 years thereafter. Instances of constipation, faecal incontinence, defaecation problems, enteritis, etc., were recorded.

Results

All 186 children underwent laparoscopic surgery successfully, and no cases progressed to open surgery. During hospitalization and follow-up, there were 49 patients with complications, including 1 with anastomotic leakage, 1 with anal stricture, 5 with constipation recurrence, 5 with dirty defecation, 22 with enterocolitis, and 15 with perianal erosion. The patient with the anal stricture, which was caused by the parents not taking the time to perform anal dilatation, was cured by correct and regular anal dilatation after 1 year. Of the 5 patients with constipation recurrence, 2 were cured by conservative treatment, and 3 were cured by a repeat open operation for the transition zone. The defecation symptoms of the 5 children with dirty defecation gradually disappeared with age and long-term anal sphincter exercise training. The children with enterocolitis were cured after conservative treatment, such as anti-infective therapy, cleansing enemas, and probiotics. The children with perianal erosion were cured after strengthening perianal nursing, keeping the perianal skin dry and clean, protecting the perianal skin with topical drugs, taking oral intestinal astringent drugs and reducing stool moisture.

Discussion

Resection is the main treatment for Hirschsprung disease; its purpose is to remove the diseased intestinal canal and pull the normally innervated intestinal canal down to the anus for anastomosis, so as to maintain normal function of the anal sphincter and achieve continuity in the reconstruction of the digestive tract [12]. In recent years, the one-stage radical transanal Soave operation for megacolon [13] has been widely carried out because laparotomy is avoided; there is also less trauma, less bleeding and rapid postoperative recovery. However, it is only suitable for the short segment type and for some infants with the common type of megacolon. The application of laparoscopy can resolve the technical limitations of the transanal Soave approach and reduce the trauma of laparotomy, which highlights its minimally invasive features. Although laparoscopic surgery has many advantages, because of the small abdominal cavity in children, abdominal distension often affects the laparoscopic visual field, resulting in abdominal organ injury and defective judgement of the intestinal tube, and impeding normal laparoscopic operation [14]. We took the following measures to reduce abdominal distension and the difficulty of the operation, which ensured a smooth operation and avoided or reduced conversion to laparotomy.
First, we chose an experienced anaesthesiologist to avoid prolonged mask oxygen supply and repeated tracheal intubation. If there is obvious gas accumulation in the stomach, the position of the gastric tube can be adjusted appropriately, keeping the gastric tube unobstructed to expel the gas from the stomach. Second, an anal canal or adult gastric tube was inserted through the anus, past the narrow segment and into the dilated segment, to discharge the intestinal gas. Third, the small intestine often accumulates gas and dilates in the total colon type of megacolon; an epidural catheter can be inserted through the abdominal wall into the dilated small intestine to eliminate the gas in the dilated small intestine and relieve abdominal distension. Through the above measures to eliminate abdominal distension, laparoscopic surgery was successfully completed for all the children in this study, and no cases were converted to open surgery. Enterocolitis is the most common and serious postoperative complication of Hirschsprung disease, with an incidence of 2-33% [15,16]. Some scholars believe that the occurrence of enterocolitis is related to incomplete relief of colorectal obstruction [17], but we have observed that enterocolitis still occurs in most children despite a smooth operation, standard anal dilatation after the operation, and no obvious stricture or obstruction at the distal end of the colon. Therefore, we believe that in addition to colon obstruction, low systemic or intestinal immunity plays an important role, and that it is important to reduce surgical trauma and to avoid an imbalance of the intestinal flora. All the affected children in this study were cured after conservative treatment, and some of the children with recurrent enterocolitis recovered gradually with age and improvement in immune function. Dirty defecation (soiling) is a common postoperative complication of Hirschsprung disease. The main reasons are injury to the anal sphincter or excessive traction of the anus during the operation. When the colon is pulled out of the anus, the anal sphincter can be damaged, leading to dysfunction of the anal sphincter. The occurrence of some dirty defecation is also related to congenital anal sphincter dysplasia [18]. Therefore, manipulation in the perineum should be gentle during the operation to avoid excessive traction of the anus; injury to the levator ani muscle should also be avoided during laparoscopic free retroflexion of the intestine. In this study, the children with dirty defecation were not found to have congenital dysplasia or congenital loss of the anal sphincter by MRI, and all were cured after exercise training of the anal sphincter. The causes of recurrence of constipation after the operation are as follows [19,20]:

1. The aganglionic rectal muscle sheath was retained too long; there was no incision, or an insufficient incision, of the posterior wall of the rectal muscle sheath during the operation.

2. The resection of the diseased intestine was insufficient, e.g. a long segment megacolon was mistaken for a short segment or ultra-short segment megacolon.

3. Secondary ganglion cells developed poorly due to improper operation, proximal intestinal injury, or ischaemia.

4. During the operation, the abdominal cavity was widely dissected and blood vessels were damaged, resulting in spasm caused by an insufficient blood supply to the internal sphincter and in anal stricture, leading to constipation recurrence.

5. Enterocolitis is also an important cause of constipation recurrence.
In this study, of the 5 patients with constipation recurrence, 2 improved after the operation with conservative treatment. One patient had recurrent, persistent constipation after the surgery, and radiography showed colon dilatation; considering that the resection scope had not been sufficient, we resected the dilated segment. In one case, frozen-section pathology revealed a long-segment megacolon; total colonic involvement was pathologically confirmed in paraffin-embedded samples after the operation, and the radical pull-through operation was performed again. One case of Hirschsprung disease was complicated with megacolon-like disease with insufficient intestinal resection, resulting in a recurrence of constipation, which was cured by reoperation. Therefore, the cause of abnormal defecation after the operation must be found. After excluding anastomotic stricture, it should be clarified whether development of the intestinal nerve is normal and whether the disease is complicated by megacolon-like disease. It has also been suggested that the pathological diagnostic criteria and pathologists' experience should be emphasized in the radical resection of megacolon.

There are several limitations of our study. First, this was a single-centre study, and more research from multiple centres is needed to assess the effectiveness and complications of this technique. Second, this study was a retrospective review without a control group.

Conclusion

In conclusion, laparoscopic-assisted Soave surgery is a safe and feasible method for the treatment of Hirschsprung disease in children. The cosmetic results are impressive, and the follow-up results are promising. The follow-up time was 3.5 years (range, 3 months to 5 years).

Declarations

Ethics approval and consent to participate: This study was approved by the ethics committee of Fujian Maternity and Child Health Hospital and strictly adhered to the tenets of the Declaration of Helsinki. In addition, all patients' guardians signed an informed consent form before the operation.

Consent for publication: Not applicable.

Availability of data and materials: The datasets used and analysed during the current study are available from the corresponding author on reasonable request.

Competing interests: The authors declare that they have no competing interests.

Funding: No funding was received.

Authors' contributions: LMK and FYF designed the study, collected the clinical data, performed the statistical analysis, participated in the operation, and drafted the manuscript. ZB, LY, LOM, BJX and WDM participated in the operation and revised the article. All authors read and approved the final manuscript.
Bone Density Distribution in the Cervical Spine

Study Design: Retrospective cohort study.

Objective: Given changes in bone density induced by degenerative disease, general measures of bone health (i.e., DEXA) are inadequate to evaluate bone density in surgical areas of interest. Regional differences in HU in the cervical spine may influence surgical strategies. The purposes of our study were to determine whether cervical Hounsfield units (HU) vary by level, examine their relationship with age, comorbidities, and alignment, and propose a technique to measure HU in the lateral masses.

Methods: Two hundred twenty-four patients with degenerative spine pathology with a cervical computed tomography were included (2015-2019). Measurements were performed in each vertebral body (C2-T1; mid-axial, anterior-axial, posterior-axial, mid-coronal, and mid-sagittal) and 2 regions of the lateral masses (C3-C6; mid-coronal, mid-sagittal). To evaluate reliability, 6 observers each measured 355 HU values, with inter-rater reliability assessed with intraclass correlation coefficients. Correlations of HU with age, BMI, comorbidities, and cervical alignment were evaluated.

Results: Bone density differed by level, with the lowest HU scores in the lower cervical spine (C6-T1) (P < .001). No correlations were found between LM HU and age, BMI, CCI, or alignment (P > .05). Increased kyphosis was weakly correlated with VB HU, while age and CCI showed moderate correlations with VB HU at all levels (P < .001). ICCs for HU measurements were good to excellent for the VBs, but poor to moderate for the LMs.

Conclusion: Bone is least dense in the lower cervical spine. HU scoring is not reliable in the lateral masses. We recommend that a level-specific approach to bone density is considered in surgical planning.

Introduction

The density of tissue on computed tomography (CT) reconstructions can be measured in Hounsfield units (HU), which are a measure of tissue attenuation. The clinical prevalence of CT scans has fostered interest in the use of HU as a marker of bone density. 1 Certain authors have advocated for the use of cervical or lumbar CT scans as an "opportunistic" manner of screening for osteoporosis. 2 However, while the relationship between osteoporosis (diagnosed with DEXA) and low HU values on CT scan is well-established, 1-3 the patient's general bone health may not correlate with the bone density in the surgical zone of interest. In accordance with Wolff's law, bone remodels under stress. Thus, patients with degenerative lumbar spine disease have been shown to have higher bone density in the lumbar spine, despite having femoral neck DEXA T-scores consistent with osteoporosis. 4,5 In this context, the measurement of HU in the surgical region of interest (i.e., the levels for which a surgical procedure is planned) may be a more accurate method of evaluating regional bone density and predicting complications. 6-8 In the lumbar spine, HU have been shown to vary by vertebral level. 3 Similar studies have not been performed for the cervical spine. Understanding trends in bone density of the cervical spine (as measured by HU scores) may be of clinical importance given the associations between HU and mechanical complications. 9 Furthermore, HU scores have not been attempted in the lateral masses, which may be of interest for posterior cervical fusion constructs. Thus, the purposes of this analysis were to (1) define trends in bone density of the degenerative cervical spine, (2) examine the relationship between cervical bone density, age, BMI, comorbidities, and alignment, and (3) develop a reliable technique to measure HU scores in the lateral masses.
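The basic measurement underlying such studies — the mean HU within a circular region of interest (ROI) on a CT slice — is easy to illustrate. The sketch below is our own illustration, not the clinical software used in studies of this kind; the function name and parameters are ours, and it assumes the axial slice is already available as a 2-D array of HU values.

```python
import numpy as np

def mean_hu_circular_roi(ct_slice, center_rc, radius_px):
    """Mean HU inside a circular ROI on one axial CT slice.

    ct_slice  : 2-D numpy array of Hounsfield units
    center_rc : (row, col) of the ROI centre, in pixels
    radius_px : ROI radius, in pixels
    """
    rows, cols = np.ogrid[:ct_slice.shape[0], :ct_slice.shape[1]]
    r0, c0 = center_rc
    mask = (rows - r0) ** 2 + (cols - c0) ** 2 <= radius_px ** 2
    return float(ct_slice[mask].mean())

# Toy example: a 100x100 slice of cancellous bone (~350 HU) with noise.
rng = np.random.default_rng(0)
slice_hu = rng.normal(350, 40, size=(100, 100))
print(mean_hu_circular_roi(slice_hu, center_rc=(50, 50), radius_px=10))
```

In practice the ROI would be placed on medullary bone only, away from cortex and sclerosis, as the protocol below describes.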
Methods

Patient Sample

This study was approved by the institutional review board. A retrospective chart review was performed to identify all consecutive adult patients (>18 years old) who underwent a cervical spine CT ordered by one of six spine surgeons (May 2015 - December 2019) for evaluation of cervical-spine-related complaints. Patients with outside cervical spine CTs, previous cervical spine instrumentation, pathophysiologic cervical spine conditions (e.g., Klippel-Feil), or osteoporosis (defined as a medical history of osteoporosis or current or past treatment with osteoporosis medications [excluding supplements]) were excluded.

Data Collection

Demographics and medical history were retrieved from the electronic medical record. All components of the Charlson Comorbidity Index (CCI) were collected from the medical record and used to calculate the CCI for each patient as a composite measure of their chronic health status. 10 The American Society of Anesthesiologists (ASA) class was collected for each patient as an additional composite measure of pre-anesthesia medical comorbidities. 11 Each patient underwent presurgical AP and lateral cervical spine radiographs in a neutral alignment position, from which the C2-C7 sagittal Cobb angle, the T1 slope [T1S], and the C2-C7 sagittal vertical axis [cSVA] were measured by an independent research technician using dedicated measurement software (Surgimap, Nemaris Inc, New York, NY). 12

CT scans were performed using a 16-MDCT scanner (MX8000, Philips Healthcare, Andover, MA). For the assessment of bone mineral density, we modified a previously published protocol by Schreiber et al. 1,6 As described by this protocol, HU scores were obtained on circular areas of medullary bone that excluded cortical bone or areas of sclerosis on three thin-cut (1.25 mm) axial CT slices. The protocol was developed in the lumbar spine and consisted of measurements on three axial CT slices (inferior, mid, and superior). Given that the vertebral bodies in the cervical spine are smaller and shorter, we modified this protocol by obtaining HU scores from axial, coronal, and sagittal multiplanar reconstructions. 13-15
Hounsfield units were measured in 5 regions of each vertebral body (C2-T1; mid-axial, anterior-axial, posterior-axial, mid-coronal, and mid-sagittal). Mid-axial was defined as the CT slice at the region halfway between the inferior and superior endplates. Mid-coronal was defined as the CT slice halfway between the anterior and posterior aspects of the vertebral body, and mid-sagittal was defined as the slice halfway between the lateral aspects of the vertebral body (Figure 1). Similarly, the lateral masses (left and right, C3-C6) were measured at two regions: mid-coronal and mid-sagittal. The mid-coronal region was defined using the CT slice halfway between the anterior and posterior aspects of the lateral mass, along an anterior-posterior axis in line with the inferior and superior facet joints. The mid-sagittal region was defined as the slice halfway between the medial and lateral aspects of the lateral mass (Figure 2). Notably, mid-axial LM measurements were excluded, as the relative thickness of the medial/lateral and anterior/posterior cortical walls of the LM made it near-impossible to measure the HU values at these regions without including cortical or sclerotic bone.

The intraobserver reliability for HU measurements using the described technique has already been well established. 1,6 Measurements were performed by six members of the research team (orthopaedic residents and spine surgery fellows). Six persons were tasked with performing measurements as we aimed to find an HU method with high inter-rater reliability, especially with regard to the lateral masses. Thus, before all CTs were assessed, the first five patients were independently measured by the six observers, comprising 355 independently measured HU regions (71 regions per patient, 5 patients). After reliability was established, the rest of the CT scans were assessed.

Statistical Analysis

Given that a major purpose of this study was to determine level-specific average HU scores in the cervical spine, we chose to eliminate outliers. For each set of measurements, the top and bottom five values were excluded and analyses were performed. The 3 measured regions in each VB and the 2 measured regions in each LM were combined and averaged to obtain the "Total VB" and "Total LM" HU values, respectively (i.e., Total C2, Total LM C5, etc.). These means were utilized for correlation and comparison analyses. One-way ANOVA was used to compare HU measurements between vertebral levels and to compare HU means across age groups. Post-hoc pairwise comparisons were also performed between HU values of the vertebral levels, with Bonferroni corrections applied. Correlations of HU values with age, body mass index, radiographic alignment, and medical comorbidities were assessed using Spearman's coefficients. Inter-observer reliability for the measurement technique was assessed using two-way random-effects models to calculate intraclass correlation coefficients (ICCs). A value of P < .05 was considered significant. Portions of the data were managed using REDCap (Research Electronic Data Capture) hosted by the [BLINDED] Medicine Clinical and Translational Science Center under the following grant: [BLINDED]. 16 Statistical analyses were performed using IBM SPSS Version 25.0 (Armonk, NY).
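The comparisons just described can be illustrated with standard scientific-Python tools. The sketch below is our own, with synthetic data standing in for the study's measurements; simple Bonferroni-corrected t-tests stand in for SPSS's post-hoc machinery.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic per-patient "Total VB" HU values for three levels (illustration only).
hu = {
    "C4": rng.normal(389, 60, 200),
    "C6": rng.normal(300, 55, 200),
    "T1": rng.normal(232, 50, 200),
}
age = rng.uniform(30, 80, 200)

# One-way ANOVA across vertebral levels.
f, p = stats.f_oneway(*hu.values())
print(f"ANOVA: F={f:.1f}, p={p:.3g}")

# Bonferroni-corrected pairwise comparisons.
levels = list(hu)
pairs = [(a, b) for i, a in enumerate(levels) for b in levels[i + 1:]]
for a, b in pairs:
    t, p_raw = stats.ttest_ind(hu[a], hu[b])
    print(f"{a} vs {b}: corrected p={min(1.0, p_raw * len(pairs)):.3g}")

# Spearman correlation of HU with age at one level.
rho, p_rho = stats.spearmanr(age, hu["C6"])
print(f"Spearman rho={rho:.2f}, p={p_rho:.3g}")
```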
Results

Reliability of Measurement Method

Measurement of vertebral bodies was performed with good to excellent inter-rater reliability. 17 The intraclass correlation coefficients for the three VB regions were as follows: 0.82 mid-axial (good), 0.88 mid-coronal (excellent), and 0.88 mid-sagittal (excellent). In contrast, the reliability of the lateral mass measurements was poor to moderate: 0.46 mid-sagittal (poor) and 0.61 mid-coronal (moderate).

Bone Density Trends in the Cervical Spine

Mean bone density was different at each level (P < .001, Figure 4). While HU values were relatively close from C2-C5 (ranging between 370.2 [C2] and 389.0 [C4]), density gradually decreased through T1, which had the minimum average HU (232.3). Post-hoc comparisons showed that the C2 VB was significantly different (P < .05 with Bonferroni correction applied) compared to C6-T1 and all LMs, C3 was significantly different compared to C6-T1 and LM3-5, C4 was significantly different compared to C6-T1 and all LMs, and C5 was significantly different compared to C6-T1 and LM3-5. The lower VBs (C6-T1) were significantly different compared to all VBs and LMs.

Regarding the lateral masses, the mean HU differed by level but fell within a narrow range (383.3-480.0) (Figure 5). On one-way ANOVA, the lateral masses had significantly higher HU scores compared to the vertebral bodies (P < .001). Post-hoc comparisons showed that LM3 was significantly different compared to LM4 and LM6, LM4 was significantly different compared to all LMs, LM5 was significantly different compared to LM4, and LM6 was significantly different compared to LM3 and LM4.

Relationship Between Clinical Factors and Bone Density

Significant negative correlations were found between VB HU, age, and CCI (P < .05) (Table 2). The largest negative correlations between age and VB HU were found in the uppermost and lowermost portions of the cervical spine. A similar correlation pattern was found between VB HU and CCI (Table 2). When analyzed categorically, a significant decrease in mean bone density was seen with age for all vertebral bodies, with the exception of C4 (P < .05 for all other comparisons). Increasing ASA class was also associated with a significant decrease in bone density, with the exception of T1 (P < .05 for all other comparisons).

Bone density in the lateral masses (at any level) was not associated with age, BMI, CCI or ASA class on correlation or categorical analyses.

Relationship Between Alignment and Bone Density

A more kyphotic cervical spine was weakly correlated with higher bone density in the mid-cervical spine (Table 2). T1S and cSVA were not correlated with bone density. Alignment was not correlated with HU scores in the lateral masses (Table 2).

Discussion

Our retrospective review of 201 CT scans in patients with symptomatic cervical spine pathology found that vertebral body HU score varies by level, with the lowest scores in the inferior cervical spine. Age and comorbidity burden were inversely correlated with the density of the vertebral bodies, especially in the upper and lower cervical spine. Regarding alignment, we found that a more kyphotic cervical spine was associated with higher HU scores, with the strongest correlations in the mid-lower cervical spine. In contrast, HU scores in the lateral masses were not correlated with any factor, including alignment. Finally, we showed that the density of the lateral masses is higher than that of the vertebral bodies; however, the lateral mass analyses must be interpreted cautiously in the context of poor inter-observer reliability.
We used a novel technique in our measurement of HU scores in cervical vertebral bodies, which was found to have similar interobserver reliability compared to traditional measurement techniques. 4 While the majority of HU investigations have exclusively used axial cuts, 1,2,6,9 we incorporated sagittal, coronal, and regional axial HU scores into our protocol. 14,15 Previous authors have attempted similar analyses in the lumbar spine, finding that at certain levels, sagittal HU scores were more strongly correlated to DEXA than axial scores (though the differences in correlation were minimal). 3 For the purposes of measuring HU scores, whether one CT reformat is superior to another remains to be seen.

Our HU values are similar to previous investigations of HU scores in the cervical spine. 2,7 Wang et al 7 investigated a smaller cohort (91 patients) who underwent one-level ACDF, reporting mean C3-C7 HU values remarkably similar to those reported in our study (C2 and T1 were not measured). Colantonio and colleagues examined 149 cervical spine CTs in the United States Department of Defense database, finding a mean C4 axial HU value of 452±116 in healthy subjects, compared to 320±82 in those with osteoporosis (as diagnosed by DEXA). Notably, they performed their analyses in all patients who had a previous cervical CT scan, not necessarily patients presenting with cervical spine pathology. Thus, we believe that our investigation currently represents the best possible "reference values" for HU in patients with degenerative cervical spine pathology. However, there is a caveat to this statement: given that the primary purpose of our study was to establish average level-specific HU values in the cervical spine, we chose to eliminate the top 5 and bottom 5 measures of bone density. While this may have made our average HU values more accurate, it also increased our interobserver reliability. To this end, we caution readers that very high or low HU scores (i.e., several standard deviations lower or higher than the average values we report) may not be reliable.

A similar statement can be made about using HU scoring for the lateral masses. Ours is the first study to attempt HU scoring of the lateral masses. Anecdotally, we found these measurements to be quite difficult given the high ratio of cortical to medullary bone in the lateral masses. The difficulty in assessing lateral mass bone density was evident in the poor to moderate inter-observer reliability of our measurements. Thus, we do not believe HU scores should be used to evaluate the density of the lateral masses. However, there is still a clinical need to evaluate local bone quality in the lateral masses (e.g., to prevent instrumentation failure). 18 This may be especially relevant for long posterior fusion constructs extending into the cervical spine. Future avenues of research could analyze new modes of assessing bone quality, such as cortical thickness or cortical-to-medullary ratios. The bone quality of the cervical pedicle may also be an area of research to allow for comparisons of cervical posterior fixation methods.

In our study, we found that the least dense bone was found in the lower cervical spine. Notably, the relationship of HU scores with mechanical outcomes is a well-researched topic, with studies showing correlations between HU scores and cage subsidence in ACDF and lateral lumbar fusions, 7,9 loss of pedicle screw fixation, 18,19 proximal junctional kyphosis, 6,20 and lumbar pseudarthrosis. 8
Thus, the fact that HU scores vary by level has several potential clinical consequences. First, anterior constructs ending in the lower cervical spine (the majority of 3- and 4-level ACDFs) are seated in the least dense region of bone. This may help to explain the higher rate of pseudarthrosis in multilevel constructs, 21,22 and especially why multilevel fusions in the lower cervical spine demonstrate a larger loss of lordosis than those in the mid-upper cervical spine. 23 Second, while age and a higher comorbidity burden were correlated with decreased bone density at nearly all cervical levels, the fact that density varied shows that a level-specific (and not necessarily patient-specific) approach should be taken when determining the risk for mechanical complications. 7,24 For example, the C6-T1 region in a healthy 50-year-old may have similar HU scores to the C3-C5 region in a sick 70-year-old, which may in turn influence surgical strategy or the risk-benefit discussion. We recommend that surgeons evaluate the bone density in the surgical region of interest when planning fusions of the cervical spine. These considerations become even more complex when the effect of alignment is taken into account. Wolff's law states that bone adapts to stress, organizing its trabeculae to support load. In this sense, the fact that posterior-axial HU scores were higher than anterior-axial HU scores in our population (who had an average lordotic alignment of 4°) should not be surprising. Wolff's law could also explain why patients with kyphosis began to show a reversal of this relationship, with similar HU values in the anterior-axial and posterior-axial regions of C5 and C6. Kyphotic cervical spines were also associated with higher HU scores overall, with the correlations strongest in the C5-C7 regions, where degenerative disc disease is most likely to occur. 25 This finding only strengthens our recommendation that surgeons consider the HU scores in the surgical region of interest, and avoid assuming that bone density throughout the spine is the same.

This study has several limitations. First, we did not have gold-standard measures of bone density (DEXA or quantitative CT scans) available for all patients, which precludes our ability to relate cervical spine HU scores with these metrics. However, several lumbar spine studies have shown that DEXA may overestimate the bone density at the surgical region of interest. 4
Thus, we do not believe that the inclusion of DEXA scores in our analysis would affect our conclusions. Second, while we excluded patients with a clinical history of osteoporosis or a history of taking osteoporosis medications, we could not account for patients with undiagnosed osteoporosis. Regardless, our patient population is a real-world example of patients presenting for evaluation by a spine surgeon, many of whom will present without a diagnosis of osteoporosis. Third, while the purpose of this study was not to evaluate risk factors for decreased bone density in the cervical spine, the small number of smokers, patients with rheumatologic disease, and non-white subjects precluded meaningful analysis of these risk factors. Thus, our conclusions may not apply to patients dissimilar to our cohort. Fourth, as stated above, given that we eliminated outliers in order to obtain the most accurate HU values, we caution readers that very high or low HU scores (i.e., several standard deviations lower or higher than the average values we report) may not be reliable. Fifth, our findings only apply to patients with degenerative spine pathology presenting for cervical-spine-related complaints; we cannot comment on HU values or trends in patients with healthy cervical spines. Furthermore, we did not include the "degree of degeneration" in the cervical spine. While we did note that kyphosis was associated with an increase in HU, this is an imperfect surrogate for disc degeneration. Further study on the specific relationship between HU values and disc degeneration is merited.

In conclusion, our investigation of 201 CT scans in patients with cervical pathology demonstrates that cervical HU scores vary by vertebral level. The lowest density was found in the lower cervical spine, which may partially explain the high rate of mechanical complications in multilevel anterior fusions. Further investigation on preventing these complications should incorporate HU scoring. Age, comorbidities, and alignment were all found to influence HU scores of the vertebral bodies, which demonstrates that a level-specific approach to bone density should be considered when evaluating a patient for surgery. Future studies will be needed to determine if such an approach will influence clinical and mechanical outcomes. Finally, we encourage continued research on methods to measure regional bone density in the lateral masses, as HU scoring may not be appropriate.

Figure 2. Measurement regions for the lateral masses.

Figure 4. Total HU scores of vertebral bodies. HU scores are significantly different, with a maximum score at C4 and a minimum score at T1 (P < .001). Post-hoc comparisons showed that the C2 VB was significantly different (P < .05 with Bonferroni correction applied) compared to C6-T1 and all LMs, C3 was significantly different compared to C6-T1 and LM3-5, C4 was significantly different compared to C6-T1 and all LMs, and C5 was significantly different compared to C6-T1 and LM3-5. The lower VBs (C6-T1) were significantly different compared to all VBs and LMs. LM3 was significantly different compared to LM4 and LM6, LM4 was significantly different compared to all LMs, LM5 was significantly different compared to LM4, and LM6 was significantly different compared to LM3 and LM4. Standard deviations are represented by error bars.

Figure 5. Total HU scores of lateral masses. The HU scores of the lateral masses were higher than those of the vertebral bodies (P < .001).
Design and Analysis of Nonbinary LDPC Codes for Arbitrary Discrete-Memoryless Channels

We present an analysis, under iterative decoding, of coset LDPC codes over GF(q), designed for use over arbitrary discrete-memoryless channels (particularly nonbinary and asymmetric channels). We use a random-coset analysis to produce an effect that is similar to output-symmetry with binary channels. We show that the random selection of the nonzero elements of the GF(q) parity-check matrix induces a permutation-invariance property on the densities of the decoder messages, which simplifies their analysis and approximation. We generalize several properties, including symmetry and stability, from the analysis of binary LDPC codes. We show that under a Gaussian approximation, the entire q − 1 dimensional distribution of the vector messages is described by a single scalar parameter (like the distributions of binary LDPC messages). We apply this property to develop EXIT charts for our codes. We use appropriately designed signal constellations to obtain substantial shaping gains. Simulation results indicate that our codes outperform multilevel codes at short block lengths. We also present simulation results for the AWGN channel, including results within 0.56 dB of the unconstrained Shannon limit (i.e., not restricted to any signal constellation) at a spectral efficiency of 6 bits/s/Hz.

I. INTRODUCTION

In their seminal work, Richardson et al. [29], [28] developed an extensive analysis of LDPC codes over memoryless binary-input output-symmetric (MBIOS) channels. Using this analysis, they designed edge-distributions for LDPC codes at rates remarkably close to the capacity of several such channels. However, their analysis is mostly restricted to MBIOS channels. This rules out many important channels, including bandwidth-efficient channels, which require nonbinary channel alphabets. To design nonbinary codes, Hou et al. [18] suggested starting off with binary LDPC codes, either as components of a multilevel coding scheme or in a bit-interleaved coded-modulation setting. In our analysis of coset LDPC codes, we average over all possible realizations of the coset vector. Our approach is similar to the one used by Kavcić et al. [19] for binary channels with ISI. Random-coset analysis enables us to generalize several properties from the analysis of binary LDPC, including the all-zero codeword assumption (note that in [38], an approach to generalizing density evolution to asymmetric binary channels was proposed that does not require the all-zero codeword assumption), and the symmetry property of densities.

In [9] and [35], approximations of density-evolution were proposed that use a Gaussian assumption. These approximations track one-dimensional surrogates rather than the true densities, and are easier to implement. A different approach was used in [6] to develop one-dimensional surrogates that can be used to compute lower bounds on the decoding threshold. Unlike binary LDPC codes, the problem of finding an efficient algorithm for computing density evolution for nonbinary LDPC codes remains open. This is a result of the fact that the messages transferred in nonbinary belief-propagation are multidimensional vectors rather than scalar values. Just storing the density of a non-scalar random variable requires an amount of memory that is exponential in the alphabet size. Nevertheless, we show that approximation using surrogates is very much possible. With LDPC codes over GF(q), the nonzero elements of the sparse parity-check matrix are selected at random from GF(q)\{0}. In this paper, we show that this random selection induces an additional symmetry property on the distributions tracked by density-evolution, which we call permutation-invariance.
We use permutation-invariance to generalize the stability property from binary LDPC codes. Gaussian approximation of nonbinary LDPC was first considered by Li et al. [22] in the context of transmission over binary-input channels. Their approximation uses q − 1 dimensional vector parameters to characterize the densities of messages, under the assumption that the densities are approximately Gaussian. We show that assuming permutation-invariance, the densities may in fact be described by scalar, one-dimensional parameters, like the densities of binary LDPC. Finally, binary LDPC codes are commonly designed using EXIT charts, as suggested by ten Brink et al. [35]. EXIT charts are based on the Gaussian approximation of density-evolution. In this paper, we therefore use the generalization of this approximation to extend EXIT charts to coset GF(q) LDPC codes. Using EXIT charts, we design codes at several spectral efficiencies, including codes at a spectral efficiency of 6 bits/s/Hz within 0.56 dB of the unconstrained Shannon limit (i.e., when transmission is not restricted to any signal constellation). To the best of our knowledge, these are the best codes designed for this spectral efficiency. We also compare coset GF(q) LDPC codes to codes constructed using multilevel coding and turbo-TCM, and provide simulation results that indicate that our codes outperform these schemes at short block-lengths.

Our work is organized as follows: We begin by introducing some notation in Section II (we have placed this section first for easy reference, although none of the notations are required to understand Section III). In Section III we formally define coset LDPC codes over GF(q) and ensembles of codes, and discuss mappings to the channel alphabet. In Section IV we present belief-propagation decoding of coset GF(q) LDPC codes, and discuss its efficient implementation. In Section V we discuss the all-zero codeword assumption, symmetry and channel equivalence. In Section VI we present density evolution for nonbinary LDPC and permutation-invariance. We also develop the stability property and Gaussian approximation. In Section VII we discuss the design of LDPC codes using EXIT charts and present simulation results. In Section VIII, we compare our codes with multilevel coding and turbo-TCM. Section IX presents ideas for further research and concludes the paper.

II. NOTATION

A. General Notation

Vectors are typically denoted by boldface, e.g. x. Random variables are denoted by upper-case letters, e.g. X, and their instantiations in lower-case, e.g. x. We allow an exception to this rule with random variables over GF(q), to enable neater notation. For simplicity, throughout this paper, we generally assume discrete random variables (with one exception involving Gaussian approximation). The generalization to continuous variables is immediate.

B. Probability and LLR Vectors

An important difference between nonbinary and binary LDPC decoders is that the former use messages that are multidimensional vectors, rather than scalar values. Like the binary decoders, however, there are two possible representations for the messages: plain-likelihood probability-vectors or log-likelihood-ratio (LLR) vectors.

A q-dimensional probability-vector is a vector $x = (x_0, \ldots, x_{q-1})$ of real numbers such that $x_i \geq 0$ for all $i$ and $\sum_{i=0}^{q-1} x_i = 1$. The indices $i = 0, \ldots, q-1$ of each message vector's components are also interpreted as elements of GF(q). That is, each index i is taken to mean the ith element of GF(q), given some enumeration of the field elements (we assume that indices 0 and 1 correspond to the zero and one elements of the field, respectively).

Given a probability-vector x, the LLR values associated with it are defined as $w_i \triangleq \log(x_0/x_i)$, $i = 0, \ldots, q-1$ (a definition borrowed from [22]). Notice that for all x, $w_0 = 0$. We define the LLR-vector representation of x as the q − 1 dimensional vector $w = (w_1, \ldots, w_{q-1})$. For convenience, although $w_0$ is not defined as belonging to this vector, we will allow ourselves to refer to it with the implicit understanding that it is always equal to zero. Given an LLR vector w, the components of the corresponding probability-vector (the probability vector from which w was produced) can be obtained by

$$x_i = \frac{e^{-w_i}}{\sum_{k=0}^{q-1} e^{-w_k}}, \qquad i = 0, \ldots, q-1. \quad (1)$$

We use the shorthand notation x′ to denote the LLR-vector representation of a probability-vector x. Similarly, if w is an LLR-vector, then w′ is its corresponding probability-vector representation. A probability-vector random variable is defined to be a q-dimensional random variable $X = (X_0, \ldots, X_{q-1})$ that takes only valid probability-vector values. An LLR-vector random variable is a q − 1-dimensional random variable $W = (W_1, \ldots, W_{q-1})$.
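The two representations are easy to implement. The following sketch (our own illustration, not code from the paper) realizes the maps in both directions, matching the reconstruction of (1) above, and checks that they are mutually inverse.

```python
import numpy as np

def prob_to_llr(x):
    """LLR vector w with w_i = log(x_0 / x_i); the component w_0 (== 0) is dropped."""
    x = np.asarray(x, dtype=float)
    return np.log(x[0] / x[1:])

def llr_to_prob(w):
    """Inverse map (1): x_i = exp(-w_i) / sum_k exp(-w_k), with w_0 = 0 implied."""
    w_full = np.concatenate(([0.0], np.asarray(w, dtype=float)))
    e = np.exp(-w_full)
    return e / e.sum()

x = np.array([0.5, 0.3, 0.2])        # a q = 3 probability vector
w = prob_to_llr(x)
assert np.allclose(llr_to_prob(w), x)
```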
C. The Operations ×g and +g

Given a probability vector x and an element g ∈ GF(q), we define the +g operator in the following way (note that a different definition will shortly be given for LLR vectors):

$$x^{+g} \triangleq (x_g, x_{1+g}, \ldots, x_{(q-1)+g}) \quad (2)$$

where the addition of indices is performed over GF(q). $x^*$ is defined as the set

$$x^* \triangleq \{x^{+g} : g \in \mathrm{GF}(q)\}. \quad (3)$$

We define $n(x)$ as the number of elements g ∈ GF(q) satisfying $x^{+g} = x$. For example, assuming GF(3), the uniform vector $x = (1/3, 1/3, 1/3)$ satisfies $x^{+g} = x$ for every g, and thus $n(x) = 3$. Similarly, we define

$$x^{\times g} \triangleq (x_{0 \cdot g}, x_{1 \cdot g}, \ldots, x_{(q-1) \cdot g}), \quad (4)$$

that is, $x^{\times g}_i = x_{i \cdot g}$, where the multiplication of indices is performed over GF(q) and $g \neq 0$. Note that the operation +g is reversible, and $(x^{+g})^{-g} = x$. Similarly, ×g is reversible for all $g \neq 0$, and $(x^{\times g})^{\times g^{-1}} = x$. In Appendix I we summarize some additional properties of these operators that are used in this paper.

In the context of LLR vectors, we define the operation +g differently. Given an LLR vector w, we define $w^{+g}$ using the corresponding probability vector. That is, $w^{+g} \triangleq \mathrm{LLR}([\mathrm{LLR}^{-1}(w)]^{+g})$. Thus we obtain:

$$w^{+g}_i = w_{i+g} - w_g, \qquad i = 1, \ldots, q-1. \quad (5)$$

The operation ×g is similarly defined as $w^{\times g} \triangleq \mathrm{LLR}([\mathrm{LLR}^{-1}(w)]^{\times g})$. However, unlike the +g operation, the resulting definition coincides with the definition for probability vectors, and $w^{\times g}_i = w_{i \cdot g}$, $i = 1, \ldots, q-1$.
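For prime q, GF(q) arithmetic is simply integer arithmetic modulo q, so these operators can be realized directly. The following sketch is ours (extension fields GF(p^m) would instead require polynomial arithmetic); it implements $x^{+g}$ and $x^{\times g}$ and checks the reversibility properties just noted.

```python
import numpy as np

Q = 5  # prime q, so GF(q) arithmetic is arithmetic modulo q

def shift(x, g):
    """x^{+g}: component i of the result is x_{i+g}, indices over GF(q)."""
    idx = (np.arange(Q) + g) % Q
    return np.asarray(x)[idx]

def scale(x, g):
    """x^{xg}: component i of the result is x_{i*g}, for g != 0."""
    idx = (np.arange(Q) * g) % Q
    return np.asarray(x)[idx]

x = np.array([0.4, 0.25, 0.15, 0.12, 0.08])
g = 2
g_inv = pow(g, -1, Q)                                  # multiplicative inverse mod q
assert np.allclose(shift(shift(x, g), (-g) % Q), x)    # (x^{+g})^{-g} = x
assert np.allclose(scale(scale(x, g), g_inv), x)       # (x^{xg})^{xg^{-1}} = x
```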
III. COSET GF(q) LDPC CODES DEFINED

We begin in Section III-A by defining LDPC codes over GF(q). We proceed in Section III-B to define coset GF(q) LDPC codes. In Section III-C we define the concept of mappings, by which coset GF(q) LDPC codes are tailored to specific channels. In Section III-D we discuss ensembles of coset GF(q) LDPC codes.

A. LDPC Codes over GF(q)

A GF(q) LDPC code is defined in a way similar to binary LDPC codes, using a bipartite Tanner graph [34]. The graph has N variable (left) nodes, corresponding to codeword symbols, and M check (right) nodes, corresponding to parity-checks. Two important differences distinguish GF(q) LDPC codes from their binary counterparts. Firstly, the codeword elements are selected from the entire field GF(q). Hence, each variable-node is assigned a symbol from GF(q), rather than just a binary digit. Secondly, at each edge (i, j) of the Tanner graph, a label $g_{i,j} \in \mathrm{GF}(q)\setminus\{0\}$ is defined. Figure 1 illustrates the labels at the edges adjacent to some check node of an LDPC code's bipartite graph (the digits 1, 2 and 5 represent nonzero elements of GF(q)). A word c with components from GF(q) is a codeword if at each check-node j, the following equation holds:

$$\sum_{i \in N(j)} g_{i,j} \, c_i = 0 \quad (6)$$

where N(j) is the set of variable nodes adjacent to j and the arithmetic is performed over GF(q). The GF(q)-LDPC code's parity-check matrix can easily be obtained from its bipartite graph (see [1]). As with binary LDPC codes, we say that a GF(q) LDPC code is regular if all variable-nodes in its Tanner graph have the same degree, and all check-nodes have the same degree. Otherwise, we say it is irregular.

B. Coset GF(q) LDPC Codes

As mentioned in Section I, rather than use plain GF(q) LDPC codes, it is useful instead to consider coset codes. In doing so, we follow the example of Elias [12] with binary codes.

Definition 1: Given a length-N linear code C and a length-N vector v over GF(q), the code {c + v : c ∈ C} (i.e. obtained by adding v to each of the codewords of C) is called a coset code. Note that the addition is performed componentwise over GF(q). v is called the coset vector.

The use of coset codes, as we will later see, is a valuable asset to rigorous analysis and is easily accounted for in the decoding process.

C. Mapping to the Channel Signal Set

With binary LDPC codes, the BPSK signals ±1 are typically used instead of the {0, 1} symbols of the code alphabet. With nonbinary LDPC, we denote the signal constellation by A and the mapping from the code alphabet (GF(q)) by δ(·). When designing codes for transmission over an AWGN channel, a pulse amplitude modulation (PAM) or quadrature amplitude modulation (QAM) constellation is a straightforward choice for A. In Section VIII we present codes where A is a PAM signal constellation. However, we now show that more careful attention to the design of the signal constellation can produce a substantial gain in performance.

In [1] we have shown that ensembles of GF(q)-LDPC codes resemble uniform random-coding ensembles; that is, the symbols of their codewords are approximately uniformly distributed [17]. However, to approach capacity over asymmetric channels (and overcome the shaping gap [13]), we need the symbol distribution to be nonuniform. For example, to approach capacity over the AWGN channel, we need the distribution to resemble a Gaussian distribution. One solution to this problem is a variant of an idea by Gallager [17]. The approach begins with a mapping of symbols from GF(q) (the code alphabet) into the channel input alphabet. We typically use a code alphabet that is larger than the channel input alphabet. By mapping several GF(q) symbols into each channel symbol (rather than using a one-to-one mapping), we can control the probability of each channel symbol. For example, in Fig. 2 we examine a channel alphabet A = {a, b, c}, and a quantization mapping that is designed to achieve the distribution Q(a) = Q(b) = 3/8, Q(c) = 1/4 (the digits 0,...,7 represent elements of GF(8)). We call this a quantization mapping because the mapping is many-to-one. Formally, we define quantization mapping as follows:

Definition 2: Let Q(·) be a rational probability assignment of the form $Q(a) = N_a / q$, for all a ∈ A. A quantization δ(·) = δ_Q(·) associated with Q(a) is a mapping from a set of GF(q) elements to A such that the number of elements mapped to each a ∈ A is q · Q(a).

Quantizations are designed for finite channel input alphabets and rational-valued probability assignments. However, other probability assignments can be approximated arbitrarily closely. Independently of our work, a similar approach was developed by Ratzer and MacKay [26] (note that their approach does not involve coset codes).
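A quantization in the sense of Definition 2 is straightforward to construct. The helper below is our own; the particular assignment of field elements to channel symbols is arbitrary, as it is in the example of Fig. 2, and we do not attempt to reproduce that figure's exact assignment.

```python
from fractions import Fraction

def quantization_mapping(q, target_dist):
    """Assign each GF(q) symbol a channel symbol so that exactly
    q * Q(a) field elements map to each a (Definition 2)."""
    counts = {a: Fraction(p) * q for a, p in target_dist.items()}
    assert all(c.denominator == 1 for c in counts.values()), "Q(a) must equal N_a/q"
    mapping, i = {}, 0
    for a, c in counts.items():
        for _ in range(int(c)):
            mapping[i] = a            # field element i (by its enumeration) -> a
            i += 1
    return mapping

# The GF(8) example: Q(a) = Q(b) = 3/8, Q(c) = 1/4.
delta = quantization_mapping(8, {"a": Fraction(3, 8),
                                 "b": Fraction(3, 8),
                                 "c": Fraction(1, 4)})
print(delta)   # e.g. {0:'a', 1:'a', 2:'a', 3:'b', 4:'b', 5:'b', 6:'c', 7:'c'}
```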
A similar approach to designing mappings is based on Sun and van Tilborg [33] and Fragouli et al. [14] and is suitable for channels with continuous-input alphabets (like the AWGN channel). Instead of mapping many code symbols into each channel symbol, they used a one-to-one mapping to a set A of channel input signals that are non-uniformly spaced. To approximate a Gaussian input distribution, for example, the signals could be spaced more densely around zero.

Given a mapping δ(·) over GF(q), we define the mapping of a vector v with symbols in GF(q) as the vector obtained by applying δ(·) to each of its symbols. The mapping of a code is the code obtained by applying the mapping to each of the codewords. It is useful to model coset GF(q) LDPC encoding as a sequence of operations, as shown in Figure 3. An incoming message is encoded into a codeword of the underlying GF(q) LDPC code C. The coset vector v is then added, and a mapping δ(·) is applied. In the sequel, we will refer to the resulting codeword as a coset GF(q) LDPC codeword, although strictly speaking, the mapping δ(·) is not included in Definition 1. Finally, the resulting codeword is transmitted over the channel.

D. (λ, ρ, δ) Ensembles of Coset GF(q) LDPC Codes

As in the case of standard, binary LDPC codes, the analysis of coset GF(q) LDPC focuses on the average behavior of codes selected at random from an ensemble of codes. The following method, due to Luby et al. [24], is used to construct irregular bipartite Tanner graphs. The graphs are characterized by two probability vectors, $\lambda = (\lambda_2, \ldots, \lambda_c)$ and $\rho = (\rho_2, \ldots, \rho_d)$. For convenience we also define the polynomials $\lambda(x) = \sum_{i=2}^{c} \lambda_i x^{i-1}$ and $\rho(x) = \sum_{j=2}^{d} \rho_j x^{j-1}$. In a (λ, ρ) Tanner graph, for each i a fraction $\lambda_i$ of the edges has left degree i, and for each j a fraction $\rho_j$ of the edges has right degree j. Letting E denote the total number of edges, we obtain that there are $\lambda_i E / i$ left-nodes with degree i, and $\rho_j E / j$ right-nodes with degree j. Letting N denote the number of left-nodes and M denote the number of right-nodes, we have

$$N = E \sum_{i=2}^{c} \frac{\lambda_i}{i}, \qquad M = E \sum_{j=2}^{d} \frac{\rho_j}{j}.$$

Luby et al. suggested the following method for constructing (λ, ρ) bipartite graphs. The E edges originating from left nodes are numbered from 1 to E. The same procedure is applied to the E edges originating from right nodes. A permutation π is then chosen with uniform probability from the space of all permutations of {1, 2, ..., E}. Finally, for each i, the edge numbered i on the left side is associated with the edge numbered $\pi_i$ on the right side. Note that occasionally, multiple edges may link a pair of nodes.

Summarizing, a random selection of a code from a (λ, ρ, δ) coset GF(q) LDPC ensemble amounts to a random construction of its Tanner graph, a random selection of its labels and a random selection of a coset vector. The rate of a (λ, ρ, δ) coset GF(q) LDPC code is equal to the rate of its underlying GF(q) LDPC code. The design rate R of a (λ, ρ) GF(q) LDPC code is defined as

$$R = 1 - \frac{\sum_{j=2}^{d} \rho_j / j}{\sum_{i=2}^{c} \lambda_i / i} = 1 - \frac{M}{N}.$$

This value is a lower bound on the true rate of the code, measured in q-ary symbols per channel use.
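The design rate is a one-line computation. The helper below is ours, assuming the reconstructed formula above, and evaluates it for a (3,6)-regular ensemble.

```python
def design_rate(lam, rho):
    """Design rate R = 1 - (sum_j rho_j / j) / (sum_i lam_i / i).

    lam, rho: dicts mapping degree -> edge fraction (lambda_i, rho_j).
    """
    left = sum(f / d for d, f in lam.items())
    right = sum(f / d for d, f in rho.items())
    return 1.0 - right / left

# A (3,6)-regular ensemble: all edges have left degree 3 and right degree 6.
print(design_rate({3: 1.0}, {6: 1.0}))   # 0.5
```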
IV. BELIEF-PROPAGATION DECODING

A. Definition of the Decoder

The coset GF(q) LDPC belief-propagation decoder is based on Gallager [16] and Kschischang et al. [21]. The decoder attempts to recover c, the codeword of the underlying GF(q) LDPC code. Decoding consists of alternating rightbound and leftbound iterations. In a rightbound iteration, messages are sent from variable-nodes to check-nodes. In a leftbound iteration, the opposite occurs. Note that with this terminology, a rightbound message is produced at a left node (a variable-node) and a leftbound message is produced at a right node (a check-node). As mentioned in Section II, the decoder's messages are q-dimensional probability vectors, rather than scalar values as in standard binary LDPC.

Algorithm 1: Perform the following steps, alternately:

1) Rightbound iteration. For all edges e = (i, j), do the following in parallel: If this is iteration zero, set the rightbound message r = r(i, j) to the initial message $r^{(0)} = r^{(0)}(i)$, whose components are defined as follows:

$$r^{(0)}_k = \frac{\Pr[y_i \mid \delta(k + v_i)]}{\sum_{k'=0}^{q-1} \Pr[y_i \mid \delta(k' + v_i)]}, \qquad k = 0, \ldots, q-1. \quad (7)$$

$y_i$ and $v_i$ are the channel output and the element of the coset vector v corresponding to variable node i. The addition operation $k + v_i$ is performed over GF(q). Otherwise (iteration number 1 and above),

$$r_k = \frac{r^{(0)}_k \prod_{n=1}^{d_i - 1} l^{(n)}_k}{\sum_{k'=0}^{q-1} r^{(0)}_{k'} \prod_{n=1}^{d_i - 1} l^{(n)}_{k'}} \quad (8)$$

where $d_i$ is the degree of the node i and $l^{(1)}, \ldots, l^{(d_i-1)}$ denote the incoming (leftbound) messages across the edges {(i, j′) : j′ ∈ N(i) \ j}, N(i) denoting the set of nodes adjacent to i.

2) Leftbound iteration. For all edges e = (i, j), do the following in parallel: Set the components of the leftbound message l = l(j, i) as follows:

$$l_k = \sum_{\substack{a_1, \ldots, a_{d_j-1}:\\ \sum_{n=1}^{d_j-1} g_n a_n = -g_{d_j} k}} \; \prod_{n=1}^{d_j-1} r^{(n)}_{a_n} \quad (9)$$

where $d_j$ is the degree of node j, and $r^{(1)}, \ldots, r^{(d_j-1)}$ denote the rightbound messages across the edges {(i′, j) : i′ ∈ N(j) \ i} and $g_1, \ldots, g_{d_j-1}$ are the labels on those edges. $g_{d_j}$ denotes the label on the edge (i, j). The summations and multiplications of the indices $a_n$ and the labels $g_n$ are performed over GF(q). Note that an equivalent, simpler expression will be given shortly.

If x is a rightbound (leftbound) message from (to) a variable-node, then element $x_k$ represents an estimate of the a-posteriori probability (APP) that the corresponding code symbol is k, given the channel observations in a corresponding neighborhood graph (we will elaborate on this in Section IV-C). The decision associated with x is defined as follows: the decoder decides on the symbol k that maximizes $x_k$. If the maximum was obtained at several indices, a uniform random selection is made among them. In our analysis, we focus on the probability that a rightbound or leftbound message is erroneous (i.e., corresponds to an incorrect decision). However, in a practical setting, the decoder stops after a fixed number of decoding iterations and computes, at each variable-node i, a final vector $\bar{r}(i)$ of APP values. The vector is computed using (8), replacing N(i) \ j with N(i). $\bar{r}(i)$ is unique to each variable-node (unlike rightbound or leftbound messages), and can thus be used to compute a final decision on its value.

Consider expression (9) for computing the leftbound messages. A useful, equivalent expression is given by

$$l = \left( r^{(1) \times g_1^{-1}} \odot \cdots \odot r^{(d_j-1) \times g_{d_j-1}^{-1}} \right)^{\times(-g_{d_j})} \quad (10)$$

where l is the entire leftbound vector (rather than a component as in (9)) and the × operator is defined as in (4). The GF(q) convolution operator ⊙ is defined as an operation between two vectors, which produces a vector whose components are given by

$$[x \odot z]_k = \sum_{a \in \mathrm{GF}(q)} x_a \, z_{k-a} \quad (11)$$

where the subtraction k − a is evaluated over GF(q). Throughout the paper, the following definitions are useful:

$$\tilde{r}^{(n)} \triangleq r^{(n) \times g_n^{-1}}, \quad n = 1, \ldots, d_j - 1, \quad (12)$$

$$\tilde{l} \triangleq l^{\times(-g_{d_j})^{-1}}. \quad (13)$$

Using these definitions, (10) may be further rewritten as

$$\tilde{l} = \tilde{r}^{(1)} \odot \cdots \odot \tilde{r}^{(d_j-1)}. \quad (14)$$
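The following sketch is ours, for prime q only, and uses the reconstructed forms of (10) and (11) above with brute-force convolutions; the label values are arbitrary. For q = 2^m, the transform-domain method discussed below is far more efficient.

```python
import numpy as np

Q = 5  # prime field

def scale(x, g):
    return np.asarray(x)[(np.arange(Q) * g) % Q]   # x^{xg}: result_i = x_{i*g}

def gf_convolve(x, z):
    """GF(q) convolution (11): [x (.) z]_k = sum_a x_a z_{k-a}, with k-a mod q."""
    out = np.zeros(Q)
    for k in range(Q):
        for a in range(Q):
            out[k] += x[a] * z[(k - a) % Q]
    return out

def check_node(rightbound, labels, out_label):
    """Leftbound message per (10): convolve the label-adjusted incoming
    messages, then re-index by -g_{d_j}."""
    acc = None
    for r, g in zip(rightbound, labels):
        term = scale(r, pow(g, -1, Q))
        acc = term if acc is None else gf_convolve(acc, term)
    return scale(acc, (-out_label) % Q)

rng = np.random.default_rng(2)
msgs = [rng.dirichlet(np.ones(Q)) for _ in range(3)]
l = check_node(msgs, labels=[1, 2, 3], out_label=4)
assert np.isclose(l.sum(), 1.0)    # the convolution preserves normalization
```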
Like the standard binary LDPC belief-propagation decoder, the coset GF(q) LDPC decoder also has an equivalent formulation using LLR messages.

Algorithm 2: Perform the following steps, alternately:

1) Rightbound iteration. For all edges e = (i, j), do the following in parallel: If this is iteration zero, set the LLR rightbound message r′ = r′(i, j) to $r'^{(0)} = r'^{(0)}(i)$, whose components are defined as follows:

$$r'^{(0)}_k = \log \frac{\Pr[y_i \mid \delta(v_i)]}{\Pr[y_i \mid \delta(k + v_i)]}, \qquad k = 1, \ldots, q-1. \quad (15)$$

Otherwise (iteration number 1 and above),

$$r' = r'^{(0)} + \sum_{n=1}^{d_i - 1} l'^{(n)} \quad (16)$$

where $d_i$ is the degree of the node i and $l'^{(1)}, \ldots, l'^{(d_i-1)}$ denote the incoming (leftbound) LLR messages. Addition between vectors is performed componentwise.

2) Leftbound iteration. All rightbound messages are converted from LLR to plain-likelihood representation. Expression (9) is applied to obtain the plain-likelihood representation of the leftbound messages. Finally, the leftbound messages are converted back to their corresponding LLR representation.

Both versions of the decoder have similar execution times. However, the LLR representation is sometimes useful in the analysis of the decoders' performance. Note that Wymeersch et al. [39] have developed an alternative decoder that uses LLR representation, which does not require the conversion to plain-likelihood representation that is used in the leftbound iteration of the above algorithm.

B. Efficient Implementation

To compute rightbound messages, we can save time by computing the numerators separately, and then normalizing the sum to 1. At a variable node of degree $d_i$, the computation of each rightbound message takes $O(q \cdot d_i)$ computations. A straightforward computation of the leftbound messages at a check-node of degree $d_j$ has a complexity of $O(d_j q^{d_j - 1})$ per leftbound message, and a total of $O(d_j^2 q^{d_j - 1})$ for all messages combined. We will now review a method due to Richardson and Urbanke [28] (developed for the decoding of standard GF(q) LDPC codes) that significantly reduces this complexity. This method assumes plain-likelihood representation of messages. It is nonetheless relevant to the implementation of Algorithm 2, which uses LLR representation, because with this algorithm the leftbound messages are computed by converting them to plain-likelihood representation, applying (9) and converting back to LLR representation.

We first recount some properties of Galois fields (see e.g. [5] for a more extensive discussion). Galois fields GF(q) exist for values of q equal to $p^m$, where p is a prime number and m is a positive integer. Each element of GF($p^m$) can be represented as an m-dimensional vector over {0, ..., p − 1}. The sum (difference) of two GF($p^m$) elements corresponds to the sum (difference) of the vectors, evaluated as the modulo-p sums (differences) of the vectors' components.

Consider the GF(q) convolution operator, defined by (11) and used in the process of computing the leftbound message in (10). We now replace the GF(q) indices a and k in (11) with their vector representations, α, κ ∈ {0, ..., p − 1}^m. The expression can be rewritten as

$$[x^{(1)} \odot x^{(2)}]_{\kappa} = \sum_{\alpha \in \{0, \ldots, p-1\}^m} x^{(1)}_{\alpha} \, x^{(2)}_{\kappa - \alpha}$$

where the subtraction κ − α is performed componentwise, modulo p. Consider, for example, the simple case of m = 2. (11) becomes

$$[x^{(1)} \odot x^{(2)}]_{(\kappa_1, \kappa_2)} = \sum_{\alpha_1 = 0}^{p-1} \sum_{\alpha_2 = 0}^{p-1} x^{(1)}_{(\alpha_1, \alpha_2)} \, x^{(2)}_{(\kappa_1 - \alpha_1,\, \kappa_2 - \alpha_2)}. \quad (17)$$

The right hand side of (17) is the output of the two-dimensional cyclic convolution of $x^{(1)}$ and $x^{(2)}$, evaluated at (κ_1, κ_2). Such cyclic convolutions can be computed efficiently in the transform domain, using the m-dimensional discrete Fourier transform (m-DFT). Computation of the m-DFT requires $m \cdot p^{m+1} = m \cdot p \cdot q$ multiplications and $m \cdot (p-1)p^m = m \cdot (p-1) \cdot q$ additions. The m-IDFT can be computed in a similar manner. Note that a further reduction in complexity could be obtained by using number-theoretic transforms, such as the Winograd FFT. We can use these results to reduce the complexity of leftbound computation at each check-node, by first computing the m-DFTs of all rightbound messages, then using the DFT vectors to compute convolutions. The resulting complexity is of order $O(d_j \cdot mpq + d_j^2 \cdot q)$ per check-node. The first element of the sum is the computation of m-DFTs and m-IDFTs, the second is the multiplications of m-DFTs for all messages. This is a significant improvement in comparison to the straightforward approach. Note that the m-DFT is particularly attractive when p = 2, i.e., when q is $2^m$. The elements of the form $e^{-2\pi \imath jk/p}$ then degenerate to ±1, so that the transform requires no multiplications. Furthermore, all quantities are real-valued and no complex-valued arithmetic is needed.
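For p = 2 the m-DFT is the Walsh-Hadamard transform, and the convolution of (11) becomes an XOR-convolution of the messages' vector indices. The sketch below is our own, with the edge labels and normalization omitted for brevity; it computes the GF(2^m) convolution in the transform domain and checks it against the direct computation.

```python
import numpy as np

M = 3
Q = 2 ** M   # GF(8); indices are m-bit vectors, and field addition is XOR

def wht(x):
    """Iterative (unnormalized) Walsh-Hadamard transform: the m-DFT for p = 2."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < Q:
        for i in range(0, Q, 2 * h):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

def xor_convolve(a, b):
    """GF(2^m) convolution via transform: WHT, pointwise multiply, inverse WHT."""
    return wht(wht(a) * wht(b)) / Q

# Check against the direct sum over pairs with k = i XOR j.
rng = np.random.default_rng(3)
a, b = rng.dirichlet(np.ones(Q)), rng.dirichlet(np.ones(Q))
direct = np.zeros(Q)
for i in range(Q):
    for j in range(Q):
        direct[i ^ j] += a[i] * b[j]
assert np.allclose(xor_convolve(a, b), direct)
```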
An additional improvement, to an order of $O(d_j \cdot mpq + 3 \cdot d_j \cdot q)$ (in the general case where p is not necessarily 2), can be achieved using a method suggested by Davey and MacKay [10]. This method produces a negligible improvement except at very high values of $d_j$, and is therefore not elaborated here.

C. Neighborhood Graphs and the Tree Assumption

Before we conclude this section, we briefly review the concepts of neighborhood graphs and the tree assumption. These concepts were developed in the context of standard binary LDPC codes and carry over to coset GF(q) LDPC codes as well.

Definition 3: (Richardson and Urbanke [28]) The neighborhood graph of depth d, spanned from an edge e, is the induced graph containing e and all edges and nodes on directed paths of length d that end with e.

At iteration t, a rightbound message produced from a variable-node i to a check node j is a vector of APP values for the code symbol at i, given information observed in the neighborhood of e = (i, j) of depth 2t. Similarly, a leftbound message from j to i is based on the information observed in the neighborhood of e = (j, i), of depth 2t + 1.

The APP values produced by belief-propagation decoders are computed under the tree assumption (in [28] it is called the independence assumption). We say that the tree assumption is satisfied at a node n in the context of computing a message x, if the neighborhood graph on which the message is based is a tree. Asymptotically, at large block lengths N, the tree assumption is satisfied with high probability at any particular node [28]. At finite block lengths, the neighborhood graph frequently contains cycles and is therefore not a tree. Such cases are discussed in Appendix II. Nevertheless, simulation results indicate that the belief-propagation decoder produces remarkable performance even when the tree assumption is not strictly satisfied.

V. COSET GF(q) LDPC ANALYSIS IN A RANDOM-COSET SETTING

One important aid in the analysis of coset GF(q) LDPC codes is the randomly selected coset vector that was used in their construction. Rather than examine the decoder of a single coset GF(q) LDPC code, we focus on a set of codes. That is, given a fixed GF(q)-LDPC code C and a mapping δ(·), we consider the behavior of a coset GF(q) LDPC code constructed using a randomly selected coset vector v. We refer to this as random-coset analysis. With this approach, the random space consists of random channel transitions as well as random realizations of the coset vector v. The random coset vector produces an effect that is similar to the output-symmetry that is usually required in the analysis of standard LDPC codes [28], [29]. Note that although v is random, it is assumed to have been selected in advance and is thus known to the decoder. Unlike the coset vector, in this section we keep the underlying GF(q) LDPC code fixed. In Section VI, we will consider several of these concepts in the context of selecting the underlying LDPC code at random from an ensemble.

A. The All-Zero Codeword Assumption

An important property of standard binary LDPC decoders [28] is that the probability of decoding error is equal for any transmitted codeword.
This property is central to many analysis methods, and enables conditioning the analysis on the assumption that the all-zero codeword was transmitted (in [28], where a BPSK alphabet is used, it is referred to as the "all-one" codeword). With coset GF(q) LDPC codes, we have the following lemma.

Lemma 1: Assume a discrete memoryless channel. Consider the analysis, in a random-coset setting, of a coset GF(q) LDPC code constructed from a fixed GF(q)-LDPC code C. For each c ∈ C, let $P_e^t(c)$ denote the conditional (bit or block) probability of decoding error after iteration t, assuming the codeword δ(c + v) was sent, averaged over all possible values of the coset vector v. Then $P_e^t(c)$ is independent of c.

The proof of the lemma is provided in Appendix III-B. Lemma 1 enables us to condition our analysis results on the assumption that the transmitted codeword corresponds to 0 of the underlying LDPC code.

B. Symmetry of Message Distributions

The symmetry property, introduced by Richardson and Urbanke [29], is a major tool in the analysis of standard binary LDPC codes. In this section we generalize its definition to q-ary random variables as used in the analysis of coset GF(q) LDPC decoders. We provide two versions of the definition, the first using probability-vector random variables and the second using LLR-vector random variables.

Definition 4: A probability-vector random variable X is symmetric if for any probability-vector x, the following expression holds:

$$\Pr[X = x] = n(x) \cdot x_0 \cdot \Pr[X \in x^*] \quad (18)$$

where $x^*$ and $n(x)$ are as defined in Section II.

In the context of LLR-vector random variables, we have the following lemma.

Lemma 2: A probability-vector random variable X is symmetric if and only if its LLR-vector representation W satisfies

$$\Pr[W = w^{+i}] = e^{-w_i} \cdot \Pr[W = w] \quad (19)$$

for all LLR-vectors w and all i ∈ GF(q).

The proof of this lemma is provided in Appendix III-C. In the sequel, we adopt the lemma as a definition of symmetry when discussing variables in LLR representation. Note that in the simple case of q = 2, the LLR vector degenerates to a scalar value and from (5) we have $w^{+1} = -w$. Thus, (19) becomes

$$\Pr[W = -w] = e^{-w} \cdot \Pr[W = w]. \quad (20)$$

This coincides with symmetry for binary codes as defined in [29]. We now examine the message produced at a node n.

Theorem 1: Assume a discrete memoryless channel and consider a coset GF(q) LDPC code constructed in a random-coset setting from a fixed GF(q)-LDPC code C. Let X denote the message produced at a node n of the Tanner graph of C (and of the coset GF(q) LDPC code), at some iteration of belief-propagation decoding. Let the tree assumption be satisfied at n. Then under the all-zero codeword assumption, the random variable X is symmetric.

The proof of the theorem is provided in Appendix III-D.
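The LLR-domain shift that appears throughout this subsection can be sanity-checked numerically. The sketch below is ours, for prime q; it verifies the reconstructed closed form of (5), $w^{+g}_i = w_{i+g} - w_g$, against the probability-domain definition of +g.

```python
import numpy as np

Q = 5

def prob_to_llr(x):
    return np.log(x[0] / x[1:])

def shift_prob(x, g):
    return np.asarray(x)[(np.arange(Q) + g) % Q]   # x^{+g}

x = np.array([0.4, 0.25, 0.15, 0.12, 0.08])
w = np.concatenate(([0.0], prob_to_llr(x)))        # include w_0 = 0 for indexing

for g in range(Q):
    # LLR-domain shift computed via the probability domain ...
    lhs = prob_to_llr(shift_prob(x, g))
    # ... against the closed form w^{+g}_i = w_{i+g} - w_g.
    rhs = np.array([w[(i + g) % Q] - w[g] for i in range(1, Q)])
    assert np.allclose(lhs, rhs)
print("w^{+g}_i = w_{i+g} - w_g verified for all g")
```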
C. Channel Equivalence

Simple GF(q)-LDPC codes, although unsuitable for arbitrary channels, are simpler to analyze than coset GF(q) LDPC codes and decoders. Fig. 4 presents the structure of coset GF(q) LDPC encoding/decoding. x is the transmitted symbol (of the underlying code) and v is the coset symbol. u = x + v (evaluated over GF(q)) is the input to the mapper, x′ = δ(u) is the mapper's output and y′ is the physical channel's output. y will be discussed shortly. Comparing a coset GF(q) LDPC decoder with the decoder of its underlying GF(q) LDPC code, we may observe that a difference exists only in the computation (7) of the initial messages $r^{(0)}$. The messages $r^{(0)}$ are APP values corresponding to a single channel observation. After they are computed, both decoders proceed in exactly the same way. It would thus be desirable to abstract the operations that are unique to coset GF(q) LDPC codes into the channel, and examine an equivalent model, which employs simple GF(q)-LDPC codes and decoders.

Consider the channel obtained by encapsulating the addition of a random coset symbol, the mapping and the computation of the APP values into the channel model. The input to the channel is a symbol x from the code alphabet (in most cases of interest, x will be a symbol from a GF(q) LDPC codeword; however, in this section we also consider the general, theoretical case, where the input to the channel is an arbitrary GF(q) symbol) and the output is a probability vector $y = r^{(0)}$ of APP values. The decoder of a GF(q) LDPC code, if presented with y as raw channel output, would first compute a new vector of APP values. We will soon show that the computed vector would in fact be identical to y. We begin with the following definition:

Definition 5: Let Pr[y | x] denote the transition probabilities of a channel whose input alphabet is GF(q) and whose output alphabet consists of q-dimensional probability vectors. Then the channel is cyclic-symmetric if there exists a probability function Q(y*) (defined over sets of probability vectors (3)) such that

$$\Pr[y \mid x = i] = y_i \cdot Q(y^*) \quad (21)$$

for all i ∈ GF(q) and all probability vectors y.

Lemma 3: Assume a cyclic-symmetric channel. Let APP(y) denote the APP values for the channel output y, computed assuming a uniform a-priori distribution on the input. Then APP(y) = y.

The proof of this lemma is provided in Appendix III-F. Returning to the context of our equivalent model, we have the following lemma.

Lemma 4: The equivalent channel of Fig. 4 is cyclic-symmetric.

The proof of this lemma is provided in Appendix III-G. Once the initial messages are computed, the performance of both the coset GF(q) LDPC and GF(q) LDPC decoding algorithms is a function of these messages alone. Therefore, we have obtained that the performance of a coset GF(q) LDPC decoder in a random-coset setting over the original physical channel is identical to the performance of the underlying GF(q) LDPC decoder over the equivalent channel. This result enables us to shift our discussion from coset GF(q) LDPC codes over arbitrary channels to GF(q) LDPC codes over cyclic-symmetric channels.

Note that a cyclic-symmetric channel is symmetric in the sense defined by Gallager [17, page 94]. Hence its capacity-achieving distribution is uniform. This indicates that GF(q) LDPC codes, which have an approximately uniformly distributed code spectrum (see [1]), are suitably designed for it. We now relate the capacity of the equivalent channel to that of the physical channel. More precisely, we show that the equivalent channel's capacity is equal to the equiprobable-signalling capacity of the physical channel with the mapping δ(·), denoted $C_\delta$ and defined below. Let U, X′ and Y′ be random variables corresponding to u, x′ and y′ in Fig. 4. Y′ is related to X′ = δ(U) through the physical channel's transition probabilities. Assume that U is uniformly distributed over GF(q), and define $C_\delta \triangleq I(U; Y')$. $C_\delta$ is equal to the capacity of transmission over the physical channel with an input alphabet $\{\delta(i)\}_{i=0}^{q-1}$ using a code whose codewords were generated by random uniform selection.

Lemma 5: The capacity of the equivalent channel of Fig. 4 is equal to $C_\delta$.

The proof of this lemma is provided in Appendix III-H. Finally, the following lemma can be viewed as a generalization of the Channel Equivalence Lemma of [29].

Lemma 6: Let P(y) be the probability function of a symmetric probability-vector random variable. Consider the cyclic-symmetric channel whose transition probabilities are given by $\Pr[y \mid x = i] = P(y^{+i})$.
Finally, the following lemma can be viewed as a generalization of the Channel Equivalence Lemma of [29].

Lemma 6: Let P(y) be the probability function of a symmetric probability-vector random variable. Consider the cyclic-symmetric channel whose transition probabilities are given by Pr[y | x = i] = P(y^{+i}). Then, assuming that the symbol zero is transmitted over this cyclic-symmetric channel, the initial messages of a GF(q) LDPC decoder are distributed as P(y).

The proof of this lemma is straightforward from Definitions 4 and 5 and from Lemma 3. We will refer to the cyclic-symmetric channel defined in Lemma 6 as the equivalent channel corresponding to P(y).

Remark 1: Note that Lemma 6 remains valid if we switch to LLR representation. That is, we replace y with its LLR equivalent w = LLR(y) and define Pr[w | x = i] = P(w^{+i}) (where w^{+i} is defined by (5)).

VI. ANALYSIS OF DENSITY EVOLUTION

In this section we consider density-evolution for coset GF(q) LDPC codes and its analysis. The precise computation of the coset GF(q) LDPC version of the algorithm is generally not possible in practice. The algorithm is, however, valuable as a reference for analysis purposes. We begin by defining density evolution in Section VI-A and examine the application of the concentration theorem of [28] and of symmetry to it. We proceed in Section VI-B to consider permutation-invariance, which is an important property of the densities tracked by the algorithm. We then apply permutation-invariance in Section VI-C to generalize the stability property to coset GF(q) LDPC codes and in Section VI-D to obtain an approximation of density-evolution under a Gaussian assumption.

A. Density Evolution

The definition of coset GF(q) LDPC density-evolution is based on that of binary LDPC codes. The description below is intended for completeness of this text, and focuses on the differences that are unique to coset GF(q) LDPC codes. The reader is referred to [28] and [29] for a complete rigorous development.

Density evolution tracks the distributions of messages produced in belief-propagation, averaged over all possible neighborhood graphs on which they are based. The random space is comprised of random channel transitions, the random selection of the code from a (λ, ρ, δ) coset GF(q) LDPC ensemble (see Section III-D) and the random selection of an edge from the graph. The random space does not include the transmitted codeword, which is assumed to be fixed at the all-zero codeword (following the discussion of Section V-A). We denote by R^(0) the initial message across the edge, by R_t the rightbound message at iteration t and by L_t the leftbound message at iteration t. The neighborhood graph associated with R_t and L_t is always assumed to be tree-like, and the case that it is not so is neglected. We will use the above notation when discussing plain-likelihood representation of density-evolution. When using LLR-vector representation, we let R′^(0), R′_t and L′_t denote the LLR-vector representations of R^(0), R_t and L_t. To simplify our notation, we assume that all random variables are discrete-valued, and thus track their probability functions rather than their densities. The following discussion focuses on plain-likelihood representation. The translation to LLR representation is straightforward.

1) The initial message. The probability function of R^(0) is computed as Pr[R^(0) = r] = Pr[r^(0)(Y, V) = r], where Y and V are random variables denoting the channel output and coset-vector components, Y is the channel output alphabet and the components of r^(0)(y, v) are defined by (7), replacing y_i and v_i with y and v. The expression is equal to

Pr[R^(0) = r] = (1/q) · Σ_{v∈GF(q)} Σ_{y∈Y} Pr[y received | δ(v) transmitted] · 1{r^(0)(y, v) = r}

2) Leftbound messages. L_t is obtained from (9). The rightbound messages in (9) are replaced by independent random variables, each distributed as R_{t−1}.
Similarly, the labels in (9) are also replaced by independent random variables, uniformly distributed in GF(q)\{0}. Formally, let d be the maximal right-degree. Then for each d_j = 2, ..., d we first define the conditional probability function of the leftbound message across an edge of right-degree d_j, where P is the set of all probability vectors and the components of l(r^(1), ..., r^(d_j−1), g_1, ..., g_{d_j}) are defined as in (9). G_n is a random variable corresponding to the nth label, and thus Pr[G_n = g] = 1/(q − 1) for all g. Pr[R_{t−1} = r^(n)] is obtained recursively from the previous iteration of belief propagation. The probability function of L_t is now obtained by averaging the conditional probability functions over the right-degree distribution ρ.

3) Rightbound messages. The probability function of R_0 is equal to that of R^(0). For t > 0, R_t is obtained from (8). The leftbound messages and the initial message in (8) are replaced by independent random variables, distributed as L_t and R^(0), respectively, whose probability functions are obtained recursively from the previous iterations of belief propagation. The probability function of R_t is now obtained by averaging over the left-degree distribution λ.

Theoretically, the above algorithm is sufficient to compute the desired densities. In practice, a major problem is the fact that the quantity of memory required to store the probability density of a q-dimensional message grows exponentially with q. For instance, with 100 quantization levels per dimension, the amount of memory required for a 7-ary code is of the order of 100^7. Hence, unless an alternative method for describing the densities is found, the algorithm is not realizable. It is noteworthy, however, that the algorithm can be approximated using Monte Carlo simulations (a sketch is given at the end of this subsection).

We now discuss the probability that a message examined in density-evolution is erroneous. That is, the message corresponds to an incorrect decision regarding the variable-node to which it is directed or from which it was sent. Under the all-zero codeword assumption, the true transmitted code symbol (of the underlying LDPC code), at the relevant variable-node, is assumed to be zero. We first assume that the message is a fixed probability-vector x. Suppose x_0 is greater than all other elements of x. Given the decision criterion used by the belief propagation decoder, described in Section IV-A, the decoder will correctly decide zero. Similarly, if there exists an index i ≠ 0 such that x_i > x_0, then the decoder will incorrectly decide i. However, if the maximum is achieved at 0 as well as at k − 1 other indices, the decoder will correctly decide zero with probability 1/k. Accordingly, let P_e(x) denote this probability of an incorrect decision for a fixed message x.

Definition 6: Given a probability-vector random variable X, we define

P_e(X) = Σ_x Pr[X = x] · P_e(x)

where the sum is over all probability vectors.

Consider P_e(R_t). This corresponds to the probability of error at a randomly selected edge at iteration t. Richardson and Urbanke [28] proved a concentration theorem which states that as the block length N approaches infinity, the bit error rate at iteration t converges to a similarly defined probability of error. The convergence is in probability, exponentially in N. Replacing bit- with symbol-error rate, this theorem carries over to coset GF(q) LDPC density-evolution unchanged. Let P_e^t ≜ P_e(R_t) be the sequence of error probabilities produced by density evolution. A desirable property of this sequence is given by the following theorem.

Theorem 2: P_e^t is nonincreasing with t.

The proof of this theorem is similar to that of Theorem 7 of [29] and is omitted.
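The Monte Carlo approximation mentioned above can be sketched in a few lines. The code below (our own illustration) tracks pools of sampled messages for a (3, 6)-regular ensemble over GF(5), with an assumed 5-PAM mapping and AWGN; mod-q arithmetic stands in for GF(q) (exact since q = 5 is prime), and the update rules follow the verbal descriptions of (8) and (9). All parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)
q, dv, dc, N = 5, 3, 6, 2000                 # GF(5), (3,6)-regular, pool size
delta = np.array([-2., -1., 0., 1., 2.]) / np.sqrt(2)   # assumed 5-PAM mapping
sigma_z = 0.9

def init_msg():
    # initial message r^(0) under the all-zero codeword assumption
    v = rng.integers(q)
    y = delta[v] + sigma_z * rng.standard_normal()
    lik = np.exp(-(y - delta[(np.arange(q) + v) % q])**2 / (2 * sigma_z**2))
    return lik / lik.sum()

def var_node(r0, lefts):
    r = r0 * np.prod(lefts, axis=0)          # componentwise product, as in (8)
    return r / r.sum()

def chk_node(rights, labels, g_out):
    # leftbound message, as in (9): Pr[s = k] = Pr[sum_n g_n s_n = -g_out k]
    dist = np.zeros(q); dist[0] = 1.0
    for r, g in zip(rights, labels):
        spread = np.zeros(q)
        for s in range(q):
            spread[(g * s) % q] += r[s]
        dist = np.array([dist @ spread[(k - np.arange(q)) % q] for k in range(q)])
    return dist[(-g_out * np.arange(q)) % q]

pool0 = np.array([init_msg() for _ in range(N)])
R = pool0.copy()
for t in range(1, 6):
    L = np.array([chk_node(R[rng.integers(N, size=dc - 1)],
                           rng.integers(1, q, size=dc - 1), rng.integers(1, q))
                  for _ in range(N)])
    R = np.array([var_node(pool0[rng.integers(N)], L[rng.integers(N, size=dv - 1)])
                  for _ in range(N)])
    print(t, float(np.mean(R.argmax(axis=1) != 0)))   # crude P_e^t (ties ignored)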
Finally, in Section V-B we considered symmetry in the context of the message corresponding to a fixed underlying GF(q) LDPC code and across a fixed edge of its Tanner graph. We now consider its relevance in the context of density-evolution, which assumes a random underlying LDPC code and a random edge.

Theorem 3: The random variables R^(0), R_t and L_t (for all t) are symmetric.

The proof of this theorem is provided in Appendix IV-A.

B. Permutation-Invariance Induced by Labels

Permutation-invariance is a key property of coset GF(q) LDPC codes that allows the approximation of their densities using one-dimensional functionals, thus greatly simplifying their analysis. The definition is based on the permutation, induced by the operation ×g, on the elements of a probability vector. Before we provide the definition, let us consider (10), by which a leftbound message l is computed in the process of belief propagation decoding. Let h ∈ GF(q)\{0}, and consider l^{×h}. With density evolution, the label g_{d_j} is a random variable, independent of the other labels and of the rightbound messages. This leads us to the following definition:

Definition 7: A probability-vector random variable X is permutation-invariant if, for any fixed h ∈ GF(q)\{0}, the random variable X^{×h} is distributed identically with X.

Although this definition assumes plain-likelihood representation, it carries over straightforwardly to LLR representation, and the following lemma is easy to verify:

Lemma 7: Let W be an LLR-vector random-variable and X = W′ = LLR^{−1}(W). Then X is permutation-invariant if and only if, for any fixed h ∈ GF(q)\{0}, the random variable Ω ≜ W^{×h} is distributed identically with W.

To give an idea of why permutation-invariance is so useful, we now present two important lemmas involving permutation-invariant random variables. Both lemmas examine marginal random variables. The first lemma is valid for both probability-vector and LLR-vector representations.

Lemma 8: Let X (W) be a probability-vector (LLR-vector) random variable. If X (W) is permutation-invariant, then for any i, j = 1, ..., q − 1, the random variables X_i and X_j (W_i and W_j) are identically distributed.

The proof of this lemma is provided in Appendix IV-B.

Lemma 9: Let W be a symmetric LLR-vector random variable. Assume that W is also permutation-invariant. Then for all k = 1, ..., q − 1, W_k is symmetric in the binary sense, as defined by (20).

Note that this lemma does not apply to plain-likelihood representation. The proof of the lemma is provided in Appendix IV-C. Consider the following definition,

Definition 8: Given a probability-vector random variable X, we define the random-permutation of X, denoted X̃, as the random variable equal to X^{×g}, where g is randomly selected from GF(q)\{0} with uniform probability and is independent of X.

The definition with LLR-vector representation is identical. The following lemma links permutation-invariance with random-permutation.

Lemma 10: A probability-vector (LLR-vector) random-variable X (W) is permutation-invariant if and only if there exists a probability-vector (LLR-vector) random-variable T (S) such that X = T̃ (W = S̃).

In Appendix IV-E we present some additional useful lemmas that involve permutation-invariance. Finally, the following theorem discusses the relevance of permutation-invariance to the distributions tracked by density evolution.

Theorem 4: Let R^(0), R_t and L_t be defined as in Section VI-A. Then,
1) L_t is permutation-invariant.
2) R̂_t ≜ (R_t)^{×g}, where g is the label on the edge associated with the message, is symmetric, permutation-invariant and satisfies P_e(R̂_t) = P_e(R_t).
3) The densities tracked by density evolution are unchanged if R^(0) is replaced by its random-permutation R̂^(0), which is itself symmetric and permutation-invariant and satisfies P_e(R̂^(0)) = P_e(R^(0)).

The proof of this theorem is provided in Appendix IV-F. Although not all distributions involved in density-evolution are permutation-invariant, Theorem 4 enables us to focus our attention on permutation-invariant random variables alone. Our interest in the distribution of the rightbound message R_t is confined to the error probability implied by it. Thus we may instead examine R̂_t. Similarly, our interest in the initial message R^(0) is confined to its effect on the distribution of R̂_t and L_t. Thus we may instead examine R̂^(0).
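Random-permutation (Definition 8) is straightforward to realize numerically. The sketch below (our own illustration) applies ×g to a probability vector using the index convention (x^{×g})_i = x_{g·i}, with mod-q multiplication as a stand-in for GF(q) (exact for prime q). Since ×g fixes component 0, P_e is preserved, as Lemma 19 (Appendix IV-E) asserts.

import numpy as np

def times_g(x, g, q):
    # the xg permutation: (x^{xg})_i = x_{g*i mod q}; index 0 stays fixed
    return x[(g * np.arange(q)) % q]

def random_permutation(x, rng):
    q = len(x)
    return times_g(x, rng.integers(1, q), q)

q = 5
rng = np.random.default_rng(3)
x = rng.dirichlet(np.ones(q))        # a random probability vector
xt = random_permutation(x, rng)
print(x[0], xt[0])                   # component 0 unchanged => P_e preserved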
C. Stability

The stability condition, introduced by Richardson et al. [29], is a necessary and sufficient condition for the probability of error to approach arbitrarily close to zero, assuming it has already dropped below some value at some iteration. Thus, this condition is an important aid in the design of LDPC codes with low error floors. In this section we generalize the stability condition to coset GF(q) LDPC codes. Given a discrete memoryless channel with transition probabilities Pr[y | x] and a mapping δ(·), we define a channel parameter, denoted ∆. For example, consider an AWGN channel with a noise variance of σ; ∆ for this case is obtained in a similar manner to that of [29, Example 12]. In Appendix IV-G, we present the concept of non-degeneracy for mappings δ(·) and channels (taken from [1]). Under these assumptions, ∆ is strictly smaller than 1. We assume these non-degeneracy definitions in the following theorem. Finally, we are now ready to state the stability condition for coset GF(q) LDPC codes:

Theorem 5: Assume we are given the triplet (λ, ρ, δ) for a coset GF(q) LDPC ensemble designed for the above discrete memoryless channel. Let P_0 denote the probability distribution function of R^(0), the initial message of density evolution. Let P_e^t ≜ P_e(R_t) denote the average probability of error at iteration t under density evolution. Assume E[exp(s · R̂′^(0)_1)] < ∞ in some neighborhood of zero (where R̂′^(0)_1 denotes element 1 of the LLR representation of R̂^(0)). Then
1) If λ′(0)ρ′(1) > 1/∆, then there exists a positive constant ξ = ξ(ρ, λ, P_0) such that P_e^t > ξ for all iterations t.
2) If λ′(0)ρ′(1) < 1/∆, then there exists a positive constant ξ = ξ(ρ, λ, P_0) such that if P_e^t < ξ at some iteration t, then P_e^t approaches zero as t approaches infinity.

Note that the requirement E[exp(s · R̂′^(0)_1)] < ∞ is typically satisfied in channels of interest. The proof of Part 1 of the theorem is provided in Appendix V and the proof of Part 2 is provided in Appendix VI.
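Once ∆ is known for the channel and mapping at hand, checking the stability condition of Theorem 5 for a candidate pair (λ, ρ) is a one-line comparison. A sketch follows; the value of ∆ passed in below is a placeholder, since its defining expression is channel-dependent and not reproduced in this text.

def stable(lam, rho, delta_param):
    # lam, rho: dicts mapping degree -> edge fraction, e.g. {2: 0.4, 3: 0.6}
    lp0 = lam.get(2, 0.0)                            # lambda'(0) = lambda_2
    rp1 = sum(r * (j - 1) for j, r in rho.items())   # rho'(1)
    return lp0 * rp1 < 1.0 / delta_param

# e.g., the low-complexity edge distributions quoted in Section VIII-A,
# with a purely illustrative placeholder value for Delta:
print(stable({2: 0.3978, 3: 0.2853, 6: 0.3169}, {5: 0.203, 6: 0.797}, 0.5))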
Outlines of both proofs are provided below. The proof of Part 1 is a generalization of a proof provided by Richardson et al. [29]. The proof in [29] begins by observing that since the distributions at some iteration t are symmetric, they may equivalently be modelled as APP values corresponding to the outputs of an MBIOS channel. By an erasure decomposition lemma, the output of an MBIOS channel can be modelled as the output of a degraded erasure channel. The proof proceeds by replacing the distributions at iteration t by erasure-channel equivalents, and shows that the probability of error with the new distributions is lower-bounded by some nonzero constant. Since the true MBIOS channel is a degraded version of the erasure channel, the true probability of error must be lower-bounded by the same nonzero constant as well. Returning to the context of coset GF(q) LDPC codes, we first observe that by Theorem 1 the random variable R_t at iteration t is symmetric, and hence by Lemma 6 it can be modelled as APP values of the outputs of a cyclic-symmetric channel. We then show that any cyclic-symmetric channel can be modelled as a degraded erasurized channel, appropriately defined. The continuation of the proof follows along the lines of [29].

The proof of Part 2 is a generalization of a proof by Khandekar [20]. As in [20] (and also [6]), our proof tracks a one-dimensional functional of the distribution of a message X, denoted D(X). We show that the rightbound messages at two consecutive iterations satisfy

D(R_{t+1}) ≤ K · D(R_t)

where K < 1, and thus D(R_t) descends to zero. Further details, including the relation between D(R_t) and P_e^t, are provided in Appendix VI.

D. Gaussian Approximation

With binary LDPC codes, Chung et al. [9] observed that the rightbound messages of density-evolution are well approximated by Gaussian random variables. Furthermore, the symmetry of the messages in binary LDPC decoding implies that the mean m and variance σ² of the random variable are related by σ² = 2m. Thus, the distribution of a symmetric Gaussian random variable may be described by a single parameter: σ. This property was also observed by ten Brink et al. [35] and is essential to their development of EXIT charts. In the context of nonbinary LDPC codes, Li et al. [22] obtained a description of the (q − 1)-dimensional messages, under a Gaussian assumption, by q − 1 parameters. In the following theorem, we use symmetry and permutation-invariance as defined in Sections V-B and VI-B to reduce the number of parameters from q − 1 to one. This is a key property that enables the generalization of EXIT charts to coset GF(q) LDPC codes. Note that the theorem assumes a continuous Gaussian distribution. The definition of symmetry for LLR-vector random variables (Lemma 2) is extended to continuous distributions by replacing the probability function in (19) with a probability density function.

Theorem 6: Let W be an LLR-vector random-variable, Gaussian distributed with a mean m and covariance matrix Σ. Assume that the probability density function f(w) of W exists and that Σ is nonsingular. Then W is both symmetric and permutation-invariant if and only if there exists σ > 0 such that

m_i = σ²/2, i = 1, ..., q − 1, and Σ_{i,j} = σ² if i = j, σ²/2 otherwise.

The proof of this theorem is provided in Appendix VII. A Gaussian symmetric and permutation-invariant random variable is thus completely described by a single parameter σ. In Sections VII-B and VII-D we discuss the validity of the Gaussian assumption with coset GF(q) LDPC codes.
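The one-parameter family of Theorem 6 can be sampled directly, which is convenient when experimenting with the Gaussian approximation. A sketch (our own; dimension, σ and sample size are illustrative):

import numpy as np

def sample_symmetric_pi_gaussian(sigma, q, n, rng):
    # Gaussian LLR vectors per Theorem 6: mean sigma^2/2 in every coordinate,
    # covariance sigma^2 on the diagonal and sigma^2/2 off the diagonal.
    m = np.full(q - 1, sigma**2 / 2)
    S = np.full((q - 1, q - 1), sigma**2 / 2) + np.eye(q - 1) * sigma**2 / 2
    return rng.multivariate_normal(m, S, size=n)

rng = np.random.default_rng(4)
w = sample_symmetric_pi_gaussian(1.5, q=5, n=200000, rng=rng)
# Each marginal should be binary-symmetric (Lemma 9): mean sigma^2/2, var sigma^2.
print(w[:, 0].mean(), w[:, 0].var())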
VII. DESIGN OF COSET GF(q) LDPC CODES

With binary LDPC codes, design of edge distributions is frequently done using extrinsic information transfer (EXIT) charts [35]. EXIT charts are particularly suited for designing LDPC codes for AWGN channels. In this section we develop EXIT charts for coset GF(q) codes. We assume throughout the section transmission over AWGN channels.

A. EXIT Charts

Formally, EXIT charts track the mutual information I(C; W) between the transmitted code symbol C at an average variable node and the rightbound (leftbound) message W transmitted across an edge emanating from it. If this information is zero, then the message is independent of the transmitted code symbol and thus the probability of error is (q − 1)/q. As the information approaches 1, the probability of error approaches zero. Note that we assume that the base of the log function in the mutual information is q, and thus 0 ≤ I(C; W) ≤ 1. I(C; W) is taken to represent the distribution of the message W. That is, unlike density evolution, where the entire distribution of the message W at each iteration is recorded, with EXIT charts, I(C; W) is assumed to be a faithful surrogate (we will shortly elaborate how this is done).

With EXIT charts, two curves (functions) are computed: the VND (variable node decoder) curve and the CND (check node decoder) curve, corresponding to the rightbound and leftbound steps of density-evolution, respectively. The argument to each curve is denoted I_A and the value of the curve is denoted I_E. With the VND curve, I_A is interpreted as equal to the functional I(C; L_t) when applied to the distribution of the leftbound messages L_t at a given iteration t. The output I_E is interpreted as equal to I(C; R_t), where R_t is the rightbound message produced at the following rightbound iteration. With the CND curve, the opposite occurs. Note that unlike density-evolution, where the densities are tracked from one iteration to another, the VND and CND curves are evaluated for every possible value of their argument I_A. However, a decoding trajectory that produces an approximation of the functionals I(C; L_t) and I(C; R_t) at each iteration may be computed (see [36] for a discussion of the trajectory). The decoding process is predicted to converge if, after each decoding iteration (comprised of a leftbound and a rightbound iteration), the resulting mutual information is strictly greater than its value at the previous iteration. In an EXIT chart, the CND curve is plotted with its I_A and I_E axes reversed (see, for example, Fig. 7). The decoding process is thus predicted to converge if and only if the VND curve is strictly greater than the reversed-axes CND curve.

B. Using I(C; W) as a Surrogate

Let W be a leftbound or rightbound message at some iteration of belief-propagation. Strictly speaking, an approximation of I(C; W) requires not only the knowledge of the distribution of W but primarily the knowledge of the conditional distribution Pr[W | C = i] for all i = 0, ..., q − 1 (we assume that C is uniformly distributed). However, as shown in Lemma 17 (Appendix III-A), the messages of the coset GF(q) LDPC decoder satisfy a relation that determines Pr[W | C = i] from Pr[W | C = 0]. Thus, we may restrict ourselves to an analysis of the conditional distribution Pr[W | C = 0].

Lemma 11: Under the tree-assumption, the above defined W satisfies:

I(C; W) = 1 − E[ log_q (1 + Σ_{i=1}^{q−1} e^{−W_i}) | C = 0 ]     (26)

The proof of this lemma is provided in Appendix VIII-A. Note that by Lemma 16 (Appendix III-A), we may replace the conditioning on C = 0 in (26) by a conditioning on the transmission of the all-zero codeword. In the remainder of this section, we will assume that all distributions are conditioned on the all-zero codeword assumption.

In their development of EXIT charts for binary LDPC codes, ten Brink et al. [35] confine their attention to LLR message distributions that are Gaussian and symmetric. Under these assumptions, a message distribution is uniquely described by its variance σ². For every value of σ, they evaluate (26) (with q = 2) when applied to the corresponding Gaussian distribution. The result, denoted J(σ), is shown to be monotonically increasing in σ. Thus J^{−1}(·) is well-defined. Given I = I(C; W), J^{−1}(I) can be applied to obtain the σ that describes the corresponding distribution of W. Thus, I(C; W) uniquely defines the entire distribution of W. The Gaussian assumption is not strictly true. With binary LDPC codes, assuming transmission over an AWGN channel, the distributions of rightbound messages are approximately Gaussian mixtures (with irregular codes). The distributions of the leftbound messages resemble "spikes".
The EXIT method in [35] nonetheless continues to model the distributions as Gaussian. Simulation results are provided, which indicate that this approach still produces a very close prediction of the performance of binary LDPC codes. With coset GF(q) LDPC codes, we discuss two methods for designing EXIT charts. The first method models the LLR-vector message distributions as Gaussian random variables, following the example of [35]. This modelling also enables us to evaluate the VND and CND curves using approximations that were developed in [35], thus greatly simplifying their computation. However, the modelling of the rightbound message distributions of coset GF(q) LDPC codes as Gaussian is less accurate than it is with binary LDPC codes. As we will explain in Section VII-D, this results from the distribution of the initial messages, which is not Gaussian even on an AWGN channel. In Section VII-D we will therefore develop an alternative approach, which models the rightbound distributions more accurately. We will then apply this approach in Section VII-E, to produce an alternative method for computing EXIT charts. With this method, the VND and CND curves are more difficult to compute. However, the method produces codes with approximately 1 dB better performance.

C. Computation of EXIT Charts, Method 1

With this method, we confine our attention to distributions that are permutation-invariant, symmetric and Gaussian (strictly speaking, rightbound messages are not permutation-invariant; however, in Appendix VIII-B we show that this does not pose a problem to the derivation of EXIT charts). By Theorem 6, under these assumptions, a (q − 1)-dimensional LLR-vector message distribution is uniquely defined by a parameter σ. We proceed to define J(σ) in a manner similar to that of [35]: J(σ) equals (26) when applied to the Gaussian distribution corresponding to σ. In Appendix VIII-D we show that J(σ) is monotonically increasing, and thus J^{−1}(σ) is well defined. Given I = I(C; W), the distribution of W may be obtained in the same way as in [35]. We use the following method to compute the VND and CND curves, based on a development of ten Brink et al. [35] for binary LDPC codes.

1) The VND curve. By (15), a rightbound message is a sum of incoming leftbound messages and an initial message. Let I_A and I^(0) denote the mutual-information functionals of the incoming leftbound messages and initial messages, respectively. By Lemma 5, I^(0) equals the equiprobable-signalling capacity of the channel with the mapping δ(·). It may be obtained by numerically evaluating I(U; Y′) as defined in Section V-C. For each left-degree i, we let I_{E,VND}(I_A; i, I^(0)) denote the value of the VND curve when confined to the distribution of rightbound messages across edges whose left-degree is i. We employ an approximation analogous to that of [35], which holds under the tree assumption, when both the initial and the incoming leftbound messages are Gaussian; its validity relies on the observation that a rightbound message (15) is a sum of independent random vectors.

2) The CND curve. Let I_{E,CND}(I_A; j) denote the value of the CND curve when confined to the distribution of leftbound messages across edges whose right-degree is j. Here we use an approximation that is based on a similar approximation in [35] and relies on Sharon et al. [31]. In the context of coset GF(q) LDPC codes, we have verified its effectiveness empirically.

Given an edge distribution pair (λ, ρ), we have

I_{E,VND}(I_A) = Σ_i λ_i · I_{E,VND}(I_A; i, I^(0)),  I_{E,CND}(I_A) = Σ_j ρ_j · I_{E,CND}(I_A; j)     (27)
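Combining Theorem 6 with the mutual-information formula (26), J(σ) can be evaluated by Monte Carlo. The sketch below is our own; it applies (26) in the form reconstructed in Lemma 11, and all sample sizes are illustrative.

import numpy as np

def J(sigma, q, n=200000, seed=0):
    # I(C; W) for the one-parameter Gaussian family of Theorem 6, using
    # (26): I(C; W) = 1 - E[ log_q(1 + sum_i exp(-W_i)) ].
    rng = np.random.default_rng(seed)
    m = np.full(q - 1, sigma**2 / 2)
    S = np.full((q - 1, q - 1), sigma**2 / 2) + np.eye(q - 1) * sigma**2 / 2
    w = rng.multivariate_normal(m, S, size=n)
    return 1.0 - np.mean(np.log1p(np.exp(-w).sum(axis=1)) / np.log(q))

# J is monotonically increasing, so J^{-1} can be tabulated and inverted:
for s in (0.5, 1.0, 2.0, 4.0):
    print(s, round(J(s, q=5), 4))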
Code design may be performed by fixing the right-distribution ρ and computing λ. Like [35], the following constraints are used in the design.
1) λ is required to be a valid probability vector. That is, λ_i ≥ 0 for all i, and Σ_i λ_i = 1.
2) To ensure decoding convergence, we require I_{E,VND}(I, I^(0)) > I^{−1}_{E,CND}(I) (as explained in Section VII-A) for all I belonging to a discrete, fine grid over (0, 1).
The design process seeks to maximize Σ_i λ_i/i, which by (6) is equivalent to maximizing the design rate of the code. Typically, this can be done using a linear program (a sketch of such a program is given at the end of Section VII-E). A similar process can be used to design ρ with λ fixed.

D. More Accurate Modelling of Message Distributions

We now provide a more accurate model for the rightbound messages, as mentioned in Section VII-B. We focus, for simplicity, on regular LDPC codes. Observe that the computation of the rightbound message using (15) involves the summation of i.i.d. leftbound messages l′^(n). This sum is typically well-approximated by a Gaussian random variable (quantification of the quality of the approximation is beyond the scope of this discussion; "well-approximated" is to be understood in a heuristic sense, in the context of suitability to design using EXIT charts). To this sum, the initial message r′^(0) is added. With binary LDPC codes, transmission over an AWGN channel results in an initial message r′^(0) which is also Gaussian distributed (assuming the all-zero codeword was transmitted). Thus, the rightbound messages are very closely approximated by a Gaussian random variable. With coset GF(q) LDPC codes, the initial message is not well approximated by a Gaussian random variable, as illustrated in the following lemma:

Lemma 12: Consider the initial message produced at some variable node, under the all-zero codeword assumption, using LLR representation. Assume the transmission is over an AWGN channel with noise variance σ_z² and with a mapping δ(·). Let the coset symbol at the variable node be v. Then the initial message r′^(0) is given by r′^(0) = α(v) + β(v) · z, where z is the noise produced by the channel and α(v) and β(v) are (q − 1)-dimensional vectors, dependent on v, whose components are given by

α_i(v) = −(δ(i + v) − δ(v))² / (2σ_z²),  β_i(v) = (δ(i + v) − δ(v)) / σ_z²,  i = 1, ..., q − 1.

The proof of this lemma is straightforward from the observation that the received channel output is y = δ(v) + z. In our analysis, we assume a random coset symbol V that is uniformly distributed in GF(q). Thus, α(V) and β(V) are random variables, whose values are determined by the mapping δ(·) and by the noise variance σ_z². The distribution of the channel noise Z is determined by σ_z². The distribution of the initial messages is therefore determined by δ(·) and σ_z².

Fig. 5 presents the empirical distribution of LLR messages at several stages of the decoding process, as observed by simulations. The code was a (3, 6) coset GF(3) LDPC code. Since q = 3, the LLR messages in this case are two-dimensional. The distribution of the initial messages (Fig. 5(a)) is seen to be a mixture of one-dimensional Gaussian curves, as predicted by Lemma 12. The leftbound messages at the first iteration are shown in Fig. 5(b). We model their distribution as Gaussian, although it resembles a "spike" and not the distribution of a Gaussian random variable (this situation is similar to the one with binary LDPC [9]). Fig. 5(c) presents the sum of leftbound messages computed in the process of evaluating (15). As predicted, this sum is well approximated by a Gaussian random variable. Finally, the rightbound messages at the first iteration are given in Fig. 5(d).
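The mixture structure seen in Fig. 5(a) follows directly from Lemma 12: conditioned on v, the initial LLR message is an affine function of the single Gaussian noise sample z. A sketch that draws such messages using the α(v), β(v) expressions above (our own code; the mapping and noise level are illustrative, and mod-q addition stands in for GF(q), exact for prime q):

import numpy as np

def sample_initial_llr(delta, sigma_z, n, rng):
    q = len(delta)
    v = rng.integers(q, size=n)                       # uniform coset symbols
    z = sigma_z * rng.standard_normal((n, 1))         # AWGN noise
    d = delta[(np.arange(1, q)[None, :] + v[:, None]) % q] - delta[v][:, None]
    return -d**2 / (2 * sigma_z**2) + (d / sigma_z**2) * z   # alpha(v) + beta(v) z

rng = np.random.default_rng(7)
delta = np.array([-2., -1., 0., 1., 2.]) / np.sqrt(2)
w = sample_initial_llr(delta, 0.7, 100000, rng)
# For each fixed v the components are affine in the single Gaussian z, so the
# empirical density is a mixture of one-dimensional ridges, as in Fig. 5(a).
print(w.mean(axis=0))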
Following the above discussion, we model the distribution of the rightbound messages as the sum of two random vectors. The first is distributed as the initial messages above, and the second (the intermediate sum of leftbound messages) is modelled as Gaussian (with irregular codes, the number of i.i.d. leftbound variables that are summed is itself a random variable, distributed as {λ_i}_{i=1}^{c}, and thus the distribution of this sum resembles a Gaussian mixture rather than a Gaussian random variable; however, we continue to model it as Gaussian, following the example that was set with binary codes [35]). The intermediate value (the second random variable) is symmetric and permutation-invariant. This may be seen from the fact that the leftbound messages are symmetric and permutation-invariant (by Theorems 3 and 4) and from Lemmas 18 (Appendix III-E) and 22 (Appendix IV-E). Thus, by Theorem 6, it is characterized by a single parameter σ. Summarizing, the approximate distribution of rightbound messages is determined by three parameters: σ_z² and δ(·), which determine the distribution of the initial message, and σ, which determines the distribution of the intermediate value.

E. Computation of EXIT Charts, Method 2

The second method for designing EXIT charts differs from the first (Section VII-C) in its modelling of the initial and rightbound message distributions, following the discussion in Section VII-D. We continue, however, to model the leftbound messages as Gaussian. For every value of σ, we define J_R(σ; σ_z, δ) (σ_z and δ are fixed parameters) in a manner analogous to J(σ) as discussed in Section VII-C. That is, J_R(σ; σ_z, δ) equals (26) when applied to the rightbound distribution corresponding to σ, σ_z² and δ. In an EXIT chart, σ_z and δ(·) are fixed. The remaining parameter that determines the rightbound distribution is thus σ, and σ = J_R^{−1}(I; σ_z, δ) is well-defined (see Appendix VIII-E for a more accurate discussion of this matter). The computation of J_R and J_R^{−1} is discussed in Appendix VIII-E. The following method is used to compute the VND and CND curves.
1) The VND curve. For each left-degree i, we evaluate I_{E,VND}(I_A; i, σ_z, δ) (defined in a manner analogous to I_{E,VND}(I_A; i, I^(0)) of Section VII-C) using an approximation analogous to that of Section VII-C.
2) The CND curve. Let I_{E,CND}(I_A; j, σ_z, δ) be defined in a manner analogous to I_{E,CND}(I_A; j) of Section VII-C. The parameters σ_z and δ are used in conjunction with σ = J_R^{−1}(I_A; σ_z, δ) to characterize the distribution of the rightbound messages at the input of the check-nodes. The computation of I_{E,CND}(I_A; j, σ_z, δ) is done empirically and is elaborated in Appendix VIII-F.
Given an edge distribution pair (λ, ρ), we evaluate I_{E,VND}(I_A; σ_z, δ) and I_{E,CND}(I_A; σ_z, δ) from the above per-degree curves {I_{E,CND}(I_A; j, σ_z, δ)}_{j=1}^{d}, using expressions similar to (27). Note that J_R(σ; σ_z, δ) needs to be computed once for each choice of σ_z and δ(·); I_{E,CND}(σ; j, σ_z, δ) needs to be computed once for each value of j as well. J(σ) needs to be computed once for each choice of q. Code design then proceeds as in Section VII-C. Further details are provided in Section VII-F below.
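The linear program referred to in Section VII-C (and used again here) has a standard form: maximize Σ_i λ_i/i over the probability simplex subject to the pointwise curve constraint on a grid. A sketch using scipy follows; vnd(I, i) and cnd_inv(I) stand for precomputed per-degree VND and inverse CND curves, and are hypothetical placeholders to be supplied by the caller.

import numpy as np
from scipy.optimize import linprog

def design_lambda(degrees, grid, vnd, cnd_inv, margin=1e-4):
    # maximize sum_i lambda_i / i  <=>  minimize -sum_i lambda_i / i
    c = [-1.0 / i for i in degrees]
    # The mixture VND curve (27) is linear in lambda, so the convergence
    # constraint sum_i lambda_i * vnd(I, i) > cnd_inv(I) is linear as well.
    A_ub = [[-vnd(I, i) for i in degrees] for I in grid]
    b_ub = [-(cnd_inv(I) + margin) for I in grid]
    A_eq, b_eq = [[1.0] * len(degrees)], [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * len(degrees))
    return res.x if res.success else None

Fixing λ and optimizing ρ proceeds the same way with the roles of the curves exchanged.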
F. Design Examples

We designed codes for spectral efficiencies of 6 bits/s/Hz (3 bits per dimension) and 8 bits/s/Hz (4 bits per dimension) over the AWGN channel. In all our constructions, we used the above method 2 (Section VII-E) to compute the EXIT charts. Our Matlab source code is provided at [4].

For the code at 6 bits/s/Hz, we set the alphabet size at q = 32. We used a nonuniformly-spaced signal constellation A (following the discussion of Section III-C). The constellation was obtained by applying the following method, which is a variation of a method suggested by Sun and van Tilborg [33]. First, the unique points x_0 < x_1 < ... < x_{q−1} were computed such that for X ∼ N(0, 1), Pr[x_i < X < x_{i+1}] = 1/(q + 1) for i = 0, ..., q − 2, and Pr[X < x_0] = Pr[X > x_{q−1}] = 1/(q + 1). The signal constellation was obtained by scaling the result so that the average energy was 1. The mapping δ from the code alphabet is given below, with its elements listed in ascending order using the representation of GF(32) elements as binary numbers (e.g. δ(00000) = −2.0701, δ(00001) = −1.7096). Note, however, that our simulations indicate that for a given A, different mappings δ typically render the same performance. We fixed ρ(7) = 1 and iteratively applied linear programming, first to obtain λ, and then, fixing λ, to obtain a better ρ. Interestingly, this code is right-irregular, unlike typical binary LDPC codes. Fig. 6 presents the EXIT chart for the code (computed by method 2). Note that the CND curve in Fig. 6 does not begin at I_A = 0; this is discussed in Appendix VIII-F.

Simulation results indicate successful decoding at an SNR of 18.55 dB. The block length was 1.8·10^5 symbols, and decoding typically converged after approximately 150-200 iterations. The symbol error rate, after 50 simulations, was approximately 10^{−6}. The unconstrained Shannon limit (i.e. not restricted to any signal constellation) at this rate is 17.99 dB, and thus our gap from this limit is 0.56 dB. This result is well beyond the shaping gap, which at 6 bits/s/Hz is approximately 1.1 dB. We can obtain some interesting insight into these figures by considering the equiprobable-signalling Shannon limit for our constellation (defined based on the equiprobable-signalling capacity, which was introduced in Section V-C). At 6 bits/s/Hz, this limit equals 18.25 dB. The equiprobable-signalling Shannon limit is the best we can hope for with any design method for the edge-distributions of our code. The gap between our code's threshold and this limit is just 0.3 dB, indicating the effectiveness of our EXIT chart design method. The equiprobable-signalling Shannon limit for a 32-PAM constellation at 6 bits/s/Hz is 19.11 dB. The gap between this limit and the above-discussed limit for our constellation is 0.86 dB. This is the shaping gain obtained from the use of a nonuniform signal constellation.

For the code at 8 bits/s/Hz, we set the alphabet size at q = 64. We used the same method to construct a nonuniformly-spaced signal constellation and mapping. The code rate is 2/3 GF(64) symbols per channel use, equal to 4 bits per channel use, and a spectral efficiency of 8 bits/s/Hz. Fig. 7 presents the EXIT charts for the code using the two methods. Simulation results indicate successful decoding at an SNR of 25.06 dB over the AWGN channel. The block length was 10^5 symbols, and decoding typically converged after approximately 70 iterations. The symbol error rate, after 100 simulations, was exactly zero. We also applied an approximation of density-evolution by Monte-Carlo simulations, as mentioned in Section VI-A, and obtained similar results. The gap between our code's threshold and the unconstrained Shannon limit, which at 8 bits/s/Hz is 24.06 dB, is 1 dB. This result is beyond the shaping gap, which at 8 bits/s/Hz is 1.3 dB.
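Returning to the constellation construction described at the start of this subsection: it is easy to reproduce. The following sketch (our own code, using scipy's normal quantile function) approximately recovers the quoted constellation values.

import numpy as np
from scipy.stats import norm

def nonuniform_constellation(q):
    # Points x_0 < ... < x_{q-1} with Pr[x_i < X < x_{i+1}] = 1/(q+1) for
    # X ~ N(0,1), and 1/(q+1) mass below x_0 and above x_{q-1}; then scale
    # to unit average energy.
    x = norm.ppf(np.arange(1, q + 1) / (q + 1.0))
    return x / np.sqrt(np.mean(x**2))

A = nonuniform_constellation(32)
print(A[0], A[1])   # approx. -2.07 and -1.71, matching the quoted delta values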
The equiprobable-signalling Shannon limit for our signal constellation at 8 bits/s/Hz is 24.34 dB. The gap between our code's threshold and this limit is thus only 0.72 dB.

VIII. COMPARISON WITH OTHER BANDWIDTH-EFFICIENT CODING SCHEMES

The simulation results presented in Section VII-F indicate that coset GF(q) LDPC codes have remarkable performance over bandwidth-efficient channels. In this section, we compare their performance with multilevel coding using binary LDPC component codes and with turbo-TCM.

A. Comparison with Multilevel Coding (MLC)

Hou et al. [18] presented simulations for MLC over the AWGN channel at a spectral efficiency of 2 bits/s/Hz (equal to 1 bit per dimension), using a 4-PAM constellation. The equiprobable-signalling Shannon limit for 4-PAM at this rate is 5.12 dB (SNR); throughout this section, we assume equiprobable signalling whenever we refer to the Shannon limit. Their best results were obtained using multistage decoding (MSD). At a block length of 10^4 symbols, their best code is capable of transmission at 1 dB of the Shannon limit with an average BER of about 10^{−5}. It is composed of binary LDPC component codes with maximum left-degrees of 15. Our above code has obtained its superior performance at the price of increased decoding complexity, in comparison with the MLC code of [18].

We also designed a second code, with a lower decoding complexity, in order to compare the two schemes when the complexity is restricted. This code's edge distributions are given by λ(2, 3, 6) = (0.3978, 0.2853, 0.3169) and ρ(5, 6) = (0.203, 0.797). Our simulation results indicate that the code is capable of reliable transmission within 0.8 dB of the Shannon limit. The code's maximum left-degree is 6, and is thus lower than that of the MLC code of [18]. Consequently, it has a lower level of connectivity in its Tanner graph, implying that its slightly better performance was achieved at a comparable decoding complexity. A precise comparison between the decoding complexities of the two codes must account for the entire edge-distributions (rather than just the maximum left-degrees), and for the number of decoding iterations. Such a comparison is beyond the scope of this work.

Hou et al. [18] also experimented at a large block length of 10^6 symbols. Their best code is capable of transmission within 0.14 dB of the Shannon limit. At a slightly smaller block length (5·10^5 symbols), our above-discussed first code is capable of transmission within 0.2 dB of the Shannon limit (14 simulations), and thus has a slightly inferior performance. This may be attributed either to the smaller block-length that we used, or to the availability of density-evolution for the design of binary MLC component LDPC codes at large block lengths. Hou et al. [18] obtained their remarkable performance at large block lengths also at the price of increased decoding complexity (the maximum left-degrees of their component codes are 50). It could be argued that increasing the decoding complexity could produce improved performance also at the above-mentioned block length of 10^4. We believe this not to be true, because increasing the maximum left-degree would also result in an increase in the Tanner graph connectivity. This, at short block lengths, would dramatically increase the number of cycles in the graph, thus reducing performance. Summarizing, our simulations indicate that coset GF(q) LDPC codes have an advantage over MLC LDPC codes at short block lengths in terms of the gap from the Shannon limit. This result assumes no restriction on decoding complexity.
The simulations also indicate that when decoding complexity is restricted, both schemes admit comparable performance. In this case, however, further research is required in order to provide a more accurate comparison of the two schemes.

B. Comparison with Turbo Trellis-Coded Modulation (Turbo-TCM)

Robertson and Wörz [30] experimented with turbo-TCM at several spectral efficiencies and block lengths. The highest spectral-efficiency they experimented at was 5 bits/s/Hz. They used a 64-QAM constellation, and their best results were achieved at a block length of 3000 QAM symbols. They obtained a BER of 10^{−4} at an SNR of about 16.85 dB. The equiprobable-signalling Shannon limit at 5 bits/s/Hz is 16.14 dB, and thus their result is within approximately 0.7 dB of the Shannon limit. We experimented with an 8-PAM constellation and a block length of 6000 PAM symbols, which are the one-dimensional equivalents of the constellation and block length used in [30].

IX. CONCLUSION

A. Suggestions for Further Research

1) Nonuniform labels. The labels of GF(q) LDPC codes, as defined in Section III-A, are randomly selected from GF(q)\{0} with uniform probability. Davey and MacKay [10], in their work on GF(q) LDPC codes for binary channels, suggested selecting them differently. It would be interesting to investigate their approach (and possibly other approaches to the selection of the labels) when applied to coset GF(q) LDPC codes for nonbinary channels.
2) Density evolution. In Section VI-A, we discussed the difficulty of efficiently computing density evolution for nonbinary codes. An assumption in that discussion is that the densities would be represented on a grid of the form {−M/2 · ∆, ..., M/2 · ∆}^{q−1} (assuming LLR-vector representation), requiring an amount of memory of the order of (M + 1)^{q−1}. However, a more efficient approach would be to experiment with other forms of quantization, perhaps tailored to each density. We have tried applying the Lloyd-Max algorithm to design such quantizers for each density. However, the computation of the algorithm, coupled with the actual application of the quantizer, is too computationally complex. An alternative approach would perhaps make use of a Gaussian approximation as described in Section VI-D to design effective quantizers.
3) Other surrogates for distributions. In [6], the functional E[X] (X denoting a message of a binary LDPC decoder) was used to lower-bound (rather than approximate) the asymptotic performance of binary LDPC codes. It would be interesting to find a similar, scalar, functional that can be used to bound the performance of coset GF(q) LDPC codes. Another possibility is to experiment with the function D(X), which is defined in Appendix VI.
4) Comparison with the q-ary erasure channel (QEC). In a QEC(ε) channel, the output symbol is equal to the input with a probability of 1 − ε and to an erasure with a probability of ε. Much of the analysis of Luby et al. [23] for LDPC codes over binary erasure channels is immediately applicable to GF(q) LDPC codes over QEC channels (a sketch of the resulting recursion is given after this list). It may be possible to gain insight on coset GF(q) LDPC codes from an analysis of their use over the QEC.
5) Better mappings. The mapping function δ(·) that was presented in Section VII-F was designed according to a concept that was developed heuristically. Further research may perhaps uncover better mapping methods.
6) Additional channels. The development in Section VII focuses on AWGN channels. It would be interesting to extend this development to additional types of channels.
7) Additional applications. In [3], coset GF(q) LDPC codes were used for transmission over the binary dirty-paper channel. Applying an appropriately designed quantization mapping (as discussed in Section III-C), a binary code was produced whose codewords' empirical distribution was approximately Bernoulli(1/4). There are many other applications, besides bandwidth-efficient transmission, that could similarly profit from codewords with a nonuniform empirical distribution.
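For item 4, the analogy is concrete: with symbol-level erasures, a check node resolves its outgoing symbol exactly when all other neighbors are known (the labels are nonzero and hence invertible), so the familiar erasure recursion x_{t+1} = ε·λ(1 − ρ(1 − x_t)) carries over. A sketch (our own illustration):

def qec_density_evolution(eps, lam, rho, iters=200):
    # lam, rho: dicts degree -> edge fraction; x_t = P[rightbound msg erased]
    lam_of = lambda z: sum(l * z**(i - 1) for i, l in lam.items())
    rho_of = lambda z: sum(r * z**(j - 1) for j, r in rho.items())
    x = eps
    for _ in range(iters):
        x = eps * lam_of(1.0 - rho_of(1.0 - x))
    return x

# threshold scan for the (3,6)-regular ensemble (approx. 0.4294, as for the BEC):
for eps in (0.42, 0.43, 0.44):
    print(eps, qec_density_evolution(eps, {3: 1.0}, {6: 1.0}))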
B. Other Coset LDPC Codes

In [1], other nonbinary LDPC ensembles, called BQC-LDPC and MQC-LDPC, are considered (besides coset GF(q) LDPC). Random-coset analysis, as defined in Section V, applies to these codes as well. Similarly, the all-zero codeword assumption (Lemma 1) and the symmetry of message distributions (Definition 4 and Theorem 1) apply to these codes. With MQC-LDPC, +i in (2) is evaluated using modulo-q arithmetic instead of over GF(q). With BQC-LDPC decoders, which use scalar messages, symmetry coincides with the standard binary definition of [29]. Channel equivalence as defined in Section V-C applies to MQC-LDPC codes, but not to BQC-LDPC.

C. Concluding Remarks

Coset GF(q) LDPC codes are a natural extension of binary LDPC codes to nonbinary channels. Our main contribution in this paper is the generalization of much of the analysis that was developed by Richardson et al. [28], [29], Chung et al. [9], ten Brink et al. [35] and Khandekar [20] from binary LDPC codes to coset GF(q) LDPC codes. Random-coset analysis helps overcome the absence of output-symmetry. With it, we have generalized the all-zero codeword assumption, the symmetry property and channel equivalence. The random selection of the nonzero elements of the parity-check matrix (the labels) induces permutation-invariance on the messages. Although density-evolution is not realizable, permutation-invariance enables its analysis (e.g. the stability property) and approximation (e.g. EXIT charts).

Analysis of GF(q) LDPC codes would not be interesting if their decoding complexity were prohibitive. Richardson and Urbanke [28] have suggested using the multidimensional DFT. This, coupled with an efficient recursive algorithm for the computation of the DFT, dramatically reduces the decoding complexity and makes coset GF(q) LDPC an attractive option. Although our focus in this work has been on the decoding problem, it is noteworthy that the work done by Richardson and Urbanke [27] on efficient encoding of binary LDPC codes is immediately applicable to coset GF(q) LDPC codes. For simulation purposes, however, a pleasing side-effect of our generalization of the all-zero codeword assumption is that no encoder needs to be implemented. In a random-coset setting, simulations may be performed on the all-zero codeword alone (of the underlying LDPC code).

Using quantization or nonuniformly-spaced mapping produces a substantial shaping gain. This, coupled with our generalization of EXIT charts, has enabled us to obtain codes at 0.56 dB of the Shannon limit, at a spectral efficiency of 6 bits/s/Hz. To the best of our knowledge, these are the best codes found for this spectral efficiency. However, further research (perhaps along the lines of Section IX-A) may possibly narrow this gap to the Shannon limit even further.

APPENDIX I
PROPERTIES OF THE +g AND ×g OPERATORS

Lemma 13: For g ∈ GF(q)\{0} and i ∈ GF(q),
1) n(x^{×g}) = n(x) and n(x^{+i}) = n(x)
2) (x^{+i})* = x*

Lemma 14: For g ∈ GF(q)\{0}, (x*)^{×g} = (x^{×g})*.

The proof of the first of these operator identities is obtained from Lemma 13, identity 2. The remaining identities are straightforward.

APPENDIX II
NEIGHBORHOOD GRAPHS WITH CYCLES

Fig.
8(b) gives an example of a case where a neighborhood graph contains cycles. The neighborhood graph corresponds to the Tanner graph of Fig. 8(a). When the neighborhood graph contains cycles, the APP values computed by a belief-propagation decoder correspond to a virtual neighborhood graph. In this graph, nodes that are contained in cycles are duplicated to artificially create a tree structure. For example, in Fig. 8(c) a variable-node 1 ′ was produced by duplicating 1. The APP values are computed according to the virtual code 14C implied by this graph.C is virtual in the sense that it is based on false assumptions regarding the channel model and the transmitted code. In Fig. 8(c), the channel model falsely assumes that the nodes 1 ′ and 1 correspond to different channel observations. APPENDIX III A. Preliminary Lemmas The proofs in this section focus on the properties of a message produced at some iteration t of coset GF(q) LDPC belief propagation at a node n. Assuming the underlying code C is fixed, this message is a function of the 14 See Frey et al. [15] for an elaborate discussion. channel output y and the coset vector v. We therefore denote it by m(y, v). m(y, v) may be either a rightbound message from a variable-node or a leftbound message to a variable-node. In both cases, we denote the variable-node involved by i. We begin with the following lemma. Lemma 15: Let c be a codeword of C, y some given channel output, and v an arbitrary coset vector. Then (28) where c i is the value of c at the codeword position i. In the left hand side of (28), v − c is evaluated componentwise over GF(q). In the right hand side, we are using the notation of (2). The above expression is only an estimate of the true APP value. The code used by the decoder is not the LDPC code C, but rather the codeC defined by the parity-checks of the neighborhood graph spanned from n, as defined in Section IV-C and Appendix II. σ is a random variable representing the transmitted codeword ofC (prior to the addition of the coset vector) and σ i is its value at position i. The vectorsṽ andỹ are constructed from v and y by including only values at nodes contained in the neighborhood graph of node n. We definec similarly. If the neighborhood graph contains cycles, we use the virtual neighborhood graph defined in Appendix II. For each variable-node that has duplicate copies in this graph, elements of the true y, v and c will have duplicate entries iñ y,ṽ andc. The decoder assumes that all codewords are equally likely, hence (29) becomes m k (y, v) = σi=k,σ∈C Pr[ỹ was received | δ(σ +ṽ) was transmitted] σ∈C Pr[ỹ was received | δ(σ +ṽ) was transmitted] Equivalently, we obtain The wordc, having being constructed from a true codeword c ∈ C, satisfies all parity-checks in the neighborhood graph and is therefore a codeword ofC. Changing variables, we set σ ′ = σ −c. Thus, for any σ ∈C, we have We now examine X ∆ =m(Y, V), which denotes the rightbound (leftbound) message from (to) a variable-node i, at some iteration of belief-propagation. V and Y are random variables representing the coset vector and channel-output vectors, respectively. Lemma 16: For any k ∈ GF(q), the value Pr[X = x | c i = k] is well-defined in the sense that for any two codewords c (1) , c (2) ∈ C that satisfy c (1) was transmitted] = Pr[X = x | c (2) was transmitted] for all probability vectors x. (1) . Consider transmission of c (1) with an arbitrary coset vector of v, compared to transmission of c (2) with a coset vector of v−c. 
In both cases, the transmitted signal over the channel is δ(v+c (1) ), and hence the probability of obtaining any particular y is identical. The wordc satisfiesc i = 0. Since C is linear, we havec ∈ C. Therefore, Lemma 15 (Appendix III-A) implies We therefore obtain that Since V is uniformly distributed, averaging over all possible values of V completes the proof. The following lemma will be useful in Section VII-A. Lemma 17: For any k ∈ GF(q), Proof: The proof follows almost in direct lines as Lemma 16. Let c (2) be the all-zero codeword, and c (1) a codeword that satisfies c Averaging over all possible values of V completes the proof. B. Proof of Lemma 1 Let c be some codeword. Let E t y,v (c) denote the event of error at a message produced at a variable-node i after iteration t, assuming the channel output was y, the coset vector was v and the true codeword was c. In both cases, the word transmitted over the channel is δ(v) and hence the probability of obtaining any channel output y is the same. Therefore we obtain C. Proof of Lemma 2 We first assume that X is symmetric and prove (19). Let w be an arbitrary LLR-vector, x ∆ = LLR −1 (w) and x +i , w +i be defined using (2) and (5), respectively. where we have relied on Lemmas 13 and 14 (Appendix I). This proves (19). We now assume (19) and prove that X is symmetric. Let x and w be defined as above. The last equality is obtained from the fact that n(z) = n(x) (Lemma 13, Appendix I), and hence each z ∈ x * is added in q−1 i=0 Pr[X = x +i ] exactly n(x) times. We continue, The equality before last results from (1), recalling that w 0 = 0 in all LLR vectors. We thus obtain that X is symmetric as desired. D. Proof of Theorem 1 Let i be a variable-node associated with the message produced at n, defined as in Lemma 15 (Appendix III-A). Let C,ṽ andỹ be defined as in the proof of the lemma. Using this notation, we may equivalently denote the message produced at n by m(ỹ,ṽ). This is because the message is in fact a function only of the channel observations and coset vector elements contained in the neighborhood graph spanning from n. The following corollary follows immediately from the proof of Lemma 15. Corollary 1: Let σ be a codeword ofC. Then for anyỹ andṽ as defined above, where σ i is the value of σ at the codeword position corresponding to the variable-node i. We now return to X, a random variable corresponding to the message produced at n and equal to m(Ỹ,Ṽ). We assume plain-likelihood representation of messages. Let x be an arbitrary probability vector. Since we assume the all-zero codeword was transmitted, the random space consists of random selection ofṽ and the random channel transitions. Therefore, LetÑ be the block length of codeC (note that likeC,Ñ is a function of the neighborhood graph spanning from n, which is also a function of the iteration number). The set of all vectorsṽ ∈ {GF(q)}Ñ can be presented as a union of nonintersecting cosets ofC. That is where R is a set of coset representatives with respect toC. For each vectorṽ ∈ {GF(q)}Ñ , we let r ∈ R and σ ∈C denote the unique vectors that satisfyṽ = r + σ. Applying Corollary 1 again, we have forṼ ∈ {r +C} and assumingỸ =ỹ, where z ∆ = m(ỹ, r). We assumed m(ỹ, r) ∈ x * and therefore there exists some index l such that z = x −l (or equivalently x = z +l ). We first assume, for simplicity, that n(x) = 1. Therefore, l is unique, and no other index l ′ satisfies z = x −l ′ . From (36) we have that X = x if and only if Σ i = l. 
Therefore,

Pr[X = x | Ṽ ∈ {r + C̃}, Ỹ = ỹ] = Pr[Σ_i = l | Σ ∈ C̃, δ(r + Σ) was transmitted and Ỹ = ỹ was received]

Now the key observation in this proof is that under the tree assumption, the above corresponds to m_l(ỹ, r) = z_l. We now consider the general case of n(x) = K, for arbitrary K. In this case there are exactly K indices l_1, ..., l_K satisfying z = x^{−l_k}, k = 1, ..., K. Using the same arguments as before, we have

Pr[X = x | Ṽ ∈ {r + C̃}, Ỹ = ỹ] = Pr[Σ_i ∈ {l_1, ..., l_K} | Σ ∈ C̃, δ(r + Σ) was transmitted and Ỹ = ỹ was received]

Recalling (34) and (35), we now have the required identity. This proves (18).

E. The Sum of Two Symmetric Variables

The following lemma is used in Section VI-D.

Lemma 18: Let A and B be two independent LLR-vector random-variables. If A and B are symmetric, then A + B is symmetric too.

Proof: The proof relies on the observation that for all i ∈ GF(q) and LLR vectors a and b, (a + b)^{+i} = a^{+i} + b^{+i}. Let w be an LLR-vector and i ∈ GF(q) an arbitrary element; the symmetry condition (19) for A + B then follows from the corresponding conditions for A and B.

F. Proof of Lemma 3

The APP values are proportional to the channel transition probabilities: APP(y)_i = α · Pr[y | x = i], where α is some constant, independent of i (but dependent on y), selected such that the sum of the vector components is 1. Using (21), we have

APP(y)_i = α · y_i · n(y) · Q(y*) = (α · n(y) · Q(y*)) · y_i

y, being the output of the equivalent channel, is a probability vector. Thus the sum of all components of y is 1. Hence α · n(y) · Q(y*) = 1. We therefore obtain our desired result, APP(y) = y.

G. Proof of Lemma 4

Let Y be a random variable denoting the equivalent channel output, and assume the equivalent channel's input (denoted x in Fig. 4) was zero. Y thus corresponds to a vector of APP probabilities, computed using the physical channel output y′ and the coset vector component v. We can therefore invoke Theorem 1 and obtain that for any probability vector y, the distribution Pr[Y = y | x = 0] is symmetric. Note that Theorem 1 requires that the entire transmitted codeword be zero, and not only the symbol at a particular discrete channel time. However, since the initial message is a function of a single channel output, we can relax this requirement by considering a code that contains a single symbol. Let i be an arbitrary symbol from the code alphabet. Applying Lemma 17 (Appendix III-A) to the single-symbol code, we obtain the transition probabilities Pr[y | x = i] in the form required by Definition 5. Therefore the equivalent channel is cyclic-symmetric.

H. Proof of Lemma 5

Consider the following set of random variables, defined as in Fig. 4. X is the input to the equivalent channel. V is the coset symbol, and U = X + V, evaluated over GF(q). X′ = δ(U) is the physical channel input and Y′ is the physical channel output, related to X′ through the channel transition probabilities. Y ≜ APP(Y′, V) equals the output of the equivalent channel, which is a deterministic function of Y′ and V. Since the equivalent channel is symmetric, a choice of X that is uniformly distributed renders I(X; Y) equal to the equivalent channel's capacity. This choice of X renders U uniformly distributed as well, and thus C_δ = I(U; Y′). We will now show that I(U; Y′) = I(X; Y). In the derivation, Y denotes the physical channel's output alphabet, Y_X denotes the element of Y at index number X, and P is the set of all probability vectors. Using Lemma 4 and Definition 5 we have, for some probability function Q(y*), an expression (37) for the relevant transition probabilities. By definition of y as a probability vector, we have Σ_{i′=0}^{q−1} y_{i′} = 1, and thus (38) follows. Combining (37) with (38) completes the proof.

APPENDIX IV

A. Proof of Theorem 3

We prove the theorem for R_t. R_t is the message at iteration t averaged over all possibilities of the neighborhood tree T_t. The symmetry condition is verified by conditioning on each possible tree; the last equality in this verification is obtained from Theorem 1.
Hence R t is symmetric as desired (R (0) = R 0 is obtained as a special case). The proof for L t is similar. B. Proof of Lemma 8 Let g ′ ∆ = j/i (evaluated over GF(q)), The proof for W is identical. C. Proof of Lemma 9 First, we observe that w +k −k = w 0 − w k = −w k . We now have The last result having been obtained from Lemma 8. D. Proof of Lemma 10 We prove the lemma for the probability-vector representation. The proof for LLR-vector representation is identical. We first assume X =T and show that X is permutation-invariant. Let g ∈ GF(q)\{0} be randomly selected as in Definition 8, such that X = T ×g . Let g ′ ∈ GF(q)\{0} be arbitrary such that Ξ g · g ′ is a random variable, independent of T that is distributed identically with g. Thus, Ξ is identically distributed with T ×g =T = X. Since g ′ was arbitrary, we obtain that X is permutation-invariant. We now assume that X is permutation-invariant. Consider T ∆ =X ×g −1 , where g is uniformly random in GF(q)\{0} and independent of X. Equivalently, X = T ×g . We now show that T is independent of g, the last result having been obtained by the definition of X as permutation-invariant. Since the above is true for all g, T is independent of g. Thus, X =T as desired. E. Some Lemmas Involving Permutation-Invariance We now present some lemmas that are used in Appendices IV-F, VI and V and in Section VI-D. The first three lemmas apply to both the probability-vector and LLR representations of vectors. Lemma 19: IfX is a random-permutation of X, then P e (X) = P e (X). The proof of this lemma is obtained from the fact that the operation ×g, for all g, leaves element X 0 unchanged. Lemma 20: If X is a symmetric random variable, andX is a random-permutation of X, thenX is also symmetric. Proof: In the following derivation, we make use of the fact that n(x ×g ) = n(x) (see Lemma 13, Appendix I) and (x * ) ×g = (x ×g ) * (see Lemma 14, Appendix I). Combining (40) and (41) we obtain and thus conclude the proof. Lemma 21: If X is permutation-invariant andX is a random-permutation of X, thenX and X are identically distributed. The proof of this lemma is straightforward from Definitions 7 and 8. The following lemmas discuss permutation-invariance in the context of the LLR representation of randomvariables. Lemma 22: Let A and B be two independent, permutation-invariant LLR-vector random-variables. Then W = A + B is also permutation-invariant. Since g and w are arbitrary, this implies that W is permutation-invariant, as desired. Proof: We begin with the following equalities, Consider the expressions forW and Ω. g · h is identically distributed with g, and h is identically distributed with k. g · h is independent of h, and both are independent of A and B. The same holds if we replace g · h and h with g and k. ThusW and Ω are identically distributed. The proof forΩ is similar. F. Proof of Theorem 4 L t is permutation-invariant following the discussion at the beginning of Section VI-B, and thus Part 1 of the theorem is proved. where the label g is randomly selected, uniformly from GF(q)\{0}. ThusR t is a randompermutation of R t , and by Lemma 10 it is permutation-invariant.R t is symmetric by Lemma 20 (Appendix IV-E), and P e (R t ) = P e (R t ) by Lemma 19 (Appendix IV-E). This proves part 2 of the theorem. R (0) is permutation-invariant by its construction.R t is a random-permutation of R t . Switching to LLR representation, R ′ t is obtained by applying expression (15). 
The leftbound messages are permutation-invariant; hence, by Lemma 22 (Appendix IV-E), the sum Σ_{k=1}^{d_i−1} L′^{(k)}_t is also permutation-invariant. Using Lemma 23 (Appendix IV-E), the distribution of R̃′_t may equivalently be computed by replacing the instantiation r′^{(0)} of (15) with an instantiation of R̃′^{(0)}. The distribution of L_t is computed in density evolution recursively from R̃_t, using (10). Thus, the above discussion implies that replacing R^{(0)} with R̃^{(0)} would not affect this density either. The remainder of Part 3 of the theorem is obtained from Lemmas 20 and 19.

G. Non-Degeneracy of Channels and Mappings

A mapping is non-degenerate if there exists no integer n > 1 such that for all a ∈ A, the number of elements satisfying δ(x) = a is a multiple of n. With quantization-mapping, a degenerate mapping could be replaced by a simpler quantization over an alphabet of size q/n that would equally attain the desired input distribution Q(a). With nonuniform-spaced mapping, the number of elements mapped to each a ∈ A is 1, and thus this requirement is typically observed.

APPENDIX V
PROOF OF PART 1 OF THEOREM 5

In this section, we prove the necessity condition of Theorem 5. Our proof is a generalization of the proof provided by Richardson et al. [29]. An outline of the proof was provided in Section VI-C.

A. The Erasurized Channel

We begin by defining the erasurized channel for a given cyclic-symmetric channel and examining its properties. Our development in this subsection is general, and will be put into the context of the proof in the following subsection.

Definition 9: Let Pr[y | x] denote the transition probabilities of a cyclic-symmetric channel (see Definition 5).

• y_scnd is obtained by ordering the elements of the sequence (y_0, ..., y_{q−1}) in descending order and selecting the second largest. This means that if the maximum of the sequence elements is obtained more than once, then y_scnd would be equal to this maximum.

For output alphabet elements j ∈ {0, ..., q − 1} we define the erasurized transition probabilities P̂r[· | ·], where the parameter ǫ̂ is defined in terms of the original channel's transition probabilities.

The following lemma discusses the erasurized channel:

Lemma 24: The erasurized channel satisfies the following properties: 1) The transition probability function is valid. 2) The original cyclic-symmetric channel can be represented as a degraded version of the erasurized channel. That is, it can be represented as a concatenation of the erasurized channel with another channel, whose input would be the erasurized channel's output.

Proof: 1) It is easy to verify that ǫ̂ ≤ 1, and hence P̂r[y | x = i] ≥ 0 for all i by definition. The rest of the proof follows from the observation that, for all vectors y ∈ Y (recall that Y ⊂ Ŷ), the probabilities P̂r[y | x = i] sum appropriately. 2) We define a transition probability function q(y | ŷ), where ŷ ∈ Ŷ and y ∈ Y. It is easy to verify that the concatenation of the erasurized channel with q(· | ·) produces the transition probabilities Pr[y | x] of the original cyclic-symmetric channel.

The erasurized channel is no longer cyclic-symmetric. Hence, if we apply a belief-propagation decoder to the outputs of an erasurized channel, Lemma 3 does not apply, and the initial messages are not identical to the channel outputs. However, the following lemma (Lemma 25) summarizes some important properties of the initial message distribution, under the all-zero codeword assumption.

Proof: For any probability vector z, we define P_E(z) = Pr(z | the channel output was y ∈ Y), and P_2(z) = Pr(z | the channel output was j ∈ {0, ..., q − 1}). We now have the decomposition of the initial-message distribution into these two terms. We first examine P_E(z). Let y ∈ Y denote the channel output.
By definition, the APP vector z is obtained from y by normalization: α is a normalization constant, dependent on y but not on i, selected so that the sum of the vector elements (z_0, ..., z_{q−1}) is 1. We now examine all possibilities for y. First assume that the maximum of {y_0, ..., y_{q−1}} is obtained at y_0 and at y_0 only. Let i_scnd ≠ 0 be an index where the second-largest element of {y_0, ..., y_{q−1}} is obtained. Then by (47) and (42), z_{i_scnd} ≥ z_0. Now assume that the maximum is obtained at y_0 and also at y_{i_max}, where i_max ≠ 0. Then it is easy to observe that z_0 = z_{i_max}. Finally, assume that the maximum of {y_0, ..., y_{q−1}} is not obtained at y_0. Let i_max be an index such that y_{i_max} obtains the maximum. Then z_{i_max} ≥ z_0. In all cases, there exists an index i ≠ 0 such that z_i ≥ z_0, as required by (45).

Consider transmission over the original, cyclic-symmetric channel. Let P_e be the uncoded MAP probability of error. Let P̂_e be the corresponding probability over the erasurized channel. In the erasure decomposition lemma of [29], similarly defined P_e and P̂_e are both equal to (1/2)·ǫ, where ǫ is the erasure channel's erasure probability. In the following lemma we examine ǫ̂ of the erasurized channel.

Lemma 26: The error probabilities satisfy 1) P̂_e ≥ (1/2)·ǫ̂; 2) P_e ≥ P̂_e; 3) P_e ≤ ǫ̂.

Proof: 1) By symmetry, the decoding error is independent of the transmitted symbol, and we may assume that the symbol was 0. Consider the erasurized channel output Ŷ. The MAP decoder decides on the symbol with the maximum APP value. If more than one such symbol exists, a random decision among the maximizing symbols is made. Let Z denote the vector of APP values corresponding to Ŷ. By Lemma 25, we have that with probability ǫ̂, Z is distributed as P_E(z). Recalling (45), we have that for messages distributed as P_E(z), an error is made with probability at least 1/2. Therefore, P̂_e ≥ (1/2)·ǫ̂. 2) By Lemma 24, the cyclic-symmetric channel is a degraded version of the erasurized channel. Hence P_e ≥ P̂_e. 3) We now prove P_e ≤ ǫ̂. Let us assume once more that the symbol 0 was transmitted. Recall that we are now examining the decoder's performance over the cyclic-symmetric channel (and not the erasurized channel). Therefore, by Lemma 3, the vector of APP values (according to which the MAP decision is made) is identical to the channel output. Let P_e(y) be defined as in Definition 6. We will now show that the required inequality holds, which completes the proof.

To complete the proof of the theorem, we would like to show that the probability of error at iteration t cannot be too small. Let R_{t+n} denote the rightbound messages at iteration t + n, where n = 0, 1, .... By Lemma 4 (in a manner similar to [29]), R_t may equivalently be obtained as the initial message of a cyclic-symmetric channel. We now replace this channel with the corresponding erasurized channel, and obtain a lower bound on the probability of error at subsequent iterations. We let R̃_{t+n}, n = 0, 1, ..., denote the respective messages following the replacement. In the remainder of the proof, we switch to the log-likelihood representation of messages. We let R̃′_{t+n} denote the LLR-vector representation of R̃_{t+n}, n = 1, .... Adopting the notation of [29], we let Q_n(w) denote the distribution of R̃′_{t+n}. P_0 denotes the distribution of the initial message R′^{(0)} of the true cyclic-symmetric channel. Using LLR messages, Lemma 25 takes a corresponding LLR form. After n iterations of density evolution, the density becomes (in a manner similar to the equivalent binary case [29]) Q_n = P̄_E ⊗ P̄_0^{⊗(n−1)} ⊗ P_0, where P_0 is defined in Theorem 5, P̄_0 and P̄_E correspond to the random-permutations of P_0 and P_E (resulting from the effect of randomly selected labels), respectively, and ⊗ denotes convolution.
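(For intuition: because ⊗ is a convolution of message densities, and the density of a sum of independent random vectors is the convolution of their densities, such convolved densities can be approximated by Monte Carlo summation of independent samples. The sketch below is purely illustrative; the true P_E and P_0 of this proof are not reproduced here, so hypothetical placeholder Gaussian samplers stand in for them.)

```python
import numpy as np

# Illustrative Monte Carlo handling of convolved message densities: a sample
# from P_E (x) P_0^{(x)n} is simply a sum of independent samples.
rng = np.random.default_rng(0)
q = 4          # field size (placeholder)
n = 3          # number of P_0 factors in the convolution
N = 100_000    # Monte Carlo sample size

def sample_P0(size):
    # Placeholder for the initial-message density P_0: i.i.d. Gaussian
    # LLR vectors with positive mean (favoring the transmitted zero symbol).
    return rng.normal(loc=1.0, scale=2.0, size=(size, q - 1))

def sample_PE(size):
    # Placeholder for P_E: with probability 1 at least one component is <= 0,
    # mimicking the property of the erasurized channel's messages used below.
    w = rng.normal(loc=1.0, scale=2.0, size=(size, q - 1))
    idx = rng.integers(0, q - 1, size=size)
    w[np.arange(size), idx] = -np.abs(w[np.arange(size), idx])
    return w

w = sample_PE(N) + sum(sample_P0(N) for _ in range(n))

# Error event for the all-zero symbol: some LLR component is <= 0
# (ties counted as errors, i.e., a pessimistic decision rule).
p_e = np.mean((w <= 0).any(axis=1))
print(f"estimated error probability under the convolved density: {p_e:.4f}")
```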
Let Q̄_n denote the distribution of (R̃′_{t+n})^{×g}, where g is the random label on the edge along which R̃′_{t+n} is sent. Then Q̄_n = P̄_E ⊗ P̄_0^{⊗n}, where we have used Lemma 23 (Appendix IV-E) to obtain that a random-permutation of P̄_E ⊗ P̄_0^{⊗(n−1)} ⊗ P_0 is distributed as P̄_E ⊗ P̄_0^{⊗n}. Using Lemma 19 (Appendix IV-E), the probability of error (assuming the zero symbol was selected) is the same for Q̄_n and Q_n. Letting P_e(Q̄_n) denote this probability of error, we have the corresponding bound. Defining the probability function T = P̄_E ⊗ P̄_0^{⊗n}, we have a further bound. Recalling (49), P̄_E satisfies that with probability 1 there exists at least one index i ≠ 0 such that W_i ≤ 0. A random-permutation would transfer W_i to index 1 with probability 1/(q − 1). Hence we obtain (50).

Let P^{(1)}_0 denote the marginal distribution of the element R̃′^{(0)}_1 of R̃′^{(0)}. By Lemma 9, P^{(1)}_0 is symmetrically distributed in the binary sense. Following the development of [29] (similarly relying on results from [32, page 14]), we obtain (51). For the above limit to be valid, we first need (see [32]) that E exp(s·R̃′^{(0)}_1) < ∞ in some neighborhood of zero, as appears in the conditions of the theorem. We also need to show that E R̃′^{(0)}_1 > 0 (also see [32]). This will be proven shortly. We first examine E exp(−(1/2)·R̃′^{(0)}_1). We are now ready to show E R̃′^{(0)}_1 > 0. Recall from the discussion in Section VI-C that ∆ < 1. Using (53) and the Jensen inequality, we obtain the desired positivity. We now proceed with the proof. By (53), (52) becomes (56). The remainder of the proof follows in direct lines as in [29] and is provided primarily for completeness. Combining (50) with (51) and (56), we obtain that for arbitrary η > 0 and large enough n, a corresponding lower bound holds. If λ′(0)ρ′(1) > 1/∆, by appropriately selecting η we obtain that (58) holds for n large enough. O(·) denotes a function, dependent on λ, ρ, and n, such that |O(x)| < cx for some constant c. Hence there exists a constant ǫ̂(λ, ρ, n) such that if ǫ̂ < ǫ̂(λ, ρ, n), then (58) is satisfied.

We now return to examine P^t_e and P^{t+n}_e, the probabilities of error over the true channel, prior to the replacement of messages with those of an erasurized channel. Since the true channel is degraded in relation to the erasurized channel, we must have, for ǫ̂ < ǫ̂(λ, ρ, n), P^{t+n}_e ≥ P_e(Q̄_n). By Lemma 26, ǫ̂ ≤ 2P^t_e. Hence there exists ξ(ρ, λ, P_0) such that if P^t_e ≤ ξ, then ǫ̂ < ǫ̂(λ, ρ, n) and hence (58) is satisfied. However, Lemma 26 also asserts P^t_e ≤ ǫ̂. Hence P_e(Q̄_n) > P^t_e, and consequently P^{t+n}_e > P^t_e. This contradicts Theorem 2. Thus we obtain our desired result of P^t_e > ξ(ρ, λ, P_0) for all t.

APPENDIX VI
PROOF OF PART 2 OF THEOREM 5

In this section, we prove the sufficiency condition of Theorem 5. Our proof is a generalization of the proof provided by Khandekar [20] from binary to coset GF(q) LDPC. An outline of the proof was provided in Section VI-C. Note that throughout the proof we denote by O(·) functions for which there exists a constant c > 0, not dependent on the iteration number t, such that |O(x)| < c·x.

We are interested in P_e(R_t) (defined as in (22)), where R_t is the rightbound message as defined in Section VI-A. We begin, however, by analyzing a differently defined D(R_t). Let X be a probability-vector random variable. The operator D(X) is defined by (59), where X̃ is a random-permutation of X. By definition of the random-permutation, this definition is equivalent to (60), for all k = 1, ..., q − 1. Letting W = LLR(X), we obtain the LLR form (61). Note that when q = 2, this equation coincides with the Bhattacharyya parameter that is used in [20], equation (4.4).
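For a numerical feel for D(·), it can be estimated directly from LLR samples. The sketch below assumes the unnormalized form D(W) = Σ_{k=1}^{q−1} E[exp(−W_k/2)], which at q = 2 reduces to the binary Bhattacharyya parameter E[exp(−W/2)]; the paper's exact normalization in (59)–(61) is not visible in the text above, so this form should be read as an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def bhattacharyya_D(w_samples):
    """Estimate a Bhattacharyya-style parameter from LLR-vector samples.

    Assumes D(W) = sum_{k=1}^{q-1} E[exp(-W_k / 2)]; for q = 2 this reduces
    to the binary Bhattacharyya parameter E[exp(-W / 2)].
    """
    return np.exp(-0.5 * np.asarray(w_samples)).mean(axis=0).sum()

# Sanity check at q = 2: for the consistent Gaussian LLR W ~ N(s^2/2, s^2),
# E[exp(-W/2)] = exp(-s^2/8) in closed form.
s = 2.0
w = rng.normal(s**2 / 2, s, size=(1_000_000, 1))
print(bhattacharyya_D(w), np.exp(-s**2 / 8))  # the two should agree closely
```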
From Lemma 27 (Appendix V-B) we obtain a corresponding bound involving D(R^{(0)}), where R^{(0)} is the initial message as defined in Section VI-A. We now develop a convenient expression for D(X).

Lemma 28: Let X denote a probability-vector symmetric random variable. Then D(X) = E f(X), with f(·) as defined in (62).

Proof: From (59) we have a double expectation: the outer expectation is over all sets x*, and the inner expectation is conditioned on a particular set x*. We first focus on the inner expectation. The last equality was obtained in the same way as (31). In the following, we use the fact that n(x^{+k}) = n(x) (Lemma 13, Appendix I). f(x) is invariant under any permutation of the elements; it is therefore constant for all vectors of the set x*. Thus we can rewrite the above accordingly. Plugging the above into (63) completes the proof.

We now examine the function f(·).

Lemma 29: For any probability vector x, f(x) ≥ 0.

Proof: f(x) ≥ 0 is obtained trivially from (62) by observing that all elements of the sum are nonnegative.

Lemma 30 bounds f(x) in terms of the maximal element of x. Proof: Let i_max be an index that achieves the maximum in (x_0, ..., x_{q−1}). Consider (62). For a particular element x_i·x_j, assume without loss of generality i = i_max. By the definition of x, we obtain the first inequality; a similar argument yields the second. Combining both inequalities proves the lemma.

We now state our main lemma of the proof:

Lemma 31: Let x^{(1)}, ..., x^{(K)} be a set of probability vectors. Then f(x^{(1)} ⊙ ··· ⊙ x^{(K)}) is bounded in terms of f(x^{(1)}), ..., f(x^{(K)}), where ⊙ denotes GF(q) convolution, defined in (11) and used in (13).

Proof: We begin by examining the case of K = 2. We denote x^{(1)} and x^{(2)} by a and b. To simplify our analysis, we assume that a_0 = max(a_0, ..., a_{q−1}). We may assume this, because otherwise we can apply a shift by −i_max to move the maximum to zero. This operation does not affect f(a). It is easy to verify that a^{−i_max} ⊙ b = (a ⊙ b)^{−i_max}, and hence the operation does not affect f(a ⊙ b) either. Similarly, we assume b_0 = max(b_0, ..., b_{q−1}). By the definition of f(·), we have the expansion (65). We now examine elements of the sum. We first examine the case that i ≠ 0 and j = 0. The result for the case of i = 0 and j ≠ 0 is similarly obtained. We now assume i, j ≠ 0 (the element i = j = 0 does not participate in the sum). Inserting the above into (65), we obtain the required bound, the last equality having been obtained from Lemma 30. Finally, from the above we easily obtain the desired result. For the case of K > 2, we begin by observing that the K-fold convolution can be evaluated two vectors at a time. The remainder of the proof is obtained by induction, using Lemma 29.

We now use the above lemma to obtain the following result.

Lemma 32: D(R_{t+1}) satisfies the recursive bound (66) in terms of D(R_t).

Proof: Consider R_t. Since R̃_t is obtained from it by applying a random permutation, we obtain, using Lemma 28 and the fact that f(x) is invariant under a permutation on x, that D(R_t) = E f(R_t) = E f(R̃_t) = D(R̃_t). Thus we may instead examine R̃_t. Similarly, we examine L̃_t instead of L_t. Assume the right-degree at a check node is d. By (13), the leftbound message is the GF(q) convolution of {R̃^{(k)}}_{k=1}^{d−1}, which are i.i.d. and distributed as R̃_t. In the following, we make use of Lemma 31. Averaging over all possible values of d, we obtain (67). We now turn to examine D(R_{t+1}). Assume the variable-node degree at which R_t is produced is deg. Applying (59) and (8), we have an expression in which {L^{(n)}}_{n=1}^{deg−1} are i.i.d. and distributed as L_{t+1}. By Theorem 4, {L^{(n)}}_{n=1}^{deg−1} are permutation-invariant, and thus, by Lemma 21 (Appendix IV-E), are distributed identically with their random-permutations {L̃^{(n)}}_{n=1}^{deg−1}. Thus we obtain the corresponding expression. Applying (60) and reordering the elements, we obtain the product form; the second equality was obtained from (59), and the last equality is obtained from (61). Averaging over all values of deg, we obtain (68). The function λ(x) is by definition a polynomial with non-negative coefficients.
It is thus nondecreasing in the range 0 ≤ x ≤ 1. Using (67) and (68), we obtain (66).

The following lemma examines convergence to zero of D(R_t).

Lemma 33: There exists a constant α > 0 such that if D(R_{t_0}) < α at some iteration t_0, then the sequence {D(R_t)}_{t=t_0}^{∞} converges to zero.

Proof: Using the Taylor expansion of the function ρ(1 − x) around x = 0, we have ρ(1 − x) = 1 − ρ′(1)·x + O(x²), where the equality ρ(1) = 1 is obtained by the definition of the function ρ(x). Plugging the above into (66), we obtain a bound on D(R_{t+1}). Using the Taylor expansion of λ(x) around x = 0, we obtain that, for D(R_t) sufficiently small, D(R_{t+1}) ≤ K·D(R_t), where K is a positive constant smaller than 1. By induction, this holds for all t > t_0. We have D(R_t) ≥ 0 by definition, and therefore the sequence {D(R_t)}_{t=t_0}^{∞} converges to zero.

Finally, the following lemma links the operator D(·) with our desired P_e(·), defined as in (22).

Lemma 34: Let X be a symmetric probability-vector random variable. Then (1/q²)·D(X)² ≤ P_e(X) ≤ (q − 1)·D(X).

Proof: We begin by showing that P_e(X) = E ε(X). The last result was obtained in the same way as (63). Using this and the symmetry of X, we obtain the corresponding expression. By Lemma 13 (Appendix I), n(x^{+i}) = n(x). We thus continue our development; the result P_e(X) = E ε(X) is obtained from the fact that ε(·) is constant over all vectors in x*. We now have, using Lemmas 28 and 30 and the Jensen inequality, the first bound. This proves (1/q²)·D(X)² ≤ P_e(X). For the second inequality, we observe a bound whose last step is obtained by Markov's inequality. Combining the above with (59), we obtain our desired result of P_e(X) ≤ (q − 1)·D(X).

Finally, consider the value α of Lemma 33. Setting ξ = α²/q², we have from Lemma 34 that if P_e(R_{t_0}) < ξ then D(R_{t_0}) < α, and thus D(R_t) converges to zero. Applying Lemma 34 again, this implies that P_e(R_t) converges to zero, and thus completes the proof of Part 2 of the theorem.

PROOF OF THEOREM 6

We begin by observing that since W is Gaussian, W is symmetric if and only if, for all i = 1, ..., q − 1 and arbitrary LLR vector w, condition (69) holds. We first assume that W is symmetric and permutation-invariant and prove (25). Since W is permutation-invariant, by Lemma 8 we have m_i = E W_i = E W_j = m_j for all i, j = 1, ..., q − 1. We therefore denote m ≜ m_1 = ... = m_{q−1}. We begin by proving that m ≠ 0. We prove this by contradiction, and hence we first assume m = 0. Consider the marginal distribution of W_i for i = 1, ..., q − 1, which must also be Gaussian. Since m_i = 0, the pdf of W_i satisfies f_i(w) = f_i(−w). By Lemma 9, W_i is symmetric in the binary sense. Hence f_i(w) = e^{−w}·f_i(−w). Combining both equations yields f_i(w) = 0 for all w ≠ 0. Hence W_i is deterministic, with zero variance, for all i. This leads to Σ = 0, which contradicts the theorem's condition that Σ is nonsingular.

We now show that conditions (69), i = 1, ..., q − 1, uniquely define Σ. Since Σ is symmetric, so is Σ^{−1}. Assume A and B are two symmetric matrices such that (69) is satisfied, substituting Σ^{−1} with A and with B, respectively. Subtracting the equation for B from that of A, we obtain (70) for i = 1, ..., q − 1, where D ≜ A − B. For convenience, we let L_i denote the matrix corresponding to the linear transformation L_i·w = w^{+i}. Differentiating (70) twice with respect to w, we obtain that L_i^T·D·L_i = D. (70) may now be rewritten accordingly. Observe that x, like w, is arbitrary. Simple algebraic manipulations lead us to D·(m^{+i} − m) = 0. We wish to show that the vectors {m^{+i} − m}_{i=1}^{q−1} are linearly independent. From (5), we have (m^{+i} − m)_k = m_{i+k} − m_i − m_k. Recall from Section II that i + k is evaluated over GF(q) and that m_0 = 0. From our previous discussion, m_i = m for all i = 1, ..., q − 1. Therefore, the entries are determined for all i ≠ 0, k ≠ 0. Let M denote the matrix whose columns are the vectors {m^{+i} − m}_{i=1}^{q−1}, and define the matrix V accordingly. It is easy to verify that V is the inverse of M.
Hence M is nonsingular, and its columns, the vectors {m^{+i} − m}_{i=1}^{q−1}, are thus linearly independent. We now have q − 1 linearly independent vectors that satisfy D·(m^{+i} − m) = 0. Hence D = 0, and we obtain that A = B as desired.

We now assume (25) and prove that W is symmetric and permutation-invariant. From (25) it is clear that any reordering of the elements of W has no effect on its distribution, and thus W is permutation-invariant. To prove symmetry, we observe that the development ending with (77) relies on (25). By (5), w_j − w_k = (w^{+k})_{j−k}. Since the third summation is over all j, we obtain, by changing variables j′ = j − k (evaluated over GF(q)), the desired condition.

B. The Permutation-Invariance Assumption with EXIT Method 1

In this section, we discuss a fine point of the assumption of permutation-invariance used in the development of EXIT charts by Method 1 (Section VII-C). Strictly speaking, the initial message R′^{(0)} and rightbound messages R′_t are not permutation-invariant. However, we now show that we may shift our attention to R̃′^{(0)} and R̃′_t, defined as in Theorem 4, which are symmetric and permutation-invariant.

We first show that I(C; R′^{(0)}) and I(C; R′_t), evaluated using (26), are equal to I(C; R̃′^{(0)}) and I(C; R̃′_t), respectively. It is straightforward to observe that the right-hand side of (26) is invariant to any fixed permutation of the elements of the random vector W. Thus, a random-permutation will also have no effect on its value. By the discussion in Appendix IV-F, R̃′^{(0)} and R̃′_t are random-permutations of R′^{(0)} and R′_t, respectively. Thus, we have obtained our desired result.

We proceed to show that the derivation of the approximation of I_{E,VND} in Section VII-C is justified if we replace R′^{(0)} and R′_t with R̃′^{(0)} and R̃′_t. By the discussion in Appendix IV-F, R̃′_t may be obtained by replacing the instantiation r′^{(0)} of R′^{(0)} in (15) with an instantiation of R̃′^{(0)}. Thus, R̃′_t is obtained from L′_t and R̃′^{(0)} using the same expressions through which R′_t is obtained from L′_t and R′^{(0)}. Therefore, the discussion of the derivation of the approximation for I_{E,VND} (see Appendix VII-C) remains justified. By the discussion in Appendix IV-F, the distribution of L_t is obtained from R̃_t using (10), and the distribution of R_t is not required for its computation. Finally, the approximation for I_{E,CND} in Section VII-C has been verified empirically, and therefore does not require any further justification.

C. Gaussian Messages as Initial Messages of an AWGN Channel

Let W be a Gaussian LLR-vector random variable defined as in Theorem 6. Let Pr[w | x] be the transition probabilities of the cyclic-symmetric channel defined by W (see Lemma 6 and Remark 1, Section V-C). We will now show that this channel is in effect a (q − 1)-dimensional AWGN channel. We begin by examining Pr[w | x = i]. The channel output, conditioned on transmission of i, is distributed as W^{−i}. The operation −i, as defined by (5), is linear. Thus W^{−i} is Gaussian with a mean of m^{−i} (m being defined by (25)) and a covariance matrix which we will denote by Σ^{(−i)}. Let k, l = 1, ..., q − 1; evaluating the entries shows that the covariance is independent of i. The above implies that the cyclic-symmetric channel defined by W is distributed as a (q − 1)-dimensional AWGN channel whose noise is distributed as N(0, Σ) and whose input alphabet is given by δ(i) = m^{−i}. Both the noise and the input alphabet are functions of σ.
By definition, this channel is cyclic-symmetric, and thus the LLR-vector initial messages of LDPC decoding satisfy r′^{(0)} = w, where w is the channel output. In the sequel, we would like to consider channels whose input alphabet is independent of σ. For this purpose, we consider a channel whose output y is obtained from w by y = (2/σ²)·w. The result is equivalent to an AWGN channel whose input alphabet is given by δ(i) = (2/σ²)·m^{−i} = 1^{−i}, where 1 ≜ [1, ..., 1]^T, and whose noise is distributed as N(0, Σ_z), where Σ_z = (2/σ²)²·Σ. Letting σ_z ≜ 2/σ, we obtain that Σ_z is defined as the matrix Σ of (25) with σ substituted by σ_z. The multiplication by 2/σ² is invertible and does not affect the information carried by the initial messages of LDPC decoding; thus r′^{(0)} = w = (σ²/2)·y = (2/σ_z²)·y. We summarize these results in the following lemma.

Lemma 35: Consider transmission over a (q − 1)-dimensional AWGN channel, and assume zero-mean noise with a covariance matrix Σ_z defined as the matrix Σ of (25) with σ substituted by σ_z. Assume the following mapping from the code alphabet: δ(i) = 1^{−i}, i = 0, ..., q − 1, where −i is defined using the LLR representation and 1 is defined above. Then, under the all-zero codeword assumption, the LLR-vector initial message (2/σ_z²)·y is distributed as the Gaussian W of Theorem 6, with σ = 2/σ_z.

D. Properties and Computation of J(·)

We examine J(σ) in lines analogous to the development of ten Brink [36] for binary codes. In Appendix VIII-C, we showed that a Gaussian W, distributed as in Theorem 6 and characterized by σ, may equivalently be obtained as the initial message, under the all-zero codeword assumption, of a (q − 1)-dimensional AWGN channel characterized by a parameter σ_z = 2/σ. The capacity of this channel is J(σ) = I(C; W). The parameter σ_z induces an ordering on the AWGN channels such that channels with a greater σ_z are degraded with respect to channels with a lower σ_z. Thus J(σ) is monotonically increasing, and J^{−1}(·) is well-defined. As σ → ∞, σ_z approaches zero; thus J(σ) approaches its maximal value.

To compute J(·) and J^{−1}(·), we need to evaluate (26) for a Gaussian random variable as defined in Theorem 6. Following [35], we evaluate (26) for values of σ along a fine grid in the range σ ∈ (0, ..., 6.5) (6.5 being selected because J(6.5) ∼ 1), and then apply a polynomial best-fit to obtain an approximation of J(·) and J^{−1}(·) (note that this operation is performed once: the resulting polynomial approximations of J(·) and J^{−1}(·) are the same for all codes). In [35], the equivalent J(·) was evaluated by numerically computing the one-dimensional integral by which the expectation is defined. In our case, the distribution of W is multidimensional and is more difficult to evaluate. We therefore evaluate the right-hand side of (26) empirically, by generating random samples of W according to Theorem 6.

E. Computation of J_R(σ; σ_z, δ)

The computation of J_R(σ; σ_z, δ) is performed in lines analogous to the computation of J(σ) as described in Appendix VIII-D. We compute J_R(σ; σ_z, δ) for fixed values of σ_z and δ and for values of σ along a fine grid in the range σ ∈ (0, ..., 6.5). We then apply a polynomial best-fit to obtain an approximation of J_R(σ; σ_z, δ) for all σ and an approximation of J_R^{−1}(I; σ_z, δ). To compute J_R(σ; σ_z, δ) at a point of the above-discussed grid, we evaluate the right-hand side of (26) empirically (replacing W with the rightbound message). Note that unlike J(σ), which satisfies J(0) = 0, J_R(0; σ_z, δ) is greater than zero. This results from the fact that the distribution of the rightbound message R′ corresponding to σ = 0 is equal to the initial message R′^{(0)}, and I(C; R′^{(0)}) > 0.
Letting I^{(0)} = I(C; R′^{(0)}), we have that J_R^{−1}(I; σ_z, δ) is not defined in the range I ∈ [0, I^{(0)}).

F. Computation of I_{E,CND}(I_A; j, σ_z, δ)

Our development begins in the lines of Appendices VIII-D and VIII-E. We compute I_{E,CND}(I_A; j, σ_z, δ) for fixed values of σ_z and δ and for values of I_A along a fine grid. We then apply a polynomial best-fit to obtain an approximation of I_{E,CND}(I_A; j, σ_z, δ) for all I_A in this range. To compute I_{E,CND}(I_A; j, σ_z, δ) at a point of the above-discussed grid, we again evaluate the right-hand side of (26) empirically. We begin by applying J_R^{−1}(I_A; σ_z, δ) to obtain the value of σ which (together with σ_z and δ) characterizes the LLR-vector rightbound message distribution. We then produce samples of rightbound messages as described in Appendix VIII-E. We also produce samples of labels g ∈ GF(q)\{0} that are required to compute the leftbound samples l̃′ of L̃′. The label samples are generated by uniform random selection. We use the samples l̃′ of L̃′ to empirically evaluate the right-hand side of (26) (replacing W with L̃′) and obtain I_{E,CND}(I_A; j, σ_z, δ). Note that computing (26) with L̃′ instead of L′ has no effect on the final result. Finally, I_{E,CND}(I_A; j, σ_z, δ) as defined in Section VII-E, like J_R^{−1}(I; σ_z, δ) (discussed in Appendix VIII-E), is not defined for I ∈ [0, I^{(0)}). This interval is not used in the EXIT chart analysis of Section VII-E.
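As a concrete illustration of the empirical procedure described in Appendices VIII-D through VIII-F, the following fragment estimates a J(σ)-style curve by sampling Gaussian LLR vectors and fitting a polynomial to the grid values. Two ingredients are assumptions, since (25) and (26) are not reproduced above: the Gaussian form used (mean (σ²/2)·1, covariance σ² on the diagonal and σ²/2 off it) and a normalized mutual-information formula I(C; W) = 1 − E[log_q(1 + Σ_k e^{−W_k})]; this is a sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
q = 4  # field size (illustrative)

def sample_W(sigma, n):
    # Assumed Gaussian-approximation form of (25): mean (sigma^2/2)*1,
    # covariance sigma^2 on the diagonal and sigma^2/2 off the diagonal.
    mean = np.full(q - 1, sigma**2 / 2)
    cov = (sigma**2 / 2) * (np.eye(q - 1) + np.ones((q - 1, q - 1)))
    return rng.multivariate_normal(mean, cov, size=n)

def J(sigma, n=50_000):
    # Assumed form of (26), normalized so that J -> 1 as sigma grows:
    # I(C; W) = 1 - E[ log_q(1 + sum_k exp(-W_k)) ].
    if sigma == 0.0:
        return 0.0
    w = sample_W(sigma, n)
    return 1.0 - np.mean(np.log1p(np.exp(-w).sum(axis=1))) / np.log(q)

# Evaluate J on a fine grid in sigma (the text uses (0, 6.5]) and fit a
# polynomial once; the fit is then reused for all codes, as described above.
grid = np.linspace(0.05, 6.5, 40)
vals = np.array([J(s) for s in grid])
coeffs = np.polyfit(grid, vals, deg=7)
print("J(3.0) ~", np.polyval(coeffs, 3.0))
```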
Application of time-series regularity metrics to ion flux data from a population of pollen tubes

ABSTRACT

Detecting the presence of irregularity/regularity or chaos in the ion flows of an evolving plant cell is an important task that can be tackled by performing the analyses with different metrics. Here I show that the results of the advanced fluctuation estimation methods, obtained from the time series generated by the extracellular ion fluxes of tobacco pollen tubes (Nicotiana tabacum L.), have long-range correlations at critical temperatures. Further experimental evidence has been found to support the claim that the autonomous growth organization of extreme plant cell expansion is accomplished by self-organized criticality (SOC), an orchestrated instability that occurs in an optimally evolving cell. The temperature-induced synchronous action of the ionic fluxes, manifested, inter alia, by minimal dynamic entropy, enabled the molecularly encoded information about the germination and optimal growth temperatures of tobacco pollen tubes to be determined.

In seed plants, the pollen tube is a cellular extension that serves as a conduit through which the male gametes pass until the egg is fertilized. It consists of a single elongated cell with the distinctive feature that its growth rate changes periodically [e.g., 1]. Pollen tubes have an extremely rapid growth rate that can also be reproduced ex vivo. They are highly polarized tip-growing cells that depend on cytosolic pH gradients for signaling and growth [2]. The plasma membrane H+-ATPase has been theoretically proposed to supply the energy for pollen tube growth and to underlie, through the chemical potential of H+ ions, the synchronous growth oscillations [3]. These predicted pH/growth-rate cross-correlations have recently been confirmed empirically [2], with pH having crucial roles in regulating pollen tube growth.

In our previous work [4], among others, the bioelectric behavior of tobacco pollen tubes (Nicotiana tabacum L.) was examined. It was shown that the scale-free processes that result from critical phenomena can be an essential property of a living cell. In particular, the canonical value of the spectral exponent (β_c = 1), determined by the slope of the power spectral density (PSD) function, was obtained for the so-called flicker (pink) noise at the optimal growth temperature. However, the spectral exponent (β) was the only measure (quantity) that was specified for the entire range of (physiological) temperatures concerned, and therefore it should be supplemented with other statistical metrics.

Here, I evaluated quantitatively (numerically) the advanced statistical measures, namely the Hurst exponent, the largest (maximal) Lyapunov exponent (LLE), and the Kolmogorov-Sinai dynamic entropy of an experimental time series for the detected external ion fluxes from elongating pollen tubes. I also reconstructed the corresponding phase space according to Takens' theorem.

The Hurst exponent [5] is used to measure the long-term memory of a time series. It refers to the autocorrelation of a time series and the rate at which it decreases with increasing delay between pairs of values [6]. The Lyapunov exponent is by definition the rate of the exponential separation with time of initially close trajectories. It describes the speed of the convergence or divergence of the trajectories in each dimension of the attractor and estimates the amount of chaos in a system [7,8].
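For concreteness, a minimal rescaled-range (R/S) estimator of the Hurst exponent is sketched below. It follows the textbook R/S procedure [5,6] (the same idea as R's hurstexp, used later for Table 1) rather than the author's exact implementation; the window-size grid is an arbitrary choice.

```python
import numpy as np

def hurst_rs(x, min_chunk=16):
    """Minimal rescaled-range (R/S) estimate of the Hurst exponent of x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = np.unique(np.logspace(np.log10(min_chunk),
                                  np.log10(n // 2), 10).astype(int))
    log_s, log_rs = [], []
    for s in sizes:
        rs = []
        for start in range(0, n - s + 1, s):
            seg = x[start:start + s]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviations
            r = dev.max() - dev.min()           # range of the deviations
            sd = seg.std()
            if sd > 0:
                rs.append(r / sd)               # rescaled range
        if rs:
            log_s.append(np.log(s))
            log_rs.append(np.log(np.mean(rs)))
    # The Hurst exponent is the slope of log(R/S) versus log(window size).
    return np.polyfit(log_s, log_rs, 1)[0]

rng = np.random.default_rng(3)
print(hurst_rs(rng.normal(size=5000)))             # white noise: H ~ 0.5
print(hurst_rs(np.cumsum(rng.normal(size=5000))))  # random walk: H ~ 1
```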
Dynamic entropy [9] quantifies the size of the fluctuation regularity in a time series. A low entropy value indicates that a time series is deterministic, while a high value indicates its randomness [10,11]. All of these measures, which enable the level of irregularity/chaos to be estimated, were compared with our recent results for the spectral exponent of the linearly approximated PSD for the same experimental data (time series).

Signatures of coherent dynamics in the extracellular ionic fluxes of pollen tubes

Entropy is a crucial state variable in thermodynamics, statistical mechanics, quantum mechanics, and information processing [9] that carries global information about whether a system is ordered, less ordered, or disordered. The dynamic entropy of a living system (an ensemble of pollen tubes), which is considered in this article, is not a function of the state of the system but a function of its dynamics. It was found that the dynamic entropy of the extracellular ionic fluxes of tobacco pollen tubes (net current) as a function of temperature can be calculated by analyzing the time series of the electromotive force (total voltage) generated by the unperturbed ion fluxes. In the case of strong correlations, due to critical fluctuations, the dynamic entropy of the tobacco pollen tubes has a minimum at the characteristic temperatures for germination and optimal growth. The results of the calculations are shown in Table 1 (to be on the safe side, I calculated both the approximate (S_a) and the sample (S_s) entropy), which presents the representative values (from 43), and in Figure 1. The latter, however, deserves attention first.

Note that in Figure 1, the gray points (which are not experimental points) result from the approximate entropy calculation [12]. Each one is derived from N = 5000 voltage time series (EMF) measurement points, which are generated as pink noise by the extracellular ion fluxes of elongating pollen tubes. After interpolation (Lorentz fit), the maximum values in the diagram indicate the characteristic (critical) temperatures in the tested system, in accordance with the literature data. However, this likely means that both of these physiologically relevant temperatures (germination and optimal growth) have been fine-tuned and molecularly encoded. Otherwise, these characteristic temperatures would not be repeated in the next generations.

While the Hurst exponent (Table 1) represents the expected quantities, similar to those presented in [4], the negative LLE values presumably reflect the periodic or quasi-periodic (pH?) oscillations that are observed in pollen tubes. As such, they may indeed reproduce a stable deterministic component that underlies the dynamics of this system. However, one must note that the series is non-stationary and violates the assumptions of the LLE calculations; this is acceptable, since a completely conservative and periodic system would be in conflict with the flicker noise considered in this paper. Thus, while the LLEs probably indicate a stable periodic component in the series, they do not provide a complete description of the dynamics, which was formerly determined by the spectral exponent. This apparent inconsistency can be resolved by noting that the deterministic component in the signal (Table 1) comes from a synchronized waveform [e.g., plasmon-polariton oscillations, 13], while the chaotic (random) component is the usual pink channel noise.
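For reference, the approximate entropy used above follows Pincus's standard algorithm; a compact, generic version is sketched below. The parameters m = 2 and r = 0.2·SD are common defaults, not necessarily those used for Table 1.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy (ApEn) of a 1-D series, after Pincus (1991).

    m is the template length and r the tolerance, here r = r_factor * SD(x);
    these are common defaults, not necessarily the Table 1 settings.
    """
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(mm):
        n = len(x) - mm + 1
        emb = np.array([x[i:i + mm] for i in range(n)])  # length-mm templates
        # Chebyshev (max-coordinate) distance between all template pairs.
        dist = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        c = (dist <= r).mean(axis=1)   # fraction of templates within r
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(4)
t = np.linspace(0, 30 * np.pi, 1000)
print(approximate_entropy(np.sin(t)))              # regular series: low ApEn
print(approximate_entropy(rng.normal(size=1000)))  # random series: high ApEn
```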
In this context, the attractor reconstruction according to Takens' theorem (not shown) was even more appealing. There, the states of the system, constructed through the condensation in the lowest energy state (corresponding to the minimum entropy) of the quanta associated with the long-range correlations, change the three-dimensional spherically symmetric phase space outside the transition area into its two-dimensional projection, or an axially elongated (quasi-one-dimensional) ellipsoid at the critical point. Hence, a spontaneous symmetry breaking that is reflected in the phase space occurs, which indicates a dynamic phase transition in the system with a change in the control parameter, i.e., temperature. The latter, however, presumably reflects the onset of the formation (as in the two-fluid model: condensate/non-condensate fraction) of the dynamic collective modes [14,15].

[Table 1 footnotes: ...[4], showing the canonical case (1/f^β, β = 1) of pink (flicker) noise at the optimum temperature of 25.9 ± 0.5 °C. Pink noise is a signal or process whose PSD is inversely proportional to frequency; the PSD of pink noise drops 10 dB per decade. The value of β at 19.8 ± 0.5 °C, though still in the (broad-sense) pink noise range, nears white noise, which is a random signal having equal intensity at different frequencies, giving it a constant PSD. **) Calculated with the R [12] Practical Numerical Math Functions package; the hurstexp(x) function calculates the Hurst exponent of a time series x using R/S analysis [5,6].]

It seems that the obtained results can be framed within the research line showing that macroscopic coherent states are formed in the processes of quantum condensation at the microscopic level [16] in nonequilibrium (dissipative) systems. Coherent states have (fractal) self-similarity properties, which have already been attributed to the scale-free dynamics of the critical spectral exponent (β_c) in [4]. However, as S approaches zero at T_c (Figure 1), there are no (or insignificant) gradients with respect to temperature or chemical potential, but flux fluctuations are still present. Therefore, it seems appropriate to further describe this very specific activity of ions (the ionic avalanches [17], or super-diffusion/superfluid component) at the critical temperature in the formalism of dissipative systems [18].

A recurrent idea in the investigation of complex systems is that optimal (information) processing is to be found near phase transitions [19]. However, to the best of my knowledge, this assumption has had no experimental realizations in which a biologically relevant quantity is optimized at criticality. The presented results exemplify a network of excitable elements (living cells) at a critical point of a non-equilibrium phase transition. Synchronization (avalanches) and the already-mentioned global oscillations may also emerge from the system dynamics. Needless to say, the synchronized ion avalanches (SOC) considered in this report may also be related to the microscopic explanation of the oscillating growth characteristics of the pollen tubes of tobacco [e.g., Fig. 6.1 and Fig. 6.3 in 1]. This issue, however, must undergo further in-depth analysis in future works.

The analysis according to five different indices confirmed the previously observed criticality of the plant cell ensemble living under optimal temperature conditions, and excellent agreement with our previous findings for the spectral exponent was achieved in the calculated minimum for entropy.
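The phase-space reconstruction invoked earlier in this section rests on Takens' delay-embedding theorem; a minimal sketch is given below. The embedding dimension and delay are illustrative choices, not those of the original analysis.

```python
import numpy as np

def takens_embed(x, dim=3, tau=25):
    """Delay-embed a scalar series x into dim-dimensional phase space.

    Rows of the result are reconstructed state vectors
    (x[t], x[t + tau], ..., x[t + (dim - 1) * tau]).
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# A noisy sine reconstructs to a ring-like (quasi-one-dimensional) attractor,
# whereas white noise fills the embedding space nearly isotropically.
rng = np.random.default_rng(5)
t = np.linspace(0, 40 * np.pi, 5000)
states = takens_embed(np.sin(t) + 0.05 * rng.normal(size=t.size))
print(np.linalg.svd(states - states.mean(axis=0), compute_uv=False))
```

The spread of the singular values (principal-axis extents) is one crude way to quantify the spherical-versus-elongated-ellipsoid picture described above.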
The observed long-range correlations, or even coherence, at the optimal growth temperature, expressed by the lowest dynamic entropy value, indicate synchronous (wave) operation without scattering (dispersion) of the ion/electron collective excitations. This presumably paves the way for innovative research in this emerging multidisciplinary field.

Methods

The measurement system consisted of an external polystyrene thermostat and an internal Al-coated polystyrene measurement chamber containing a semiconductor-solute interface [ELoPvC detector, 20]. The sample containing the Nicotiana tabacum L. pollen tubes was placed in a liquid (conductive) germination medium [see 4 for details]. In the physiological temperature range, DC voltage measurements were taken (capturing a mean field of a collective of cells) at 4.1 Hz sampling with a DMM 4040 6-1/2 Digit Precision Multimeter from Tektronix, Inc., and then recorded as a 20-min time series (N = 5000) on external media. The temperature control system consisted of an integrated control circuit and a 1 W heater (ceramic resistor), or ice added at temperatures below ambient temperature. The time-series data, collected at each temperature using this noninvasive solute-semiconductor interface technique, were detrended and analyzed by a program written by the author in R.

Disclosure statement

No potential conflicts of interest were disclosed.
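To make the analysis pipeline of the Methods concrete, the sketch below reproduces its generic shape in Python (rather than the author's R program): detrend a voltage series sampled at 4.1 Hz, estimate the PSD, and read off the spectral exponent β from a log-log fit. The synthetic input record and all fit parameters are placeholders.

```python
import numpy as np
from scipy.signal import welch, detrend

FS = 4.1   # sampling rate (Hz), as in the Methods
N = 5000   # points per 20-min record

# Placeholder record: synthetic 1/f ("pink") noise standing in for the EMF.
rng = np.random.default_rng(6)
spec = np.fft.rfft(rng.normal(size=N))
freqs = np.fft.rfftfreq(N, d=1 / FS)
shaping = np.zeros_like(freqs)
shaping[1:] = freqs[1:] ** -0.5          # amplitude ~ f^-1/2 => PSD ~ 1/f
emf = np.fft.irfft(spec * shaping, n=N)

x = detrend(emf)                         # remove the linear trend
f, psd = welch(x, fs=FS, nperseg=1024)   # PSD estimate

# Fit PSD ~ 1/f^beta on a log-log scale over a mid-frequency band.
band = (f > 0.01) & (f < 1.0)
beta = -np.polyfit(np.log10(f[band]), np.log10(psd[band]), 1)[0]
print(f"spectral exponent beta ~ {beta:.2f}")  # ~1 for pink noise
```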
Controversies in spine research: Organ culture versus in vivo models for studies of the intervertebral disc

Abstract

Intervertebral disc degeneration is a common cause of low back pain, the leading cause of disability worldwide. Appropriate preclinical models for intervertebral disc research are essential to achieving a better understanding of underlying pathophysiology and for the development, evaluation, and translation of more effective treatments. To this end, in vivo animal and ex vivo organ culture models are both widely used by spine researchers; however, the relative strengths and weaknesses of these two approaches are a source of ongoing controversy. In this article, members from the Spine and Preclinical Models Sections of the Orthopedic Research Society, including experts in both basic and translational spine research, present contrasting arguments in support of in vivo animal models versus ex vivo organ culture models for studies of the disc, supported by a comprehensive review of the relevant literature. The objective is to provide a deeper understanding of the respective advantages and limitations of these approaches, and advance the field toward a consensus with respect to appropriate model selection and implementation. We conclude that complementary use of several model types and leveraging the unique advantages of each is likely to result in the highest impact research in most instances.

KEYWORDS: in vivo, intervertebral disc, models, organ culture, spine

INTRODUCTION

The prevalence of musculoskeletal conditions is growing worldwide, and low back pain (LBP) is significant among these as the leading cause of disability. 1 It is estimated that approximately 80% of adults are affected by LBP at some point in their lifetime. 2 LBP impacts individuals in both developed and developing countries alike, affects all age groups from children to the elderly, 2,3 and thus represents a significant burden for patients, health care systems, and the economies of many countries. Approximately 40% of LBP cases are attributable to degeneration of the intervertebral discs (IVDs), making this the most common cause of chronic LBP. 4 Intervertebral disc degeneration (IVDD) is a progressive, cell-mediated cascade involving each of the IVD's three main anatomical regions: the central, proteoglycan-rich nucleus pulposus (NP); the peripheral, fibrocartilaginous annulus fibrosus (AF); and the two cartilage endplates (CEPs) that interface with the adjacent vertebrae. The earliest manifestations of IVDD commonly occur in the NP, where proteoglycan loss compromises the distribution of loads, leading to structural and mechanical derangement of the entire spinal motion segment. While IVDD commonly occurs with increasing age, risk factors for accelerating its progression include genetics, smoking, lifestyle, obesity, trauma, and mechanical stress. [5][6][7] IVDD may lead to LBP through direct compression of adjacent neural elements or by innervation of the IVD structures themselves, which, combined with increased nerve-sensitizing agents, leads to increased pain. [8][9][10][11][12][13][14] The complexity of IVDD pathophysiology poses great challenges for effective long-term treatment of associated LBP. 15
Current clinical treatments are predominantly focused on managing symptoms (e.g., alleviation of pain) rather than addressing underlying causes. These treatments may involve medications such as nonsteroidal anti-inflammatory drugs (NSAIDs), which can address acute symptoms but carry the risk of increased internal bleeding during long-term use. 16 For more severe symptoms, opioid-based medications may be prescribed, 17 but they pose a serious risk of addiction, exacerbating the opioid epidemic and associated morbidities. [18][19][20][21] Where conservative treatments do not appear to modify disease progression, surgical interventions such as spinal fusion or total disc arthroplasty may be employed, but these fail to preserve disc structure or mechanical function long-term and may result in progression of IVDD in adjacent levels. 15 Therefore, there is a significant clinical need for improved treatment options for patients suffering from IVDD and LBP that directly target the underlying causes.

The successful development, evaluation, and translation of new treatments for IVDD require the use of appropriate preclinical models that recapitulate the structural, functional, and biological characteristics of the clinical condition as closely as possible. Therefore, a deeper understanding of the benefits and limitations of various approaches to implementing currently available preclinical models is critical for advancing investigations of IVDD pathophysiology and treatment. Despite wide-ranging attempts to develop both in vivo (large and small animal) and ex vivo (organ culture of viable postmortem tissue) models, controversies remain regarding the selection of appropriate models for IVD research. The objective of this article is to contrast and debate the respective advantages and limitations of in vivo animal models versus ex vivo organ culture models for studies of IVDD and its treatment. To achieve this, we have leveraged the broad expertise of the members of two leading groups focused on basic and translational spine research, the Spine and Preclinical Models Sections of the Orthopedic Research Society (ORS), coupled with a comprehensive review of the current scientific literature. We begin with arguments in support of in vivo animal and ex vivo organ culture models, respectively, for studies of IVDD and its treatment, and conclude with recommendations for incorporating these models into experimental designs to address specific research questions most effectively, with an emphasis on the complementary use of multiple models in order to generate the highest impact results.

Introduction

Preclinical research studies are commonly conducted on the cervical, thoracic, lumbar, and caudal spines of research animals. Animal models have played a critical role in advancing understanding of the temporal evolution of IVDD, including how constitutive, environmental, or biomechanical risk factors may initiate, promote, or otherwise regulate degenerative changes, and how therapeutic strategies may ameliorate, resolve, or prevent IVDD. 22 Currently, in vivo studies of IVDD are conducted in small animals such as mice, rats, and rabbits, as well as larger animals such as dogs, pigs, goats, sheep, cows, and nonhuman primates. 23 However, given the complexity of human IVDD, a perfect animal model does not exist. 24
In this section, we outline key advantages that in vivo models have over ex vivo organ culture models, including pain evaluation, nutrition and blood supply, systemic effects related to the immune system, crosstalk with surrounding tissues, imaging, and the regulatory requirements and prerequisites for the clinical translation of new treatments for IVDD.

Pain evaluation

Pain can be defined as cortical interactions that initiate changes in behavior. 25 Pain behaviors may be influenced by physiological and immunological factors, cognition, and conduct. In human patients, LBP as a result of IVDD results in significant morbidity, preventing patients from completing their daily routine, removing individuals from the workforce, and resulting in stress, anxiety, and depression. 26 This pain is the main driver for patients seeking care, and a paramount factor in the diagnosis of IVDD. Importantly, studies have shown that IVDD does not always directly correlate with pain, and that IVDD may often be present in asymptomatic individuals. 27 While the direct connections between IVDD and pain remain complex, animal models have been and continue to be essential research tools for understanding the physical and metabolic pathways of symptomatic IVDD (discogenic pain), and in the development of new therapeutics aimed at mitigating and preventing the onset of degeneration and pain. Put simply, only in vivo models can recreate the complex processes of pain resulting from disc degeneration and permit assessments of behavioral and functional changes as outcome measures. This is not without its challenges, as each species has unique physical and behavioral manifestations of pain, and species-specific, repeatable, and standardized pain scores must be used. 28 Among large animals, dogs provide an interesting model for discogenic pain, as distinct and appreciable behavioral changes make these animals particularly valuable when assessing analgesics. [29][30][31][32][33] Nonetheless, the optimal way to measure pain in both preclinical models (and patients) is still the subject of extensive debate. Important aspects such as the nociceptive response generators, pain thresholds, and clinical and behavioral manifestations need to be contemplated before selecting an animal model. 28 Validated methods of pain measurement include physical performance (e.g., grimace scales, lameness examinations, gait measurements), 34 behavioral changes (e.g., decreased burrowing and rearing), 35,36 and response to mechanical stimuli (e.g., the hind-paw mechanical hyperalgesia test). A recent study has shown that the Grimace scale (a subjective pain assessment method based on facial expressions) is highly reliable in mouse and rat models, and moderately reliable in rabbits, piglets, and sheep. 37 Large animal models have also led to the identification of molecular biomarkers of discogenic pain. 38 Biomarkers not only represent potentially powerful, noninvasive diagnostic tools for evaluating IVDD progression and response to therapeutic intervention, but also provide mechanistic insights into how local pathophysiological changes lead to the manifestation of clinical symptoms, informing the development of new therapies. This simply cannot be accomplished using ex vivo models, where clinical manifestations of IVDD (e.g., pain) cannot be measured.

Nutrition and blood supply

The IVDs are largely avascular structures.
During human development, blood vessels penetrate deep into the lamellar structure of the AF from around 35 weeks' gestation. 39,40 Vessels then recede, and by the second decade of life remain only at the margins. At no point do blood vessels penetrate the central NP; instead, blood vessels terminate within the subchondral bone adjacent to the CEP. These locations, the AF margins and the vertebral endplates, are the sole sources of nutrition for cells within the IVD itself, with the latter considered the most important. 41 Physiological nutrition via these routes is therefore critical for IVD cell survival, and alterations to the adjacent vasculature that disrupt nutrient supply are considered to play an important role in the onset and progression of IVDD. Importantly, the role of vasculature in IVDD can only be investigated using in vivo animal models with an intact circulatory system and cannot be achieved using ex vivo organ culture models. At a fundamental level, in vivo models have been used to establish mechanisms of nutrient flow into the IVD. For example, historically, in vivo large animal models were used to establish that vertebral endplate vasculature is the primary nutrient diffusion pathway into the IVD. 42,43 More recently, a rabbit model was used to demonstrate how alterations in microvasculature that occur with degeneration affect nutrient supply to the IVD. 44 In vivo models have also been essential for studies investigating how certain drugs impact the vasculature supplying nutrients to the IVD. For example, in vivo models have been used to show how vasoactive agents such as acetylcholine and nicotine, as well as cigarette smoking itself, may alter vasculature and nutrient supply to the IVD, implicating smoking in the etiology of IVDD. [45][46][47][48] In vivo models are also essential for evaluating the efficacy of therapeutic agents administered systemically to treat IVD pathologies, such as intravenous stem cells and antibiotics. 49,50

Long-term evaluation

Irrespective of the factors initiating or driving IVDD, it is most often a long-lasting process, with changes in the cellular environment and the different structures of the IVD occurring over months or years before leading to the gross structural and functional alterations that are associated with the manifestation of clinical symptoms. 51 As such, in vivo models have been important tools for elucidating the long-term natural history of IVDD. 52 Furthermore, in vivo models are crucial for evaluating the long-term efficacy of novel treatments for IVDD. 53 The primary goal of IVDD treatments is to both restore IVD function and structure and alleviate painful symptoms. Acute toxicity and initial structural (e.g., an increase in cellularity and extracellular matrix [ECM] or IVD height) and functional changes can be assessed ex vivo and in vivo; however, potential therapeutic agents may have a short half-life or may diffuse out of the IVD, so their long-term effects must be determined. Furthermore, initial treatment success may be diminished by the unfavorable degenerative environment of IVDD. In vivo models allow an observation period of several weeks (small animal models) to months or even years (large animal models), facilitating confirmation of sustained or permanent therapeutic effects.
Furthermore, the same animal may be assessed over time using noninvasive, gold-standard imaging techniques such as magnetic resonance imaging (MRI), radiographs, or computed tomography, increasing the clinical relevance of findings and reducing the number of experimental animals required. In addition, it is crucial to ensure both the acute (i.e., toxicity) and long-term safety (e.g., tumorigenicity) of novel biological treatments, which is only possible using in vivo models.

Systemic factors

A major advantage of using in vivo animal models for IVDD and LBP research is the ability to assess the contributions of systemic biological processes such as the immune system, or co-morbidities such as diabetes or obesity, to IVDD progression and treatment. Immune cell infiltration of mast cells, macrophages, neutrophils, and T lymphocytes has been identified in the painful human degenerate IVD following rupture of the AF or CEP; [54][55][56][57] however, the mechanisms underlying the roles of these cells in IVDD are underexplored. The healthy IVD is largely avascular and immune-privileged, yet with degeneration, there is evidence that these immune cells can infiltrate the disc from the bone marrow via lesions in the vertebral endplate and/or via aberrant blood vessel ingrowth into the endplate and AF. 58 In vivo animal models of IVDD and LBP are valuable tools with which to investigate the recruitment, invasion, and function of immune cells in pathological environments, which cannot be readily investigated ex vivo. For example, transgenic mice over-expressing the pro-inflammatory cytokine TNFα demonstrate increased infiltration of tryptase-expressing (mast) cells or CD68+ (macrophage) cells in IVD tissue regions associated with higher risks of herniation. 59 The increased presence of immune cells, specifically macrophages, in herniated IVD tissue has been corroborated using a novel in vivo mouse model of IVD herniation-induced radiculopathy. 60 Green Fluorescent Protein (GFP) transgenic bone marrow chimeric mouse models of IVD injury have been used to determine the origin of M1 macrophages and demonstrated that, following IVD injury, M1 macrophages are recruited specifically from outside the IVD. 61 A subsequent study verified these findings by demonstrating increased recruitment of macrophages to the dorsal region of the IVD, together with neo-innervation, in an IVD injury model for up to 12 months. 62 These studies highlight the importance of in vivo models for investigating the role of the immune system in IVDD. Systemic inflammatory diseases such as obesity and diabetes demonstrate a strong association with IVDD and back pain, 63,64 and animal models (rodents in particular) demonstrating these disease phenotypes are useful tools to conduct mechanistic and therapeutic studies in which changes in whole IVD joint structure/function and pain behaviors can be investigated. Obesity and diabetes co-exist and can be readily investigated simultaneously using in vivo animal models. Male and female leptin receptor-deficient mice fed a control (low-fat) or high-fat diet to mimic the effects of obesity and diabetes on disc health have been used to examine the effects of obesity and type-2 diabetes on healthy IVDs. 65,66 Sex-dependent effects have been described, with only females developing diabetes and the most pronounced changes in IVD and bone structure, pointing toward a sex-dependent role for leptin in the spine. 65
In a type-2 diabetic rat model, several changes have been identified in the IVD joint compared to healthy control and obese rats. Specifically, decreases in the glycosaminoglycan (GAG) and water contents of the IVD; increases in mechanical stiffness, advanced glycation end-products (AGEs), and catabolic markers; as well as increased vertebral endplate thickness and decreased porosity were found, suggesting a reduction in nutrition to the IVD. 67 Similarly, AGE-fed mice demonstrated age-accelerated IVDD together with ectopic calcification of the spinal tissues and insulin resistance, highlighting a role for AGEs in promoting diabetes-induced IVDD. 68 To further validate the role of AGEs in diabetes-induced IVDD, diabetic mice were treated with oral anti-inflammatory and anti-AGE drugs. These drugs mitigated the pathological effects observed on disc height, GAG content, and catabolic markers in diabetic mouse models, demonstrating broad clinical applications of anti-AGE drugs for spinal health. 69 Together, these studies highlight the critical role of in vivo animal models in evaluating the effects of systemic co-morbidities on IVDD progression and response to treatment.

Crosstalk with surrounding tissues

Investigating crosstalk with surrounding tissues is essential for a comprehensive understanding of IVDD progression and the development of LBP, and this is best achieved with the biological complexity inherent to in vivo models. For example, tissue crosstalk is important to consider when studying nociception. The dorsal root ganglion (DRG) has been suggested to interact with the NP in IVD herniation to elicit pathological consequences. This involves induction of proinflammatory signaling pathways, 70,71 activation of microglia, 72 and modulation of the AMPK-mTOR axis 73 in the DRG. Ex vivo co-culture models may be able to model some tissue interactions. For example, a gene-editing study using ex vivo co-culture systems suggested that inflammatory signals from degenerative IVDs can sensitize nociceptive neurons, 74 and such sensitization can be manifested under mechanical stress, 75 suggesting that IVDD may play a role in pain sensitization. 76 However, even in ex vivo work, different results can be obtained depending on the study design. For example, differential effects of hypoxic stress on neurite outgrowth in DRGs were reported between the single-cell and tissue levels. 77 Therefore, the study of neural activity in the context of IVDD in vivo may yield contrasting results from those obtained ex vivo. There are numerous examples of the importance of tissue crosstalk in IVDD pathophysiology. In a rabbit cornea implantation model, cartilaginous endplate explants may inhibit neovascularization while AF explants may promote it. 78 This implies that compartmental crosstalk can shape the nutritional pathway of the discs. On the other hand, loss of vertebral bone integrity, for example, due to vertebroplasty 79 or bone loss in ovariectomized mice, 80 may affect IVD health. Schmorl's nodes, an endplate defect, have been associated with IVDD, 81 which is consistent with findings that experimental injury to the endplate can initiate disc degeneration in large animal models. 82 IVD herniation may initiate at the endplate-annulus interface in aged rats 83 and involves systemic TNF-α upregulation. 59 These studies support the view that the endplate and vertebral bone have major influences on IVD tissue homeostasis.
At the cellular level, IVDD is associated with remodeling of the NP, which transitions into a fibrocartilaginous tissue composed of chondrocyte-like and fibroblastic cells. NP ECM remodeling may in part be mediated by cell types originating in adjacent tissues. [84][85][86][87] Thorough interrogation of such dynamic cellular exchange among tissue compartments/systems requires in vivo models. | Physiologically relevant imaging To ensure the physiological relevance of IVD imaging findings, it is important to consider tissue interaction. Imaging of whole IVD motion segments in live animals can better reflect the physiological status of the IVDs. For example, when performed in vivo, radiographic assessments of disc height (an important surrogate of IVDD progression and response to treatment) can be normalized to adjacent vertebral dimensions to account for variation across spine levels and individual animals (one such index is sketched below). Moreover, in vivo imaging accounts for the mechanical constraints of paraspinal tissues such as muscles and ligaments when evaluating IVD geometry. Sedation or anesthesia can be used to ensure proper positioning and muscle relaxation. 28 Animals with altered muscle activity, such as GDF-8 mouse mutants and botulinum toxin-treated monkeys, exhibit reduced IVD height. 88 Lastly, in vivo imaging permits long-term, longitudinal imaging evaluations that cannot be achieved ex vivo. | Regulatory requirements and prerequisites for clinical translation In vivo animal models provide superior preclinical platforms to address regulatory requirements and accelerate clinical translation by answering critical questions regarding both the safety and efficacy of novel IVDD treatments. Regulatory agencies such as the U.S. Food and Drug Administration (FDA) oversee the approval process of any drug or medical device aimed at IVDD treatment, with the exception of human-derived, minimally manipulated tissues. Preclinical studies must demonstrate that the benefits of the treatment outweigh its risks before approval for clinical use. Although animal testing is not required by the FDA, it is the most effective way to demonstrate the biological response in a living system and is therefore rarely excluded from the Investigational New Drug (IND) application process. The FDA has recognized this and has issued draft guidance to ensure that such studies are rigorously conducted. 89 Also recognized is the need to refine, reduce, and replace animal models in device and therapeutic testing where possible. 90 In most cases, preclinical animal study results are used to support an IND application and are followed by human clinical trials prior to FDA approval; however, in some specific instances, animal study results alone may be used for approval. This type of approval is covered by the FDA Animal Rule in situations where human efficacy trials may not be ethical or feasible. 91 Several preclinical in vivo animal models may be utilized in combination to satisfy regulatory requirements. For example, initial discovery of pathological mechanisms and screening of therapeutic targets may be carried out in rodent models that permit genetic manipulation, while subsequently, large animal models provide platforms for long-term evaluation of safety and efficacy where IVD size and geometry are closer to those of humans. While organ culture models may also play a role in this process, ex vivo models are largely supportive of in vivo studies.
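On the imaging point above: disc height indices reduce to simple geometric ratios. A minimal sketch of one common formulation (disc height normalized to the mean of the adjacent vertebral body heights, each averaged over several measurement points); the measurement values and function names are illustrative, not a prescribed protocol:

```python
def disc_height_index(disc_heights_mm, cranial_vb_heights_mm, caudal_vb_heights_mm):
    """Disc height index (DHI): mean disc height normalized to the mean height
    of the two adjacent vertebral bodies. Heights are typically measured at
    several points (e.g., anterior, middle, posterior) on a mid-sagittal image.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return 2.0 * mean(disc_heights_mm) / (mean(cranial_vb_heights_mm) + mean(caudal_vb_heights_mm))

# Example: percent DHI after an intervention, relative to baseline (values invented).
dhi_pre = disc_height_index([1.1, 1.3, 1.2], [6.9, 7.1, 7.0], [7.0, 7.2, 7.1])
dhi_post = disc_height_index([0.8, 1.0, 0.9], [6.9, 7.1, 7.0], [7.0, 7.2, 7.1])
print(f"%DHI = {100.0 * dhi_post / dhi_pre:.1f}%")
```

Expressing disc height as a ratio in this way is what allows comparisons across spine levels and across animals of different sizes, as described above.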
Intertwined with the FDA approval process are the concept of, and strategy surrounding, commercialization and translation to the clinic. Commercialization of a drug or device for the prevention or treatment of IVDD relies heavily on acceptance by medical physicians such as spine surgeons. A therapy could be groundbreaking, with a high impact on affected patients, yet never realize its potential as a gold-standard treatment if it is not considered sufficiently clinically relevant or if the efficacy data are unconvincing. Preclinical in vivo animal models, and large animal models in particular, are vital to the commercialization process of any groundbreaking therapy because they more closely recapitulate the human condition, anatomy, IVD size and geometry, and life span. With respect to novel device development, large animal models mimic the surgical application requirements of such devices, providing practical feedback in the development of instrumentation and delivery systems, which may be as impactful to the overall success of the therapy as the therapy itself. If a surgeon cannot safely or consistently instrument an implant or deliver a therapy, then that therapy is effectively irrelevant. Organ culture models do not provide this realistic, clinically relevant scenario. Additionally, advanced diagnostic imaging, specifically MRI, has grown to be the gold-standard modality for assessing IVDD severity. As such, clinicians rely on MRI as an essential diagnostic tool for IVDD patients. Unlike organ culture models, MRI can be utilized in in vivo animal models to follow IVDD progression as well as to assess treatment efficacy, which is highly impactful with respect to the goal of achieving acceptance of therapies by clinicians and eventual commercialization. Ultimately, for a device or therapy to be useful, it must integrate seamlessly into the clinical environment, and leveraging clinically relevant in vivo animal models throughout the product development and translational process is the best way to achieve this. Unfortunately, no model of IVDD mimics the human condition in all aspects. Despite their important role in the assessment of a new device or therapy, ethical considerations also impact the choice and use of in vivo models. For example, dog and primate models with spontaneously occurring IVDD that translates closely to clinical findings in humans are subject to increased public scrutiny, making these models less accessible and more expensive. On the other hand, preclinical models utilizing livestock animals such as sheep, goats, and pigs are more widely accepted by the general public, although some are more limited for investigating human IVDD due to the retention of notochordal cells (pigs). There is evidence that animals that retain notochordal cell-rich NPs, such as nonchondrodystrophic dogs, exhibit different biomechanical properties from animals that do not retain notochordal cells. 92 While organ culture models carry little ethical stigma, it is currently unusual for a therapy to move from benchtop to affected patient solely via organ culture models. Even if organ culture models were acceptable to regulatory agencies as evidence of safety and efficacy, it would be challenging to translate those results into the clinical situation without additional analysis in living systems. | Introduction Organ culture models are distinguished by the culture of whole multi-tissue organs, under sterile conditions, over periods ranging from short-term (hours or days) to longer-term (weeks or months).
In IVD research, organ culture models have been used for basic and translational studies for several decades. 93 In 1998, one of the first reports on long-term IVD culture described the maintenance of entire rabbit IVDs embedded in alginate to preserve their structure and prevent excessive swelling. 94 In the ensuing years, methods have been advanced by the introduction of organ-specific culture systems and bioreactors, with cultured IVDs originating from several different species including rodents, rabbits, large animals (e.g., sheep, goat, bovine), and humans. 93,[95][96][97][98][99][100][101] Organ culture models for IVD research are popular for several reasons. First, the interaction between the IVD's tissue components is crucial for the functionality of the IVD; thus, culture of the whole organ is important for the study of the IVD in both healthy and diseased states. Second, whole organ culture means that the cells of the IVD, especially those of the NP, are naturally exposed to physiological nutrition, oxygen, pH, and hydrostatic pressure. Moreover, IVD tissues are characterized by a low cell density within an extensive ECM, and isolating the cells from this unique environment may alter their phenotype and behavior. Isolated cell cultures are therefore far removed from the true physiological environment, while three-dimensional cell cultures and the use of specially tailored culture media are somewhat more representative in this respect. Third, the IVD with intact AF and CEP is considered a largely avascular, immune-privileged organ; blood vessels and infiltrating immune cells are minimally present in the healthy IVD, and thus isolated whole-organ studies are appropriate. Fourth, most existing in vivo animal models of IVDD still do not entirely recapitulate the pathophysiology of human IVDD, and their limitations must therefore be taken into account when addressing certain translational research questions. 23,52 Organ culture models can be precisely controlled in terms of the biochemical and biomechanical environment; they are flexible with respect to study design and, depending on the throughput of the specific model, are suitable as a screening platform. Moreover, the biological response, such as the production of cytokines, local inflammation, and structural changes, 102 can be directly attributed to the experimental variables when appropriate control groups are included, owing to fewer covariates compared with in vivo models. They avoid unnecessary use of animals by utilizing surplus tissue from donor animals or human cadavers. Finally, organ culture models have the advantage of a favorable cost-benefit profile. The design, development, manufacturing, and set-up of custom organ culture systems and bioreactor devices may be initially cost-intensive; however, once the method is established, numerous different studies can be performed in a standardized manner, ensuring reproducibility. For example, it has been estimated that the expense of setting up an IVD bioreactor system capable of culturing and loading four large animal IVDs simultaneously is approximately equal to the cost of one typical large animal (e.g., sheep) study in Switzerland, involving 10 animals in total. 103
Moreover, in vivo studies, especially large animal studies, require a significant contribution from highly trained professionals (e.g., veterinary surgeons) and specialized animal facilities (e.g., surgical suites, animal care, and monitoring) that necessitate significantly more specialized infrastructure investment than organ culture models. These factors make in vivo models less accessible to diverse sets of researchers worldwide. Given the vast burden of LBP due to IVDD, rapid and rigorous research can be more easily achieved with organ culture models. Organ culture models allow for higher-throughput analysis of disease-simulating or therapeutic agents, including crosstalk between the disease state and therapy. One major advantage of organ culture models is the ability to examine ECM-related changes (integrity and content) sooner than is possible with in vivo models. This is of paramount importance given that the IVDD phenotype is often defined by ECM degradation. For example, organ culture models exposed to inflammatory cytokines exhibit GAG loss within 1-2 weeks, 109 whereas such effects in animal models require evaluation over weeks to months. 23 Thus, in this section, we present arguments outlining key features that make organ culture models more advantageous than in vivo animal models for IVD research, including the capability to use both human and animal IVDs, controllable physical and biochemical environments (e.g., nutrition, mechanical loading, and immune and inflammatory factors), flexible model types (e.g., diabetes, rapid degeneration), the ability to study IVDD mechanisms and crosstalk between tissue structures, the ability for both short- and long-term evaluation with numerous time points, and improved imaging outcomes compared with in vivo imaging. Furthermore, we address regulatory concerns and question the need for in vivo models as a prerequisite for clinical translation. | Species differences Organ culture models can employ either nonhuman animal or primary human tissues. Several species differences that distinguish human from animal IVDs are highlighted below, such as size limitations when using small rodent models, and the presence of notochordal cells in some animals (i.e., porcine, mouse), whereas notochordal cells are not present in the skeletally mature adult human IVD. 52,110 Since LBP is, clinically, a human condition, the use of human tissue in organ culture may offer more immediately relevant insights than animal models. | Molecular mechanisms of pain evaluation Evaluation of pain as an outcome measure in studying therapeutics for IVDD is critical. While in vivo models may be useful for studying behavioral characteristics, the translatability of pain behaviors assessed in animal models, especially small rodents, to the human condition requires further validation. 111 Furthermore, the induction of IVDD using AF puncture in animal models does not necessarily recapitulate the initiating mechanisms of IVDD in humans. Nevertheless, there are many similarities in the degenerative changes in IVD structure and in the chronicity of inflammatory and pain-associated cytokines. 36,62,102 In humans, LBP in the presence of an intact degenerate IVD is associated with nerve ingrowth and neurotrophic factor release. In other cases, following AF or CEP rupture, exposure of local nerves to disc material and released factors, and the induction of inflammatory responses, become important.
These pathophysiological mechanisms of pain are not fully replicated in all in vivo models of LBP, which, combined with limited validated methodologies to accurately measure pain in such models, limits the relevance of investigation in vivo. Meanwhile, pain-related molecular factors, for example neurotrophic factor expression, can be studied in organ culture models, which reduces the need to provoke pain behavior in animal models, in alignment with the 3Rs. 77 In addition, these cellular and signaling mechanisms in organ culture models can be deterministically attributed to the IVD, and the results are specific to the biology of the IVD. | Biochemical environment Due to its largely avascular nature, the environment of the IVD is characterized by hypoxia, acidic pH, and low nutrient supply. Additionally, the consumption of glucose and oxygen, and the production of lactate by the IVD cells, are interdependent. There is, however, great variation in the reported intra-discal oxygen and nutrient concentrations in vivo. The reason for this variation is the complex regulation of metabolites as a combination of nutrient supply, access, and demand, whereby the latter depends on the individual IVD cell density and activity. In an experimental study, oxygen concentrations were measured in IVDs of patients during discography or spine surgery. 112 The levels ranged from 5 to 150 mmHg (≈0.7%-20% O2) in the center of the NP, whereby no correlation with age or degeneration state was found. While such in situ measurements are challenging, different numerical models have calculated the concentration gradients of oxygen, lactate, and glucose within the IVD. Most studies estimate oxygen concentrations between 0.3 and 1.1 kPa (≈0.3%-1.1% O2) in the center of the IVD, 113,114 while glucose concentrations of around 1-2 mM were predicted for the IVD center, with levels of less than 1 mM in degenerated IVDs or due to endplate calcification. 113,115,116 Finally, high lactate levels are correlated with a low intradiscal pH. There are only a few reports on in vivo pH levels; pH values of ≈6.7 and ≈6.9 were measured in lumbar IVDs from patients with severe and moderate LBP, respectively. 117 Interestingly, these values lie between the values for IVDs with impermeable endplates and IVDs with 50% permeable endplates as predicted from numerical models, 118 stressing the importance of endplate permeability for IVD metabolism. Organ culture models should mimic in vivo human conditions as closely as possible. Studies show that physiological glucose, oxygen, and pH levels can be reproduced in organ culture systems to simulate healthy and degenerate IVD conditions. This implies a balance between sufficient nutrition to maintain cell viability and activity, while avoiding supra-physiological levels of nutrients and oxygen. Interestingly, around 70% of previous organ culture experiments have been carried out under high glucose (4.5 g/L or 25 mM) medium conditions. 119 Computational and experimental models show that high glucose media result in glucose levels between ≈5-15 mM in the center of an organ-cultured bovine caudal IVD, depending on the size of the IVD. 119 In general, these high glucose conditions are referred to as a "physiological" culture environment. Indeed, a significant drop in cell viability by 40%-50% has been observed in both NP and AF of ovine IVDs cultured in low glucose media containing 2 g/L (11 mM) glucose compared with the standard high glucose (4.5 g/L) condition. 120
The reduction in cell viability was evident after 7 days and was stable until 21 days of culture under simulated physiological loading conditions in a bioreactor. Moreover, limited glucose culture can be implemented as a degenerative organ culture model, simulating compromised nutrition in combination with high-frequency loading, which showed additive effects on cell death. 121 Studies with bovine IVDs confirmed the findings from ovine explants, demonstrating a decrease in AF and NP cell viability under low glucose (2 g/L) medium and high-frequency loading conditions. 104,122 Meanwhile, low glucose concentrations are viable for culturing human IVDs owing to their low cell density, further contributing to the clinical advantage of human organ culture. [123][124][125][126] In view of the physiological blood glucose level of approximately 5.5 mM, the level of 25 mM needed to keep IVD cells viable in culture seems highly supra-physiological. In fact, high blood glucose levels in vivo have been shown to be detrimental to IVD homeostasis. Similarly, the predicted physiological intradiscal in vivo glucose levels are 5-10 times lower than the computed and measured ex vivo levels (see above). 113,115,116,119 This discrepancy may result from differences between the ex vivo and in vivo situations, such as the absence of capillaries in the IVD explants and the different mechanical loading and osmotic pressure conditions. Several studies have shown that low oxygen concentrations of 1%-5% are beneficial for the maintenance of the NP cell phenotype. 127,128 Most reported IVD organ culture experiments have been conducted under normal external oxygen conditions, implying 20%-21% oxygen tension at the outer regions of the disc. According to computed and experimental data, this corresponds to an approximate oxygen tension of 1%-5% in the center of a bovine IVD, 119 which is similar to the in vivo oxygen tension. The removal of the CEP significantly alters diffusion into the center of the IVD. Therefore, oxygen levels of 1%-5% are in line with the physiological levels that are known to promote the phenotype and function of IVD cells, and this can be reproduced using organ culture models that retain the CEP. The experimentally determined and predicted pH values of standard cultured bovine IVD organ cultures have been reported to range between ≈6.7 and ≈6.9; hence, they are quite consistent with values measured in patients. 117 An increase in oxygen concentration and pH level was, however, predicted in a numerical model when dynamic axial compression was applied to the disc, 118 emphasizing the importance of mechanical bioreactors for the culture of whole IVD organs. IVD cell nutrition equally depends on the diffusion of nutrients through the CEP and/or the AF. In most ex vivo organ cultures, the vertebral bone is removed, whereas the CEP is maintained. Care should be taken to clean the CEP of blood clots and debris to facilitate the diffusion of molecules into and out of the IVD, 93 since the central endplate region has been recognized as the major area of nutrient exchange. 129 In this context, species- and age-related differences in CEP thickness and the presence of a growth plate in young animals need to be considered, as these parameters can markedly influence the diffusion rate. There are also organ culture systems in which the bony endplate and some vertebral bone are maintained as well.
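As an aside on the medium formulations discussed above, the g/L-to-mM conversions (4.5 g/L ≈ 25 mM; 2 g/L ≈ 11 mM) follow directly from the molar mass of glucose. A minimal sketch in Python (helper names are illustrative):

```python
# Convert culture-medium glucose concentrations between g/L and mM.
# Molar mass of D-glucose (C6H12O6): ~180.16 g/mol.
GLUCOSE_MOLAR_MASS = 180.16  # g/mol

def glucose_g_per_l_to_mm(grams_per_liter: float) -> float:
    """g/L -> mmol/L (mM)."""
    return grams_per_liter / GLUCOSE_MOLAR_MASS * 1000.0

def glucose_mm_to_g_per_l(millimolar: float) -> float:
    """mmol/L (mM) -> g/L."""
    return millimolar * GLUCOSE_MOLAR_MASS / 1000.0

if __name__ == "__main__":
    for g_l in (4.5, 2.0, 1.0):
        print(f"{g_l:.1f} g/L = {glucose_g_per_l_to_mm(g_l):.1f} mM")
    # 4.5 g/L = 25.0 mM ("high glucose"), 2.0 g/L = 11.1 mM ("low glucose"),
    # 1.0 g/L = 5.6 mM (close to the ~5.5 mM physiological blood glucose quoted above)
```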
Such bone-retaining cultures require special preparation that ensures the preservation of both the bony structure and long-term IVD cell viability. 130 Furthermore, it has been suggested that nutrient exchange through the AF plays a more prominent role in organ-cultured IVDs, because the increased lateral surface area surrounding the AF permits more nutrient transport through the periphery than in the in vivo situation. 119 Taken together, by varying the glucose concentration, oxygen tension, pH, and nutrient transport, various metabolic states can be induced in organ-cultured IVDs, which may represent different degrees or types of degeneration. Current numerical models provide a relevant indication of the intra-discal nutrient gradients under defined circumstances. More experimental and clinical data are required to adjust each organ model to a particular clinical situation. Importantly, however, in ex vivo organ culture there is consistency and control over all of these biochemical influences, which are poorly controlled in in vivo models: levels can be measured and maintained in a predictable fashion, removing confounding factors from studies. | Mechanical loading Another major advantage of organ culture models over in vivo animal models is the ability to control mechanical loading at the tissue level, and even to present models with the desired mechanical properties and level of tissue damage to mimic physiological or disease conditions and relevant forces. Most animal models, with the exception of primates, are quadrupeds, which may differ in load transfer throughout the spine compared with bipedal humans. In addition, the sizes and geometries of animal IVDs differ from those of human IVDs, as highlighted previously, which may confound the ability to study IVDD under physiological human conditions. The use of organ culture permits precise control of the mechanical forces presented to the IVD, including physiological and injurious loading similar to that experienced by the human spine. The human IVD is normally exposed to multimodal loading (compression, tension, shear, hydrostatic pressure, and osmotic pressure) ranging up to 4× body weight. [131][132][133][134][135][136][137][138] Organ culture models have provided significant insights into the response of the IVD to loading. Zonal biological responses have been observed that depend on tissue location, magnitude, and frequency of loading. 118,[139][140][141][142][143][144][145] A maintenance stimulus of approximately 0.1-0.5 Hz applied at moderate stress levels (e.g., 0.2-0.5 MPa) promotes steady-state IVD metabolic responses. Compressive loading above this level (e.g., high-frequency loading) or below it (e.g., static loading) typically results in remodeling or degeneration. Occupational exposure to high-frequency vibration can also cause LBP 146 and IVDD. 147,148 Recumbency promotes rehydration, increasing IVD height and volume and normalizing intradiscal hydrostatic pressure, 149 which can be simulated in organ culture with diurnal loading profiles. Exercise can be beneficial for the IVD, with specific moderate-frequency exercise protocols providing the greatest improvement in IVD material properties. 150 These loading factors can be simulated in organ cultures with the use of dynamic mechanical loading profiles, as sketched below.
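To make the idea of such profiles concrete, the sketch below generates a simple diurnal compression waveform of the kind described above: an "active" phase of sinusoidal loading within the maintenance window (0.2-0.5 MPa at 0.2 Hz), followed by a low-load "recovery" phase to permit swelling. The parameter values are illustrative defaults, not a validated bioreactor protocol:

```python
import numpy as np

def diurnal_loading_profile(hours: float = 24.0,
                            samples_per_s: float = 10.0,
                            active_hours: float = 16.0,
                            mean_stress_mpa: float = 0.35,
                            amplitude_mpa: float = 0.15,
                            freq_hz: float = 0.2,
                            rest_stress_mpa: float = 0.05):
    """Return (time_s, stress_mpa) for one day of simulated IVD loading.

    Active phase: sinusoidal compression spanning 0.2-0.5 MPa at 0.2 Hz
    (within the 'maintenance' window quoted above); rest phase: low static
    load to allow rehydration/swelling recovery.
    """
    t = np.arange(0.0, hours * 3600.0, 1.0 / samples_per_s)
    active = t < active_hours * 3600.0
    stress = np.full_like(t, rest_stress_mpa)
    stress[active] = mean_stress_mpa + amplitude_mpa * np.sin(2 * np.pi * freq_hz * t[active])
    return t, stress

t, sigma = diurnal_loading_profile()
print(f"peak {sigma.max():.2f} MPa, active trough {sigma[t < 16 * 3600].min():.2f} MPa")
```

Waveforms of this kind can be fed to a bioreactor's load controller or used in silico; the key point is that every parameter is explicit and reproducible, in contrast to in vivo loading.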
Indeed, dynamic loading is favorable for promoting mechanotransduction in IVD cells and for maintaining physiological nutrition, whereby at least a diurnal cycle, representing daily IVD compression and decompression (recovery, or swelling), can be applied to organ cultures of isolated whole IVDs. 93,118,151 Advanced bioreactor systems will allow researchers to apply controlled multiaxial loading to the IVD under long-term culture conditions. 152 In contrast, the application of controlled, physiological loading in in vivo models is extremely challenging and has only been accomplished successfully in rodents and rabbits. [153][154][155] | Systemic effects Since the healthy IVD is a largely avascular, immune-privileged organ, infiltrating immune cells generally do not penetrate it unless structural defects expose it to the systemic environment of the body; isolated studies within an organ culture setting are therefore physiologically appropriate. Furthermore, the influence of systemic co-morbidities such as diabetes can be investigated at a mechanistic level, for example by identifying the influences of increased glucose or the presence of damage-associated molecular patterns (DAMPs) without confounding factors such as obesity and poor circulation, which occur in vivo and manifest differently in different animal models. Thus, where systemic effects such as inflammation and diabetes are described as advantages of in vivo models on account of immune cell migration/infiltration, this has little relevance for an intact IVD; organ culture models instead have a unique advantage with respect to assessing specific mechanisms, by controlling the presence of specific immune cells to simulate interactions of the local or systemic immune system with rupture or disease. 156,157 Other specific soluble factors, such as catabolic enzymes, cytokines, and DAMPs, can also be simulated with organ culture models, [158][159][160] in addition to environmental factors (e.g., glucose), which allows for a more precise mechanistic evaluation than in vivo. | Tissue-specific responses and cross-talk A key argument for in vivo, as opposed to organ culture, studies is that tissue cross-talk cannot be investigated ex vivo. On the contrary, organ culture models allow for the disambiguation of different tissue types within the IVD and the surrounding bone structures and muscles, providing the capability to study tissue-specific responses. Tissues can also be co-cultured in the presence of multiple associated tissues, enabling carefully controlled tissue crosstalk investigations to be undertaken. The dissection of tissue-specific roles and interactions cannot be achieved easily in in vivo models. Using co-culture systems, specific cross-talk questions can be investigated; for example, IVDs complete with CEPs can be maintained within a loaded bioreactor, improving nutrient flow and the maintenance of IVD and bone cell viability. IVDs could also potentially be co-cultured with muscle, ligament, nerve, and fat to investigate tissue cross-talk in a controlled environment, enabling mechanistic interactions between these tissues to be understood. | Rapid degeneration models In vivo models generally require long-term time points (anywhere from weeks to months) in order to generate IVDD comparable to the human condition.
In comparison, rapid degeneration can be induced in organ culture models, which allows the study of IVDD under accelerated conditions, thus reducing the time needed for such studies. For example, enzymatic induction of degeneration in a large animal (goat or sheep) model requires 3 months, 161 while a similar degeneration process can be induced within 1 week using organ culture with enzymatic degradation followed by physiological loading. 162 | Imaging While imaging such as micro-CT (with the use of contrast agents) and MRI can be conducted in vivo or ex vivo, the resolution and fidelity of the acquired data are typically superior in the ex vivo scenario, where the surrounding tissues are removed and thus do not obscure the IVD. Additional advantages of ex vivo imaging include the ability to conduct longer imaging sessions (thereby improving the signal-to-noise ratio), the absence of motion artifacts from breathing, and the absence of animal handling and anesthesia. The improvement in resolution and imaging quality enables more sophisticated biochemical and detailed structural analyses of the IVD and supports more mechanistic studies. Likewise, parallel, clinically relevant imaging parameters, such as IVD hydration, IVD height index, and bone parameters, can also be obtained from the higher-resolution ex vivo imaging. 97,[163][164][165] Another advantage of imaging organ culture models is the application of molecular imaging to track changes in the biological activity of cells in organ culture over time (e.g., cell metabolism). One example is fluorescence molecular tomography (FMT), which is capable of retrieving the 3D bio-distribution of fluorescent molecular markers noninvasively, thus offering higher molecular sensitivity than micro-CT or MRI. 166 One key feature of FMT is the use of near-infrared (NIR) fluorescence probes, which have been shown to be the most effective for deep-tissue imaging. In the NIR spectral range, attenuation by living tissue is minimal, allowing the use of sufficient laser power for fluorescence excitation and detection without causing tissue damage under prolonged illumination. Moreover, molecular imaging findings can be coupled with analyses of changes in the culture media, to inform coupled in situ and surrounding microenvironmental changes. | Regulatory requirements and clinical translation A major argument put forward by in vivo model proponents involves the regulatory requirements for in vivo animal evaluations prior to human clinical trials. However, numerous studies have shown critical differences between animals and humans, and not solely in the spine field. With the further development of highly functional and systemically controlled organ culture systems, the use of animals could be reduced, and regulatory pathways limited to more ethical and clinically relevant ex vivo human organ culture testing. | APPROPRIATE MODEL SELECTION It is clear that the selection of a model system for any project must be driven by the research question. Just as an inappropriate sample size can invalidate the results of a project, so too can the use of an inappropriate model system. Therefore, an understanding of the strengths and weaknesses of the various ex vivo organ culture and in vivo models available is a critical step in study design.
The relative strengths of in vivo animal and ex vivo organ culture models, as outlined in the preceding sections and summarized in Figure 1, are not necessarily universal; again, they are driven by the question that is being asked and the outcome measures that best answer that question. A particularly obvious example is that it would not be possible to test a new spinal implant intended for human use in a rat or rabbit, but it would be achievable in a sheep, pig, or calf model. Both organ culture and in vivo models have their limitations, but both also play a vital role in the overall successful understanding of the disease process and the development of potentially life-changing therapies for human patients. The types of in vivo animal models used for spine research have ranged from non-mammalian vertebrates (such as zebrafish) to small mammals (such as rodents) to large mammals (such as dogs and livestock). With increasing model complexity, translation to human disease becomes more likely; however, the increased complexity may complicate mechanistic understanding across multiple tissues. Furthermore, the cost of larger models is higher, both in dollars and potentially in negative public perception. In general, single-cell organisms, invertebrates, and non-mammalian vertebrates have the most utility in investigating the cellular or molecular basis of disease. Non-mammalian vertebrates and some rodents are amenable to genetic manipulation, allowing for the creation of genetic models that display a particular phenotype (which may include susceptibility or resistance to disease). This type of manipulation is not currently possible in most large mammalian models, but these species are highly useful for the study of naturally occurring and induced models of disease. It should be noted that modern gene-editing technology is making genetic manipulation of larger animals more feasible. 167 No animal model can perfectly recapitulate human disease, and the ability to use cadaveric human tissue in organ culture must be considered a potential advantage of that approach. Organ culture models may offer the benefit of systematic control of the biomechanics and metabolics of the experimental system that more closely mimic the human condition. There are many advantages to naturally occurring models of disease. Because they are closest to the "real mechanism," they can give the best insight into disease biology and the best evaluation of diagnostics and therapeutics. Furthermore, if companion animals can be used (for example, when studying IVDD in dogs), then researchers may be able to recruit client-owned cases, which may reduce costs and unnecessary animal usage. Such investigations are also of dual benefit, with advances made for the treatment of the species being studied as well as potential translational benefits to humans. However, there are also possible disadvantages to studying naturally occurring diseases, including the fact that variables beyond the researchers' control may affect results (such as genetic diversity within highly outbred species), and appropriate cases may be difficult to find. In contrast, experimentally induced models have the advantage of enhanced reproducibility of the intervention/injury, in as many animals as needed and when they are needed. The downside is that induced disease may not exactly recapitulate natural disease, and therefore response to therapeutics might not translate perfectly.
Furthermore, there are significant costs and ethical concerns to navigate. When considering induced models of disease for spine research, surgical models are most common; however, other methods of inducing disease may be considered, including genetic manipulation, dietary manipulation, and chemically induced disease. 23 There is no single "gold standard" model for IVD research precisely because different research questions lend themselves to different approaches. Thus, the path to selecting the right model starts with the research question. This will lead to the outcomes of interest, and the selection of the methods that will be used to measure them. These, in turn, will drive the selection of a specific model. In some cases, the elimination of clearly unsuitable models may be the easiest first step. From there, the strengths and weaknesses of potentially suitable options can be weighed. It may turn out that two models could answer your question equally well, and in this case, other factors such as cost and convenience will certainly play a role. It is possible (even likely) that a broad research question cannot be answered effectively by a single model, and that multiple models must be used in sequence or simultaneously to address different aspects of the question. Indeed, complementary use of several IVD model types, leveraging the unique advantages of each, is likely to result in the highest-impact research in most instances. For example, taking the development of a novel biologic for IVDD treatment as a general case study, a study may commence by establishing and characterizing an IVDD phenotype in a naturally occurring or transgenic rodent model and identifying a putative therapeutic target. Subsequently, potential therapeutic agents could be screened in organ culture models under controlled experimental conditions, utilizing cadaveric human discs to confirm relevance to the human condition. Short-term safety and efficacy studies could then be undertaken in rodent or rabbit models, followed by longer-term studies in large animal models using gold-standard, clinically relevant outcome measures. As access to such a wide array of model systems may be beyond the capabilities of a single laboratory, financially and/or logistically, such studies could be undertaken through collaborations across laboratories and institutions. In conclusion, in this article we debate the relative advantages of in vivo animal and ex vivo organ culture models for studies of the IVD. In doing so we also identify their respective limitations, and the continued need to strive for improved experimental platforms in order to achieve the best possible treatment outcomes for LBP patients. Many reviews of different IVD model systems are available in the published literature, 23,93,[168][169][170][171] and these can serve as valuable resources for researchers seeking the best model system for their research question. Consideration should also be given to the development and use of standardized outcome measures for various models, 172,173 which makes comparing results across studies easier and more valuable.

FIGURE 1: Summary of the respective advantages of in vivo animal versus ex vivo organ culture models for studies of the intervertebral disc. Figure created using BioRender.com by Shirley N. Tang, with license to publish.

AUTHOR CONTRIBUTIONS Lachlan J. Smith conceived the original idea for the manuscript.
2022-12-06T17:07:12.492Z
2022-11-28T00:00:00.000
{ "year": 2022, "sha1": "4ab5dd8967780bbc1ad0eeb403cf9989fa0cb2ac", "oa_license": "CCBYNCND", "oa_url": "https://shura.shu.ac.uk/31084/1/jsp2.1235.pdf", "oa_status": "GREEN", "pdf_src": "Wiley", "pdf_hash": "93bef1df66de7be1253587a95e530d88a414b821", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
17404735
pes2o/s2orc
v3-fos-license
Prophylactic Leukotriene Inhibitor Therapy for the Reduction of Capsular Contracture in Primary Silicone Breast Augmentation: Experience with over 1100 Cases Background: The role of leukotriene inhibitors used immediately postoperatively to potentially influence the development of capsular contracture is unknown. The purpose of this study was to evaluate the incidence of capsular contracture among women undergoing primary smooth silicone gel breast augmentation, with or without postoperative leukotriene inhibitor therapy. Methods: Between 2007 and 2013, 1122 consecutive women undergoing primary silicone gel breast augmentation were evaluated retrospectively. All underwent augmentation with smooth Mentor Memory Gel implants, using a dual-plane technique with periareolar or inframammary approaches. Patients were treated voluntarily with either no leukotriene inhibitor, montelukast (Singulair), or zafirlukast (Accolate) for 3 months. All patients received informed consent for the off-label use of leukotriene inhibitors. Liver function studies were obtained for all patients undergoing Accolate therapy after 1 month of therapy. The presence of capsular contracture was measured by the Baker scale at 1 year postoperatively. Results: Patients receiving Accolate therapy (n = 520) demonstrated an encapsulation rate of 2.19 percent. Women receiving Singulair therapy (n = 247) had an encapsulation rate of 3.27 percent. Patients not receiving leukotriene inhibitor therapy had an encapsulation rate of 5.02 percent. There were no long-term complications among the patients evaluated. Conclusions: Accolate therapy used for 3 months postoperatively was associated with significantly lower capsular contracture rates compared with untreated patients at 1-year follow-up (p < 0.05). Patients treated with Singulair demonstrated lower contracture rates compared with controls, but the differences were not statistically significant. The findings suggest that Accolate therapy, with monitoring and consent, reduces the incidence of capsular contracture following primary smooth silicone gel breast augmentation. CLINICAL QUESTION/LEVEL OF EVIDENCE: Therapeutic, III. Textured devices have shown lower capsular contracture rates compared with smooth round devices. Nevertheless, despite these advances, a significant number of women develop capsular contracture following breast augmentation and require revision surgery or live with discomfort, deformity, or suboptimal results. The use of leukotriene inhibitors for the treatment of capsular contracture was reported as early as 2002, 7,8 and multiple studies have shown benefits in softening breasts and reducing the severity of capsular contracture with either montelukast (Singulair; Merck, Kenilworth, N.J.) or zafirlukast (Accolate; AstraZeneca Pharmaceuticals, Wilmington, Del.). [9][10][11][12][13] However, the effect of using these medications immediately postoperatively, before any evidence of capsular contracture may be present, is unknown. Currently, there is no clear standard of care for the use of these off-label medications, and little information is available about which medication may be more or less beneficial. A high-volume aesthetic breast practice with a single surgeon performing a standardized procedure provided an excellent opportunity to evaluate the effects of leukotriene inhibitor therapy.
The author implemented, in a timely fashion, the advances advocated by research in our specialty with regard to reduction of biofilm exposure, using triple-antibiotic/povidone-iodine irrigation and the use of skin barriers/nipple shields. Despite these techniques, which lowered the encapsulation rates among our patients to well below reported national averages, 14 we desired to explore the potential benefits of leukotriene therapy used prophylactically in the early postoperative period. PATIENTS AND METHODS The study was performed as a retrospective review of 1122 consecutive women undergoing primary silicone gel breast augmentation. Over time, the author added leukotriene inhibitor therapy to the postoperative treatment of patients. The first group of patients was treated without leukotriene therapy (2007 to 2009). Consecutive patients agreeing to the off-label use of leukotriene inhibitors were then studied. The second group of consecutive patients was offered Singulair therapy (2009 to 2010) postoperatively, whereas all other aspects of the surgical technique and care remained the same. The author then offered Accolate therapy (2010 to 2012) to the final group of patients undergoing breast augmentation. All patients were between the ages of 22 and 60 years and gave informed consent for the use of silicone gel breast implants. All patients were provided with informed consent for the use of Mentor Memory Gel silicone implants (Mentor Worldwide, Irvine, Calif.) and the off-label use of triple-antibiotic irrigation containing povidone-iodine (Betadine; Purdue Frederick Co., Norwalk, Conn.). In addition, a detailed informed consent was provided for the off-label use of either Singulair or Accolate. All patients were adequately informed of the risks, potential unknown benefits, cost, and potential side effects of leukotriene therapy, with both verbal and written consent. Patients were informed that taking leukotriene inhibitor medications was voluntary and that they could discontinue these medications at any time for any reason. In particular, patients were counseled on the potentially significant risks of leukotriene inhibitors, including the uncommon risk of chemical hepatitis, liver failure, and even death, all of which were reported in Accolate U.S. Food and Drug Administration postapproval studies for the treatment of asthma. Patients chose augmentation with either a periareolar or an inframammary approach. Silicone breast augmentation procedures were performed by the author, using a standardized, dual-plane technique. All patients were treated with preoperative intravenous antibiotics, either a 1-g dose of cefazolin or 600 mg of clindamycin, selected on the basis of allergy profiles. Before insertion of implants, triple-antibiotic irrigation [50,000 U of bacitracin, 1 g of Ancef (GlaxoSmithKline, Middlesex, United Kingdom), and 80 mg of gentamicin] with the addition of 50 ml of povidone-iodine in 500 ml of normal saline was used. All implants were placed with powder-free glove changes and inserted through a skin barrier/nipple shield using Tegaderm (3M, St. Paul, Minn.) dressings. A standardized breast implant massage program was initiated on postoperative day 2. Patients treated with leukotriene inhibitors began medication the day after surgery. Patients with a history of hepatitis or liver disease were excluded. Patients were treated with the standard dosing for each medication, as recommended by the manufacturer for on-label use: Singulair 10 mg/day or Accolate 20 mg twice daily.
Patient compliance with the recommended dosage was encouraged. Patients were asked to report perceived side effects of the medications, and medications were discontinued at any time the patient felt that the side effects were significant. All patients receiving Accolate therapy underwent liver function studies 4 weeks after the initiation of Accolate therapy. Elevation of transaminases resulted in discontinuation of the medication. Follow-up liver function studies were performed at 2-week intervals until transaminases normalized. Patients who demonstrated Baker grade III or IV capsular contractures at 3-month follow-up were offered 3 additional months of either Singulair or Accolate therapy. Liver function studies were repeated after 3 months of Accolate therapy for patients choosing to continue Accolate for a 6-month course. All patients were evaluated at frequent follow-up appointments by both the author and a plastic surgery nurse specialist, including early postoperative visits and visits at 1 month, 3 months, 6 months, and 1 year postoperatively. Capsular contracture was evaluated by the Baker scale. Patients with grade III or IV capsular contractures were considered clinically encapsulated. Statistical comparison of the groups was performed with the Barnard exact test. 15 Table 1 shows the groups of consecutive patients who completed the study protocol with 1-year follow-up. A high degree of compliance with the study was achieved, with 84.8 percent of Accolate-treated patients, 82.2 percent of Singulair-treated patients, and 79.4 percent of patients treated with no leukotriene inhibitor completing the 1-year follow-up. Of the patients evaluated, 72 percent underwent a periareolar approach and 28 percent chose an inframammary incision. There were no differences in the percentage of patients choosing a particular incision type between the groups. Of the patients offered leukotriene therapy following breast augmentation, 22 declined to participate and were not included in the study. Among patients treated with Singulair, 2.5 percent reported discontinuing the medication because of side effects or cost; the corresponding percentage for Accolate was 6.2 percent, for minor side effects or cost. Three of 520 patients treated with Accolate therapy (0.58 percent) demonstrated mild elevation of transaminases and discontinued the medication. Transaminases returned to normal within 2 weeks after discontinuing the medication for two patients and within 4 weeks for one patient. A summary of the most common side effects reported by patients for each leukotriene inhibitor is listed in Table 2. RESULTS The rate of capsular contracture for patients undergoing primary silicone gel breast augmentation with or without leukotriene inhibitor therapy is listed in Table 3. Women undergoing breast augmentation followed by 3 months of Accolate therapy demonstrated capsular contracture rates significantly lower than those of women not treated with leukotriene inhibitors postoperatively (p < 0.05). Patients treated with Singulair therapy showed capsular contracture rates lower than those of patients not treated with leukotriene inhibitors, but the differences were not statistically significant. Among patients treated with Singulair for 3 months, two patients developed grade III/IV capsular contractures that improved (to grade II) with an additional 3 months of therapy.
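The Barnard exact test used here, and the Jeffreys confidence intervals reported with Tables 3 and 4 below, are both readily computed with SciPy. A minimal sketch of this style of comparison; the event counts are back-calculated approximations from the reported rates and group sizes, not the study's raw data:

```python
from scipy.stats import barnard_exact, beta

# Approximate counts reconstructed from reported rates (illustrative only):
# Accolate: ~2.19% of 520 -> ~11 contractures; control: ~5.02% of ~355 -> ~18.
accolate_events, accolate_n = 11, 520
control_events, control_n = 18, 355

# 2x2 table: rows = contracture yes/no, columns = Accolate/control.
table = [[accolate_events, control_events],
         [accolate_n - accolate_events, control_n - control_events]]
print(f"Barnard exact test: p = {barnard_exact(table).pvalue:.4f}")

def jeffreys_interval(x, n, conf=0.95):
    """Jeffreys interval for a binomial proportion: quantiles of Beta(x+1/2, n-x+1/2)."""
    a = (1.0 - conf) / 2.0
    lo = beta.ppf(a, x + 0.5, n - x + 0.5) if x > 0 else 0.0
    hi = beta.ppf(1.0 - a, x + 0.5, n - x + 0.5) if x < n else 1.0
    return lo, hi

for name, x, n in [("Accolate", accolate_events, accolate_n),
                   ("control", control_events, control_n)]:
    lo, hi = jeffreys_interval(x, n)
    print(f"{name}: {100 * x / n:.2f}% (95% Jeffreys CI {100 * lo:.2f}%-{100 * hi:.2f}%)")
```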
Among patients treated with Accolate for 3 months, five patients demonstrating grade III or IV capsular contractures at 3 months postoperatively improved (to grade II) with an additional 3 months of therapy. Table 4 demonstrates the distribution of capsular contractures based on incision location for each of the groups studied. All groups of patients demonstrated a greater percentage of capsular contractures with the periareolar incision than with the inframammary incision. For patients treated with Accolate, the capsular contracture rates were similar for the two incision locations. However, there were no statistically significant differences between the groups. Confidence intervals in Tables 3 and 4 were calculated using Jeffreys confidence interval analysis. 16 DISCUSSION The results of this study demonstrate that Accolate used for 3 months postoperatively after primary silicone gel breast augmentation with smooth-surface gel implants is associated with significantly lower capsular contracture rates at 1-year follow-up compared with patients not treated with a leukotriene inhibitor postoperatively. This is the first report to demonstrate that prophylactic leukotriene inhibitor therapy in patients undergoing primary silicone gel breast augmentation is correlated with a lower capsular contracture rate at 1-year follow-up. Recently, Graf et al. 17 reported a reduction in capsular contracture following the prophylactic use of Singulair in patients undergoing textured silicone breast augmentation procedures, including primary augmentation, augmentation mastopexy, and augmentation revision. The study involved a small group of patients, 84 in total, of whom 37 were treated with Singulair. Two surgeons performed the procedures, with one surgeon's patients receiving Singulair and the other surgeon's patients not receiving a leukotriene receptor antagonist. They reported a reduced severity of capsular contracture in patients treated with Singulair. However, the study of Graf et al. reports a small group of patients undergoing multiple types of operations, including revisions. Several different access incisions were used, with variations in surgical pocket location, and two different surgeons, only one of whom treated patients with Singulair. In the present study, with a much larger sample size and a greater degree of uniformity, we did note a lower capsular contracture rate with Singulair compared with untreated patients, but the difference was not statistically significant. The difference in findings may be related to the fact that different implant surfaces were used in the two studies, that all the patients in the present study underwent primary augmentation procedures, and that the current study is a single-surgeon series compared with the two-surgeon series of Graf et al. In the current study, Singulair was found to be helpful in capsular contracture reduction, but the differences were less impressive than with Accolate and not statistically significant. Because both medications are known to be leukotriene inhibitors, why would one drug be more useful than the other? The clinical pharmacology and recent research studies suggest possible explanations for these findings. Three cysteinyl leukotrienes, leukotriene C4, leukotriene D4, and leukotriene E4, are products of arachidonic acid metabolism and are released from cells associated with the inflammatory response. These compounds bind to cysteinyl leukotriene receptors, which are found on smooth muscle cells and inflammatory cells.
When leukotrienes bind to the cysteinyl leukotriene receptor, multiple effects, including cellular contraction, edema, and altered cellular activity associated with inflammation, may occur. Montelukast (Singulair) inhibits the actions of one leukotriene, leukotriene D4, at the cysteinyl leukotriene receptor. 18 Zafirlukast (Accolate) is a competitive receptor antagonist for leukotrienes and is known to antagonize the contractile activity of three different leukotrienes: leukotriene C4, leukotriene D4, and leukotriene E4. These leukotrienes are associated with the inflammatory process, smooth muscle contraction, and cellular contraction. 19 Zafirlukast (Accolate) thus competitively inhibits three different leukotrienes, rather than the one leukotriene inhibited by montelukast (Singulair). Although there is no evidence that Accolate is more potent than Singulair in asthma treatment, it may offer a more robust effect in inhibiting the encapsulation process. Furthermore, the leukotrienes that Singulair does not inhibit, leukotriene C4 and leukotriene E4, may be important in the pathogenesis of capsular contracture. Further studies will be necessary to characterize the differences between these medications with respect to capsular contracture. Studies have supported a biomolecular basis for capsular contracture involving leukotriene receptors. 20,21 Investigators have shown significantly increased levels of leukotriene receptor activity in patients undergoing capsulectomy for severe capsular contracture compared with controls without encapsulation. These findings support a possible role for antileukotriene drugs, which may interfere with the activation of leukotriene receptors. Numerous investigators have demonstrated the benefit of leukotriene inhibitors in the treatment of small numbers of patients with capsular contracture. 8,9,11,12 Huang and Handel reported benefit with the use of Singulair in a group of 19 patients presenting with capsular contracture. 10 The authors noted that more than half of the patients improved with Singulair therapy, with a reduction or resolution of capsular contracture. Several patients treated prophylactically with Singulair did not have recurrence. The effect of Accolate on early capsular contracture after primary saline breast augmentation was evaluated in 37 patients demonstrating early capsular contracture. 11 Patients were treated for up to 6 months with Accolate therapy, and liver function studies were followed. The results demonstrated that 75 percent of patients had improvement or resolution of early capsular contracture, with a mean follow-up of 6.3 months. These studies and others have demonstrated the benefits of leukotriene inhibitors in the treatment of early capsular contracture. Little is known about the effects of leukotriene inhibitors on the formation and pathogenesis of capsular contracture when used prophylactically. As the inflammatory reaction to trauma, bacteria, blood products, and/or inflammatory mediators progresses, it is believed that myofibroblasts and macrophages form in the immature implant capsule. 22 These cells are known to possess receptors for many inflammatory mediators, including a rich population of cysteinyl leukotriene receptors. It is at this early stage that leukotriene inhibitors may block the receptor and interfere with the contractile and inflammatory processes associated with clinical capsular contracture.
This hypothesis supports the findings of this study and also suggests why clinicians have reported more success with leukotriene inhibitor therapy when used early in capsular contracture cases. Studies of capsular contracture and leukotriene inhibitors in the plastic surgery literature have not, to date, documented mortality or serious morbidity from the use of leukotriene inhibitors, including Accolate therapy. Many plastic surgeons currently use leukotriene inhibitors to some extent in their practices for the treatment of capsular contracture, but the prevalence, side effects, and benefit of this treatment have not been reported in depth by plastic surgeons. Outside of our specialty, there are more data regarding Accolate and the risks associated with its use. There is variability in the reports of safety using Accolate as an on-label medication. Although postmarketing data reported by Gryskiewicz 23 demonstrated morbidity and mortality in a small number of asthma patients treated with Accolate, a landmark English study of approximately 8000 patients demonstrated that Accolate was a well-tolerated drug for the treatment of asthma, with few associated adverse effects. 24 All medications prescribed by physicians for the management of medical conditions carry with them a risk of adverse side effects. The statin drugs, although usually well tolerated and extensively prescribed, may cause elevation of transaminases, rhabdomyolysis, an increased risk of new-onset diabetes in postmenopausal women, and, rarely, death from liver failure. However, the potential benefits of this class of medication are felt by patients and physicians to outweigh the risks, and therefore statins are used extensively. It is difficult to compare our patients at risk of capsular contracture with patients at risk for cardiovascular disease. For our patients, the concern is quality of life. Our capsular contracture patients are not suffering from a life-threatening disease. However, this condition subjects patients to pain, deformity, asymmetry, and significant costs associated with treatment. Many patients live with severe capsular contracture for many years. This does affect quality of life. It is certainly reasonable to discuss with patients the potential benefits and risks of leukotriene inhibitor therapy and the options for off-label use of this class of medications. It is important to review in detail the risk of serious liver injury, as rare as it may be. Furthermore, the author believes that a detailed written informed consent discussing the off-label, voluntary use of leukotriene inhibitor medications, including their associated risks, should be a prerequisite for offering these medications to patients. The hesitation of some plastic surgeons to treat patients with leukotriene inhibitors may relate to a lack of information regarding the efficacy of treatment and concerns regarding patient safety. Other than a few studies published in the past 10 years with small groups of patients, little is known about the options, outcomes, and risks of leukotriene inhibitor therapy when used for the treatment of capsular contracture. In 2003, an investigation by our Society was published that reviewed U.S. Food and Drug Administration data from postmarketing reports. 23 In this report, in the majority of patients, elevated liver enzyme
The pion form factor from lattice QCD with two dynamical flavours We compute the electromagnetic form factor of the pion using non-perturbatively O(a) improved Wilson fermions. The calculations are done for a wide range of pion masses and lattice spacings. We check for finite size effects by repeating some of the measurements on smaller lattices. The large number of lattice parameters we use allows us to extrapolate to the physical point. For the square of the charge radius we find=0.441(20) fm^2, in good agreement with experiment. Introduction For some time now it has been possible to explore the structure of hadrons from first principles using lattice QCD. Since the pion is the lightest QCD bound state and plays a central role in chiral symmetry breaking and in low-energy dynamics, a thorough investigation of its internal structure in terms of quark and gluon degrees of freedom should be particularly interesting. We have started to explore the structure of the pion in a framework using generalised parton distributions, or more precisely their moments [1]. As a generalisation of parton distributions and form factors they contain both as limiting cases. In this work we restrict ourselves to results for the pion electromagnetic form factor F π from N f = 2 lattice QCD simulations, based on O(a) improved Wilson fermions and Wilson glue. Initial studies on the pion form factor by Martinelli et al. and Draper et al. [2,3] were followed by recent simulations in quenched [4,5,6,7] and unquenched QCD [8,9]. In this work, we improve upon previous calculations by extracting the pion form factor for a much larger number of β, κ combinations, which allows us to study both the chiral and the continuum limit. Furthermore, two finite size runs make estimates of the volume effect possible. The pion form factor in lattice QCD The pion electromagnetic form factor F π describes how the vector current couples to the pion. Writing p and p ′ for the incoming and outgoing momenta of the pion, it is defined by where the momentum transfer is q µ = (p ′ µ − p µ ) and its invariant square is q 2 = −Q 2 . For our lattice calculation we want to simplify the flavour structure of Eq. (1). Invoking isospin symmetry one finds It is hence sufficient to limit the calculation to a single quark flavour in the vector operator. We use the unimproved local vector current on the lattice; the corrections due to the improvement term [10] are quite small and will be discussed later. Since this current is not conserved, renormalisation has to be taken into account. Because in the forward limit (Q 2 = 0) the form factor is simply the electric charge of the pion, we can normalise our data appropriately. We can also use the known renormalisation constant Z V (taken for example from [11]) as a cross-check for our simulation. To compute the matrix elements in Eq. (2) on the lattice, one has to evaluate pion threepoint and two-point functions. We then apply a standard procedure to extract the pion form factor F π , where one constructs an appropriate ratio for the observable [12,13]. Let us start by looking at the three-point function. The general form is given by the correlation function C 3pt (t, p ′ , p ) = η π (t sink , p ′ ) u(t)γ µ u(t) η † π (t source , p ) (4) and depicted in Fig. 1. Here we denote the sink and source operators for a pion with given momentum and at given time-slice by η π (t sink , p ′ ) and η † π (t source , p ), respectively. 
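In standard notation, the defining matrix element and the three-point correlator described above take the following form (a reconstruction consistent with the conventions just stated, not a verbatim copy of Eqs. (1) and (4)):

```latex
% Eq. (1): electromagnetic form factor of the pi^+ (reconstructed)
\langle \pi^+(p') \,|\, J_\mu \,|\, \pi^+(p) \rangle
  = (p_\mu + p'_\mu)\, F_\pi(Q^2),
\qquad q_\mu = p'_\mu - p_\mu, \quad Q^2 = -q^2 .

% Eq. (4): three-point function with a local u-quark vector current (reconstructed)
C_{\mathrm{3pt}}(t, \vec{p}\,', \vec{p}\,)
  = \bigl\langle\, \eta_\pi(t_{\mathrm{sink}}, \vec{p}\,')\;
    \bar{u}(t)\gamma_\mu u(t)\;
    \eta_\pi^\dagger(t_{\mathrm{source}}, \vec{p}\,) \,\bigr\rangle .
```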
Using the transfer matrix formalism and inserting complete sets of energy eigenstates, the three-point function is then of the form where T is the time extent of our lattice and Figure 1: A sketch of the three-point function with the pion source at time 0, pion sink at t sink , and the operator acting at time t. Note that we have omitted excited states in Eq. (5) and already inserted our choice for the time-slice of the pion source, t source = 0. We choose the sink of the three-point function as t sink = T /2, so that the correlation function is symmetric or antisymmetric with respect to this time, We can then separate the correlation function into contributions from t to the left and to the right of t sink (referred to as l.h.s. and r.h.s. in the following) and neglect either the second or first term in Eq. (5) since it is exponentially suppressed in the regions of t from which we will extract the form factor. The two-point function has the form where again we omitted higher energy states. Comparing the two-and three-point functions (8) and (5), a ratio can be constructed that eliminates the overlap factors such as 0| η π ( p ′ ) |π( p ′ ) and partially cancels the exponential time behaviour appearing in Eq. (5). This technique also has the advantage that fluctuations of the correlation functions tend to cancel in the ratio and we thus obtain a better signal. With our choice t sink = T /2, such a ratio is Similar ratios have already been used in earlier works on pion and nucleon structure. Here we take the somewhat more complicated ratio (9), which was used for the nucleon in [12], because we use momentum combinations with | p | = | p ′ |. Contributions to this ratio from excited states with energy E ′ are suppressed as long as t where E is the pion energy. A potential problem is that, due to the exponential decay of the pion two-point function, the signal at t = t sink for non-vanishing momenta is poor. For finite statistics the two-point function can then take negative values, which prevents one from evaluating the square root. We try to overcome this difficulty by shifting the two-point functions C 2pt (t, p ) that enter with t = t sink . Using the identity valid for t sink = T /2, we shift by t shift = 6, which significantly reduces the number of negative two-point functions. Nevertheless there are still momentum transfers Q 2 for which the argument of the square root in the ratio (9) is negative. Those values are discarded when we evaluate the form factor. For Q 2 = 0 the ratio (9) does not exhibit a proper plateau that could immediately be used for fitting. This is due to our choice for t sink , for which the time dependence of the pion two-point function cannot be approximated by a single exponential in the t regions we use to extract the form factor, see Eq. (8). In fact, we now show that the ratio is approximately antisymmetric around the central point t = t sink /2 = T /4 of the l.h.s. (as well as around t = 3T /4 on the r.h.s.). Defining δ ≡ t − t sink /2 and expanding the ratio and its exponentials in Eq. (9) around δ = 0 we find where When averaging R(t) in a symmetric interval around t = T /4, the antisymmetric piece proportional to c δ in (11) drops out. However, such an averaged signal also includes unwanted symmetric contributions. Fortunately, for our pion masses and lattice momenta already the leading symmetric term is negligible, because with the lattice spacing a we have c 2 δ ∼ 10 −4 a −2 and δ 2 ≤ 4a 2 in our fits. 
We hence obtain a good signal for the averaged ratio. The same is true for the r.h.s. ratio and its central point t = 3T /4. A typical ratio at non-zero momentum transfer is shown in Fig. 2 for one of our data sets, along with the familiar plateau for zero momentum transfer. Note that the ratio (9) does not exhibit a plateau for arbitrary momenta. To visualise the absence of possible contributions from excited states one has to consider the ratio for | p | = | p ′ |. In this case the time dependence of the three-point function (5) should vanish. We have checked that this is indeed the case in the region we average over, within the expected increase of noise for higher momenta or lower pion masses. From Eqs. (11) and (12) we see that the lattice ratio (9) can be used to extract the form factor F π (Q 2 ). Using then several combinations of momenta p and p ′ that all give the same Q 2 provides an over-constrained set of equations, from which we determine F π (Q 2 ) by χ 2 minimisation. We increase the quality of our signal by averaging the ratio over the contributions on the l.h.s. and r.h.s. This requires the additional sign factor (−1) n4 between the two sides, as can be seen in Eq. (7). The energies E p and E p ′ appearing in (10) and (12) are calculated using the lattice pion masses and the continuum dispersion relation. We also performed a test of the dispersion relation for some of our lattices. It was increasingly difficult to extract a signal for higher momenta, especially for the lowest pion masses. However, we found that the continuum dispersion relation can be used to describe the data and that a lattice dispersion relation is not favoured. Simulation details We perform our simulations with two flavours of non-perturbatively clover-improved dynamical Wilson fermions and Wilson glue. Using these actions, the QCDSF and UKQCD collaborations have generated gauge field configurations with the parameters given in Table 1, where we have used the Sommer parameter with r 0 = 0.467 fm (see [14] and [15]) to set the physical scale. This large set of lattices enables us to extrapolate to the chiral and the continuum limit. For two sets of parameters (β = 5.29, κ = 0.1355, 0.1359) we also have a choice of lattice volumes (12 3 × 32, 16 3 × 32 and 24 3 × 48) in order to study finite volume effects. Starting with the lattice version of the three-point function, Eq. (4), we follow [16] and find that it is sufficient to calculate with x 4 = 0, y 4 = T /2, z 4 = t. Here G(y, z) is the fermion propagator, the average is taken over the gauge fields, and the trace is over the suppressed Dirac and colour indices. The matrix Γ represents the Dirac structure of the pion interpolating field η π , while the Fourier transformations ensure that we have fixed momenta at the operator insertion and the sink. We use two different pion interpolating fields to create the pions on the lattice, namely a pseudo-scalar and the fourth component of the axial-vector current, which both have the correct quantum numbers. For a given momentum p they read with x 4 = t. We apply Jacobi smearing [17] at the source as well as the sink to increase the overlap of the lattice interpolating fields with the physical pion states. The three-point function (13) is then evaluated by applying the sequential source technique as indicated in Fig. 1. This makes it efficient to use a large number of momentum transfers, as required for calculating form factors. 
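Putting these pieces together, a minimal numerical sketch of the ratio method follows. Since the exact display of Eq. (9) is not reproduced here, the square-root combination below follows the widely used form for |p| ≠ |p′| (cf. Ref. [12]); the function names and any toy data are hypothetical.

```python
import numpy as np

def ratio(c3, c2_p, c2_pp, t_sink):
    """R(t) on the left-hand side, 0 <= t <= t_sink, built from a three-point
    function c3[t] = C_3pt(t, p', p) and two-point functions
    c2_p[t] = C_2pt(t, p) and c2_pp[t] = C_2pt(t, p').
    The square-root combination is the standard choice for |p| != |p'|
    (cf. Ref. [12]); the exact display of Eq. (9) is assumed, not copied."""
    t = np.arange(t_sink + 1)
    arg = (c2_p[t_sink - t] * c2_pp[t] * c2_pp[t_sink]) / (
        c2_pp[t_sink - t] * c2_p[t] * c2_p[t_sink]
    )
    # Mask points where noise drives the square-root argument negative,
    # mirroring the discarding procedure described in the text.
    arg = np.where(arg > 0, arg, np.nan)
    return (c3[t] / c2_pp[t_sink]) * np.sqrt(arg)

def averaged_ratio(R, t_sink, dmax=2):
    """Average R(t) symmetrically around t = t_sink/2 = T/4, so that the
    antisymmetric c_delta term of Eq. (11) drops out of the average."""
    tc = t_sink // 2
    return np.nanmean(R[tc - dmax : tc + dmax + 1])
```

The NaN masking plays the role of discarding those momentum transfers for which the two-point functions fluctuate negative, as discussed above.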
A large set of momenta is necessary to assess the Q 2 dependence, and having several combinations of p ′ and q belonging to the same Q 2 makes the fits more reliable. We use three final momenta p ′ and 17 momentum transfers q, giving a total of 51 combinations for an over-constrained fit for F π at 17 different values of Q 2 . In units of 2π/L the momenta are given by where · · · stands for all possible permutations w.r.t. the components. The errors we quote for our results are statistical errors obtained by the jackknife method. Experimental data for the pion form factor Let us now take a brief look at the experimental measurements of F π (Q 2 ) to which we compare our lattice results. Very accurate data up to Q 2 = 0.253 GeV 2 have been obtained in [18] from elastic scattering of a pion beam on the shell electrons of the target material. At higher Q 2 the pion form factor has been extracted from ep → enπ + , which is considerably more involved (see [19] for a recent discussion). We only use here data from [20,21,22], where the cross sections for longitudinal and transverse photons have been experimentally separated by the Rosenbluth method. 1 Together these data span a range from Q 2 = 0.35 GeV 2 to 2.45 GeV 2 . We find the experimental data on F π well described by a monopole form with a fit of the combined data from [18,20,21,22] giving M = 0.714(4) GeV at 27. This is remarkably close to the result M = 0.719(5) GeV at χ 2 /d.o.f. = 1.13 obtained when fitting only the data of [18] with its much smaller range in Q 2 , which illustrates the stability of a monopole form up to 2.45 GeV 2 . [18,20,21,22] 0.458 (5) The low-Q 2 behaviour of F π is characterised by the squared charge radius For a monopole form (16) one has In Table 2 we list the values obtained from a number of fits to F π . The PDG average [24] uses results from form factor data at both spacelike and timelike virtualities. The three fits to the Amendolia data [18] illustrate that different fitting procedures can give results with a variation much bigger than the quoted statistical and systematic errors. Fit 1 (whose result is the one retained in the PDG average) is based on a representation of F π as a dispersion integral. Fit 2 was also given in [18] and assumed a monopole form (16) with a normalisation factor allowed to deviate from 1 by ±0.9%, which corresponds to the overall normalisation uncertainty of the measurement. Fit 3 assumes a monopole form with normalisation fixed to 1, as does the fit to the combined data of [18,20,21,22]. Fits to lattice data and extrapolation in m π We start the discussion of our results by explaining our fitting procedure, including combined fits to all data sets. In the next subsection we will argue that lattice artifacts are small. To obtain the physical form factor we have to renormalise our lattice result, F ren π = Z V F bare π . As mentioned in Section 2, we can do this by using the electric charge of the pion as input, i.e. so that F lat,ren π (0) = F phys π (0) = 1. We then use a monopole ansatz to fit the actual data for the form factor 2 F lat π (Q 2 ) = where we have M lat as a fit parameter for each of our lattices at its lattice pion mass m π,lat . The quality of this fitting ansatz will be discussed below. Using this fitting function, we compare the results obtained with the two pion interpolating fields (14) and observe several differences. In general, the matrix elements for pions using Γ = γ 4 γ 5 display a slightly cleaner signal with more data points in Q 2 , i.e. 
less contamination due to negative two-point functions. Fitting the monopole form (20) to the form factor for both pion interpolators we find that the χ 2 /d.o.f. differs on average by about a factor of 2, ranging from 0.18 -1.72 (0.23 -3.49) for the interpolator with γ 4 γ 5 (γ 5 ). The fitted monopole masses for the Γ = γ 5 pions lie consistently above the ones for Γ = γ 4 γ 5 but agree within errors for most lattices. In an exploratory extraction of the pion energies from the two-point functions with non-vanishing momentum on a sub-set of our lattices, we also found that the pseudo-scalars with Γ = γ 5 had a worse signal at higher momenta. A similar observation was made in [8] and may explain the difference in quality of the form factors extracted from the two pion currents. Because of the better signal, we will mainly discuss results for the pions created with Γ = γ 4 γ 5 in the remainder of this work. To obtain the pion form factor at the physical pion mass we extrapolate the values for M lat , given in Table 3, to the physical point. We tried different extrapolations in the square of the pion mass, see Table 4, including also a fit inspired by chiral perturbation theory and used in [9]. For the latter we chose the fit range of m 2 π,lat < 0.8 GeV 2 . Varying this fit range within reasonable bounds did not have a significant effect on the extrapolated value of M phys . We find the best χ 2 value for fit 2, where M 2 lat depends linearly on m 2 π . The extrapolations in the remainder of this paper are based on this ansatz. We will however include an estimated systematic error of ∆M ext = 35 MeV from the difference of fits 1 and 2 in our final result (this is bigger than the difference between fits 1 and 4, whereas fit 3 gives a significantly worse description of the data). Figure 3 shows the extrapolation to the physical pion mass based on fits 2 and 4. We remark that our lattice with the lowest pion mass, m π = 400 MeV, is completely consistent and increases our confidence in the fit and fit ansatz. However, due to the larger statistical errors it has little weight in this result: when leaving it out of the fit M phys changes only by 1 MeV. The corresponding run and several others at small pion masses are still in progress. It is obvious that one needs higher statistics for this point to be significant. We include the m π dependence of the monopole mass of fit 2 in a combined fit to all our lattice data available. This fit has the same monopole form as in (20) with one additional parameter to incorporate the m π behaviour, The two fit parameters, c 0 and c 1 , describe the relation between the monopole mass and the pion mass, and we immediately obtain the form factor F phys π (Q 2 ) = F π (Q 2 , m 2 π,phys ) in the physical limit. The fitted parameters are c 0 = 0.517 (23) Figure 4 shows experimental data along with the combined fit with its extrapolated curve. For this plot, our data at the lattice pion masses is shifted to the physical pion mass and plotted on-top of the extrapolation. We do this by subtracting from the individual lattice points, F lat π (Q 2 ), a value F π (Q 2 , m 2 π,lat ) − F π (Q 2 , m 2 π,phys ) calculated with the fit parameters of Eq. (21) at the respective pion masses. The errors are left unchanged. We find good agreement between our simulation and the experimental results. This is emphasised by (20) for each of our lattices. 
The last column gives an estimate for the shift ∆M lat = M (m 2 π , ∞)− M (m 2 π , L) of the monopole mass due to finite volume effects. It is obtained from the empirical fit (26) Table 4: Different forms used to extrapolate the monopole mass to the physical value of m π . In fit 4 we have L = 1/(4πf ) 2 log(m 2 π,lat /µ 2 ), where µ = 1 GeV and f π ≈ 92 MeV is the pion decay constant. We now investigate the validity of the monopole ansatz for our data. Instead of constraining the fitting function to a monopole form, one can also take a general power law, i.e. use a function F π (Q 2 , m 2 π ) = 1 + # extrapolation ansatz with an additional parameter, p. Note that the relation (18) Figure 4: Combined fit to (21) of our data for all lattices. We plot experimental data (diamonds) [18,21,23] and lattice results extrapolated to the physical pion mass as explained in the text. To avoid having a cluttered plot we do not show lattice results with errors bigger than 80%, which are nevertheless included in the fit. The insert shows the good agreement to the experimental data for a momentum transfer of up to 1 GeV 2 . Also included is an error band for the fit. We show such effective masses for some of our lattices in Fig. 5, where one can see that the effective monopole masses stay constant within errors over a large range of Q 2 and agree with the monopole masses given in Table 3. This again indicates that the monopole is a good description for our data. The validity of the fit over the whole Q 2 range is further tested by combined fits to Eq. (21) in a limited fitting range Q 2 ≤ Q 2 max or Q 2 min ≤ Q 2 . This is shown in Fig. 6, where we successively limit the fit to smaller (larger) momenta. Note that the increasing errors to the left or the right are due to the decrease in the number of fitted data points. Within these errors, the change in the monopole mass is consistent with statistical fluctuations. From Figs. 5 and 6 we can conclude that the monopole ansatz works well in the entire region for which we have lattice data, from Q 2 = 0 to about 4 GeV 2 . The results discussed so far have used the lattice data normalised as in (19). Using we can determine Z V from our (unrenormalised) data at zero momentum transfer. We find reasonable agreement with the values of Z V given in [11], albeit with errors that are larger by at least an order of magnitude. The bigger errors are likely due to our choice of t sink , which results in noisier two-point functions. decreasing Q 2 max Figure 6: Combined fits to (21) with reduced fitting ranges in Q 2 . For the left plot Q 2 max is decreased, while Q 2 min is increased for the right plot. We use bins of 50 MeV 2 and show only points where the number of data points in the fit of F π changed. Table 5: Overview of our finite size runs. Note that we use the pion mass and lattice spacing of the largest lattice also for the smaller ones. They are given in Table 1 and not repeated here. Finite volume and discretisation effects Let us now turn to the discussion of lattice artifacts. Apart from the extrapolation to the physical pion mass there are two more limits to be taken: the infinite volume limit and the continuum limit. The large number of lattices available allows us to investigate both. In order to study the volume dependence of our results, we make use of two sets of configurations that have the same parameters β, κ for the lattice action but different volumes (see Table 5). In Fig. 7a we show the monopole masses fitted according to Eq. 
(20) as a function of the lattice size L. We use the pion mass m π and lattice spacing a determined for the lattice with the largest volume also for the smaller ones. Figure 7b gives an overview of our lattices in the m π -L plane. To obtain some understanding of the volume dependence one may have recourse to chiral perturbation theory. The volume dependence of the pion charge radius has been investigated to one-loop order in various approaches of chiral perturbation theory [25,26,27]. In the continuum limit, the result of the lattice regularised calculation in [27] amounts to a finite size correction of where the sum runs over all three-vectors n = 0 with integer components and f π ≈ 92 MeV is the pion decay constant. Note that the finite size correction of the charge radius is not proportional to m 2 π , unlike for other quantities such as the pion decay constant or the nucleon axial coupling. The leading contribution in Eq. (25) for large values of m π L is proportional to K 0 (m π L) ∼ π/(2m π L) e −mπL . Unfortunately we cannot expect chiral perturbation theory to be applicable at the pion masses and lattice volumes used in our simulations. This includes the result (25), which we take however as a guide for the functional form of the volume dependence. We thus change the monopole mass in (21) We then perform a combined fit to the data of all lattices in Table 1 Fig. 7a (lattices number 10, 10a, 11, and 11a). We have not included the 12 3 × 32 lattices in the fit (26) since we cannot expect our simple ansatz to hold down to lattice sizes of 1 fm. Qualitatively, our fit is not too bad even in this region, as shown by the dotted lines in Fig. 7a. With the fitted parameters we can estimate the finite volume shift for each of our lattices as given in Table 3. Except for a few lattices we find very small effects. We do not expect that with the simple form (26) fitted to our finite volume data at m π = 591 MeV and m π = 769 MeV (the dotted lines in Fig. 7b) we can estimate volume effects for pion masses as low as 400 MeV. We therefore have excluded lattice number 12 from our finite volume investigation. Before discussing the scaling behaviour, let us briefly discuss the possibility of O(a) improving the local vector current. The improved current has the form The improvement coefficient c V is only known from lattice perturbation theory [10] because the only non-perturbative calculations to date are for quenched fermions (see e.g. [28]). However, even with tadpole improvement the perturbative value for our coarsest lattice is c V ≈ −0.027. This is so small that we expect no sizable effect on our results. To see this, we plot in Fig. 8 the ratio of the pion matrix elements for the two operators on the r.h.s. of Eq. (27). The dependence on the index µ cancels in this ratio. Note that here we use unrenormalised lattice data and that we still have to multiply with c V in order to obtain the effect of the improvement term in the current. This example plot is for our coarsest lattice (β = 5.20 and κ = 0.1342), where the improvement term should have the largest impact. To gain a feeling for the possible size of the effect, we used a fixed value of c V = −0.3 to compute the effect on a sub-set of our lattices (lattices number 2, 6,11,15). Although this improvement coefficient is more than ten times larger than the tadpole improved value for our coarsest lattice, the shift of the monopole mass was moderate with 6 to 10%. 
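For reference, the O(a)-improved local vector current referred to as Eq. (27) has the standard form (a reconstruction, with T_{μν} the tensor current and ∂̃_ν the symmetrized lattice derivative):

```latex
V_\mu^{\mathrm{imp}}(x) = \bar{q}(x)\gamma_\mu q(x)
  + a\, c_V\, \tilde{\partial}_\nu T_{\mu\nu}(x),
\qquad T_{\mu\nu}(x) = i\, \bar{q}(x)\sigma_{\mu\nu} q(x).
```

The ratio plotted in Fig. 8 is then that of the tensor-derivative matrix element to the local-current one, which still has to be multiplied by c_V to give the size of the improvement term.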
Given the size of our statistical errors on F_π and the fact that a reliable value for c_V is not known for our lattices, we decided to neglect operator improvement and use the local vector current.

We now investigate the scaling behaviour by extrapolating our values for the monopole mass to the physical pion mass separately for each β (see the upper plots in Fig. 9). We again assume a linear relation between the squared monopole and pion masses. The extrapolated values can then be studied as a function of the lattice spacing a, using r_0/a extrapolated to the chiral but not to the continuum limit [29]. This is shown in the lower plot of Fig. 9. While the three rightmost data points in that plot strongly suggest that no discretisation errors are present within statistical errors, additional simulation points are required to see whether the leftmost data point represents a downwards trend or is just an outlier. From the discussion above and the overview in Table 3 we recall that some of the points at low pion mass may be affected by finite volume corrections. We have repeated the fits shown in Fig. 9 with squared monopole masses shifted upwards by c_2 e^{−m_π L}, where c_2 (in units of r_0^{−2}) was taken from the global fit described after Eq. (26). Note that the pion mass of 400 MeV is excluded from this global fit for the reasons given above. The result shows an increase of M_phys mainly for β = 5.20 and 5.40 but is again consistent with no dependence on a. Given the lever arm in a² and the size of our statistical and finite size errors, we refrain from including an explicit a dependence of the monopole mass in our global fit (21).

Conclusion

We have calculated the electromagnetic form factor of the pion, using lattice configurations generated by the QCDSF/UKQCD collaboration with two flavours of dynamical, O(a) improved Wilson fermions. The corresponding pion masses range from 400 to 1180 MeV. The momentum dependence of the pion form factor was studied up to Q² around 4 GeV². Within errors, the pion form factor is described very well by a monopole form (20) in this range, for all our lattice pion masses. A linear chiral extrapolation to the physical pion mass leads to a monopole mass of M = 0.727(16) GeV. This corresponds to a squared charge radius ⟨r²⟩ = 0.441(19) fm², in good agreement with experiment. Our extrapolated lattice data for the form factor are compared with experimental measurements in Fig. 4. Other lattice results are quoted in Table 6.

The large parameter space of the gauge configurations we used makes it possible to explore artifacts arising from the finite lattice spacing and volume. An empirical fit allowing for a volume dependence leads to an increase of the monopole mass by 3% at infinite volume and the physical point. Within errors, our results show no clear dependence on the lattice spacing in the range a = 0.07-0.11 fm of our simulations. Including estimates for systematic errors, our final result is

M = 0.727 ± 0.016 (stat) ± 0.046 (syst) + 0.024 (vol) GeV,

which translates to a charge radius of

⟨r²⟩ = 0.441 ± 0.019 (stat) ± 0.056 (syst) − 0.029 (vol) fm².

The first error is purely statistical, followed by a systematic uncertainty due to the ansatz for the fitting function and the extrapolation to physical pion masses (for which we added in quadrature the errors ∆M_ext and ∆M_fit obtained in Section 5.1). The last error reflects a possible shift because of finite volume effects as just discussed.
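As a quick consistency check, the monopole relation ⟨r²⟩ = 6/M² of Eq. (18) indeed reproduces the quoted charge radius from the fitted monopole mass, using ħc ≈ 0.1973 GeV fm:

```latex
\langle r^2\rangle \;=\; \frac{6}{M^2}
\;=\; 6\left(\frac{0.1973\ \mathrm{GeV\,fm}}{0.727\ \mathrm{GeV}}\right)^{\!2}
\;\approx\; 0.442\ \mathrm{fm}^2 ,
```

consistent with the quoted ⟨r²⟩ = 0.441(19) fm² within rounding.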
We have set the scale using the Sommer parameter with r_0 = 0.467 fm. We note that the analysis leading to our result for M is independent of the scale setting, so that a different value of r_0 would lead to a simple rescaling of the above values.
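To make the combined fit of Eq. (21) concrete, the sketch below fits the two parameters c₀ and c₁ of M²(m_π²) = c₀ + c₁ m_π² to placeholder data; the arrays are illustrative stand-ins, not the paper's lattice points.

```python
import numpy as np
from scipy.optimize import curve_fit

def f_pi(X, c0, c1):
    """Combined monopole ansatz of Eq. (21): F(Q^2, m_pi^2) = 1 / (1 + Q^2 / M^2)
    with M^2(m_pi^2) = c0 + c1 * m_pi^2 (all quantities in GeV^2)."""
    Q2, mpi2 = X
    return 1.0 / (1.0 + Q2 / (c0 + c1 * mpi2))

# Placeholder data standing in for lattice measurements of F at two pion masses.
Q2   = np.array([0.3, 0.6, 1.0, 0.3, 0.6, 1.0])     # GeV^2
mpi2 = np.array([0.16, 0.16, 0.16, 0.36, 0.36, 0.36])
F    = np.array([0.70, 0.55, 0.43, 0.76, 0.62, 0.50])
dF   = np.array([0.03, 0.03, 0.04, 0.02, 0.03, 0.03])

popt, pcov = curve_fit(f_pi, (Q2, mpi2), F, sigma=dF, p0=(0.5, 0.7),
                       absolute_sigma=True)
c0, c1 = popt
mpi_phys2 = 0.1396**2                       # physical pion mass squared, GeV^2
M_phys = np.sqrt(c0 + c1 * mpi_phys2)       # extrapolated monopole mass, GeV
print(f"c0 = {c0:.3f} GeV^2, c1 = {c1:.3f}, M_phys = {M_phys:.3f} GeV")
```

With real data, M_phys evaluated this way is the quantity that Eq. (18) converts into the charge radius.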
Excitations and impurity dynamics in a fermionic Mott insulator with nearest-neighbor interactions We study analytically and with the numerical time-evolving block decimation method the dynamics of an impurity in a bath of spinless fermions with nearest-neighbor interactions in a one-dimensional lattice. The bath is in a Mott insulator state with alternating sites occupied and the impurity interacts with the bath by repulsive on-site interactions. We find that when the magnitudes of the on-site and nearest-neighbor interactions are close to each other, the system shows excitations of two qualitatively distinct types. For the first type, a domain wall and an anti-domain wall of density propagate in opposite directions, while the impurity stays at the initial position. For the second one, the impurity is bound to the anti-domain wall while the domain wall propagates, an excitation where the impurity and bath are closely coupled. I. INTRODUCTION A single particle, or a macroscopic quantum degree of freedom, coupled to a bath is a paradigmatic problem of many-body physics. It is naturally described, at least in three dimensions, as a composite object of the particle dressed by bath excitations, which is a quasiparticle. Dressing renormalizes some of the particle's properties such as the mass. This concept has been particularly fruitful for the polaron problem, where the bath consists of phonons [1,2], and for the Fermi-liquid theory where the collective action of all the indiscernible particles in an interacting Fermi gas renormalizes the parameters of a single-particle excitation [3]. In certain cases, such as the Caldeira-Leggett problem, the renormalization of the parameters can be strong enough to significantly change the behavior of the particle [4]. Similar effects occur in restricted geometries such as one-dimensional quantum systems where the effects of interactions are considerably reinforced. In particular, it was shown that an impurity can behave quite differently from a quasiparticle and the interaction with the bath can lead to subdiffusion [5]. This new phenomenon has triggered an intense theoretical and experimental activity, in particular with very controlled experimental realizations by cold atomic gases. Optical lattices in experiments with ultracold gases are devoid of phonons, and in order to study similar phenomena as in condensed matter physics, phonons can be incorporated via a bath of a different type of particles [6,7]. Mobile impurities in fermionic and bosonic baths have recently been studied to a large extent theoretically [8][9][10] and experimentally [11][12][13]. In one dimension in particular, interesting time-dependent phenomena have been predicted, such as a crossover from a bound molecule to a polaron * Electronic address: paivi.torma@aalto.fi [14], the damping of Bloch oscillations [8,15,16], the non-relaxation of a supersonic impurity [17], and an intriguing behavior of pair correlations with slow and fast driven barriers [18]. In the aforementioned studies, the bath is assumed to be homogeneous, and the only deformations of the bath are caused by its fluctuations and the interaction with the impurity. This restriction is quite natural for systems with a contact interaction such as cold atoms. In this article, we address the question of how an impurity behaves if the bath instead possesses an internal structure, such as a periodic arrangement of the bath particles. 
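Concretely, the extended-Hubbard-type Hamiltonian studied below (Eq. (1) of Section II) plausibly reads, given the stated ingredients of tunneling J, on-site impurity-bath repulsion U and nearest-neighbor bath repulsion V (a reconstruction from the description, not a verbatim copy):

```latex
H = -J\sum_{j,\sigma}\bigl(c^{\dagger}_{j\sigma}c_{j+1\,\sigma} + \mathrm{h.c.}\bigr)
  \;+\; U\sum_{j} n_{j\uparrow}\, n_{j\downarrow}
  \;+\; V\sum_{j} n_{j\uparrow}\, n_{j+1\,\uparrow} .
```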
An impurity would see an external periodic potential but would also be able to create excitations, a situation not dissimilar to the presence of phonons and the electronphonon coupling in a solid. Such baths can for instance be realized by long-range interactions. We investigate the dynamics of an impurity interacting with a bath of fermions with nearest-neighbor interactions. Using the numerical time-evolving block decimation (TEBD) method [19,20] and analytic arguments, we show that the dynamics of the impurity drastically changes when the impurity can create excitations in the bath, leading to a bound state of the impurity and distortions in the bath. Nearest-neighbor as well as longerrange interactions are currently becoming feasible in experiments with ultracold bosonic [21][22][23][24] and fermionic [25][26][27][28] dipolar molecules, dipolar atoms and Rydberg atoms [29,30] both in traps and lattices. These advances make setups available for testing the predictions presented here. The motion of an impurity in an optical lattice can be recorded by single-site-resolved imaging [13]. Previous theoretical studies have considered spinless fermions with nearest-neighbor interactions in the context of interaction quenches [31], and the excited states created inside the Mott gap of a one-dimensional solid by applying a laser pulse [32][33][34], described by the extended Hubbard model with nearest-neighbor interactions among two spin species. Such excitations are delo-calized and have a well-defined energy unlike the initially localized domain wall excitations studied here. The model and the numerical method are introduced in Section II. In Section III, we show how the number of doublons evolves in time for different regimes of interactions and explain the case V U by an analytic model for the V → ∞ limit. The short-time dynamics are modeled by a three-site Hamiltonian in Section IV. Section V discusses the excitation dynamics in detail. We illustrate different possible excitation processes which explain the features observed in density distributions. In order to find a more precise picture of how the bath evolves in time, we also study the correlation of density differences as a function of the distance from the center of the lattice. These results are presented in Section VI. The possible experimental realization of the model with ultracold dipolar gases is discussed in Section VII. Finally, a brief summary is presented in Section VIII. II. THE MODEL AND THE NUMERICAL METHOD The impurity and the bath are described by the Hamiltonian Here, c j↑ annihilates a bath fermion and c j↓ the impurity, and n jσ = c † jσ c jσ is the number operator. The tunneling energy is denoted by J. Opposite spins interact on the same site with energy U > 0, and the bath fermions among nearest neighbors with V > 0. In the initial state, U = 0 and the impurity is localized at the center of the lattice (site j 0 ), as shown in Fig. 1. At half-filling and V > 2J, the ground state of the bath is a Mott insulator [35]. In order to find a non-degenerate ground state, we fix the number of lattice sites to L = 2p+1, where p ∈ N, and the number of bath fermions to N ↑ = p + 1. For even p, j 0 is occupied by a bath fermion, and for odd p, j 0 is empty. In our TEBD calculations, L varies from 79 to 81, the Schmidt number in the truncation of the state is fixed to 100, and a time step 0.02 1 J is used in the real time evolution. III. 
THE TIME EVOLUTION OF THE NUMBER OF DOUBLONS In the beginning of the time evolution, the on-site interaction is switched to U > 0 and the impurity is released. Due to energy conservation, the dynamics will be different for U close to V -at resonance -and for U and V far detuned. Off resonance, with U J, the total number of doubly occupied sites N ↑↓ (t) = ψ(t)| j n j↑ n j↓ |ψ(t) oscillates with a frequency close to U while the average value stays constant, as seen in Fig. 2 a). The behavior agrees well with the analytic solution for a free particle in a superlattice with a potential difference U between alternating sites, which is equivalent to the impurity problem when U J and V → ∞ (see Appendix A). Figure 2 a) shows that the analytic model describes the impurity problem well for an impurity created at either an occupied or an empty site. The oscillation frequency U is also seen for the parameters of Fig. 2 b) when |V − U | is sufficiently large. In Fig. 2 c), the frequency is very high and the amplitude small and therefore the oscillation is almost invisible. It can be seen at a shorter time scale in Section IV. For an impurity created at an occupied site with U and V far detuned, the energy U cannot be deposited into the bath. We find that the impurity propagates on occupied sites in a second-order process with velocity 4J 2 U , which is the superexchange coupling obtained from a mapping to the Heisenberg Hamiltonian at the U J limit [36]. Similarly, if the impurity starts at an empty site, it will not have enough energy to move to a site occupied by a bath fermion and will propagate on the empty sites. In contrast, an impurity at an occupied site with U ≈ V can deposit the energy U into the bath for example in a process where the bath fermion at j 0 moves by one site. This is seen as a decay of N ↑↓ (t) in Figs. 2 b) and c). It is of interest to ask what kind of excitations are created in this process. To unambiguously investigate the basic types of excitations in the bath, we focus on the case of Fig. 2 c) with very large U and V which suppresses pair tunneling processes. In Fig. 2 c), V is fixed to 100J and U varied around this value. Intriguingly, we find that the curves for which |V −U | ≥ 3.5J saturate to a nonzero constant whereas the ones for which |V − U | ≤ 2.5J decay to a value close to zero. For |V − U | = 3J, there is a decay within the simulation time but it is unclear whether N ↑↓ approaches zero at longer times. These results are discussed in detail in Section V. Essentially the same behavior is obtained for V = 10J in Fig. 2 b), which is closer to experimentally realizable values, as discussed in Section VII. oscillation with frequency close to |V − U |, in addition to the high frequency U explained in Section III. A similar initial behavior is given by a three-site model, illustrated in Fig. 3, where N ↑ = N ↓ = 1. The two particles have an on-site interaction U , and the spin-up particle has a potential V at sites 1 and 3 mimicking fixed spin-up fermions at the adjacent sites. In the basis the Hamiltonian is written as The initial state is a superposition of states |2 , |4 , and |7 . ... ... In the three-site model, the initial behaviour of N ↑↓ (t) during the first oscillation period is close to the many-body result, as shown in Fig. 4. For three sites, there is a revival of the oscillation after damping, which is not seen in the many-body case. 
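A minimal exact-diagonalization sketch of this three-site toy model is given below. The paper's ordering of the basis states |1⟩-|9⟩ is not reproduced here, so the sketch simply starts from a doublon on the central site; the parameters match Fig. 2 c), and all names are illustrative.

```python
import numpy as np
from itertools import product

# Three-site model of Section IV: one bath fermion (up) and the impurity (down)
# on sites 0, 1, 2. The up particle feels a potential V on the outer sites
# (mimicking the frozen bath neighbors) and interacts on-site with strength U.
J, U, V = 1.0, 100.0, 100.0

states = list(product(range(3), range(3)))   # (site_up, site_down): 9 basis states
H = np.zeros((len(states), len(states)))
for a, (iu, idn) in enumerate(states):
    H[a, a] = U * (iu == idn) + V * (iu in (0, 2))
    for b, (ju, jdn) in enumerate(states):   # nearest-neighbor hopping, either particle
        if (idn == jdn and abs(iu - ju) == 1) or (iu == ju and abs(idn - jdn) == 1):
            H[a, b] += -J

psi0 = np.zeros(len(states))
psi0[states.index((1, 1))] = 1.0             # doublon at the central site
evals, evecs = np.linalg.eigh(H)
for t in np.linspace(0.0, 0.3, 11):          # times in units of 1/J (hbar = 1)
    psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))
    n_dd = sum(abs(psi_t[a])**2 for a, (iu, idn) in enumerate(states) if iu == idn)
    print(f"t = {t:5.3f}/J:  N_updown = {n_dd:.3f}")
```

At these parameters, N↑↓(t) oscillates rapidly at a frequency close to U with a slower beat set by |V − U|, and after damping the toy model revives, unlike the many-body result.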
This indicates that the excitations propagating away from the three central sites play a role in the dynamics after the initial stage. One can therefore conclude that the bath fermions at sites beyond the neighboring ones are responsible for the permanent damping. The average value to which N ↑↓ saturates agrees with the three-site model. A. Particle densities The decay of N ↑↓ for U close to V is connected to the creation of excitations in the bath. The excitations can be seen as a density difference n j↑ (t) − n j↑ (0) propagating from the center of the lattice in Figs. 5 and 6. In the off-resonant cases, the density differences are an order of magnitude smaller than in the resonant cases, and the impurity propagates more diffusively at resonance. Off resonance, the propagation is limited to occupied sites, which is a second-order process with velocity 4J 2 U [36]. In Fig. 5, the value U = 8J allows a propagation velocity of the impurity which can be observed within the simulation time, whereas for U = 90J in Fig. 6 the velocity is too small to be observed. B. Model for the resonance region The density differences n j↑ (t) − n j↑ (0) are largest for U = V . To explain this behavior, one can look for limits of |V − U | within which excitations can be created. For V → ∞, the mapping to a free particle in a superlattice gives two energy bands for the impurity, where k is the quasimomentum (see Appendix A). An impurity created at an occupied site is initially in the higher band. A simple way to derive the width of the resonance in U is to assume that the impurity transitions to the lower band by moving to an empty site, and that the dispersion relation of the impurity is unchanged in the transition. The energy released in such a transition would be absorbed by excitations created in the bath. In this scenario, the excitations in the bath are created far away and do not interact with the impurity. In such an excitation, a bath fermion moves by one site, which corresponds to creating a domain wall (DW) with two neighboring sites occupied and an anti-domain wall (ADW) with two neighboring sites empty. The bath can be mapped to an XXZ Hamiltonian, which has a two-domain-wall excitation contin- [37,38]. Assuming the same excitation spectrum as for two domain walls [37,38], the process of creating a DW and an ADW has minimally the energy cost V − 4J and maximally V + 4J. By energy conservation, the lower limit of U to create excitations is U min = (V − 4J) 2 − 16J 2 and the upper limit One can see in Fig. 2 c) that for U = V ± 2.5J, N ↑↓ decays to a value close to zero and for U = V ± 3J, there is a slower decay. On the other hand, for U = V ± 3.5J which is within [U min , U max ], N ↑↓ saturates to approximately 0.5. As seen in Fig. 2 b), a saturation to a value less than one also occurs in the analytic superlattice model where there can be no excitations. This suggests that creating a DW-ADW excitation far from the impurity does not have a high probability since the clear decay behavior does not persist to U = V ± 4J. Thus the bounds U min and U max derived above do not describe the numerical results accurately. It is notable and curious that this straightforward description does not agree with the simulations. C. Excitation processes and the observed resonance region Instead of the simple description above, we find evidence that the DW-ADW pair is created right next to the impurity, in which case the motion of the excitations is restricted by energy conservation. 
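For reference, the energy-conservation bounds quoted in the preceding subsection follow from matching the maximal (minimal) interband energy change of the impurity to the bottom (top) of the DW-ADW continuum ω ∈ [V − 4J, V + 4J]:

```latex
2\sqrt{4J^2 + (U_{\min}/2)^2} \;=\; V - 4J
\;\;\Longrightarrow\;\;
U_{\min} = \sqrt{(V-4J)^2 - 16J^2},
\qquad
U_{\max} = V + 4J .
```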
Two distinct processes which can occur, and configurations to which the system can branch later, are illustrated in Fig. 7. In one process, the DW and the ADW simultaneously propagate in opposite directions while the impurity stays at j 0 (configuration 1). In the other process, the impurity hops to the neighboring site (configuration 2). From configuration 2, the ADW can start propagating in the same direction as the DW, in which case the impurity will be trapped at j 0 ± 1 (2a). Alternatively, the impurity and the ADW can remain in place and form a bound state while only the DW propagates (2b). If the impurity stays at j 0 , the ADW cannot move to the left since this would require an additional energy U . This situation is different from creating the DW-ADW excitation far from the impurity as discussed above. Moreover, it is now possible for the impurity to hop within the effective double well formed by the ADW, associated with the kinetic energy J. We find that instead of a kinetic energy contribution ±4J, the simulations are compatible with ±3J, as seen in Table I. We have located the limits of the resonance region, U min and U max , by simulating the dynamics for different values of V J, and different values of U around V . The studied values of U between which a clear decay in N ↑↓ (t) starts and ends are indicated as intervals in Table I. These intervals are compared to U min and U max calculated by replacing the bounds of the bath excitation continuum V ±4J by V ±3J. The agreement with V ±3J may be related to the kinetic energy contribution from the impurity in addition to the bath excitations. VI. THE TIME EVOLUTION OF THE BATH The striped pattern of n j↑ (t) − n j↑ (0) in Figs. 6 and 5 corresponds to inverting the positions of empty and occupied sites in the bath, which is consistent with a DW or an ADW excitation propagating away from the center. At this time scale, the impurity on the other hand stays confined to the center of the lattice. The right top panel of Fig. 6 shows that for U = 90J, the impurity does not move from the central site. This is consistent with the very small change in the density of bath fermions in the left top panel, which indicates that excitations are essentially not created. For U = 100J, the density changes in the bath are larger by an order of magnitude and the impurity has a considerable probability ( n j0±1↓ ≈ 0.2) to move to the neighboring site. While for a DW and an ADW propagating in opposite directions, the impurity stays at the central site, in the other states presented in Fig. 7, the neighboring site will be occupied by the impurity with some probability. Since the expectation value of the density is an average over all states with excitations propagating in either direction, it does not give information on whether both the DW and ADW move simultaneously or if only one of them moves and the other one stays fixed. Instead, one can study the density correlation ψ(t)| ∆n i↑ ∆n −i↑ |ψ(t) , where ∆n i↑ = n i↑ − ψ(0)| n i↑ |ψ(0) and i, −i are indices from the center of the lattice. If the DW and ADW propagate in opposite directions, the density difference ∆n i↑ will be 1 or -1 symmetrically on either side of j 0 , giving a correlation that is maximally one. If only one excitation propagates, e.g. in the positive i direction, ∆n −i↑ will stay zero and ∆n i↑ ∆n −i↑ = 0. The correlation will also be zero if the DW and ADW propagate in the same direction, or if there are no excitations. 
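The logic of this diagnostic can be made concrete with a small sketch. For a single classical configuration, as in the Appendix C examples, the correlator factorizes into a product of local density changes; in the quantum many-body state it must instead be measured as an operator expectation value within TEBD. All arrays below are illustrative.

```python
import numpy as np

def density_correlation(n_t, n_0, j0):
    """Single-configuration proxy for <Δn_i Δn_{-i}>: the product of density
    changes at mirror sites i = j - j0 on either side of the center j0."""
    dn = n_t - n_0
    imax = min(j0, len(dn) - 1 - j0)
    return np.array([dn[j0 + i] * dn[j0 - i] for i in range(1, imax + 1)])

# DW and ADW moving symmetrically away from the center (cf. Fig. 11):
n_0 = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1])
n_t = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1])
print(density_correlation(n_t, n_0, j0=4))   # -> [1 1 0 0]
```

The correlator equals one on the sites the excitations have traversed and zero beyond them, exactly as described above.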
Detailed examples of the calculation of ⟨Δn_{i↑} Δn_{−i↑}⟩ are shown in Appendix C. Figure 8 shows ⟨Δn_{i↑} Δn_{−i↑}⟩ at different time steps for U = V = 100J. The correlation at i = 1 seems to saturate to approximately 0.5. This implies that the system is in a superposition where, in addition to the oppositely moving DW and ADW, other states are present for which the correlator is zero. Note that the physical picture of DW and ADW excitations can also help to explain previous results on the interaction-independent velocity of density correlations in Mott insulators [31].

Figure 7: A schematic figure of the time evolution where a DW-ADW excitation is created by moving a bath particle to the neighboring site from the impurity. Here, the bath particle moves to the left, but the symmetric case is equally likely. The DW and ADW excitations can propagate in opposite directions (configuration 1), or the DW can propagate and the ADW form a stationary bound state with the impurity (configuration 2). From 2), the state can evolve, for instance, so that the DW and ADW propagate in the same direction (2 a), or the DW continues to the left and the ADW remains stationary, with the impurity oscillating between the two sites (2 b). In configurations 2 a) and 2 b), the DW excitation has moved to the left beyond the region drawn here.

VII. EXPERIMENTAL REALIZATION WITH ULTRACOLD DIPOLAR GASES

Polar KRb molecules and Rydberg atoms in optical lattices have recently been used for realizing the spin-exchange interaction between nearest neighbors [39,40], and magnetic atoms have been employed for realizing a t−J-like model [41] and the extended Bose-Hubbard model [24]. The magnitude of the nearest-neighbor interactions between the magnetic atoms ranges from zero to approximately 2 in units of the tunneling rates in these experiments [24,41], but it is tunable by the lattice spacing and depth. The on-site interaction can be controlled by Feshbach resonances. The energy gap between the lowest and next-lowest energy bands is made an order of magnitude larger than the tunneling energies and interactions by tuning the lattice depth, preventing excitations to higher bands. In the t−J experiment [41], the largest band gap in units of the tunneling rate is in the z direction, ω_z ≈ 57 J_z, and the largest nearest-neighbor interaction quoted is approximately 1.7 J_z, where J_z = 3 Hz. A system with interactions close to 10J and a band gap in this range could still be reasonably treated in the single-band approximation. This magnitude of interactions would be sufficiently high to observe the excitation dynamics studied here. The tunneling and interaction energies depend on the lattice parameters; in particular, V/J can be tuned by changing the lattice depth [42]. In the extended Bose-Hubbard model experiment [24], lattice depths (s_x, s_y, s_z) = (15, 15, 15) in units of the recoil energies correspond to V ≈ J = 27 Hz in the (x, y) plane. The effect of the nearest-neighbor interaction is already seen in the energy gap of the Mott insulator state. We calculate, using a Gaussian approximation for the Wannier functions, that a value V ≈ 10J, where J ≈ 2.7 Hz, could be reached with lattice depths around (s_x, s_y, s_z) = (24, 24, 24) with the same laser wavelengths (see Appendix D). A sufficient duration of the experiment for the dynamics studied here is also realizable, since a coherent Bose-Einstein condensate can be preserved for around 1 s [24], corresponding to 2.7/J.
For these low frequencies, temperature effects would play a role and should be taken into account. Note that post-selection techniques such as in Ref. [13] could help in that respect. Even larger dipole-dipole interactions can be attained by combining magnetic atoms into molecules [43] and with heteronuclear molecules which possess large electric dipole moments [27,44]. The stability of polar molecules against chemical reactions is still an issue to be solved and has been approached by confining the molecules to deep lattices which suppresses their tunneling [39]. Immobile polar molecules could be used for realizing the bath studied here, a Mott insulator at half filling with nearest-neighbor interactions, since it can be mapped to an XXZ model with spin exchange S + i S − i+1 + S − i S + i+1 and Ising S z i S z i+1 terms. The impurity is not included in such a mapping, and one could study the possibility of whether an impurity particle added to the system would create the excitation dynamics predicted in this work. VIII. CONCLUSIONS In summary, we have studied the dynamics of an impurity in a bath where the nearest-neighbor interaction leads to a periodic structure, in contrast to previous studies with homogeneous baths. While we consider parameters for which the bath is in the Mott insulator state, away from half filling a similar system can have a Luttinger parameter K < 1 2 . The dynamics of impurities in this regime are yet unexplored, and our results are an indication that also this parameter area may reveal novel types of dynamical phenomena. We find that structuring crucially affects the types of excitations in the system. An impurity which is initially localized at a site occupied by a bath particle can create an excitation where a DW and an ADW propagate in opposite directions. Since the ADW consists of two empty sites however, another excitation occurs where the impurity is coupled with an ADW and only a DW propagates. These new dynamical phenomena highlight that rich physics can emerge from a structured bath, and open interesting perspectives, for instance, to experiments on ultracold dipolar gases. In the case U J, U V , and V → ∞, the impurity problem is equivalent to a free particle in a superlattice with potential difference U between alternating sites. When there is a higher potential on odd sites, one can write the number of doubly occupied sites as N ↑↓ (t) = N odd (t) = j odd ψ(t)| c † j c j |ψ(t) . The time evolution can be solved analytically, which allows to compare the results obtained for more realistic values of U and V to a perfectly rigid lattice. The Hamiltonian can be written as H = H J + H U , where where the first term is an energy offset and the second one can be written We use periodic boundary conditions, which do not affect the result when the impurity is far from the edges of the lattice. A convenient way to diagonalize the Hamiltonian is to choose a new unit cell of length 2 (see [45]), which reduces the Brillouin zone from BZ (k = −π + 2πn L , n = 1, · · · , L) to BZ' (k = − π 2 + 2πn L , n = 1, · · · , L 2 ). One can replace the operators c k with new operators α k and β k defined in BZ', , π], The Hamiltonian where (k) = −2J cos(k) and ∆ = U 2 , can be diagonalized by a Bogoliubov transformation The diagonal elements are ∆ + E k± , where The wavefunction at time t = 0 can be written and at time t, In the exponents, E k+ has been denoted by E k . 
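In terms of the stated quantities ε(k) = −2J cos k and Δ = U/2, the two superlattice bands and the resulting interband energy change read (reconstructed from the definitions above):

```latex
E_{k\pm} \;=\; \pm\sqrt{\epsilon(k)^2 + \Delta^2}
\;=\; \pm\sqrt{4J^2\cos^2 k + (U/2)^2},
\qquad
\Delta E_{\mathrm{imp}} = E_{k_i +} - E_{k_f -}
\;\in\; \Bigl[\,U,\; 2\sqrt{4J^2 + (U/2)^2}\,\Bigr] .
```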
The number operator becomes and the expectation value where Here, j 0 odd corresponds to an occupied site at j 0 and j 0 even to an empty site. The number of doubly occupied sites in the many-body model is now equal to the total occupation of the odd sites, N ↑↓ (t) = A free particle in a superlattice with potential difference U between alternating sites has the two energy bands E k± = ±E k given by eq. (A1). They are illustrated in Fig. 9. An impurity created at an occupied site is in the higher band, and moving to an empty site would correspond to a transition to the lower band. The change in the energy of the impurity would be which has the maximum 2 4J 2 + ( U 2 ) 2 and the minimum U . The initial and final momenta are denoted by k i and k f . The energy released in such a transition would be absorbed by excitations created in the bath. The energy bands of a free particle in a superlattice with potential difference U between alternating sites. Energy of the bath The bath can be mapped to an XXZ Hamiltonian, which has a two-domain-wall excitation continuum ω ∈ [V − 4J, V + 4J] [37,38]. In our system, a bath fermion moving by one site corresponds to creating a domain wall (DW) with two neighboring sites occupied and an antidomain wall (ADW) with two neighboring sites empty. Assuming the same excitation spectrum as for two domain walls [37,38], this process has minimally the energy cost V − 4J and maximally V + 4J. Energy conservation A simple way to derive the minimum and maximum values of U for which excitations can be created is to assume that the dispersion relation of the impurity is unchanged in the transition to the lower band. This means that the excitations in the bath are created far away and do not have an effect on the fixed superlattice potential close to the impurity. By energy conservation, ∆E imp. = E DW +ADW . The lower limit for U is now obtained when the change in the energy of the impurity has its maximum value and the DW-ADW excitation in the bath is created at the minimum energy V − 4J, From this, one can solve U min = (V − 4J) 2 − 16J 2 . The upper limit is obtained when the energy change of the impurity has its minimum value U and the excitation in the bath is created at the maximum energy V + 4J, Appendix C: Density correlation In this Section, we illustrate with two examples the meaning of the density correlation ψ(t)| ∆n i↑ ∆n −i↑ |ψ(t) , where i is the distance to the center of the lattice i = j − j 0 . Figures 10 and 11 depict two possible ways in which the bath configuration can evolve in time, similar to Fig. 7 of the main text. The impurity is not drawn for clarity. In Fig. 10, the DW excitation propagates to the left while the ADW excitation stays localized at the center of the lattice. In Fig. 11, the DW propagates to the left and the ADW to the right. In Tables II and III, the expectation value of the density change ∆n j↑ (t) = ψ(t)| n j↑ |ψ(t) − ψ(0)| n j↑ |ψ(0) is calculated for these two cases. When the state of the bath is described by a single configuration at any t (and not a superposition of different configurations), as in these examples, ψ(t)| ∆n i↑ ∆n −i↑ |ψ(t) = ∆n i↑ (t) ∆n −i↑ (t) and the value of the correlation can be read from Tables II and III. For example, Table III shows that ψ(t 3 )| ∆n 4↑ ∆n −4↑ |ψ(t 3 ) = −1 · (−1) = 1. Figure 12 shows a schematic diagram of the density correlation as a function of i at the different times corresponding to the configurations in Fig. 11. For the stationary ADW excitation of Fig. 
For the stationary ADW excitation of Fig. 10, the correlation is zero at all times. Figure 8 in the main text therefore shows that the system evolves in a superposition of different configurations.

TABLE II. The expectation values $\langle\psi(0)| n_{j\uparrow} |\psi(0)\rangle$ and $\Delta n_{j\uparrow}(t)$ for the configurations in Fig. 10.

TABLE III. The expectation values $\langle\psi(0)| n_{j\uparrow} |\psi(0)\rangle$ and $\Delta n_{j\uparrow}(t) = \langle\psi(t)| n_{j\uparrow} |\psi(t)\rangle - \langle\psi(0)| n_{j\uparrow} |\psi(0)\rangle$ for the configurations in Fig. 11.

FIG. 12. A schematic plot of the correlation $\langle\psi(t)| \Delta n_{i\uparrow} \Delta n_{-i\uparrow} |\psi(t)\rangle$ at different time steps for the case of a DW propagating to the left and an ADW propagating to the right.

The dipole-dipole interaction $U(r)$ is given by

$U(r) = \frac{C_{dd}}{4\pi}\, \frac{1 - 3\cos^2\theta}{r^3}$

for dipoles aligned in the $z$ direction with an angle $\theta$ between the dipole orientation and the relative location of the dipoles. The coupling constant is $C_{dd} = \mu_0 \mu^2$. The magnetic moment $\mu$ of, for example, erbium is $7\mu_B$. We use the lattice spacings $d_{x,y} = 266$ nm and $d_z = 532$ nm [24], fit the coefficients in Eqs. (D1) and (D2) to the values of $J$ and $V$ in the $(x, y)$ plane given in [24], and calculate $J$ and $V$ using a different lattice depth.
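As a rough order-of-magnitude check of these dipolar parameters, one can evaluate $C_{dd} = \mu_0\mu^2$ for erbium ($\mu = 7\mu_B$) and the nearest-neighbor interaction energy at the in-plane spacing $d = 266$ nm for dipoles perpendicular to their separation ($\theta = \pi/2$); this is a back-of-the-envelope sketch, not a reproduction of the fitted values of $J$ and $V$.

```python
import numpy as np

mu_0 = 4 * np.pi * 1e-7       # vacuum permeability (T m / A)
mu_B = 9.274009994e-24        # Bohr magneton (J / T)
h = 6.62607015e-34            # Planck constant (J s)

mu = 7 * mu_B                 # magnetic moment of erbium
C_dd = mu_0 * mu**2           # dipolar coupling constant

d = 266e-9                    # in-plane lattice spacing (m)
theta = np.pi / 2             # dipoles perpendicular to the separation vector

# U(r) = C_dd / (4 pi) * (1 - 3 cos^2 theta) / r^3
V_nn = C_dd / (4 * np.pi) * (1 - 3 * np.cos(theta) ** 2) / d**3
print(f"nearest-neighbor dipolar energy ~ {V_nn / h:.0f} Hz (in units of h)")
```

The result is of order tens of hertz, which is why temperature effects at such low frequencies are a concern, as noted at the start of this section.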
2016-03-15T14:04:29.000Z
2015-11-13T00:00:00.000
{ "year": 2015, "sha1": "2f9a06ef184149637852c86c9dddc75657e1519e", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://link.aps.org/accepted/10.1103/PhysRevB.93.125110", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "2f9a06ef184149637852c86c9dddc75657e1519e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
264753917
pes2o/s2orc
v3-fos-license
Assessment of self-reported adherence among patients with type 2 diabetes in Matlala District Hospital, Limpopo Province

Introduction
Complications associated with diabetes mellitus are a burden to health services, especially in resource-poor settings. These complications are associated with substandard care and poor adherence to treatment plans. The aim of the study was to assess the self-reported adherence to treatment amongst patients with type 2 diabetes in Matlala District Hospital, Limpopo Province.

Methods
This cross-sectional study used convenience sampling with a standardised, validated questionnaire. Data were collected over 4 months, and Microsoft Excel was used for data capturing.

Results
We found that 137 (70%) of the participants considered themselves adherent to their diabetes medication. Younger age (p = 0.028), current employment (p = 0.018) and keeping appointments were factors significantly associated with adherence. Reasons given for poor adherence were that the clinic did not have their pills (29%), that they had forgotten to take their medication (16%) and that they had gone travelling without taking enough pills (14%). Reasons given for poor adherence to a healthy lifestyle were being too old (29%), having no specific reason (22%), struggling with self-motivation (13%) and simply forgetting what to do (10%). Sixty-eight percent of the adherent participants reported timing their medication with meals, 14% set a reminder, and 8% used the assistance of a treatment supporter.

Conclusions and recommendations
The study revealed a higher than expected reported level of adherence to diabetes treatment. Further research is needed to assess whether self-reported adherence corresponds to the metabolic control of the patients and to improve services.

Introduction

Diabetes mellitus is a silently progressive, but serious, health condition. Globally, 386.7 million people suffer from diabetes, with a prevalence of 8.3%. This is expected to rise to 592 million people by 2035.1 A large portion of this increase will occur in developing countries, arising from population growth and increased life expectancy, as well as the lifestyle changes associated with increasing trends towards urbanisation, including unhealthy diets, obesity and a sedentary lifestyle, resulting in late-onset diabetes. The International Diabetes Federation estimated that in 2014, 4.9 million people worldwide died of diabetes-related causes,1 and furthermore, diabetes is the sixth leading cause of death in South Africa.2 Based on available epidemiological data, approximately 1-1.5 million South Africans are considered to suffer from diabetes.3

Patient adherence to diabetes medication plays an integral role in reducing the healthcare costs resulting from poor adherence, including repeated hospitalisations, the management of complications as well as the subsequent rehabilitation process. Identifying which patients are at the greatest risk of non-adherence is an important first step towards developing interventions that could improve adherence.4,5,6 Diabetes is considered one of the most emotionally demanding chronic diseases, with its management requiring self-monitoring of blood glucose, dietary modifications, exercise and adherence to treatment regimens.7 Many of the complications associated with diabetes can be delayed or prevented by better management and self-care, with improved glycaemic control through diet, exercise and/or taking insulin or oral diabetes medications.8,9
According to the outpatient register, more than a third of these patients are on medication for chronic conditions (excluding patients on antipsychotics), of which hypertension is the highest, followed by diabetes mellitus (7.6%). From April 2008 to March 2009, a monthly average of 139 patients with diabetes mellitus was seen at the hospital outpatient department, and of this number an average of eight were newly diagnosed patients with diabetes. Whilst working in the medical wards, the main researcher (S.A.A.) had the impression that many of the subsequent outpatient admissions were because of complications of diabetes, a likely consequence of poor or non-adherence to treatment. This prompted the researchers to investigate the level of adherence to treatment as reported by patients with type 2 diabetes seen at Matlala District Hospital and to identify the reasons for poor or non-adherence to treatment and lifestyle changes.

Methods

A cross-sectional study was carried out in which all patients with type 2 diabetes attending the outpatient department of Matlala District Hospital, who gave written informed consent, were included in the study. Patients with type 1 diabetes and patients who had been diagnosed less than a month previously were excluded.

A prospective sample size of n = 196 was calculated using the formula n = N·Z²·p·q / [e²(N − 1) + Z²·p·q] to ensure a statistical power of > 95% [n = sample size; N = population size (500); Z = critical value (1.96); p = estimated proportion (30% = 0.3); q = 1 − p; and e = level of precision (±5% = 0.05)]; a numeric check of this calculation is sketched below. The number of patients with type 2 diabetes at Matlala Hospital was estimated to be 500, and the adherence was estimated to be 30%. The low estimate was based on the low socio-economic environment and a literature review that gave levels of around 50%.

Data were collected from 1 December 2009 to 30 March 2010 by a trained interviewer who administered a structured questionnaire in the local vernacular. The questionnaire, adapted from studies on adherence in hypertensive patients10 as well as tuberculosis patients' reasons for defaulting treatment,11 was limited to 22 questions and took an average of 10 minutes to complete. Adherence to medication and lifestyle changes was assessed by asking participants to recall the taking of medication and lifestyle changes (exercise) for the week preceding the visit to the hospital. The responses were categorised as 'always', 'frequently', 'only when I experienced diabetic symptoms' and 'never'. The translation of the questionnaire was verified by a language expert, and a pilot study was conducted before data collection to test the reliability and validity of the questionnaire. Data were captured on an Excel spreadsheet and interpreted with cross-tabulation to determine association.

Ethical considerations

The study was approved by the MEDUNSA Research and Ethics Committee, University of Limpopo. Participants gave written consent and were assured of confidentiality.

Results

The majority of the unemployed participants (140) indicated a grant as a source of income, while 17 said that they received financial support from their families and 13 mentioned other sources of income. With regard to household income, 134 participants (68%) had a household income of R1000-R1999 per month, and 38 (19%) survived on an income of less than R1000 per month.
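As a quick numeric check, the finite-population sample-size formula quoted in the Methods reproduces n ≈ 196 with the stated inputs; a minimal sketch:

```python
# Sample-size calculation with the values quoted in the Methods
N = 500     # population size
Z = 1.96    # critical value
p = 0.30    # estimated proportion (adherence)
q = 1 - p
e = 0.05    # level of precision

n = N * Z**2 * p * q / (e**2 * (N - 1) + Z**2 * p * q)
print(round(n))   # -> 196
```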
Adherence

Age below 50 years (p = 0.028), being employed (p = 0.018) and the keeping of appointments (p = 0.001) were significantly associated with being adherent to treatment, while gender (p = 0.441), marital status (p = 0.294), the level of education (p = 0.567), the distance travelled (p = 0.452) and the drug regimen (p = 0.928) were not significantly associated with adherence (Table 2).

Table 3 presents reasons for poor adherence to diabetic medication. The most important reasons that emerged were the following: (1) the clinic did not have participants' pills (29%), (2) participants forgot to take their medication (16%) and (3) participants did not take enough medication with them while travelling (14%).

TABLE 3: Reasons for poor adherence to diabetic medication (n; % of respondents).
The clinic did not have my pills: 17; 29
I forgot: 9; 16
I travelled to visit and did not have enough pills: 8; 14
My medication was finished: 6; 10
I do not have food to eat before I take my pills: 5; 9
I am taking care of a sick family member: 4; 7
I do not have to drink my pill if I feel better: 3; 5
I do not have transport money to go to clinic: 2; 3
I am not responsible for taking my medication: 1; 2
I am too old to go to the clinic by myself: 1; 2
The medicine makes me feel worse: 1; 2
I don't have to take my medication if am going to the hospital: 1; 2
Total: 58; 100
Source: Author's own work

Table 4 shows the reasons for poor adherence to healthy lifestyle recommendations. Most of the participants, 20 (29%), stated that the main reason for not adhering was that they were too old to do exercise, 15 (22%) had no specific reason, 9 (13%) struggled to motivate themselves and 7 (10%) simply said they forgot to follow the advice.

TABLE 4: Reasons for poor adherence to healthy lifestyle recommendations (n; % of respondents).
I am too old: 20; 29
There is no specific reason for me not to: 15; 22
I struggle to motivate myself: 9; 13
I forgot: 7; 10
The lifestyle changes make me feel worse: 5; 7
I do not have enough time for that: 4; 6
Work did not allow me to carry out the changes: 3; 4
I am not responsible for carrying out the changes: 2; 3
I do not have to adhere to lifestyle changes if I feel better: 2; 3
I do not believe that it will help me: 1; 1
Has an amputated foot: 1; 1
Total: 69; 100
Source: Author's own work

Adherent patients stated that the following factors aided them in adhering to treatment: timing the taking of medication with meals (68%), setting cell phone reminders (14%) and using the assistance of a treatment supporter (8%), while the remaining respondents reported using other means.

Discussion

In this study, the level of self-reported adherence to diabetes medication was 70%, which is higher than that reported in most studies. For instance, a systematic review by Cramer12 found a very wide range of adherence levels (between 36% and 93%), and adherence levels depended largely on the method of assessment used. This result was buttressed by another study, which estimated poor adherence to be between 30% and 50% irrespective of disease, prognosis or setting.13 Furthermore, Manan et al., in their study of adherence among patients with type 2 diabetes, found a self-reported adherence level of 48%.14 Similar studies tended towards the high end of the spectrum: Kalyango et al.6 and Gelaw et al.15 placed adherence at 71.1% and 72.2%, respectively. The high adherence in our study was most likely because it was self-reported; the sampling was biased, as only patients who came to collect treatment (thus adhering) participated in the study, and, furthermore, a tendency to give pleasing answers could also have influenced the results.

Patient adherence to a prescribed regimen of oral hypoglycaemic agents is generally low and difficult to maintain, even in populations with adequate access to healthcare and drug coverage.4 Therefore, it is most likely that, in our case, the assessment method used overestimated the true value of adherence in our setting. The subjective nature of the method of assessment, where the response was dependent solely on the power of recall, attitude and trustworthiness of the patient, might have contributed to this high value. This notion is reinforced by Prado-Aguilar et al.,16 who stated that patient self-reporting tends to overestimate adherence.
According to Schectman et al.,17 adherence to appointments, independent of frequency of visits, was a strong predictor of diabetes metabolic control. Similarly, in our study, of the respondents who kept their appointments, 98% reported adherence to treatment. From our data, other significant predictors of good adherence were an age below 50 years (p = 0.028) and being employed (p = 0.018). This tendency is confirmed by the literature, where younger, employed and educated patients were found to adhere better to treatment, although this has not always been proven to be statistically significant.12,13 It is important to note that the majority of our patients were older than 50 years and unemployed, indicating that it is important to design appropriate health services to assist these patients in improving their adherence to treatment.

Our study found that strategies to remember medication included taking it at meal times, setting a reminder on a cell phone and soliciting the help of a treatment supporter. On the contrary, Littenburg et al.18 showed that the most popular aids to treatment adherence were the day-of-the-week pill box, keeping medicines in a special place and associating medicine with a daily event such as a TV show or a meal (the last mentioned being similar to our findings). However, differences in our respective study populations could account for this. Levels of education and socio-economic and cultural circumstances were amongst the main differences between respondents in our study and those of the study by Littenburg et al. Furthermore, the findings regarding taking medication with a meal are confirmed by Schectman et al., who also found that taking medication during meal times had an advantage.15

With regard to lifestyle factors, a study measuring adherence and barriers to lifestyle recommendations among patients with high cardiovascular risk factors in Kuwait found that 64.4% of participants were not participating in regular physical exercise. The main barriers to physical exercise programmes were a lack of time (39%), co-existing diseases (35.6%) and adverse weather conditions (27.8%).19
In our study, old age was the most common reason stated for not adhering to recommended lifestyle changes; therefore, it is recommended that exercise advice should be appropriate for patients' age and environment, an example of a suitable physical exercise activity being walking. It is worth noting that in the Limpopo Province, aged patients and patients with chronic illnesses have formed soccer teams to encourage exercise, practising two or three times per week and competing amongst each other. This strategy keeps them motivated while they enjoy the exercise.

Limitations of the study

The assessment of adherence was dependent on the memory and subjectivity of the participants, as our study relied on self-reported data. As self-reported adherence is usually higher than actual adherence, we suspect that actual adherence to diabetes management in these patients may be even lower than the reported level. Patients attending the hospital for medication constituted a biased study population, as they were already complying with expectations. Further bias was possibly introduced during the interviews, as patients could have been culturally more likely to report what they perceived the interviewer preferred.

Conclusions and recommendations

Although our study revealed a possibly biased, higher than expected adherence level, it is clear that most of our older and unemployed patients are prone to struggle with adherence to the treatment and lifestyle adjustments required for diabetes. Patients who miss their appointments also need additional support to adhere to treatment.

To address this, we recommend that the quality of services at primary care facilities should be improved by making access easier for older and unemployed patients living far from the hospital, by avoiding drug shortages and by involving a multidisciplinary team in the management of patients with diabetes. Efforts should also be made to improve patient education and to form appropriate support and exercise groups to encourage patients in their lifestyle adjustments.

Further research is needed to assess whether self-reported adherence corresponds to actual adherence and to the metabolic control of the patients. Operational research to develop cost-effective ways to ensure optimal and quality care for patients living with diabetes and other chronic illnesses will contribute significantly to improved health outcomes.

TABLE 2: Adherence and the demographic characteristics of the participants. Source: Author's own work
2018-04-03T01:07:17.815Z
2016-07-28T00:00:00.000
{ "year": 2016, "sha1": "f34986d22ed6ee207fbaee912a5459ff11f32709", "oa_license": "CCBY", "oa_url": "https://phcfm.org/index.php/phcfm/article/download/900/1776", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f34986d22ed6ee207fbaee912a5459ff11f32709", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
12084712
pes2o/s2orc
v3-fos-license
Come together: human–avatar on-line interactions boost joint-action performance in apraxic patients

Abstract
Limb apraxia (LA) is a high-order motor disorder linked to left-hemisphere damage. It is characterized by defective execution of purposeful actions upon delayed imitation, or verbal command, when the actions are performed in isolated, non-naturalistic conditions. Whether interpersonal interactions provide social affordances that activate neural resources different from those requested by individual action execution, which may improve LA performance, is unknown. To fill this gap, we measured interaction performance and behavioral and kinematic indexes of left-brain-damaged patients with/without LA in a social reach-to-grasp task involving two different degrees of spatio-temporal interactivity with an avatar. We found that LA patients' impairment in coordinating with the virtual partner was abolished in highly interactive conditions (where patients selected their actions on-line based on the behavior of the virtual partner) with respect to low-interactive conditions (where actions were selected beforehand based on abstract instructions). Voxel-based Lesion-Symptom Mapping indicated that impairments in low-interactive conditions were underpinned by lesions of premotor, motor and insular areas, and of the basal ganglia. Our approach expands current understanding of the behavioral and neural correlates of interactive motor performance by highlighting the important role of social affordances, and provides novel, potentially important, views on rehabilitation of higher-order motor cognition disorders.

Introduction

Limb apraxia (LA) is a high-order action representation deficit that alters gesture performance (Rothi et al., 1991) and its spatio-temporal organization and kinematic profiles (Pramstaller and Marsden, 1996; De Renzi, 1986; Leiguarda and Marsden, 2000; Hermsdorfer et al., 2013). It typically occurs after lesions to a left-lateralized cortical (fronto-parietal, premotor, insular) and subcortical (basal ganglia) neural network (Buxbaum et al., 2014). LA has been associated with defective perception (Halsband et al., 2001), evaluation (Heilman et al., 1982; Pazzaglia et al., 2008b; Canzano et al., 2014) and comprehension (Rothi et al., 1985) of observed actions, strengthening the notion that partially overlapping neural substrates may support action perception and execution (Avenanti et al., 2013; Urgesi et al., 2014; but also see Mahon and Caramazza, 2008; Stasenko et al., 2013).

Since its first description, LA has been studied in 'isolated' conditions where the patient is asked to perform an action upon verbal command or exposition to a tool (real use or pantomime). As an effect of the so-called automatic/voluntary dissociation (De Renzi et al., 1982; Trojano et al., 2007), LA deficits may be attenuated in every-day settings, where environmental and internal cues may facilitate the transformation of the intended act into proper motor plans (Freund, 2001; Randerath et al., 2011). Crucially, naturalistic contexts not only require acting upon static objects, but also interacting with other individuals by adapting on-line to them (e.g. joint actions). Accordingly, sensory-motor and cognitive systems of social species are developed in order to interact with other individuals and to efficiently couple observed actions and individual motor execution in time and space.
Behavioral and kinematic studies suggest that the execution of individual movements may be radically different in interactions than when acting in isolation. Indeed, the kinematics of a given action is different when performed in isolation with respect to when observing another person moving (Kilner et al., 2003), when performing an action with 'interactive' aims (Sartori et al., 2009), or when coordinating (Sacheli et al., 2012, 2013; Candidi et al., 2015) and competing with others (Naber et al., 2013). Interactive contexts also modulate brain activity in fronto-parietal areas typically recruited during action observation (Newman-Norlund et al., 2007) and in additional cortical and subcortical networks that may underpin the integration of individual goals with those of our partners (Kokal et al., 2009; Kokal and Keysers, 2010; Kourtis et al., 2013), a process that is essential to navigating the social world (Sebanz et al., 2006; Sacheli et al., 2015a). Using inhibitory rTMS in healthy individuals, we provided the first evidence for a causal role of the left anterior intraparietal sulcus in the execution of interactive actions (Sacheli et al., 2015b). Thus, because interactive and praxic functions are inherently linked to higher-order action-related processes, testing apraxia in social contexts may be fundamentally important.

Here, we explored whether interpersonal interactions may reduce performance deficits in left brain-damaged patients with (LA+) or without (LA-) apraxia. Patients were tested in a modified version of a joint reach-to-grasp task (Sacheli et al., 2015b,c) that measures the ability to synchronize one's own movements with those of a virtual partner (Coordination task; Figure 1) in two experimental conditions characterized by high/low interpersonal interactivity (i.e. Interactive/Instructed conditions). The Interactive condition required participants to synchronize their movements with those of the virtual partner by performing the same or a different action, without knowing in advance which individual movement was to be performed. This condition captures the essential nature of realistic interactions, where coordination in space and synchronization in time with the partner is fundamental. Conversely, in the Instructed condition, participants were pre-instructed about whether a power vs precision grip was to be performed (regardless of the partner's action), making the interaction depend only on temporal synchronization. Thus, the Interactive coordination is more demanding than the Instructed one. Yet, the presence of social affordances in the former may boost LA+ performance in the more complex situation. By analyzing patients' performance (synchrony and accuracy) and behavioral and kinematic (Supplementary Material) indexes, we provide a description of apraxics' ability to overcome the challenges of realistic interactions. Voxel Lesion Symptom Mapping (VLSM) was used to search for the lesional underpinnings discriminating the behavioral difference between Interactive and Instructed coordination.

Patients

Twenty-eight left brain-damaged patients (14 males) were included in the study. They were recruited from the Neurorehabilitation Units at the IRCCS Santa Lucia (Rome) and at the Sant'Andrea hospital (Rome). The procedures were approved by the IRCCS Ethical Committee and the study was carried out in accordance with the Declaration of Helsinki. A battery of standardized tests was used for neuropsychological screening.
This involved tests on general cognitive abilities (Raven et al., 1988), executive functions (non-verbal subtests of the Frontal Assessment Battery; Appollonio et al., 2005) and spatial attention (Line Bisection; Wilson et al., 1987). Verbal comprehension and denomination subtests of the Italian version of the Aachener Aphasia Test (Luzzatti et al., 1996) were used to assess language deficits. Patients were divided into limb-apraxic (LA+, n = 12, 6 females) and non-apraxic (LA-, n = 16, 8 females) groups according to their scores on a widely used test for Upper Limb Apraxia (TULIA; Vanbellingen et al., 2010). This test consists of 48 items in which imitation and pantomime of meaningless/meaningful gestures is required. A 6-point scoring method (0 = totally incorrect action execution, 5 = perfect performance) generates performance scores ranging from 0 to 240 (pathological scores < 194). All LA+ patients and no LA- patient scored lower than the cut-off for upper limb apraxia (Mann-Whitney U Test, P < 0.001). One LA- patient was left out of the lesion analyses because no structural image of the lesion was retrieved (final sample for VLSM analyses: n = 12 LA+, n = 15 LA-).

Stimuli

The virtual avatar was created in Maya 2011 (Autodesk, Inc.) by a customized Python script (Prof. Orvalho V., Instituto de Telecomunicações, Porto University) and the virtual scenario was designed in 3DS Max 2011 (Autodesk, Inc.). The avatar moved according to the kinematics of a real actor's upper body [SMART-D motion capture system, MoCAP (Bioengineering Technology & Systems, B|T|S)] (Tieri et al., 2015), recorded while the actor performed eight reach-to-grasp movements toward the upper part of the bottle (precision grip) and eight toward the lower part (power grip; see Supplementary Material Videos S1 and S2). The duration of each clip (~3 s) was the same for the different conditions (up and down movements). Each stimulus started with the avatar being still, its hand on the table. After a variable amount of time (i.e. between 200 and 500 ms) the avatar started the movement. The timing of the avatar's hand-object contact was calculated by attaching a photodiode to the screen (where the videos were displayed) that detected the appearance of a black dot pasted on the frame where the avatar touched the bottle.

Procedure

Coordination task. Patients sat in front of a table and a bottle-shaped object was placed 45 cm in front of them. A monitor placed behind the bottle-shaped object showed a virtual partner facing the participant. In front of this virtual partner was a virtual object identical to that of the patient (Figure 1). Patients received a 'go' signal through headphones before the virtual partner started its reach-to-grasp movement toward either the upper or lower part of the bottle-shaped object. Grasping the upper part implied performing a precision grip, while grasping the lower part implied a power grip [factor Movement: Precision(Up)/Power(Down)]. According to trial-by-trial auditory instructions, patients were required to synchronize their reach-to-grasp actions with the movements of the virtual partner by performing either imitative or complementary interactions (factor Interaction: Same/Opposite).
On top of this 2 × 2 design, in separate blocks, patients were required to: (i) adapt on-line to the partner's movement by performing the same or a different action (Interactive coordination condition), without knowing in advance whether this would imply performing a precision grip on the upper part or a power grip on the lower part of the bottle-shaped object; or (ii) grasp the upper or lower part of the bottle-shaped object regardless of what movement their partner performed (Instructed coordination condition). Patients performed 24 Same/Opposite interactions in both Interactive and Instructed conditions, made of 12 Precision(Up)/Power(Down) movements in random order. In both conditions the goal of the participants was to synchronize their grasping with that of their partner. Lower asynchrony values indicate better performance. Before starting the experiments, patients became familiar with the experimental set-up by performing reach-to-grasp movements toward the upper and lower part of the bottle, as well as with the auditory instructions and the experimental request, i.e. to be synchronous with the avatar in touching the object. After the practice trials, four separate 24-trial blocks (two Interactive and two Instructed) were performed following an across-patients counterbalanced order. In order to check whether patients were properly able to code the instructions, a final block was always run in which patients were asked to perform six up and six down grasping movements according to randomized auditory instructions, while an immobile avatar was displayed in front of them. Thus there was no coordination between avatar and participants. This control condition ensured that any impairment in the Instructed condition could not be explained by the patients' inability to understand the auditory instructions. RTs, Movement Times (MTs; see Supplementary Material), Accuracy of response and Performance (i.e. patient-avatar touch-time Asynchrony) were calculated by having patients release a button on the working surface and touch the bottle-shaped object with their index finger and thumb on two copper-plate targets fixed to it (as in Sacheli et al., 2012, 2013, 2015b; Candidi et al., 2015).

Performance (i.e. asynchrony), accuracy and inverse efficiency index (i.e. Asynchrony/Accuracy)

Asynchrony (the absolute value of the difference between the patient's hand-bottle contact time and the hand-bottle contact time of the virtual partner) and Accuracy data were analyzed with non-parametric tests to compare: (i) between-groups performance (Mann-Whitney U Test), with condition-specific differences tested by using the exact probabilities for small samples (Dinneen and Blakesley, 1973); (ii) between-conditions performance (Friedman's ANOVA), with the significance level for single comparisons between conditions (Wilcoxon sign test) Bonferroni-corrected for the number of relevant comparisons.

Furthermore, the Asynchrony and Accuracy measures were combined in an Inverse Efficiency index (Asynchrony/Accuracy) and analyzed via a bootstrap ANOVA procedure. Bootstrapping creates a distribution of F-values based on resampling of the original data and allows for running an ANOVA to compare the effects observed in the original data to the null hypothesis of this new bootstrapped F-value distribution.
We randomly assigned each data point to each condition 10 000 times, entered the data in a mixed ANOVA including the factor Group, and computed the F-value for each main effect and interaction. Then, we compared our original F-values with the distribution under the null hypothesis of the bootstrapped F-values (Berkovits et al., 2000; Panasiti et al., 2016; R Development Core Team, 2013). The bootstrap P-level was calculated as the proportion of bootstrapped F-values (included in the 95% confidence intervals) greater than the original F-value.

Lesion drawing and analyses. For each patient, lesions were drawn on the T1-weighted template MRI scan from the Montreal Neurological Institute with the MRIcron software (Rorden et al., 2007a,b). Lesion drawing was performed by an examiner unaware of patients' clinical features and behavioral results. Superimposing each patient's lesion onto the standard brain allowed us to estimate the total brain lesion volume (in cc). Furthermore, a lesion's location was identified by overlaying the lesion area onto the Automated Anatomical Labeling template provided by MRIcron. LA+ and LA- lesion overlap and lesion subtraction were performed to highlight the lesional pattern of the patients' profiles. Only voxels lesioned in at least five patients are reported.

VLSM. The VLSM analyses were performed using the Non-Parametric Mapping (NPM) software developed by Rorden et al. (2007a,b). Permutation-based estimates of the non-parametric Brunner-Munzel statistics were obtained by performing 4000 permutations. In these analyses, we only included voxels that were damaged in at least five patients. We used this criterion to balance two separate requirements: to improve statistical power, achieved by testing only voxels that were damaged in a significant number of patients, and to detect the effect of regions that are reliable predictors of deficits but lesioned in just a few patients. Colored VLSM maps were then produced; they represent z statistics of the voxel-wise comparison between lesioned and non-lesioned patients. The maps indicate the voxels at which patients with a lesion performed worse than those without one. Two VLSM analyses were performed with two different behavioral predictors: (i) the difference between Interactive and Instructed performance (i.e. Interaction-D); (ii) patients' apraxic score (TULIA test; see Supplementary Material). Thus, the two resulting maps represent, respectively: (i) lesioned voxels that predict poorer performance in the Instructed condition as compared with the Interactive one; (ii) lesioned voxels that predict stronger apraxic deficits (lower performance in the TULIA test; see Supplementary Material). False discovery rate (FDR) correction was applied to the Brunner-Munzel values associated with damaged voxels, using alpha levels of P = 0.01 and P = 0.05 for the Interaction-D and TULIA predictors, respectively (Nichols and Hayasaka, 2003).

Prediction task. In order to assess any perceptual deficit in predicting the action of the partner, patients were asked to complete a non-interactive prediction task using the same stimuli as the interactive experiment (see Supplementary Material). Participants were asked to passively observe action video clips and predict (by verbally communicating their prediction to the experimenter) whether the virtual partner intended to grasp the bottle-shaped object in the upper or lower location.
In the prediction task, the video clips were interrupted at two-thirds or three-fourths of the action deployment time, thus creating short- or long-exposure stimuli.

Results

Table 1 shows LA+ and LA- patients' demographic information, the results of neuropsychological tests and between-groups comparisons.

Coordination task results

Asynchrony. Group differences in Interactive vs Instructed coordination. We were primarily interested in finding group differences related to the level of interactivity implied by the Interactive vs Instructed cooperation conditions (Figure 2). A between-groups analysis of participant-partner grasping Asynchrony showed that LA+ patients were more asynchronous than LA- patients in all Instructed conditions (Mann-Whitney U, all Ps < 0.003, corrected P threshold = 0.05/8 = 0.006) except when performing Same-Power(Down) grip interactions, which differed significantly only if no statistical correction was applied (P = 0.013). Conversely, during Interactive coordination the two groups did not differ (all Ps > 0.017, corrected P threshold = 0.006). This analysis shows that LA+ were as good as LA- in performing the Interactive task, while being less able to solve the Instructed task. The two conditions that showed the smallest difference between the two groups were Same-Power(Down) grip (P = 0.174) and Opposite-Precision(Up) grip (P = 0.732). As classical null-hypothesis testing is not the ideal statistical tool for drawing conclusions about non-significant results (Dienes, 2014), we calculated Bayes Factors (BF) for each of the eight between-groups comparisons and tested the null hypothesis that the two groups did not differ (BF10 factors bigger than 1 indicate evidence for a significant difference between conditions). We ran Bayesian Independent-Samples T-Tests on patients' Asynchrony (JASP version 0.8.12; Love et al., 2015). Thus, the two conditions that proved to be equally difficult for LA- and LA+ when applying non-parametric tests [i.e. Interactive-Opposite-Precision(Up) and Interactive-Same-Power(Down)] also showed no evidence of group differences with a Bayesian approach.

Across-condition differences between Interactive vs Instructed coordination. A Friedman ANOVA on participant-partner grasping Asynchrony performed on the entire sample (i.e. independently of group classification) revealed significant across-condition differences [ANOVA Chi Sqr. (N = 28, df = 7) = 25.357, P < 0.001]. Follow-up Wilcoxon Matched-Pairs Tests between Interactive and Instructed conditions revealed that Instructed coordination was more difficult (i.e. higher asynchrony) than Interactive coordination only when performing Opposite-Precision(Up) grips (P = 0.009, corrected P threshold = 0.05/4 = 0.013) and Same-Power(Down) grips (P < 0.001) (all other Ps > 0.716). This result indicates that patients tended to be better at synchronizing during Interactive than Instructed coordination, showing a beneficial effect of maximally interactive conditions compared with the less interactive condition (i.e. Instructed), in which patients were not required to read the partner's behavior in order to program their own. Overall, these results show that apraxic patients improved their synchrony when acting in Interactive vs Instructed conditions, while non-apraxic patients did not.

Accuracy of performance. Group differences in Interactive vs Instructed coordination.
Direct comparisons between the two groups in the different experimental conditions showed that LA+ and LA- accuracy did not differ in any condition (Mann-Whitney U, all Ps > 0.015, corrected P threshold = 0.006).

Across-condition differences between Interactive vs Instructed coordination. A Friedman ANOVA on patients' accuracy revealed significant across-condition differences [ANOVA Chi Sqr. (N = 28, df = 7) = 25.357, P < 0.001]. Follow-up Wilcoxon Matched-Pairs Tests between Interactive and Instructed conditions revealed that no Instructed vs Interactive coordination condition was significantly different after correction (all Ps > 0.018, corrected P threshold = 0.013). Although this pattern of results was also found when testing LA+ (P = 0.046) and LA- patients (P = 0.001) separately, no post-hoc test was significant after correction (all Ps > 0.028).

Bootstrap ANOVA on Asynchrony/Accuracy. In order to directly test the interaction between the Group and the within-subject factors, and to account for possible speed-accuracy trade-offs, we combined the two performance measures and ran a bootstrap ANOVA on the Inverse Efficiency index (i.e. Asynchrony/Accuracy). This analysis confirmed the pattern of results found with non-parametric tests, highlighting a significant Group (LA+/LA-) × Coordination (Interactive/Instructed) × Interaction (Same/Opposite) × Movement [Precision(Up)/Power(Down) grip] interaction [F(1, 26) = 5.909, bootstrapped P < 0.001]. Post-hoc comparisons indicated that Interactive coordination was easier than Instructed coordination during Opposite-Power(Down) (P < 0.001) and Same-Precision(Up) (P = 0.0135) movements in LA+ but not in LA- (P = 1 for both comparisons). The significant two-way interaction between the Group and Interactive/Instructed factors suggested that only the LA+ group was sensitive to the interactive nature of the task, being able to perform the Interactive task as well as LA- (P = 0.269) while performing worse than LA- patients during Instructed conditions (P < 0.001). The higher-level interaction explained all significant lower-level effects (Group, bootstrapped P < 0.001; Interactive/Instructed, bootstrapped P < 0.001; Group × Interactive/Instructed, bootstrapped P < 0.001; Group × Interactive/Instructed × Same/Opposite, bootstrapped P = 0.018).

Neural underpinnings of impaired Instructed vs Interactive coordination performance. To determine the lesions that best predicted the patients' behavioral impairment in Instructed compared with Interactive coordination conditions, we performed a VLSM analysis (Rorden et al., 2007a,b) with the Interaction-D as a continuous predictor. The Interaction-D was based on the results of the Coordination task, in order to index the conditions that proved to be most difficult when performed in the Instructed condition compared with the Interactive one [i.e. Opposite-Precision(Up) and Same-Power(Down) grips]. More specifically, the Interaction-D was computed as the difference in performance between the Instructed and Interactive versions of these conditions. The regions associated with impaired Instructed coordination performance are shown in Figure 3 and Table 2. VLSM showed that lesions to the left motor cortex, pars triangularis of the premotor cortex, insula and striatum (putamen and caudate) predicted poorer performance in Instructed as compared with Interactive coordination.
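For readers who want the mechanics of the bootstrap ANOVA used above, the toy sketch below illustrates the core idea on a one-way design with invented data (the actual analysis used the full four-factor mixed model): resample the condition labels many times, recompute F, and take the proportion of resampled F-values at or above the observed one as the bootstrapped P-level.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_oneway(groups):
    """One-way ANOVA F statistic."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = len(groups) - 1, len(all_x) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

# Invented inverse-efficiency values (Asynchrony/Accuracy) for two groups
g1 = rng.normal(0.9, 0.2, 12)   # e.g. LA+
g2 = rng.normal(0.6, 0.2, 16)   # e.g. LA-
f_obs = f_oneway([g1, g2])

# Null distribution: randomly reassign observations to groups 10 000 times
pooled = np.concatenate([g1, g2])
f_null = np.empty(10_000)
for i in range(f_null.size):
    perm = rng.permutation(pooled)
    f_null[i] = f_oneway([perm[:12], perm[12:]])

p_boot = np.mean(f_null >= f_obs)   # proportion of resampled F >= observed F
print(f"F = {f_obs:.2f}, bootstrapped P = {p_boot:.4f}")
```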
Discussion

Behavioral correlates of realistic cooperation

The ability of LA- patients to synchronize their movements with those of an avatar was similar during Interactive and Instructed cooperation, suggesting that the two conditions were not different per se (see Supplementary Material for similar evidence on movement kinematics). In the coordination task, performance was overall worse in LA+ than in LA- patients. Crucially, when engaged in Interactive cooperation, LA+ performed like LA- patients. The positive correlation between the Interaction-D and patients' apraxic scores (TULIA) supports the link between apraxic deficits and impairment in performing Instructed vs Interactive coordination tasks (see Supplementary Material). It has been shown that during individual action execution, deficits of apraxics manifest when their reaching movements must adapt to increasing visuo-motor requests (Mutha et al., 2010). Thus, the reduction of apraxics' impairments observed during the more challenging interactive condition suggests that individual action execution may benefit from the cues provided by the movements of the partner, in line with the automatic/voluntary dissociation (Trojano et al., 2007; Liepmann, 1900, 1905a; Basso and Capitani, 1985; De Renzi et al., 1982; Pramstaller and Marsden, 1996). While coordination in the Instructed condition was based on auditory instructions specifying the target hand configuration and arm trajectory (i.e. a condition similar to standard apraxia tests), Interactive coordination was based on the action of a partner (i.e. imitating and complementing its actions). The improvement of LA+ in the interactive coordination task may thus be explained by the 'affordance competition hypothesis' (Cisek, 2007; Cisek and Kalaska, 2010), according to which the brain processes sensory information to specify, in parallel, several potential actions that are currently available and compete against each other. Anatomo-functionally, the hypothesis suggests that the dorsal visual system specifies competing actions within the fronto-parietal cortex, while a variety of biasing influences are provided by prefrontal regions and the basal ganglia. Here, the concept of affordances goes beyond action specification for object interactions and refers to the action of a partner (Cisek et al., 2007).

Interaction-based approaches to rehabilitation of higher-order motor disorders

Our behavioral results may provide important insights for devising interaction-based approaches for treating apraxia and possibly other higher-order motor disorders. More specifically, we show that apraxic motor deficits can be assessed by indexing the ability of patients to synchronize their actions with a virtual partner (i.e. our Instructed condition). This is radically different from how apraxia is tested in standard individual conditions, which typically evaluate the ability to perform actions under verbal command, under delayed imitation or after exposition to a tool. Crucially, we show that apraxics' impairment is reduced when the movement of the partner needs to be taken into account in order to select the individual action. This suggests that integrating one's own movement with that of a partner may engage additional neural resources, in line with evidence showing that joint actions do not activate the very same brain regions that are activated by action observation or execution alone (Kokal et al., 2009).
The Interactive condition used in the present study may possibly recreate the ecological conditions that are known to elicit the automatic/voluntary dissociation reported in apraxia studies (De Renzi et al., 1982; Trojano et al., 2007). Thus, the present data suggest that this interactional effect may be exploited for rehabilitative purposes. This seems very timely considering that current approaches to apraxia rehabilitation are based on strategies that either aim at restoring the impaired motor functions or compensate for them (i.e. restorative and compensatory strategies; Cantagallo et al., 2012) in acting-alone patients. Unfortunately, there is a general consensus that standard approaches are only partially effective (Worthington, 2016) and do not generalize, indicating that new approaches are needed (Buxbaum et al., 2008; Cantagallo et al., 2012). In their comprehensive review of rehabilitative approaches to apraxia, Buxbaum et al. (2008) list different procedures based on: (i) multiple cues, (ii) error reduction, (iii) a six-stage task hierarchy, (iv) conductive education, (v) strategy training, (vi) transitive/intransitive gesture training, (vii) 'rehabilitative training', and (viii) errorless completion + exploration training. Furthermore, Buxbaum et al. (2008) provide a list of cognitive domains that might be used for interventions (e.g. mechanical problem solving, sequence planning and organization, the ability to develop and/or retrieve optimal motor programs, knowledge of how to manipulate an object, and knowledge of optimal hand position when real-world objects provide minimal cues). Tellingly, the list seems to neglect the social accounts of motor control that are the basis of the present study and that might provide a useful approach for rehabilitation. It is worth noting that an interactive approach has been used in aphasic patients, who showed an increase in performance, possibly due to the mechanism of entrainment, when seeing another person producing speech while attempting to mimic the same mouth movements (Fridriksson et al., 2012, 2015). Importantly, the present pattern of results suggests that the beneficial effect of motor interactions goes beyond the possible role of on-line movement imitation, as the improvement was found during both imitative and complementary interactions. From a modeling point of view, the present findings suggest that motivational factors, as well as resources activated for the processing of others' movements, are intrinsic to social interactions and may improve interactive behaviors compared with individual action performance. New technologies, such as Virtual Reality, might be promising tools to implement scenarios where patients are engaged in interactions with virtual partners embodying different movement kinematics, which can be modulated according to the patient's needs. For example, exaggerating specific kinematic features (Sacheli et al., 2013; Candidi et al., 2015), slowing down the movements of the virtual partner, or even making the avatar responsive to the movement deficits of the patients might be efficient for people with different motor disabilities or in different learning stages.

Brain lesions dissociating the performance of interactive vs instructed coordination
The VLSM analyses on the entire sample of patients indicate that lesions to the left motor cortex, pars triangularis of the premotor cortex, insula and striatum (putamen and caudate) were predictive of poorer performance in the Instructed condition compared with the Interactive one.

Table 2. Regions associated with impaired performance in Instructed compared with Interactive conditions (i.e. Interaction-D). For each region, the MNI coordinates of the center of mass are provided along with the maximum Brunner-Munzel (BM) z statistic obtained in each cluster and the number (n) and percentage (%) of clustering voxels that survived the threshold of P < 0.01, false discovery rate corrected.

The present data suggest that these regions are needed for solving the Instructed task, while social interaction might be underpinned by larger brain systems. Premotor and motor regions are well known for their role in action selection and implementation as well as in matching observed and executed actions (Rizzolatti et al., 2014; Urgesi et al., 2013; di Pellegrino et al., 1992; Gallese et al., 1996; Archer et al., 2016), a process that is fundamental in our task. Crucially, our study shows how precentral and premotor lesions, which are stable predictors of apraxia (Pazzaglia et al., 2008a,b; Buxbaum et al., 2007; Goldenberg et al., 2007), are predictive of worse performance in Instructed as compared with Interactive coordination. Importantly, while both the Interactive and Instructed coordination conditions imply predicting the timing of the partner's movements, only the former requires integrating the spatial content of the partner's movement into one's own motor plan (e.g. only the Interactive condition requires patients to subordinate their behavior to that of their partner). Thus, premotor regions seem to be fundamental for performing actions in the Instructed task but not for performing the Interactive task, which might rely on other brain systems to scaffold performance. For example, these results are in line with our previous study showing that left parietal brain regions (the anterior intraparietal sulcus), and not frontal regions, might play a crucial role in interpersonal coordination during motor interactions (Sacheli et al., 2015b). Insular lesions were associated with worse performance in the Instructed as compared with the Interactive condition. It is worth noting that the anterior insula, together with prefrontal, dorsolateral prefrontal, dorsomedial superior frontal and inferior parietal lobules, is part of a 'fronto-parietal control system' (Spreng et al., 2009) which detects the salience of stimuli (Menon and Uddin, 2010). Thus, we propose that lesions of the insula impaired coordination in the Instructed condition since, in this condition, the behavior of the partner is less salient compared with the Interactive condition. This supports the idea that others' behavior may represent a form of social affordance that facilitates the performance of individual movements. That lesions of the basal ganglia and of a portion of the premotor cortex predict impaired performance in Instructed coordination is in keeping with the notion that higher-order motor cognition may be underpinned by combined cortico-subcortical circuits (Leiguarda, 2001; Bhatia and Marsden, 1994; Pramstaller and Marsden, 1998; De Renzi, 1986). While apraxic deficits may selectively regard the kinematic features of movement execution (i.e. trajectory, timing and speed; Faglioni and Basso, 1985; Denes et al., 1998), we did not find clear differences between the kinematic patterns of LA+ and LA- patients in our task.
This may suggest that the behavioral difference between the two groups in performing the Interactive and Instructed coordination conditions was not explained by differences in the implementation of kinematic features of the reach-to-grasp movements, but rather that the role of the basal ganglia in our experimental task may have to do with signaling relevant cues that bias the fronto-parietal network toward a specific action by inhibiting unnecessary or competing ones (Cisek, 2007; Rounis and Humphreys, 2015).

Conclusion

By showing that apraxic patients are better at performing actions in an interactive context compared with isolated conditions, our study supports the notion that the social nature of action representations might be crucial for facilitating motor functions in patients suffering from higher-order motor deficits. Furthermore, by finding an interaction benefit when apraxic patients performed both complementary and imitative conditions, the present results suggest that realistic interactions may provide benefits above those of interpersonal imitation. This interaction-based approach to motor dysfunctions may thus be exploited to rehabilitate patients suffering from a variety of higher-order motor impairments.
2018-04-03T00:26:36.084Z
2017-10-09T00:00:00.000
{ "year": 2017, "sha1": "8073784ed80fbc8fea7b9f1fb1c6e38ca2e7b1de", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/scan/article-pdf/12/11/1793/27104051/nsx114.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8073784ed80fbc8fea7b9f1fb1c6e38ca2e7b1de", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
220699123
pes2o/s2orc
v3-fos-license
Multiple Thrombotic Events in a 67-Year-Old Man 2 Weeks After Testing Positive for SARS-CoV-2: A Case Report

Patient: Male, 67-year-old
Final Diagnosis: Acute cardiac injury • COVID-19 • pulmonary embolism • stroke
Symptoms: Confusion • diarrhea • dysarthria • fever • myalgia • sore throat
Medication: —
Clinical Procedure: Mechanical ventilation
Specialty: Critical Care Medicine

Objective: Unusual clinical course

Background: Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is the viral pathogen responsible for coronavirus disease 2019 (COVID-19), a pandemic respiratory illness. While many patients experience mild to moderate symptoms, severely affected patients often progress to acute respiratory distress syndrome (ARDS). Specific to COVID-19, abnormal coagulability appears to be a principal instigator in the progression of disease severity and mortality. In this report we summarize a case of COVID-19 in which extreme thrombophilia led to the patient's demise.

Case Report: A 67-year-old man in New York presented to the hospital 14 days after testing positive for SARS-CoV-2 at an outpatient site. His initial presenting symptoms included sore throat, headache, fever, and diarrhea. He was brought in by his wife after developing sudden-onset confusion and dysarthria. The patient's clinical picture, which was unstable on presentation, further deteriorated to involve significant desaturations, generalized seizure activity, and cardiac arrest requiring resuscitation. Upon return to spontaneous circulation, the patient required intensive care unit admission, mechanical ventilation, and escalating vasopressor support. Comprehensive workup uncovered coagulopathy with multiple thrombotic events involving the brain and lungs as well as radiographic evidence of severe lung disease. In the face of an unfavorable clinical picture, the family opted for comfort care measures.

Conclusions: In this case report on a 67-year-old man with COVID-19, we present an account of extreme hypercoagulability that led to multiple thrombotic events, eventually resulting in the man's demise. Abnormal coagulation 14 days from positive testing raises the question of whether outpatients with COVID-19 should be screened for hypercoagulability and treated with prophylactic anticoagulation/antiplatelet agents.

Background

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the novel coronavirus pathogen responsible for the global pandemic coronavirus disease 2019 (COVID-19), has been gaining notoriety in the medical community for its high infectivity [1] and considerable mortality rate [2]. A hallmark of patient demise with COVID-19 is acute respiratory distress syndrome (ARDS), a life-threatening lung disease characterized by diffuse intrapulmonary inflammation necessitating intensive care and mechanical ventilation. The pathogenesis of ARDS-associated lung injury typically involves the clinical findings of non-cardiac pulmonary edema, diffuse inflammatory cell infiltration, and abnormal coagulation. Coagulopathies, specifically intravascular microthrombi and thrombus formation leading to thromboembolic events, are of particular concern for patients with COVID-19. Based on recent studies, coagulation parameters appear to correlate with disease severity [3,4]. Prognosis appears to be improved when high-risk patients, identified by D-dimer and the sepsis-induced coagulopathy (SIC) score, are appropriately anticoagulated [5].
The International Society on Thrombosis and Hemostasis guidelines recommend considering prophylactic low-molecular-weight heparin for all patients admitted to the hospital with COVID-19 [6], while the Anticoagulation Forum issued a clinical guidance document recommending pharmacologic prophylaxis for patients with COVID-19 when hospitalized [7]. To our knowledge, no guidance is available for risk assessment and management of hypercoagulability in the outpatient setting. We present a case of hypercoagulability involving a COVID-19-positive patient that resulted in late thromboembolic events eventually leading to his demise. Case Report In early April 2020, a 67-year-old man from upstate New York presented to the hospital after developing sudden onset confusion and dysarthria. Two weeks earlier he tested positive for SARS-CoV-2 by real-time reverse transcriptase-polymerase chain reaction testing from a nasopharyngeal swab, after endorsing symptoms of sore throat, headache, fever, body aches, and diarrhea. His initial symptoms had improved until the day of hospital presentation. The patient had a significant past medical history of hypertension and dyslipidemia. On presentation, he was symptomatically hypoxic with an oxygen saturation of 75% on room air. On application of supplemental oxygen, via an Oxymask™ set at 5 L/min, his oxygen saturation improved to 95%. Initial laboratory tests in the emergency department showed lymphocytopenia, thrombocytopenia, and borderline lactic acidosis (Table 1). Initial arterial blood gas was concerning for mixed respiratory and metabolic acidosis. Pertinent laboratory values are reported in Table 1. (Table 1 abbreviations: INR – international normalized ratio; NT-proBNP – N-terminal pro-B-type natriuretic peptide; PaO2/FiO2 – partial pressure of arterial oxygen/fraction of inspired oxygen; bold denotes abnormal lab values.) Based on the patient's prothrombin time-international normalized ratio, platelet count, and sequential organ failure assessment score, his SIC score was calculated to be 5. Chest x-ray revealed bilateral airspace opacities in the patient (Figure 1A). His clinical picture quickly declined, resulting in a generalized seizure that spontaneously resolved and cardiac arrest requiring resuscitation. Following return of spontaneous circulation, the patient was admitted to the intensive care unit where heparin anticoagulation, vasopressors, and mechanical ventilation were initiated. The patient's ventilator was set to airway-pressure-release-ventilation mode; however, on hospital day 2, due to poor ventilator synchrony, he was switched to pressure-control mode with a positive end-expiratory pressure of 5 cm H2O, FiO2 at 50%, respiratory rate of 8 breaths/min, and inspiratory pressure of 10 cm H2O. Radiographic studies revealed a focal region of wedge-shaped low attenuation in the left temporal region, concerning for acute infarct; diffuse subpleural pneumonitis with regions of ground-glass opacities, consolidation, and paraseptal emphysema (Figure 1B); and acute bilateral pulmonary emboli with evidence of right ventricular dysfunction (Figure 1D). By hospital day 5, it was apparent his prognosis was grim. With increasing vasopressor requirements and oxygen demands in the face of an unchanged chest radiograph (Figure 1C), his family opted for comfort care measures and the patient was terminally extubated.
Discussion Here we reported a case of COVID-19 that would be classified as mild in the outpatient setting, which was complicated by a late massive thromboembolic phenomenon resulting in eventual patient demise. A combination of chronic illness, immobilization, viral-associated endothelial injury [8], and hypercoagulability secondary to immune-mediated factors likely precipitated this patient's thromboembolic events. In the face of the global pandemic, coagulability and thromboembolic events associated with COVID-19 have been persistent observations [6]. Prevention of thrombotic complications should be of utmost concern for public health, but there are few recommendations at this time regarding best practice for assessing risk of thrombosis and thromboprophylaxis in the outpatient setting. With large-scale community spread of SARS-CoV-2 across the United States, government-mandated stay-at-home orders, and a high frequency of chronic disease in an aging population, it is important to consider the use of prophylactic anticoagulation/antiplatelet agents for primary prevention of thrombosis in high-risk individuals with COVID-19 diagnosed in the outpatient setting. Outpatient use of inpatient thrombotic event prediction scores, such as the Padua Prediction Score for risk of venous thromboembolism [9], should be considered, and outpatient blood work (i.e., D-dimer, prothrombin time, complete blood count) may be necessary to identify people at higher risk for thrombotic complications. To our knowledge, there is only one identifiable clinical trial on the early use of low-dose acetylsalicylic acid in patients with COVID-19 in the outpatient setting [10], and none regarding prophylactic anticoagulant use. While the International Society on Thrombosis and Hemostasis guidelines recommend considering prophylactic low-molecular-weight heparin for all patients admitted to the hospital with COVID-19, the recommendations for outpatients are less well defined. Our case report demonstrates that outpatients with mild COVID-19 can develop life-threatening complications related to thromboembolism. In summary, it is important to further investigate the role of prophylactic anticoagulation and antiplatelet agents in outpatients with COVID-19, and further studies are required to identify the at-risk population. Conclusions Our case report demonstrates that life-threatening thromboembolism can occur in outpatients with mild COVID-19. It is important to identify the high-risk patient population, and further studies are needed to investigate the role of prophylactic anticoagulant and antiplatelet agents for COVID-19-positive patients in the outpatient setting. Conflicts of interest None.
Is it melanoma-associated retinopathy or drug toxicity? Bilateral cystoid macular edema posing a diagnostic and therapeutic dilemma Purpose To report the clinical presentation, multimodal imaging, and management of a patient with metastatic melanoma who presented with cystoid macular edema (CME). Observations We report a case of a 71-year-old Caucasian male with metastatic melanoma who presented with bilateral cystoid macular edema after being on treatment with a programmed T cell death ligand 1 inhibitor, MPDL3280A, for 1 year. Multimodal imaging techniques, including color fundus photographs, autofluorescence, spectral domain optical coherence tomography (OCT), fluorescein angiography (Spectralis, Heidelberg, Germany), and spectral-domain OCT angiography (Zeiss, California, USA) were performed to evaluate the etiology of his CME and to monitor his response to treatment. Clinical examination and multimodal imaging revealed 1+ chronic vitreous cells, an epiretinal membrane, and mild macular edema in both eyes. Fundus autofluorescence showed paravenous hypoautofluorescence in the right eye and scattered hypoautofluorescent spots in the left eye. Optical coherence tomography angiography (OCTA) revealed mild dropout of superficial vessels in the peri-foveal region bilaterally. These findings were concerning for melanoma-associated retinopathy, drug-related uveitis, or activation of a previous chronic autoimmune process. The patient was started on prednisone 30 mg oral daily and ketorolac tromethamine 0.5% 1 drop four times daily. He was then treated with bilateral sustained-release dexamethasone intravitreal implants (Ozurdex). He had complete resolution of CME, and was tapered off of oral steroids within 6 weeks. Conclusions and Importance Melanoma-associated retinopathy can be accompanied by CME, which presents a diagnostic and therapeutic dilemma in cases where a new drug has been recently initiated. By treating the condition locally, the ophthalmologist may be able to taper systemic immunosuppression more quickly. Introduction The use of monoclonal antibodies for the treatment of melanoma has resulted in higher survival rates but also substantial side effects and visual disturbances. 1,2 Here, we report a case of cystoid macular edema (CME) in a patient after initiation of treatment with the biological modifier MPDL3280A for which it was unclear if the CME was due to melanoma-associated retinopathy (MAR) or to the medication itself, resulting in a diagnostic and therapeutic dilemma. Case report A 71-year-old Caucasian male, with a history of metastatic melanoma that was in clinical remission with treatment under a clinical trial with a programmed T cell death ligand 1 (PD-L1) inhibitor MPDL3280A and vemurafenib, and with ocular history significant only for bilateral cataracts recently having undergone surgery, was referred with the complaint of increased blurry vision bilaterally. He had been on these medications for 1 year with gradual bilateral vision changes and floaters. Thirty days after cataract extraction with intraocular lens insertion in the right eye, the patient was found to have anterior chamber cells and flare in both eyes, for which he was treated with oral prednisone 40 mg daily, tapered over 1 week to 30 mg daily, as well as topical prednisolone acetate 1% four times daily in the right eye and twice daily in the left eye, and cycloplegic drops.
He had been off his anti-neoplastic agents for 6 days prior to referral due to the onset of these symptoms and because his clinical trial prevented use of antineoplastic agents while on prednisone. On presentation, best-corrected visual acuity (BCVA) was 20/125 in the right eye and 20/50 in the left eye. Intraocular pressures, visual fields, and motility were within normal limits bilaterally. Slit lamp exam of the right eye demonstrated a well-centered posterior chamber intraocular lens implant in the right eye, while the left eye had rare pigmented cells in the anterior chamber with inferior posterior synechiae and a mild nuclear sclerotic cataract. Fundus examination revealed mild epiretinal membranes (ERM) and peripheral pigmentary changes bilaterally (Fig. 1A, E). Optical coherence tomography (OCT) confirmed the presence of the ERMs with mild cystic intraretinal fluid, an intact ellipsoid zone (EZ) and retinal pigment epithelium (RPE) in both eyes, distortion of the foveal contour in the left eye, and relative preservation of the foveal contour in the right eye (Fig. 2A-D). Autofluorescence showed paravenous hypoautofluorescence in the right eye (Fig. 1B and C) and scattered hypoautofluorescence in the left eye (Fig. 1E and F). Fluorescein angiography showed bilateral window defects without vasculitis and weak signal likely secondary to media opacity from the cataract in the left eye. Optical coherence tomography angiography (OCTA) revealed mild drop out of superficial retinal vessels perifoveally (Fig. 3A-F). Based on the clinical exam and multimodal imaging, the patient's differential diagnoses included panuveitis with bilateral CME concerning for melanoma-associated retinopathy, drug-related uveitis, pseudophakic macular edema, or activation of a previous chronic autoimmune process. As the panuveitis improved, prednisone was tapered to 25 mg PO daily for 1 week and then 20 mg PO daily. He was also started on topical ketorolac tromethamine 0.5% 1 drop four times daily in the right eye. Fourteen days later BCVA improved to 20/30 in the right eye and 20/70 in the left eye, without change to the bilateral CME. To treat the CME and allow for local immunosuppression and to reduce time spent off of his chemotherapy regimen, an intravitreal dexamethasone implant (Ozurdex, Allergan Inc., Irvine, CA) (IVO) 0.7 mg was injected into the right eye and one week later into the left eye. One week after IVO, the BCVA had remained stable at 20/30 in the right eye and improved to 20/50 in the left eye with improvement in CME bilaterally. His prednisone was lowered to 5 mg a day and tapered off within 5 weeks, and the ketorolac was also tapered and stopped over a period of two months. He underwent additional IVO ten weeks later bilaterally and again seven weeks after that in the right eye with near resolution of his CME within two weeks, and the decision was made with the patient's oncologist to restart his anti-neoplastic medications (Fig. 4A-D). After restarting the medications, he did not have a recurrence of CME, and blood testing revealed positive anti-retinal antibodies against 30-kDa (carbonic anhydrase II), 33-kDa (at very high titer), and 35-kDa (GAPDH) proteins. Prednisolone acetate 1% drops twice daily bilaterally were continued as a maintenance treatment. He then underwent cataract extraction with intraocular lens implantation in his left eye, and had a BCVA of 20/40 in the right eye and 20/30 in the left eye on last follow-up.
Unfortunately, during this period of time, there was noted to be a relapse of his melanoma, for which he was treated with dabrafenib and trametinib. Systemic immunosuppression thus remained contraindicated, necessitating continued local intravitreal injections to control his uveitic process.

Discussion CME is defined as macular thickening as a result of a dysfunctional blood-retinal barrier that allows for fluid accumulation within the retina. 3 CME is most commonly associated with cataract surgery, diabetes, retinal vein occlusion, and uveitis, but it can also be found as a reaction to biologic modifiers such as fingolimod. 4

[Figure 1E, F caption fragment: Optos ultra-widefield autofluorescence of the left eye showing peripapillary hypoautofluorescence and multiple round, well-circumscribed foci of hypoautofluorescence tracking along and between the vessels, predominantly paravenous, less prominent than in the right eye.]

Of note, paravenous hypoautofluorescence, which was seen in our patient, has been found to be correlated with cancer associated retinopathy (CAR), MAR (often lumped together as paraneoplastic autoimmune retinopathy (pAIR)), and non-paraneoplastic autoimmune retinopathy (npAIR). 5 In patients with multiple risk factors for CME, such as the patient described in this report, it can be difficult to determine the cause of CME. In these circumstances, a thorough history and exam are needed to elucidate its etiology. In stable patients, such as our patient, stopping possible instigators of the CME is a reasonable approach, and subsequent re-initiation without worsening of the CME can intimate at other secondary causes of the CME. Certain laboratory findings, such as the presence of anti-retinal antibodies, are also highly suggestive of CAR or MAR causing the CME. Particular imaging findings, though not highly specific, may also be more suggestive of MAR versus a drug side effect. For example, findings of paravenous hypoautofluorescence may point more to npAIR or pAIR. 6 In the case of our patient, CME did not recur after restarting his anti-neoplastic medications. Given his response to treatment and resolution of CME without worsening on reinitiation of treatment, the presence of anti-retinal antibodies, and the imaging findings in our patient, it is clear that his CME was due to MAR, rather than a drug effect. The treatment of CME is generally determined by the underlying etiology. Historically, systemic steroids were used for many causes of CME, but they have unwanted side effects and must be used with caution in certain patients, including patients with a history of malignancy, in whom non-targeted systemic anti-inflammatory medications may be relatively contraindicated. 7,8 This provides a treatment dilemma, since the benefit of vision improvement and preventing lasting ocular damage must be weighed against the risks of relapse of cancer. The development of newer intraocular steroids, such as the intravitreal sustained-release dexamethasone implant, has helped address this dilemma, and local treatment has been found to cause relief of CME in some cases, particularly in CME caused by diabetes. 9 To the best of our knowledge, there are very few cases suggesting that local treatment can successfully treat CME from CAR or MAR. One case demonstrated successful local treatment of bilateral CME in a patient with CAR from squamous cell carcinoma (SCC) of the lungs. 10
After worsening visual acuity and failure of response to systemic prednisone, mycophenolate mofetil, and four doses of IVIG, intravitreal triamcinolone was attempted in both eyes. BCVA and OCT findings subsequently improved. Moyer et al. also reported a similar case study in a patient with CME in the setting of CAR from SCC who was treated with subtenon's triamcinolone injection and diclofenac with improvements in visual acuity. 11 These studies, along with our patient who demonstrated significant improvement and stabilization of CME with local therapy, suggest that local therapy may have some advantages over systemic therapy, particularly for patients with cancer.

Conclusions In the case of our patient, CME did not recur after restarting his antineoplastic medications. Given his response to treatment and resolution of CME without worsening on re-initiation of treatment, it is clear that his CME was due to MAR, rather than a drug effect. MAR and biologic modifiers to treat metastatic melanoma can both lead to CME, which poses a diagnostic and therapeutic dilemma. By treating the condition locally, the ophthalmologist may be able to quickly taper or avoid systemic immunosuppression altogether. In concert with the oncologist, the ophthalmologist can monitor the patient's response as the biologic modifiers are reinitiated, and thus better understand the etiology of CME.

[Figure 3 caption: Spectral domain optical coherence tomography angiography (SD-OCTA) of the right (A-C) and left (D-F) eyes through the whole retinal thickness and through the superficial and deeper retinal layers, showing arteriolar and capillary dropout, mostly perifoveal and most prominent temporal to the fovea in the left eye.]

[Figure 4 caption: Infrared photos and spectral domain optical coherence tomography (SD-OCT) through the maculae of the right (A, B) and left (C, D) eyes after treatment with intravitreal dexamethasone implant.]

Patient consent No personal information or identifiable images of the patient were used in this report. Consent to publish this report was not obtained.
Hypoglycemic Effect of Nelumbo Nucifera Seed Extract on GLUT-4 mRNA and GLUT-4 Protein in Streptozotocin-Induced Diabetic Rats ABSTRACT Background: To investigate the effect of N. nucifera hydroalcoholic seed extract on fasting blood glucose (FBG) levels, glucose transporter (GLUT)-4 mRNA, and GLUT-4 protein in the adipose tissue of streptozotocin (STZ)-induced diabetic rats. Materials and Methods: Male Sprague Dawley (SD) rats were first fed with a high-fat diet (HFD) for three weeks, and then diabetes was induced by intraperitoneal injection of STZ at a dose of 35 mg/kg bw. Rats were divided into four groups: group 1: normal rats (NC), group 2: STZ-induced diabetic rats (DC), group 3: diabetic rats with N. nucifera hydroalcoholic seed extract at a dose of 400 mg/kg bw (NN), and group 4: diabetic rats with metformin at a dose of 100 mg/kg bw (MET), for 28 days. Results: The FBG level was significantly lower in the NN group than in the DC group (P < 0.05). Also, the NN group showed increased GLUT-4 mRNA expression and GLUT-4 protein in the adipose tissue when compared to the diabetic group. Conclusion: We conclude that the observed hypoglycemic effect of N. nucifera seed extract in STZ-induced diabetic rats could be due to insulinomimetic activity. Introduction Diabetes mellitus is a heterogeneous group of disorders that cause an increase in sugar levels in the blood (hyperglycemia), characterized by three "polys," that is, polyuria, polydipsia, and polyphagia. [1] Glucose transporter (GLUT)-4, an insulin-dependent glucose transporter, is expressed in adipose tissue, skeletal muscle, and cardiac muscle cells. Genetic variation in the GLUT-4 gene affects its expression and results in the synthesis of an abnormal protein that decreases peripheral glucose uptake by muscle and fat. [2] The defect in GLUT-4 translocation and targeting leads to the accumulation of GLUT-4 in the membrane compartment from which insulin is unable to recruit GLUT-4 to the cell surface. [3] Nelumbo nucifera (NN) is an important source of herbal medicine [4] and is also termed the "National Flower of India." [5] N. nucifera possesses hypoglycemic effects [6] by upregulating the GLUT-4 receptor or through insulin-secreting activity. [7] Animals For experimental work, male Sprague Dawley (SD) rats weighing between 100 and 150 gm were purchased from the Division of Laboratory Animals, Central Drug Research Institute (CDRI), Lucknow, UP. Rats were housed in polypropylene cages in a well-ventilated room at a temperature of 23 ± 2°C with a 12-hour light/dark cycle in the Central Animal House of BRD Medical College, Gorakhpur (CPCSEA Registration No. 603/02/a/CPCSEA). Rats were first fed with a high-fat diet (Vetcare, Bangalore) for three weeks. A type 2 diabetes mellitus (DM) animal model was prepared by intraperitoneal injection of freshly reconstituted STZ (Sigma-Aldrich®, SO130, Germany) at a dose of 35 mg/kg in overnight-fasted rats, which developed obesity, hyperinsulinemia, and insulin resistance. [8] Rats were allowed to drink 10% glucose solution to reduce streptozotocin (STZ)-induced hypoglycemia and were monitored for the next 48 hours for any complications. Rats having fasting blood glucose (FBG) ≥250 mg/dl were considered diabetic and were included in the study. For the normal control group, rats were kept on a normal pellet diet and the rest of the rats were kept on a high-fat diet. Rats were randomly divided into four groups with five rats in each group. Metformin and extract of N.
nucifera were given once daily in the morning for 28 days by oral gavage.
Group 1: normal control (NC) - normal rats that served as normal control and had been given distilled water with the normal pellet diet (NPD).
Group 2: diabetic control (DC) - STZ-induced diabetic rats that served as diabetic control and had been given distilled water with a high-fat diet (HFD).
Group 3: N. nucifera seed extract treated (NN) - STZ-induced diabetic rats that were treated with hydroalcoholic extract of N. nucifera seeds at a dose of 400 mg/kg body weight.
Group 4: metformin treated (MET) - STZ-induced diabetic rats that were treated with metformin (Franco-Indian Pharmaceutical) at a dose of 100 mg/kg body weight.
Rats were sacrificed on the 28th day, and expression of GLUT-4 mRNA by reverse transcriptase polymerase chain reaction (RT-PCR) and expression of GLUT-4 protein by Western blotting in the adipose tissue were studied and compared with the diabetic control, as shown in Figure 1. Ribonucleic acid (RNA) was isolated from the homogenized tissue using the TRIzol reagent (Sigma-Aldrich) as per the manufacturer's instructions. Samples underwent reverse transcription using the One-Step RT-PCR Kit (Qiagen, Cat No. 210212). β-Actin was used as an internal control. Western blotting for GLUT-4 protein was carried out as shown in Figure 2. For the analysis of data, the mean and standard deviation for the results were calculated. Results The effect on FBG (mg/dl) levels was recorded at day 0, day 7, day 14, day 21, and day 28. These effects were compared with DC and the standard drug MET. Table 1 shows a continuous rise in FBG levels of DC rats on successive days. NN rats showed a gradual and progressive fall of FBG levels as compared to DC rats from day 7 onwards. At the end of the experiment, NN rats showed a fall of −40.95% (P < 0.05) in FBG levels when compared to DC rats. The administration of N. nucifera seed extract also showed a significant reduction in FBG levels by 33.79% (P < 0.01) at day 28 when compared to their initial levels at day 0. The administration of the standard drug metformin significantly decreased FBG levels within 28 days. Figure 2 shows that GLUT-4 protein expression of DC rats was less than that of NC rats. NN rats showed a marked increase in the amount of GLUT-4 protein when compared to DC rats, as indicated by the Western blot. MET rats showed increased GLUT-4 protein expression when compared to DC rats. Expressions of GLUT-4 protein were normalized with β-actin protein as an internal control. Discussion In this study, a high-fat diet (24%) was administered for 3 weeks followed by STZ intraperitoneal injection at a dose of 35 mg/kg, which has been considered one of the alternative models for pharmacological screening of drugs for type 2 DM, as described by Srinivasan et al. [8] STZ partially destroys pancreatic β-cells and a high-fat diet causes insulin resistance, thus resembling the clinical manifestation of type 2 DM as seen in humans. [9] NN rats showed a 40.95% reduction in FBG concentrations when compared to DC rats at day 28. A significant reduction in FBG levels by 33.79% (P < 0.01) was also noted when compared to their initial levels. The FBG-lowering effect of N. nucifera extract could be due to enhanced peripheral glucose utilization. MET rats showed a 58.76% reduction in FBG amounts when compared to DC rats at day 28. Nallini Venkata et al. [10] reported a 40.95% (P < 0.05) decrease in FBG levels on the administration of methanolic extract of N. nucifera when compared to the diabetic group. Potential mechanisms of the hypoglycemic effect include improved blood glucose transfer to peripheral tissue and potentiated pancreatic release of insulin from islet cells.
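For readers who want to reproduce the percentage figures quoted above, the arithmetic is a simple relative difference. The sketch below uses hypothetical FBG values chosen only so that the output matches the reported 40.95%; they are not the study's raw data.

```python
def percent_reduction(reference: float, treated: float) -> float:
    """Percent fall of `treated` relative to `reference` (e.g., FBG in mg/dl)."""
    return (reference - treated) / reference * 100

# Hypothetical day-28 FBG values, chosen only to illustrate the arithmetic:
dc_day28 = 420.0   # diabetic control (DC)
nn_day28 = 248.0   # N. nucifera group (NN)
print(f"{percent_reduction(dc_day28, nn_day28):.2f}% reduction vs. DC")  # ~40.95%
```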
Figure 1 shows that RT-PCR in the adipose tissue revealed that DC rats showed 36% mRNA expression of the GLUT-4 gene when compared to NC rats, which showed 100% GLUT-4 gene expression. NN rats showed 94% mRNA expression of the GLUT-4 gene when compared to NC rats. MET rats showed 24% mRNA expression of the GLUT-4 gene when compared to NC rats. Jeong S Y et al. [11] reported that after RT-PCR, the expression of the GLUT-4 gene was slightly improved in the presence of N. nucifera seed extract. These results suggested that N. nucifera alters the mechanism of adipocyte differentiation. Conclusion Our findings suggest that N. nucifera seed extract is a good choice for controlling the blood glucose level in diabetic rats and can be prepared and utilized as an alternative for the cure of diabetic subjects. Financial support and sponsorship Nil.

[Figure 1 caption: Effect on GLUT-4 mRNA expression by RT-PCR in the adipose tissue of different groups. NC = normal control, DC = diabetic control, NN = Nelumbo nucifera, MET = metformin.]

[Figure 2 caption: Western blot for the expression of GLUT-4 protein in the adipose tissue of different groups.]
Order isomorphisms between bases of topologies

In this paper we will study the representations of isomorphisms between bases of topological spaces. It turns out that the perfect setting for this study is that of regular open subsets of complete metric spaces, but we have achieved some results about arbitrary bases in complete metric spaces and also about regular open subsets of Hausdorff regular topological spaces.

Introduction

Back in the 1930s, Stefan Banach and Marshall Stone proved one of the most celebrated results in Functional Analysis. The usual statement the reader can find of the Banach-Stone Theorem is, give or take, the following:

Theorem. Let X and Y be compact Hausdorff spaces and let T : C(X) → C(Y) be a surjective linear isometry. Then there exist a homeomorphism τ : X → Y and g ∈ C(Y) such that |g(y)| = 1 for all y ∈ Y and (Tf)(y) = g(y)f(τ(y)) for all y ∈ Y, f ∈ C(X).

The result is, however, much deeper. They were able to determine X by means of the structure of C(X), in the sense that X turns out to be homeomorphic to the set of extreme points of the unit sphere of (C(X))* (after quotienting by the sign). Since then, similar results began to appear, such as the Gel'fand-Kolmogorov Theorem ([13]) or the subsequent works by Milgram, Kaplansky or Shirota ([17, 18, 21, 23]). In spite of this rapid development, after Shirota's 1952 work - which we will discuss later - a standstill lasted until the last few years of the XXth century. Then the topic forked in two different ways. On the one hand, some mathematicians began to suspect that the proof of [23, Theorem 6] did not work, so they began to study lattice isomorphisms between spaces of uniformly continuous functions (see, e.g., [10]). On the other hand, a significant number of papers began to appear dealing with the representation of isomorphisms between other spaces of functions or, in general, between subsets of C(X) and C(Y) (see [2, 12, 11, 14, 16, 22]). Anyhow, the papers where we find some of the most accurate results about isomorphisms of spaces of uniformly continuous functions ([5, 6]), Lipschitz functions ([4]) and smooth functions ([3]) have something in common: the result labelled in the present paper as Lemma 2.1; the interested reader should take a close look at [20], where the authors were able to unify all these results and find new ones. This lemma has been key in these works, and has recently led to similar results (see [7, 9]). In the present paper we study Lemma 2.1, generalising it in two ways and providing a more accurate description of the isomorphisms between lattices of regular open sets in complete metric spaces. In the first part of the second section, we shall restrict ourselves to the study of complete metric spaces and order preserving bijections between arbitrary bases of their topologies. Namely, we will show that given a couple of complete metric spaces, say X and Y, every order preserving bijection between bases of their topologies induces a homeomorphism between dense G_δ subspaces X_0 ⊂ X and Y_0 ⊂ Y, subspaces that can be endowed with (equivalent) metrics that turn them into complete spaces. Later, we restrict ourselves to the bases of regular open sets on the wider class of Hausdorff, regular topological spaces and show that whenever X_0 ⊂ X is dense, the lattices R(X) and R(X_0) of regular open subsets are naturally isomorphic, and we analyse some consequences of this.
Joining both parts we get an explicit representation of every isomorphism between lattices of regular open sets in complete metric spaces, which may be considered the main result of this paper.

1.1. Apart from this Introduction, the present paper has Section 2, where we prove the main results of the paper, and Section 3, which contains some remarks and applications of the main results.

1.2. In this paper, X and Y will always be topological spaces. We will denote the interior of A ⊂ X as int_X(A), unless the space X is clear from the context, in which case we will just write int(A). In the same way, cl_X(A), or simply cl(A), will denote the closure of A in X. We will denote by R(X) the lattice of regular open subsets of X, and B_X will be a basis of the topology of X; please recall that an open subset U of some topological space X is regular if and only if U = int(cl(U)). We say that T : B_X → B_Y is an isomorphism when it is a bijection that preserves inclusion, i.e., when T(U) ⊂ T(V) is equivalent to U ⊂ V.

The main result

In this Section we will prove our main result, Theorem 2.14. Actually, it is just a consequence of Theorem 2.8 and Proposition 2.13, but as both results are more general than Theorem 2.14 we have decided to separate them. We have split the proof into several intermediate minor results.

Lemma 2.1. Let (X, d_X) and (Y, d_Y) be complete metric spaces or locally compact metric spaces and B_X, B_Y bases of their topologies. Suppose there is an isomorphism T : B_X → B_Y. Then, there exist dense subspaces X_0 ⊂ X and Y_0 ⊂ Y and a homeomorphism τ : X_0 → Y_0 such that, for every x ∈ X_0 and every U ∈ B_X, one has x ∈ U if and only if τ(x) ∈ T(U).

Proof. The proof is the same as in [5, Lemma 2].

Remark 2.2. In the conditions of Lemma 2.1, we will denote, for x ∈ X and y ∈ Y,

R_X(x) = ⋂{ cl_Y(T(U)) : U ∈ B_X, x ∈ U },    R_Y(y) = ⋂{ cl_X(T⁻¹(V)) : V ∈ B_Y, y ∈ V }.

What the proof of [5, Lemma 2] shows is that the subset X_0 is dense in X, where X_0 consists of the points x ∈ X for which there exists y ∈ Y such that R_X(x) = {y} and R_Y(y) = {x}. Once we have that X_0 is dense, it is clear that the map sending each x ∈ X_0 to the only point in R_X(x) is a homeomorphism.

The following Theorem is just a translation of the Théorème fondamental in [19].

Theorem 2.3. If there exists a bicontinuous, univocal and reciprocal correspondence between two given sets (inside an m-dimensional space), it is possible to determine another correspondence with the same nature between the points of two G_δ sets containing the given sets, the second correspondence agreeing with the first in the points of the two given sets.

A more general statement of Lavrentieff's Theorem can be found in [25, Theorem 24.9]:

Theorem 2.4 (Lavrentieff). If X and Y are complete metric spaces and h is a homeomorphism of A ⊂ X onto B ⊂ Y, then h can be extended to a homeomorphism h* of A* onto B*, where A* and B* are G_δ-sets in X and Y, respectively, with A ⊂ A* ⊂ cl(A) and B ⊂ B* ⊂ cl(B).

As for the following Theorem, the author has been unable to find Alexandroff's work [1], but Hausdorff references the result in [15] as follows:

Theorem 2.5 ([1, 15]). Every G_δ subset in a complete space is homeomorphic to a complete space.

Combining Theorems 2.4 and 2.5 with Lemma 2.1 we obtain:

Proposition 2.6. Let X and Y be complete metric spaces and T : B_X → B_Y an inclusion preserving bijection. Then, there exist a complete metric space Z and dense G_δ subspaces X_1 ⊂ X, Y_1 ⊂ Y such that Z, X_1 and Y_1 are mutually homeomorphic.

Of course, if Z is as in Proposition 2.6 then every dense G_δ subset Z′ ⊂ Z fulfils the same, so it is clear that there is no minimal Z whatsoever. In spite of this, it is very easy to determine some maximal Z:

Theorem 2.7.
The greatest possible space Z in the preceding Proposition is (homeomorphic to) (X_0, d_Z), where X_0 is the subset given in Lemma 2.1 and

d_Z(x, x′) = d_X(x, x′) + d_Y(τ(x), τ(x′)) for all x, x′ ∈ X_0.

A more explicit, though less clear, way to state Theorem 2.7 is the following:

Theorem 2.8. Let X and Y be complete metric spaces, T : B_X → B_Y an isomorphism and X_0, d_Z as above. Then (X_0, d_Z) is a complete metric space, and every metric space Z′ that embeds as a dense subspace in both X and Y, in such a way that the embeddings respect the isomorphism T, embeds in X_0.

For the first part, take a d_Z-Cauchy sequence (x_n) in X_0 and let y_n = τ(x_n) for every n. It is clear that (x_n) and (y_n) are d_X-Cauchy and d_Y-Cauchy, respectively, so let x = lim(x_n) ∈ X, y = lim(y_n) ∈ Y; these limits exist because X and Y are complete. It is clear that any sequence (x_n) ⊂ X_0 converges to x if and only if y = lim(τ(x_n)). This readily implies that R_X(x) = {y}, so x ∈ X_0 and this means (X_0, d_Z) is complete.

Now we must see that every metric space Z′ that embeds in both X and Y is embeddable in X_0, whenever the embeddings respect the isomorphism between the bases. For this, observe that X_0 is endowed with the restriction of the topology of X and with the property that x ∈ U if and only if y ∈ T(U). By the very definition of X_0 and Y_0 this means that x ∈ X_0, y ∈ Y_0 and τ(x) = y.

Now, we approach Proposition 2.13, the main result about regular topological spaces. For this, the following elementary results will come in handy.

Lemma 2.9. Let Y be a topological space and W ⊂ Y an open subset such that int_Y(cl_Y(W)) ⊂ W. Then W ∈ R(Y).

Proof. It is obvious.

Lemma 2.10. Let X be a topological space, Y ⊂ X a dense subset and U ⊂ X an open subset. Then cl_X(U) = cl_X(U ∩ Y).

Proof. Let x ∈ cl_X(U). This is equivalent to the fact that every open neighbourhood V of x has nonempty intersection with U. So, V ∩ U is a nonempty open subset of X and the density of Y implies that V ∩ U ∩ Y is also nonempty, so x ∈ cl_X(U ∩ Y) and we have cl_X(U) ⊂ cl_X(U ∩ Y). The other inclusion is trivial.

Lemma 2.11. Let X be a topological space and U, V ∈ R(X) such that U ⊊ V. Then, there is ∅ ≠ W ∈ R(X) such that W ∩ U = ∅ and W ⊂ V.

Proof. Take W = V \ cl(U); it is regular because V and X \ cl(U) are regular and V \ cl(U) = V ∩ (X \ cl(U)). This set is nonempty because V ⊂ cl(U), along with the monotonicity of the interior operator, would imply V ⊂ int(cl(U)) = U.

Remark 2.12. If in Lemma 2.11 X is regular and Hausdorff, then V can be taken as any open subset that contains U strictly.

Proposition 2.13. Let X be a topological space and Y ⊂ X a dense subset. Then the maps T : R(X) → R(Y) and S : R(Y) → R(X) given by T(U) = U ∩ Y and S(V) = int_X(cl_X(V)) are mutually inverse lattice isomorphisms.

Proof. We need to show that T and S are mutually inverse. Let U ∈ R(X); the first thing we need to show is that T is well-defined, i.e., that U ∩ Y ∈ R(Y). Let V ⊂ X be an open subset with V ∩ Y = int_Y(cl_Y(U ∩ Y)). Then, as the closure in X preserves inclusions, we have

cl_X(V) = cl_X(V ∩ Y) ⊂ cl_X(U ∩ Y) = cl_X(U),

where the first equality holds because of Lemma 2.10. Taking interiors in X also preserves inclusions, so we obtain

V ⊂ int_X(cl_X(V)) ⊂ int_X(cl_X(U)) = U,

which readily implies that V ∩ Y ⊂ U ∩ Y, and we obtain U ∩ Y ∈ R(Y) from Lemma 2.9. It is clear that S(V) ∈ R(X) for every V ∈ R(Y), so both maps are well-defined. Furthermore, Lemma 2.10 implies that, for any regular U ⊂ X,

S(T(U)) = int_X(cl_X(U ∩ Y)) = int_X(cl_X(U)) = U.

As for the composition T ∘ S, suppose that T(S(V)) ≠ V for some V ∈ R(Y). One always has V ⊂ T(S(V)) (*), so, by Lemma 2.11, there is ∅ ≠ H ∈ R(Y) such that H ∩ V = ∅ and H ⊂ T(S(V)); take W ⊂ X open with W ∩ Y = V and an open G ⊂ X such that H = G ∩ Y, and we will see that this is absurd. Indeed, the inclusion marked with (*) implies that we may substitute G by G ∩ int_X(cl_X(W)), so H = G ∩ Y holds for some open G ⊂ int_X(cl_X(W)). As Y is dense and G and W are open, H ∩ V = ∅ implies that G ∩ W = ∅. Of course, this implies G ∩ int_X(cl_X(W)) = ∅, which means G = ∅, and we are done.

Now we are in conditions to state our main result:

Theorem 2.14. Let X, Y and Z be complete metric spaces, φ_X : Z ↪ X and φ_Y : Z ↪ Y be continuous, dense embeddings and X_0 = φ_X(Z). Then, T : R(X) → R(Y) given by the composition of the isomorphisms R(X) ≅ R(X_0) ≅ R(φ_Y(Z)) ≅ R(Y) provided by Proposition 2.13 is an isomorphism between the lattices of open regular subsets of X and Y, and every isomorphism arises this way.

The "every isomorphism arises this way" part is due to Theorem 2.8, while the "the composition is an isomorphism" part is a consequence of Proposition 2.13.
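To make the composition in Theorem 2.14 easier to follow, the display below spells out the chain of isomorphisms. This is our rendering under the notation Y_0 = φ_Y(Z) (not used in the source), with the middle map induced by the homeomorphism φ_Y ∘ φ_X⁻¹ : X_0 → Y_0; read it as a sketch consistent with the statement, not a quotation.

```latex
% Chain of lattice isomorphisms behind Theorem 2.14; Y_0 := \varphi_Y(Z).
% The middle map is induced by \varphi_Y \circ \varphi_X^{-1} : X_0 \to Y_0.
R(X) \xrightarrow{\;U \,\mapsto\, U \cap X_0\;} R(X_0)
     \xrightarrow{\;\cong\;} R(Y_0)
     \xrightarrow{\;V \,\mapsto\, \operatorname{int}_Y \operatorname{cl}_Y(V)\;} R(Y),
\qquad Y_0 = \varphi_Y(Z).
```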
Applications and remarks

In this Section, we are going to show how Proposition 2.13 gives, in an easy way, some properties of βN and conclude with a couple of examples that show that the hypotheses imposed in the main results are necessary. But first, we need to deal with an error in some outstanding work. In [4] F. Cabello and the author of the present paper showed that some results in [23] were not properly proved. Later, in [5] the same authors proved that, even when the proof of [23, Theorem 6] was incorrect, the result was true. Now, we are going to explain what the error was. The following Definitions and Theorems can be found in [23]:

Definition ([23, Definition 2]). A distributive lattice with smallest element 0 satisfying Wallman's disjunction property is an R-lattice if there exists a binary relation ≫ in L which satisfies:
• If h ≥ f and f ≫ g, then h ≫ g.
• For every f ≠ 0 there exist g_1 and g_2 ≠ 0 such that g_1 ≫ f ≫ g_2.
• If g_1 ≫ f ≫ g_2, then there exists h such that h ∨ f = g_1 and h ∧ g_2 = 0.

Immediately after Definition 2 we find this: the open regular set in X associated to f ∈ L is denoted by U(f). With this notation, the next statement is [23, Theorem 2]. Our Proposition 2.13 contradicts the uniqueness of X in the statement of Theorem 2, and we may actually exhibit a lattice isomorphism between R(X) and R(Y) for different compact metric spaces X and Y. Namely, we just need to take the simplest compactifications of R and the composition of the lattice isomorphisms predicted by Proposition 2.13:

T : R(X) → R(Y), T(U) = int_Y(cl_Y(U ∩ R)),

is a lattice isomorphism whose inverse is given by

S(V) = int_X(cl_X(V ∩ R)).

It seems that the problem here is that the definition of R-lattice, Definition 2, does not include the relation ≫, but in Theorem 2 and its consequences the author considers ≫ as a unique, fixed, relation given by (L, ≤). It is clear that the above spaces generate, say, different ≫_X and ≫_Y in the isomorphic lattices R(X) and R(Y). This leads to the error already noted in [4], Section 5. Actually, with the definition of R-lattice given in [23], it seems that the original purpose of the definition is lost. Indeed, the relation ≫ may be taken as ≥ in quite a few lattices. This leads to a topology where every regular open set is clopen; in Section 3.1 we will see an example of a far from trivial topological space where this is true. Given a lattice (L, ≥), the relation between each possible ≫ and the unique locally compact topological space given by Theorem 2 probably deserves a closer look. Anyway, if we include ≫ in the definition, then [23, Theorem 2] is true. So let us put everything in order.

(1) For every a ≠ b ∈ L, there exists h ∈ L such that either a ∧ h = 0 and b ∧ h ≠ 0 or the other way round (a form of Wallman's disjunction property).
(5) For every f ≠ 0 there exist g_1 and g_2 ≠ 0 such that g_1 ≫ f ≫ g_2.

3.1. The Stone-Čech compactification of N. We will analyse the isomorphism given in Proposition 2.13 when Y = N and X = βN, the Stone-Čech compactification of N. This is not going to lead to new results, but it seems to be interesting in spite of this. These are very different spaces, so it may be surprising that they share the same lattice of regular open subsets.
In any case, as N is discrete, every V ⊂ N is regular and this, along with Proposition 2.13, implies that R(βN) is isomorphic to the power set P(N): the isomorphism sends U ∈ R(βN) to U ∩ N, and its inverse sends V ⊂ N to int_βN(cl_βN(V)). As our final comment in this just-for-fun Remark, we have that βN is the only Hausdorff compactification of N that fulfils: ♠ If J, I ⊂ N are disjoint, then their closures in the compactification are disjoint, too, although this is just a particular case of a result by Čech, see [24, p. 25-26].

3.2. The hypotheses are minimal. In some sense, Theorem 2.14 is optimal. Here we see that there is no way to generalise it if we omit any of the hypotheses.

Remark 3.7. In the cofinite topology τ_cof on R every nonempty open subset is dense, so the only regular open subsets of (R, τ_cof) are ∅ and R. Of course the same applies to any uncountable set endowed with the cocountable topology τ_con, so (R, τ_cof) and (R, τ_con) have the same regular open subsets. Nevertheless, there is no way to identify homeomorphically any couple of dense subsets of R with each topology. In order to avoid this pathological behaviour we needed to consider only regular Hausdorff spaces, since these spaces are the only reasonable spaces for which the regular open subsets comprise a base of the topology. In other words, Theorem 2.14 will not extend to general topological spaces.

Remark 3.8. Consider X = [0, 1] endowed with its usual topology and let Y be its Gleason cover. The lattices R(X) and R(Y) are canonically isomorphic, but it is well known that no point in Y has a countable basis of neighbourhoods, so R_Y(y) is never a singleton. It is remarkable that [8, Example 1.7.16] is the only place where the author has been able to find a statement that explicitly confirms that the Gleason cover of some compact space K is the same topological space as the Stone space associated to R(K), i.e., G_K = St(R(K)).

Remark 3.9. There is a lattice isomorphism between R(X) and R(Y), say T, given by the composition of the isomorphisms provided by Proposition 2.13. In spite of this, it is intuitively evident that the points of X do not square with those of Y.

Remark 3.10. There is no "non-complete metric spaces" result. Indeed, I and Q have the obvious isomorphism between their bases of regular open subsets and they are, nevertheless, disjoint subsets of R. This means that when trying to generalise Theorem 2.14 the problem may come not only from the lack of separation of the topologies as in Remark 3.7, from the excess of points as in Remark 3.8 or from the points in X not squaring with those in Y as in Remark 3.9, but also from the, so to say, lack of points in the spaces even when they are metric.
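As a compact rendering of Remark 3.10 (our sketch, obtained by applying Proposition 2.13 twice, to the two dense subsets Q and I = R \ Q of the real line):

```latex
% Q and I = R \setminus Q are both dense in R, so Proposition 2.13 applies twice:
R(\mathbb{Q}) \;\cong\; R(\mathbb{R}) \;\cong\; R(\mathbb{I}),
\qquad \text{although } \mathbb{Q} \cap \mathbb{I} = \emptyset,
% so no dense subspaces of the two spaces can be matched homeomorphically,
% as Theorem 2.14 would provide in the complete case.
```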
The Development of Environmental Human Rights Human rights and the environment are linked with each other in two ways. Firstly, the environment is seen as a pre-condition of the realization of human rights, because human beings are dependent on the environment. We all meet our basic needs, including air, water and food, from nature. Individuals cannot exist without Mother Earth. For this reason, human rights may not be enjoyed in the absence of a clean environment. Secondly, human rights can be an effective way to achieve environmental safety. These two linkages are united under the umbrella of environmental human rights. Environmental human rights are the rights of people to protect the environment for the sake of human beings. There have been numerous studies investigating the scope and types of environmental human rights. However, how the linkage between human rights and the environment has evolved has not been discussed sufficiently so far. Accordingly, this paper aims to explore how environmental human rights have developed over history. This research finds that environmental human rights have been developed by international environmental law more than by international human rights law. Introduction Environmental human rights are based on the relationship between the environment and human rights. Environmental human rights can be perceived in two distinct ways. Firstly, the environment has been regarded as a pre-condition of basic human rights, including the right to life and health, since the 1972 Stockholm Declaration. This assumption derives from the fact that nature provides the food, water and air that all human beings need for survival. People cannot exist without the environment because we all meet our basic needs from nature. For this reason, people are dependent on the environment. In other words, the environment appears to be the most important element of human lives. This results in the fact that people cannot enjoy or realize human rights in the absence of a safe environment, because the quality of nature affects people's lives. Due to the importance of nature in people's lives, the environment is considered essential for the enjoyment of human rights. The second aspect of the connection between the environment and human rights is that human rights can be an effective way to address environmental issues and to influence environmental policy. There are four types of human rights which can be used by the concerned people in environmental matters. The first one is the right to a safe environment, which means that people have a right to live in a clean and safe environment. The second one is the reinterpretation of existing human rights, which means that internationally recognized human rights already require a safe environment. The third one is civil and political rights, including freedom of expression, the right to association and the right to assembly. The last one is procedural rights, including the right of access to information, the right to participation in the decision-making process and the right of access to justice. All these human rights can be used effectively by individuals to affect environmental policy and address the environmental problems that people face. The connection between the environment and human rights has been discussed sufficiently by previous studies so far. There have been, however, only a few studies investigating how environmental human rights have evolved over history. The main purpose of this paper is to trace the development of environmental human rights.
This paper consists of three main parts. It firstly argues what dynamics have caused the emergence of environmental human rights. It then argues how international and regional environmental law has contributed to the development of environmental human rights. Lastly, it explores how regional and international human rights law has contributed to the development of environmental human rights. The Emergence of Environmental Awareness Environmental degradation is an age-old issue. It has been occurring throughout human history, from the earliest settlements to modern society, as reflected in recent headlines (Karabıçak and Armağan, 2004; Gazioğlu, 2018). Similarly, concern for environmental matters is not a new phenomenon. Evidence of environmental concern, in particular concern related to air pollution, can be found in different periods in history. For instance, after Londoners complained about the smoke, Edward I, the fifth Plantagenet king of England, decided to ban the burning of sea-coal by proclamation in London in the 13th century (Urbinato, 1994). A closer look at the data indicates that laws which counter environmental problems are as old as public concern about an unsafe environment. However, it does not seem quite right to date the origins of environmental rights to Edward I's decision to curb the smoke in London, for two main reasons: firstly, this decision was not taken up by the international community and thus remained local and was not based on any specific environmental right; secondly, and more importantly, the key purpose here was to protect the public, not the environment. Environmental rights, as they exist today, make explicit reference to the protection of the environment itself. It may be useful to clarify why environmental rights are relatively new when environmental issues and public concerns date back to earlier times. On logical grounds, it seems fair to link the emergence of environmental issues with environmental rights, as environmental rights emerged as a response to increasing environmental contamination and its impact on human lives after the 1950s. There may be many different answers to this question, but one stands out; namely, that environmental issues became a global concern in the second half of the 20th Century (Anton and Shelton, 2011), when the world started to witness a rapidly growing population (the world population was 1 billion in 1800 and increased to 2.5 billion in the 1950s) (Our World in Data, 2019), the use of technology increased and the patterns of production and consumption started changing (Karabıçak and Armağan, 2004). This all triggered environmental degradation. The second half of the 20th Century was when the international voice on environmental protection was first strongly raised, and this automatically triggered the development of environmentalism at an international scale. There is consensus in the literature that the 1960s marked the emergence of modern environmental thinking in the political, social and academic agenda, and this greatly impacted the public's consciousness and marked the beginning of modern environmentalism (Karabıçak and Armağan, 2004; De Steiguer, 2006; Ülker et al., 2018).
Indeed, a number of significant developments happened in the 1960s and 1970s, which led to the emergence of a strong public awareness of environmental matters and intensified the call for countries to cooperate to protect and conserve the earth's ecosystem, that is its land, air, water, animals, plants and entire habitats such as rainforests, deserts and oceans, at an international level (Karabıçak and Armağan, 2004). Environmentalism began as a movement after the 1950s (Hays, 1981). Also, at this time philosophers joined the debate and a new branch of ethics was born: environmental ethics. In the 1970s, the first international academic journal in this field (the US-based Journal of Environmental Ethics) emerged and environmental history was born as a new discipline in the United States. In addition, this era saw the publication of Rachel Carson's Silent Spring in 1962, which contributed to the development of the environmental movement (De Steiguer, 2006); the publication of the Report for the Club of Rome, Limits to Growth, which focused considerable public attention on environmental matters, at the beginning of the 1970s (Colombo, 2001); and the foundation of the Greenpeace movement in 1969 (Eden, 2004), which brought environmental issues to the public's attention and influenced both the private and the public sectors. These developments were not surprising, as the public was concerned not only because of the contamination of nature, but also because of the contamination's negative impact on human well-being. The 'Great Smog' of 1952 (also known as the 'Big Smoke') is, for instance, accepted as being one of the worst air-pollution events in the history of the UK, resulting in 4000 deaths (Chauhan and Johnston, 2003). The 1952 incident, along with similar environmental issues which had an impact on public health, resulted in environmental protection being viewed during the 1960s as being a matter of public concern (Anton and Shelton, 2011). It would be more appropriate to define the 1950s and the aftermath of the Great Smog as a period of the emergence of an "environmental crisis" that threatened human beings in a deadly manner. Not surprisingly, this alarming situation led to the concept that a safe environment is necessary for the safeguarding of human well-being on a global scale. It seems reasonable to state that the inclusion of an environmental dimension in the human rights debate during the 1970s was in response to an urgent need to protect human well-being from being impacted by the effects of increasing environmental degradation, which had started seriously to threaten people at an international level. International Environmental Law and Environmental Human Rights The Conference on the Human Environment, held in Stockholm in 1972, represented a major milestone in the evolution of environmental rights (Cramer, 2009; Gellers, 2012). The 1972 Stockholm Declaration on the Human Environment is the first authoritative instrument which recognizes the close relationship between environmental protection and human rights at an international level (Gellers, 2012; Peters, 2018). The Stockholm Declaration can be considered a turning point in environmental human rights, as the conference proposed a human rights approach to environmental protection and recognized its impact on human rights (Cramer, 2009; UN, 1972; Wisadha and Widyaningsih, 2018; Jankuv, 2019; Ahmetoğlu and Tanık, 2020).
The Declaration is significant in many ways, not least because it established the link between human rights and environmental protection (Olawuyi, 2014). It proclaims that "Both aspects of man's environment, the natural and the man-made, are essential to his well-being and to the enjoyment of basic human rights – even the right to life itself" (UN, 1972). It clearly defines that the exercise of basic human rights inevitably requires environmental protection; this means that basic environmental standards are regarded as a pre-condition to the enjoyment of human rights (Olawuyi, 2014). This was a novel idea, and it was the first time that the link between the environment and human rights was recognised at an international level. Controversy around the Declaration was not lacking. It was open to criticism from a number of angles. The main criticism directed at it was that the principles are hard to implement in practical terms within environmental policy. Much of the current debate revolves around such principles failing to go beyond advice or recognition and therefore remaining insufficient and ineffective due to weak institutional and compliance mechanisms, particularly as these rights are not legally enforceable (Sands, 1993; Anton and Shelton, 2011). This criticism comes across as unilateral, biased and pointless, given that the main purpose of the Stockholm Declaration was to inspire and guide human beings in the preservation and enhancement of the human environment, not to establish enforcement mechanisms at an international level. The Stockholm Declaration is an advisory statement of purpose, a so-called "soft" law, which does not have any legally binding force (Wirth, 1995). Secondly, and maybe more importantly, there is a lack of clarity in the definition of the relationship between the environment and human beings. The principles enshrined in the Declaration are confusing. It leaves a number of controversial questions unanswered, the main one being "Why should we protect the environment?" Is it only for human beings' benefit, or is it also for nature's own value? If taken from an ethical perspective, it appears reasonable to say that the Declaration principally concentrates on human needs and neglects the non-human world and other aspects of nature, which means that the value of the environment is determined by the rationality of human needs. Thirdly, this Declaration fails to identify a separate solidarity right, i.e. the right to a healthy environment. Given that the environment is essential for human well-being and the enjoyment of basic rights, why did the Declaration not recognize a substantive right to a healthy environment? Lastly, it can be argued that defining the environment as a pre-condition of the realization of human rights is inherently risky, because the lack of these alleged pre-conditions (a safe environment) might be used to deny human rights. This argument seems to hold water because, if a safe environment is essential for the enjoyment of human rights and if this does not exist (one of the main concerns of the modern world being heavy environmental contamination such as air pollution), it can arguably be claimed that human rights do not exist because a safe environment does not exist. This is another unclear and confusing point that the Declaration fails to clarify.
In sum, therefore, the recognition by the Declaration of the link between the environment and human well-being was a significant development, but it proved insufficient in terms of providing a comprehensive map and developing an elaborate framework revealing the ways in which the two interact. The 1992 Rio Declaration, which was approved by 178 countries as an essential feature of environmental governance, is another important document that builds upon the basic ideas concerning the attitudes of individuals and nations towards the environment (UN, 2011). It reaffirmed the 1972 Stockholm Declaration and sought to build on it (UN, 1992). Arguably, the 1992 Rio Declaration contributes more to the development of environmental rights than the 1972 Stockholm Declaration does, as the Rio Declaration focuses not only on the recognition of environmental rights but also on the responsibilities of human beings to achieve a safe environment. While the Stockholm Declaration puts forward the narrow perspective that the environment is just a tool for the enjoyment of human rights (UN, 1972), in the Rio Declaration procedural human rights are seen as an effective tool through which to address environmental matters (UN, 1992). This is an important turning point, which saw environmental rights evolving from being a pre-condition for the enjoyment of human rights to the notion of the protection of the environment for its own sake. Another main difference between the two declarations is that, while the 1972 Stockholm Declaration does not go beyond recognizing the link between the environment and human rights, the Rio Declaration defines the right to oppose environmental contamination and emphasizes the responsibility of human beings to safeguard the common environment. For this reason, Porras (1992) describes the Rio Declaration as an unprecedented and ambitious event. This emphasis is understandable because Principle 10 of the Declaration, in particular, is unique in that it defines and fosters procedural environmental rights, which have commonly been conceived as making decision-making processes concerning matters affecting the environment that people depend upon more transparent, inclusive and accountable (Banisar, 2011; Peters, 2018). Principle 10 states that: "Environmental issues are best handled with the participation of all concerned citizens, at the relevant level. At the national level, each individual shall have appropriate access to information concerning the environment that is held by public authorities, including information on hazardous materials and activities in their communities, and the opportunity to participate in decision-making processes. States shall facilitate and encourage public awareness and participation by making information widely available. Effective access to judicial and administrative proceedings, including redress and remedy, shall be provided" (UN, 1992). This principle makes positive contributions to the development of environmental rights in two ways. Firstly, a closer look at the principle indicates that the Declaration constructs the relationship between the environment and human rights in the field of procedural rights, including the right to public participation in the decision-making process, the right of access to information and the right of access to justice (Olmos Giupponi, 2019).
This was the first time that procedural environmental rights were recognised by a declaration at an international level, and it inspired countries to adopt the principle in their domestic environmental policy. Not surprisingly, procedural environmental rights have been inserted in the legislation of an overwhelming number of countries, including countries in Africa, Asia and America (Banisar, 2011). Secondly, the Declaration can be considered a very useful guide in categorising procedural environmental rights, namely the right to information, the right to participate, and the right of access to justice. Principle 10 sets out the fundamental elements of good environmental governance (UN, 1992), which is critical for the achievement of environmental sustainability. It would be subjective to mention only the positive sides of the Rio Declaration for environmental rights, as this Declaration drew criticisms similar to the ones levelled at the Stockholm Declaration. The main criticism seems to be that it is not yet obvious how seriously state parties relate these principles to their national and local environmental policies. The main rationale behind this criticism stems from the fact that the principles set out in the Declaration are not legally binding. Indeed, the lack of legally enforceable environmental rights makes the Rio Declaration weak and ineffective when it comes to forcing states to implement environmental policy in accordance with its objectives. In terms of the establishment of an enforcement mechanism at an international level, the Rio Conference may be described as disappointing, as it did not make a significant impact on the development of environmental rights beyond the Stockholm Declaration. Another criticism is the fact that the Declaration does not mention a distinct right to a healthy environment. It is, however, hard to argue on logical grounds that the Rio Declaration should have recognised a distinct right to a healthy environment, as it already provided environmental rights by recognising procedural rights. Arguably, if procedural rights are an effective way to protect the environment, it seems reasonable to say that there is no need for a distinct right to a healthy environment, which makes the discussion moot. Therefore, the Declaration is a useful reference for policy-makers, legislators and officials at all levels of government, but remains ineffective due to the lack of inherent enforcement mechanisms. The Aarhus Convention, which was adopted in 1998 and entered into force in 2001 (Koester, 2017; Mason, 2010; Baber and Bartlett, 2020; Berny, 2018), takes procedural environmental rights a step further and puts Principle 10 of the Rio Declaration on Environment and Development into practice. It can be accepted as the world's foremost international instrument linking environmental and human rights and is regarded as a landmark in environmental democracy (Wates, 2005). It is the first multilateral treaty to specifically denote a human right to government information about environmental policy and decisions related to the environment (Cramer, 2009; UNECE, 2011). The strength of this Convention lies in its legally binding obligations on public authorities (Kravchenko, 2007). It is the first international legally binding instrument which recognises citizens' procedural rights in environmental matters (Kravchenko, 2007).
It not only recognises environmental rights as a key characteristic of good governance but also guarantees the rights of access to information, public participation and access to justice for effective environmental governance (Toth, 2010). The Convention has 47 Parties (46 states and the European Union), which means that the Convention's scope of application is regional (Peters, 2018; Krämer, 2018). Each Party is obliged to guarantee the rights set out in the Convention (UNECE, 2005). What is unique about the Aarhus Convention is its Compliance Committee (Kravchenko, 2007; Koester, 2007). Article 15 of the Convention requires the Parties to set up arrangements of a non-confrontational, non-judicial and consultative nature to review compliance with the Convention for the effective enjoyment of the Aarhus Convention rights by the public throughout the EU. It states: "The Meeting of the Parties shall establish, on a consensus basis, optional arrangements of a non-confrontational, non-judicial and consultative nature for reviewing compliance with the provisions of this Convention. These arrangements shall allow for appropriate public involvement and may include the option of considering communications from members of the public on matters related to this Convention" (European Commission, 1998). In order to fulfill Article 15 of the Convention, the Aarhus Convention Compliance Committee was established in 2002 (Koester, 2007; Morgera, 2005). It is mandated to discuss and decide on possible violations of the Convention. The Aarhus Convention compliance mechanism can be triggered in four main ways: "(1) a Party may make a submission about compliance by another Party; (2) a Party may make a submission concerning its own compliance; (3) the secretariat may make a referral to the Committee; and (4) members of the public may make communications concerning a Party's compliance with the convention" (UNECE, n.d.). This is one of the noticeable characteristics of the Aarhus Convention. It would, however, be amiss to limit the importance of the Aarhus Convention to only the recognition and protection of procedural environmental rights. Procedural environmental rights are not only important in their own right but are also essential to the successful realisation of substantive environmental rights. There are legally enforceable human rights, such as the right to expression, which are well protected and guaranteed by national and international law; Article 19 of the Universal Declaration of Human Rights states that "Everyone has the right to freedom of opinion and expression …" (UN, n.d.). The right to expression may enable concerned groups to voice their objections on environmental matters and make effective claims for environmental protection. When a sufficient number of concerned people raise their voices about environmental matters, governments can be forced to implement more sustainable environmental policies to meet their citizens' needs. However, the right to expression may remain ineffective if citizens do not have access to relevant environmental information. If citizens are not well informed, are ill-informed or lack sufficient information, it is unreasonable to expect them to express opinions and concerns which may be relevant to environmental decisions. The effective implementation of all the procedural rights can be seen as a fundamental condition for realizing the substantive right to an adequate level of environmental quality.
The Aarhus Convention, therefore, makes a valuable contribution to the realisation of environmental rights, as it protects procedural environmental rights, which are also a highly valuable tool for empowering particularly vulnerable and excluded people to invoke their substantive rights.

International Human Rights Law and Environmental Human Rights

While procedural environmental rights have made major progress over the last few decades, international human rights instruments still do not include a distinct right to a healthy environment (Pathak, 2014). At the international level, there is no explicit right to environmental quality recognised by either the Universal Declaration of Human Rights 1948 (UDHR) or the International Covenant on Civil and Political Rights, the two major human rights documents (Glazebrook, 2009; Pathak, 2014). However, although a legally enforceable human right to a healthy environment, backed by relatively strong international supervisory mechanisms, has still not been achieved since the UN first expressly linked human rights to the environment in the 1972 Stockholm Declaration, the notion has evolved in a way that has had a noticeable impact on international human rights and environmental policy. This can be seen in the Draft Declaration of Human Rights and the Environment (DDHRE), developed by the United Nations in 1994, which clearly recognised a substantive right to a healthy environment (University of Minnesota Human Rights Library, 1994). The DDHRE is guided by the principles of the 1972 Stockholm and the 1992 Rio Declarations. While the Stockholm Declaration merely implied the close link between the two, the DDHRE clearly defines environmental rights in broadly qualitative terms and recognises a distinct right to a healthy environment, which is seen as essential to the enjoyment of all human rights. The second principle states: "All persons have the right to a secure, healthy and ecologically sound environment. This right and other human rights, including civil, cultural, economic, political and social rights, are universal, interdependent and indivisible." (University of Minnesota Human Rights Library, 1994). There is no doubt that the recognition of the right to a safe environment as a human right by the United Nations is a key milestone, but this recognition remains ineffective as long as it is not enforceable by an international court. It is not clear how seriously the UN takes the safeguarding of the recognised environmental rights. What is needed for the right to a healthy environment to be enforceable is an evolution from recognition by soft law (such as the 1972 Stockholm Declaration) to protection by hard law, which involves legal norms that are legally binding. This has still not been achieved by the international community. There are 900 international legal instruments which deal with international human rights issues (Olawuyi, 2014). However, the international human rights and environmental law frameworks have not yet been integrated, in spite of the interrelated and interconnected relations between the two fields as recognized by the 1972 Stockholm Declaration. Environmental degradation and human rights abuse are the two main concerns of the modern world, but there is no common international law protecting both the environment and human rights. The two frameworks will remain incomplete until a common framework linking international human rights with environmental rights is created.
Thus, it does not seem plausible to talk about the maturity of environmental rights when the right to a healthy environment is not enforceable through any enforcement mechanism at the international level. A number of regional hard laws have been passed which recognise the right to a healthy environment, including the African Charter on Human and Peoples' Rights and the Additional Protocol to the American Convention on Human Rights (Pathak, 2014: p. 20). One of the most important is the African Charter on Human and Peoples' Rights, a human rights instrument intended to promote and protect human rights in Africa (Umozurike, 1983). It was adopted in 1981 by the Organisation of African Unity and it has notable unique characteristics, as it is one of the precious few hard laws that guarantee a distinct right to a healthy environment at a regional level (Humphreys, 2015; Atapattu, 2002). More so than other comparable human rights laws, it clearly defines, recognises and protects a substantive right to a healthy environment. Article 24 of this document states that "All peoples shall have the right to a general satisfactory environment favourable to their development" (ACHPR, 1981). This is a very positive development, whereby the environment is not only seen as a pre-condition for the enjoyment of human rights but is also accepted as one of the fundamental human rights. A further unique feature of the African Charter is that it goes beyond the recognition of a right to a healthy environment and imposes obligations on individuals towards the State and the community (ACHPR, 1981; Gittleman, 1981). Article 1 states that: "The Member States of the Organization of African Unity parties to the present Charter shall recognize the rights, duties and freedoms enshrined in this Charter and shall undertake to adopt legislative or other measures to give effect to them" (ACHPR, 1981). Recognition of environmental rights is not new, but the protection of environmental rights through an enforcement mechanism at a regional level is unique. Article 24, however, can be criticised as being too anthropocentric, as it conceives the environment in a narrow way, seeing nature as a tool which can be used to satisfy human needs and ignoring the value of nature in itself. This is not surprising, as all other human rights instruments put human beings at the centre of the planet and consider them to be the most significant species. Additionally, environmental problems cannot always be confined within the boundaries of a single country or region; they can sometimes cross boundaries and spread to a global scale (Karabıçak and Armağan, 2004; Cramer, 2009). For example, the Fukushima accident, which happened in Asia, also affected Greece in Europe (Kritidis et al., 2012). If an environmental contamination issue crosses boundaries, a regional document may not be sufficient to address it. However, the African Charter remains a very positive contribution to the development of environmental rights, as it guarantees a distinct right to a safe environment in Africa.

Conclusion

This paper has attempted to discuss how environmental human rights have evolved over history. This research has three main conclusions. Firstly, the discussion above shows that awareness of environmental problems emerged worldwide after the 1960s, which in turn triggered the emergence of environmental human rights.
The 1960s marked the emergence of environmental thinking in the academic, social and political agenda, which greatly raised public awareness of environmental issues and marked the beginning of modern environmentalism. All these developments led to the concept that a safe and clean environment is necessary for the safeguarding of human rights at the international level. The discussion on the connection between the environment and human rights emerged in the 1970s on a global scale as a result of the increase in public awareness of environmental problems. Secondly, the discussion in this paper indicates that environmental human rights were first recognized by environmental law rather than human rights law at the international level. The 1972 Stockholm Declaration is the first document which recognized the linkage between the environment and human rights. It proclaims that the environment is essential for the safeguarding of human rights, which triggered two basic assumptions: firstly, that people have a right to live in a safe environment; secondly, that existing human rights already require a safe environment, an approach known as the reinterpretation of human rights. Another important environmental law document is the Rio Declaration, which recognized procedural environmental rights, including the right of access to information, the right to participate in decision-making processes and the right of access to justice. The first legally binding environmental law document which guarantees procedural rights is the Aarhus Convention. All three of these environmental law documents have made a valuable contribution to the development of environmental human rights at the international level. Thirdly, the discussion above shows that environmental human rights are not guaranteed directly by the International Bill of Human Rights, comprising the Universal Declaration of Human Rights, the International Covenant on Economic, Social and Cultural Rights and the International Covenant on Civil and Political Rights. However, later human rights documents have contributed significantly to the development of environmental human rights. The first legally binding human rights document which guarantees the right to the environment is the African Charter on Human and Peoples' Rights. Similarly, the American Convention on Human Rights and the Arab Charter on Human Rights recognize the right to the environment. However, there is no recognition of the right to the environment at the international level. Additionally, no human rights document recognizes the reinterpretation of human rights.
Intussusception hospitalizations incidence in the pediatric population in Italy: a nationwide cross-sectional study

Background: A study to investigate the background incidence of intussusception in the pediatric population and its temporal trend in Italy. Methods: A cross-sectional study was conducted on the pediatric population aged 0 to 15 years over the period 1 January 2002 to 31 December 2012. Intussusception cases were identified using the national hospital discharge database. The annual intussusception incidence, the incidence rate ratios (IRRs) and the related 95% confidence intervals (CI) were calculated. Results: The overall intussusception incidence rate was 21 per 100,000 children aged ≤15 years, and was higher among boys than girls. The highest intussusception incidence rate occurred in infants <1 year of age (39 per 100,000 infants). Among infants, incidence varied with the geographical area, with higher rates in central Italy (50 per 100,000 infants). The annual incidence rates in infants were stable from 2004 up to 2012, ranging between 33.0 and 40.1 per 100,000 infants. Similar stable patterns were observed when conducting the analysis on children over 1 year of age. Conclusions: This study provided the background incidence of intussusception in Italy in different pediatric ages, including infants, over an 11-year period. This information is essential in post-marketing safety surveillance, to continuously monitor the benefit/risk profile of rotavirus vaccinations.

Electronic supplementary material: The online version of this article (doi:10.1186/s13052-016-0298-8) contains supplementary material, which is available to authorized users.

Background

Intussusception is the most common cause of acute intestinal obstruction in children under 2 years of age [1]. Less than 5% of cases resolve spontaneously, and if treated early, almost all cases can be reduced by enema or surgery [2]. Rotashield®, the first-generation rotavirus vaccine (RV), was found to be associated with intussusception in infants, leading to its withdrawal from the market in 1999 [8]. In Italy two different RVs (Rotarix® and Rotateq®) have been available on the market since 2007, administered as a two- or three-dose schedule starting from 6 weeks of age [9, 10]. Neither of the two licensed RVs was found to be associated with intussusception in clinical trials [11, 12]. However, post-marketing studies enrolling much larger populations have suggested an intussusception risk following RV vaccination of 2 to 5 excess cases per 100,000 vaccinated infants [13-16]. Although vaccination against rotavirus is not included in the universal National Immunization Program (2012-2014), some Italian regions introduced the RV targeting specific children subgroups and/or with different reimbursement schemes [17]. According to official data, in Italy during 2013 approximately 76,000 vaccine doses were purchased within the Italian national health system (NHS), resulting in almost 37,000 vaccinated children (13,700 doses were purchased in 2010 by the NHS) [18, 19]. This led the Italian Pharmacovigilance Network to capture the first spontaneous intussusception report in 2012 [18]. According to the literature, the EU background incidence of intussusception in the pediatric population ranges between 0.66 and 2.24 per 1000 children admitted to hospital, and between 0.75 and 1.00 per 1000 children admitted to the emergency ward [20]. This estimate was largely based on hospital discharge data collected before 1995.
A more recent review estimated the background incidence of intussusception in infants (aged ≤1 year) to range between 20 and 66 per 100,000 infants [21]. The reported incidence in different countries varies largely, possibly due to different factors (age, patient settings and pathological conditions, socioeconomic status, and geographical area) [20, 21]. Only one incidence study was conducted in Italy, in a primary care pediatric setting (children aged <10 years), and provided an estimate of 5.0 per 100,000 person-years [22]. However, it is well known that intussusception cases are generally treated in a hospital setting [21]. The use of RV is expected to increase in Italy, thus any post-marketing surveillance would require background incidence data from reliable and stable sources. Therefore, a cross-sectional study was conducted based on the national database of hospital discharges over the period 1 January 2002 to 31 December 2012. The study aimed to determine the overall intussusception incidence in the pediatric population in Italy, and to describe its temporal trends in specific age groups. The study included the evaluation of incidence patterns according to intussusception severity, gender, geographical area, and pathological lead points. The potential changes in intussusception rates across different timeframes were also explored in infants, before and after 2007 (before and after the marketing authorization of RV in Italy).

Population and study settings

The study was conducted on the Italian pediatric population aged 0 to 15 years; cases were identified from hospitalization records. Only children with an intussusception discharge diagnosis in the study period were included. Two sub-cohorts were identified for the main analyses, i.e. infants aged 0 to <1 year and children aged ≥1 year.

Data sources

In Italy, NHS care is provided universally free of charge. A hospital database collects all hospital discharge records (including day-hospital/day-surgery admissions). Only routinely collected information was used in the study, and data were analyzed through a unique, anonymised, personal identifier. The following data were extracted from each record for the present analyses: age, gender, region of residence, admission and discharge dates, diagnoses (primary and secondary) and procedures [coded according to the International Classification of Diseases (ICD), 9th Revision], and status at discharge (deceased/non-deceased).

Case definition

Records with the following intussusception ICD-9 codes, either as primary or secondary diagnosis, were selected from the national database of hospital discharge records: 560.0 (intussusception) and 543.9 (other and unspecified appendix diseases). The analysis was restricted to incident cases identified from 1 January 2002 to 31 December 2012; thus, only the first intussusception-associated hospitalizations (considered as index dates) were included.

Identification of risk factors potentially associated with intussusception

To identify potential risk factors for intussusception, all hospitalization records in the 6-month period preceding the index date (the first intussusception diagnosis) were retrieved for each identified case. Hospitalization records from 2001 were used to collect the medical history of incident intussusception cases in 2002. A predefined list of pathological lead points with the related ICD-9 codes was used (see Additional file 1: Annex 1).
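To make the case-selection logic concrete, a minimal sketch of how incident cases and the 6-month lookback could be derived from a discharge extract is given below. It is purely illustrative: the column names (child_id, admission_date, dx_primary, dx_secondary) are assumed for the example and do not reflect the actual schema of the Italian hospital discharge database.

import pandas as pd

# ICD-9 codes from the case definition above: 560.0 (intussusception) and
# 543.9 (other and unspecified appendix diseases), written without the
# decimal point as they commonly appear in discharge extracts.
CASE_CODES = {"5600", "5439"}

def select_incident_cases(records: pd.DataFrame) -> pd.DataFrame:
    # A case is any record carrying a case code as primary or secondary
    # diagnosis; the earliest such admission per child is the index date.
    is_case = records[["dx_primary", "dx_secondary"]].isin(CASE_CODES).any(axis=1)
    cases = records[is_case].sort_values("admission_date")
    return cases.groupby("child_id", as_index=False).first()

def lookback_records(cases: pd.DataFrame, records: pd.DataFrame) -> pd.DataFrame:
    # All hospitalizations in the ~6 months (here 182 days) before the index
    # date, to be screened against the predefined lead-point list (Annex 1).
    index_dates = cases[["child_id", "admission_date"]].rename(
        columns={"admission_date": "index_date"})
    merged = records.merge(index_dates, on="child_id")
    gap = merged["index_date"] - merged["admission_date"]
    return merged[(gap > pd.Timedelta(0)) & (gap <= pd.Timedelta(days=182))]

Screening the records returned by this second step against the Annex 1 code list is what yields the lead-point counts (gastroenteritis, inflamed appendix, Henoch-Schönlein purpura) reported in the Results.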
Case Severity

As a proxy for case severity, ICD-9 procedure codes were used to identify those intussusception hospitalizations requiring surgical or radiological intervention. Surgical intervention was defined by procedure codes 46.80 to 46.82, and radiological intervention by codes 96.29 and 96.39. All the procedure codes were coupled to an intussusception code, namely in the same hospital discharge form as the intussusception incident case. Three categories were defined: i) intussusception (identified by diagnosis); ii) intussusception requiring surgery (identified by diagnosis and surgical procedures); iii) intussusception requiring non-surgical intervention (identified by diagnosis and non-surgical procedures). In-hospital mortality was also evaluated (number of children who died in hospital following the first intussusception episode).

Intussusception Recurrence

Consecutive intussusception hospitalizations for the same child were identified for up to 1 year following the first episode of intussusception (incident case). The same ICD-9 codes applied to identify incident cases were used to retrieve recurrent intussusception episodes. Three categories of recurrence were defined: i) early recurrence: cases with hospital readmission ≤7 days from the first episode; ii) medium-term recurrence: cases with hospital readmission between 8 and 30 days from discharge; iii) late recurrence: cases with hospital readmission after 30 days from discharge and ≤1 year.

Statistical analysis

The annual intussusception incidence per 100,000 infants/children was calculated using as a reference the Italian resident population figures provided by the National Institute for Statistics for the period 2002 to 2012 [23]. Births were assumed to be evenly distributed throughout the year when hospitalization rates by age (in months) were calculated. Period-specific intussusception incidence rates associated with hospitalization were estimated. Annual incidence rates were adjusted by age and gender through direct standardization, using the 2012 population as a reference to take the ongoing demographic variation into account. The incidence rate ratios (IRRs) and the 95% confidence intervals (CI) were calculated through Poisson regression. Statistical analyses were performed using STATA software (version 11).
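Before turning to the results, the rate and IRR arithmetic described above can be illustrated with a short numerical check. This is a sketch of the closed-form Poisson approximation only; the study itself used Poisson regression in STATA, and the counts below are illustrative stand-ins, not the study's actual numerators.

import numpy as np

def rate_per_100k(cases: int, population: int) -> float:
    # Crude annual incidence per 100,000 residents.
    return 1e5 * cases / population

def poisson_irr(cases_1: int, pop_1: int, cases_0: int, pop_0: int, z: float = 1.96):
    # Rate ratio with a 95% CI from the usual log-scale normal
    # approximation: SE(log IRR) = sqrt(1/cases_1 + 1/cases_0).
    irr = (cases_1 / pop_1) / (cases_0 / pop_0)
    se = np.sqrt(1 / cases_1 + 1 / cases_0)
    lo, hi = np.exp(np.log(irr) + np.array([-z, z]) * se)
    return irr, lo, hi

# Illustrative boys-vs-girls comparison: with equal person-time, counts of
# 11500 vs 9900 give rates of 23.0 and 19.8 per 100,000 and an IRR close to
# the paper's reported 1.16 (95% CI 1.13-1.20).
irr, lo, hi = poisson_irr(11500, 50_000_000, 9900, 50_000_000)
print(f"IRR = {irr:.2f} (95% CI {lo:.2f}-{hi:.2f})")

Note that with equal denominators the rate ratio reduces to the ratio of the counts, so in this shortcut only the counts drive the width of the confidence interval.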
Results

Overall, 20,524 children aged 0 to 15 years were identified as incident intussusception cases during the 2002 to 2012 timeframe; 2344 were infants aged 0 to <1 year (Fig. 1).

Fig. 1 Flowchart of the enrolled cohort of children with incident hospitalization for intussusception

Intussusception incidence rates by age, gender, geographical area, severity and risk factors

The characteristics of the overall pediatric population enrolled are described in Table 1. The intussusception incidence rate was 21 per 100,000 children and was higher among boys than girls (23 per 100,000 vs 19 per 100,000, respectively); thus boys had a statistically significant increased probability of experiencing intussusception hospitalization (IRR: 1.16; 95% CI 1.13-1.20). Rates of intussusception also varied substantially by age, with the highest incidence rate occurring in infants <1 year of age (39 per 100,000 infants). However, rates were low for infants <14 weeks (19 per 100,000 infants in the group aged 6 to 14 weeks), then increased rapidly, peaking at 60 per 100,000 for infants aged 25 to 32 weeks, then decreasing for children ≤6 years of age; a rise in the hospitalization incidence rates also occurred within the age group 10 to 12 years (26 per 100,000 children). The probability of intussusception hospitalization became statistically significantly higher from 15 weeks of age. In the vast majority of cases (87.7%), no surgical or radiological procedures were reported (Table 1). Overall, 8 children hospitalized for intussusception died in hospital (3 in the first year of life), with an in-hospital mortality rate of 0.39 per 1000 in the study population. The incidence rate of in-hospital mortality following intussusception in infants (<1 year of age) was 0.5 per million infants (Additional file 1: Figure S1). About 4% of the children hospitalized with an intussusception diagnosis had a recurrence within 1 year after the first episode (Table 1). Of the children experiencing a recurrence, 41.6% were hospitalized ≤30 days after the incident intussusception episode (Additional file 1: Figure S2). Only 291 (1.4%) children showed at least one known intussusception risk factor in the 6-month period prior to the incident episode (Table 2). The most frequent risk factors, presented in Table 2, included gastroenteritis (44.3% of the cases), followed by inflamed appendix and Henoch-Schönlein purpura (35.1% and 9.6%, respectively).

Intussusception incidence rate temporal trends in infants and children aged at least 1 year

The annual incidence rate in children <1 year of age was highest in 2002 and decreased steadily to 2004, falling from 50.2 to 39.2 per 100,000 infants; rates then remained stable from 2005 to 2012, ranging from 33.0 to 40.1 per 100,000 infants (Fig. 2a). Stable temporal trends were observed when considering incidence rates by gender, with constantly higher incidence rates in boys in each year. Similar temporal trends were observed when the analysis was conducted among children aged 1-15 years (Fig. 2b), although the incidence was lowest in this population (ranging from 17.6 to 22.6 per 100,000 children in the period considered). The annual trends of incidence rates by geographical area remained overall stable (Figs. 3a, b). Specifically, when considering the infant cohort, the highest incidence rates were observed in each year in central Italy, decreasing from 66.6 per 100,000 infants in 2002 to 58.5 in 2012. Within central Italy, the highest incidence rates were observed in Tuscany and Umbria; region-specific incidence rates in infants are shown in Additional file 1: Figure S3a. With regard to the pediatric cohort over 1 year of age, a constantly higher incidence trend over the period 2002 to 2012 was observed in the south of Italy, which was 3-fold higher compared to the north (Fig. 3b). The highest rates were observed in Sicily, Puglia and Basilicata (Additional file 1: Figure S3b). The comparison of intussusception hospitalization rates in infants across different time intervals, before and after 2007 (before and after RV use in Italy), stratified by age in months, did not show any increase in the first year of age during the post-vaccine introduction years when compared with pre-vaccine introduction rates (Additional file 1: Table S1). Overall, the IRR of intussusception was 0.
Discussion

This is the first study providing the background incidence of intussusception in Italy over an 11-year period in the whole pediatric population. The background rates determined in Italy in children <1 year of age (39 per 100,000 infants) are within the ranges reported at the EU level [21] and in line with those detected in Germany [24], Finland [25], the UK and Ireland [26], and Switzerland [27], observed over comparable timeframes. The infant intussusception incidence measured in Italy is also closely similar to that detected at the US level [28-32]. Considering the entire pediatric population (aged ≤15 years), the incidence rate observed in Italy (21 per 100,000) is lower than those in other studies conducted in Norway [33], France [34] or Denmark [35]. However, it should be noted that all three studies in the mentioned countries were conducted over different timeframes, or with a limited enrolled population, which may explain the different intussusception incidence estimates when compared with those of Italy. In the Italian pediatric population, incidence rates were found to be stable over time when considering different age groups. Few studies have reported details of temporal trends for intussusception. A Danish study reported a constant decrease in the incidence rates of infants ≤1 year from 1980 to 2001 [35]; this was explained by a possible shift in the management of intussusception from in-patient to short-stay hospitalization in out-patient settings [36]. In other literature the annual incidence rate of intussusception hospitalization was stable across the different calendar years [28, 37]. The higher incidence rate in boys was expected according to the already available data [20]. There was strong variability in the rates across the Italian geographical regions (from 12/100,000 in the north to 32/100,000 in the south of Italy), which remained stable in the period considered. Moreover, geographical variability also appeared to be influenced by the age groups considered, being highest in the Centre among infants. Geographical and environmental variation in intussusception incidence is known [20, 21]; however, it should be pointed out that organizationally different clinical practices among the Italian regions may have contributed to varying rates. The only Italian study conducted in a primary care setting did not reveal such geographical differences, as the majority of family pediatricians included in that study were in the north of Italy [22]. The percentage of serious cases (9.9%) observed in this study, namely those requiring surgical procedures, is in line with data reported elsewhere in Europe [21]. However, the estimate of intussusception cases in Italy requiring enema appears to be too low (2.4%) compared to that expected at the EU level (77%) [21]. In-hospital intussusception mortality is a rare event in Italy (about 1 case per 2 million infants), and the rate is lower than that of the US study [38]. However, hospital-based data may underestimate the true mortality for intussusception, since children dying before being hospitalized, or after being discharged, were not taken into account. In this study the overall intussusception recurrence rate of 3.7% was consistent with previously published data [37, 39-41]. In our setting, 24.2% of incident cases showed intussusception recurrences ≤7 days from the first episode.
The timing of recurrence was evaluated only in a few trials included in a recent meta-analysis on intussusception enema reduction, which showed a low recurrence rate at 48 h (≤5%) [41]. The risk factors potentially linked to the intussusception cases identified in this study were consistent with available data, where gastroenteritis and appendicitis were frequently recognized as pathological lead points [24, 27, 39-42]. The data in this study do not reveal any change in intussusception hospitalization rates among Italian infants (≤1 year of age) when comparing different time intervals before and after 2007 (before and after the marketing authorization of RV in Italy), although the coverage was very limited, <1% of the birth cohort. The main strength of this study is the large cohort of children enrolled. Italian birth cohorts were in fact included over an 11-year timeframe, thus these findings should be considered representative of the whole country. Moreover, in Italy health care is provided free to the whole population within the NHS; for this reason, the national discharge database can be expected to capture virtually all intussusception hospitalizations. This study has several limitations. Intussusception hospitalization was determined on the basis of the ICD-9 diagnosis code at hospital discharge, without any prior validation of the diagnosis. Therefore, the Brighton Collaboration (BC) case definition for intussusception, which requires specific clinical examinations as well as signs or symptoms not retrievable through hospital records, was not applicable in the context of our study [43]. However, a study conducted by Ducharme et al. in Canada reported ICD-9 codes to be sensitive (89.3%) and highly specific (>99.9%) in identifying patients with intussusception from administrative data [44]. In addition, a US study conducted in 3 hospitals showed that almost 90% of the intussusception codes collected from the discharge forms met the highest level of diagnostic certainty [28]. These details therefore support the validity of case identification and argue against the possibility of misclassification of intussusception cases in this study. Since this study analyzed only hospitalization data, intussusception cases managed in an out-patient setting are not included, leading to a potential underestimation of the true incidence rate. However, in Italy pediatric patients with intussusception (especially infants) are directly admitted to hospital without any prior primary care referral [22].

Conclusions

Although this study is based on routinely collected data, it still provides a robust and representative background incidence of intussusception in Italy in the different age groups and evaluates its variability over an 11-year period. This knowledge is essential for post-marketing safety surveillance of rotavirus vaccinations and provides information useful for vaccine-safety policies.

Additional file

Additional file 1: Annex 1. ICD-9 codes used in the identification of risk factors potentially associated with intussusception. Figure S1. In-hospital intussusception mortality incidence rate by age (2002-2012). Figure S2. Distribution of incident intussusception cases by recurrence time (1 year following the first episode) within the overall pediatric cohort. Figure S3a. Cumulative intussusception incidence rate among infants <1 year of age by region (2002-2012). Figure S3b. Cumulative intussusception incidence rate among children 1-15 years of age by region (2002-2012). Table S1.
Intussusception hospitalization rate comparisons in different timeframes (before and after the marketing authorization of RV vaccines in Italy) for infants aged <1 year. (DOCX 137 kb)

Acknowledgments

Not applicable.

Funding

Only public employees of the regional health authorities were involved in conceiving, planning, and conducting the study; no additional funding was received.

Availability of data and materials

All data generated or analyzed during this study are included in this published article and its supplementary information files. The authors are willing to collaborate in answering further research questions and to participate in systematic reviews or meta-analyses. No additional data are available.
Management and Care of Patients With Invasive Cervical Cancer: ASCO Resource-Stratified Guideline Rapid Recommendation Update

ASCO Rapid Recommendations Updates highlight revisions to select ASCO guideline recommendations as a response to the emergence of new and practice-changing data. The rapid updates are supported by an evidence review and follow the guideline development processes outlined in the ASCO Guideline Methodology Manual. The goal of these articles is to disseminate updated recommendations, in a timely manner, to better inform health practitioners and the public on the best available cancer care options.

BACKGROUND

In 2016, ASCO published a Resource-Stratified Guideline on the Management and Care of Women with Invasive Cervical Cancer. 1 A recent publication 2 constituted a strong signal for an update of the 2016 Invasive Cervical Cancer Resource-Stratified Guideline recommendations, focused specifically on systemic therapy for patients with recurrent or metastatic cervical cancer in enhanced and maximal settings.

METHODS

A targeted literature search was conducted to identify phase III clinical trials pertaining to the systemic therapy recommendations in this patient population. No additional randomized trials were identified. The original Expert Panel was reconvened to review the evidence from the KEYNOTE-826 trial and to approve the updated recommendation.

EVIDENCE REVIEW

The KEYNOTE-826 investigators reported a first interim analysis of a double-blind, phase III randomized trial (617 patients) of pembrolizumab plus paclitaxel/platinum chemotherapy with or without bevacizumab, compared with placebo plus chemotherapy with or without bevacizumab, in patients with persistent, recurrent, or metastatic cervical cancer who had not received prior chemotherapy, with a median follow-up of 22 months. 2 Patients with programmed death ligand 1 (PD-L1) ≥ 1 made up 89% of each arm. Compared with the placebo/chemotherapy regimen, in all patients regardless of PD-L1 status, progression-free survival (PFS) was significantly longer, 10.4 (95% CI, 9.1 to 12.1) versus 8. Adverse event (AE) results were reported with median treatment durations of 10 versus 7.7 months. Grade (Gr) ≥ 3 AEs (reported by ≥ 20% of patients) were numerically greater with the intervention, 81.8% versus 75.1%, but statistically similar (Table 1). The most common Gr ≥ 3 AEs were anemia (30.3% v 26.9%) and neutropenia (12.4% v 9.7%). Potentially immune-mediated AEs in the as-treated participants were greater with pembrolizumab (11.4% [Gr ≥ 3] v 2.9% [Gr 3-4]). Among the as-treated participants analyzed by concomitant bevacizumab use, pembrolizumab plus bevacizumab was associated with 83.7% Gr ≥ 3 AEs, versus 78.4% for pembrolizumab without bevacizumab.
RECOMMENDATION

Prior to the publication of these data, the Invasive Cervical Cancer Resource-Stratified Guideline Panel had published this recommendation in 2016 for patients with persistent, recurrent, or metastatic cervical cancer: chemotherapy ± bevacizumab ± individualized radiation therapy and/or palliative care (Type of recommendation: evidence based; Evidence: high; Recommendation: strong). Other recommendations depend on previous radiation therapy and central versus noncentral disease (space precludes full reprinting; see the 2016 guideline's Table 4).

UPDATED RECOMMENDATION

The updated recommendation (in addition to the other 2016 options) as of January 2022 is: clinicians may offer upfront pembrolizumab and chemotherapy with or without bevacizumab to eligible patients with persistent, recurrent, or metastatic cervical carcinoma (± individualized radiation therapy and/or palliative care) in enhanced and maximal settings (Type: evidence based, benefits outweigh harms; Evidence quality: high; Strength of recommendation: strong).

DISCUSSION

Estimated OS and PFS were greater with pembrolizumab plus paclitaxel/platinum chemotherapy with or without bevacizumab versus the control, with a statistically significant difference at the time of this interim analysis (22-month follow-up). Although the results support use in all patients on the basis of the intention-to-treat (ITT) analysis, the investigators showed larger efficacy in the PD-L1 ≥ 1% participants. The subgroup analyses for both PFS and OS suggest that the benefit may be less strong for patients with PD-L1 < 1% (HR 0.94). The investigators found safety similar in both arms, with exceptions, for example, higher Gr 3 neutropenia and all-Gr hypothyroidism with pembrolizumab (Table 1). With bevacizumab, higher AEs suggest higher toxicity, with potentially increased efficacy; the Panel encourages further research on its role. The investigators did not find significant problems with quality of life. The Panel recognizes that this regimen is not routinely available in resource-constrained settings and refers readers to the 2016 guidance.

EMERGING EVIDENCE

The Expert Panel reviewed the single-arm innovaTV 204 trial and will evaluate future results of this and other trials in future full guideline updates per standard ASCO processes.

GUIDELINE DISCLAIMER

The Clinical Practice Guidelines and Rapid Updates published herein are provided by ASCO to assist providers in clinical decision making. The information herein should not be relied upon as being complete or accurate, nor should it be considered as inclusive of all proper treatments or methods of care or as a statement of the standard of care. With the rapid development of scientific knowledge, new evidence may emerge between the time information is developed and when it is published or read. The information is not continually updated and may not reflect the most recent evidence. The information addresses only the topics specifically identified therein and is not applicable to other interventions, diseases, or stages of diseases. This information does not mandate any particular course of medical care. Further, the information is not intended to substitute for the independent professional judgment of the treating provider, as the information does not account for individual variation among patients. Recommendations specify the level of confidence that the recommendation reflects the net effect of a given course of action.
The use of words like "must," "must not," "should," and "should not" indicates that a course of action is recommended or not recommended for either most or many patients, but there is latitude for the treating physician to select other courses of action in individual cases. In all cases, the selected course of action should be considered by the treating provider in the context of treating the individual patient. Use of the information is voluntary. ASCO does not endorse third-party drugs, devices, services, or therapies used to diagnose, treat, monitor, manage, or alleviate health conditions. Any use of a brand or trade name is for identification purposes only. ASCO provides this information on an "as is" basis and makes no warranty, express or implied, regarding the information. ASCO specifically disclaims any warranties of merchantability or fitness for a particular use or purpose. ASCO assumes no responsibility for any injury or damage to persons or property arising out of or related to any use of this information, or for any errors or omissions.

GUIDELINE AND CONFLICTS OF INTEREST

The Expert Panel was assembled in accordance with ASCO's Conflict of Interest Policy Implementation for Clinical Practice Guidelines ("Policy," found at http://www.asco.org/rwc). All members of the Expert Panel completed ASCO's disclosure form, which requires disclosure of financial and other interests, including relationships with commercial entities that are reasonably likely to experience direct regulatory or commercial impact as a result of promulgation of the guideline. Categories for disclosure include employment; leadership; stock or other ownership; honoraria, consulting or advisory role; speaker's bureau; research funding; patents, royalties, other intellectual property; expert testimony; travel, accommodations, expenses; and other relationships. In accordance with the Policy, the majority of the members of the Expert Panel did not disclose any relationships constituting a conflict under the Policy.
Exploring Exodus themes in the book of Amos

There is little doubt that the exodus event is regarded as a most important turning point for Israel's understanding of itself. The aim of this article is to investigate the occurrence of the Exodus tradition in the book of Amos. Once the occurrence of the exodus tradition has been determined, the second aim is to establish the function of the exodus tradition in the book. This is done by providing the reader with a cursory overview of the exodus event as it is told in Exodus 1-15, followed by a careful reading of the relevant texts, taking the literary and historical dimensions of the texts into consideration. The investigation came to mainly three results: the exodus tradition is utilized in the book of Amos as a motivation for the prophecies of doom directed at Israel; secondly, the exodus tradition is in fact turned against Israel; and thirdly, the exodus event is radically relativized as an event not unique to Israel alone.

Introduction

There is little doubt that the Exodus event as recorded in Exodus 1-15 is foundational in the Old Testament/Hebrew Bible. Books on the theology of the Old Testament/Hebrew Bible confirm the importance of the exodus in the Old Testament/Hebrew Bible and for Israel's faith in YHWH. A quick survey of some books on Old Testament Theology confirms this consensus in scholarly circles:

• Von Rad (1975:176) regards the deliverance from Egypt as "Israel's original confession".
• Zimmerli (1978:25) speaks of the Exodus event as a "fundamental confessional statement for the faith attested by the Old Testament".
• In many ways the narrative of Exodus 1-15 may be considered "the birth story of Israel as a people", says Birch (1999:99).
• For Rendtorff (2005:47) the Exodus "is the determinative event in Israel's history, for all times to come". Right at the beginning of the exodus narrative, for the first time ever, the children of Israel are called a people (Ex 1:9, עם בני ישראל).
• For Waltke (2007:390) the signal act of deliverance in the Old Testament is Israel's exodus from Egypt.
• Jeremias (2015:89) noted that for Israel the deliverance from Egypt that culminated in the events at the Reed Sea in Exodus 14-15 was nothing else but the "Grunderfahrung der Fürsorge Gottes für sein Volk" (the foundational experience of God's care for his people). Later experiences of God's redemptive acts only serve as confirmation of this first experience.

It would be hard to overstate the central importance of the Exodus experience for Israel's understanding of itself and of its faith. Brueggemann (1997:179) stated that the Exodus became paradigmatic for Israel's testimony about Yahweh and that it became an interpretative lens to guide, inform and discipline Israel's utterances about many aspects of its life. It is then no surprise that the exodus event reverberates through many parts of the Old Testament/Hebrew Bible. Von Rad (1975:176) stated in this regard that in the deliverance from Egypt Israel saw the guarantee for all the future, the absolute surety for YHWH's will to save, something like a warrant to which faith could appeal in times of trial. Hoffman (1989:169) noted that the Exodus is the most frequently mentioned event in the entire Old Testament. The topic discussed in this contribution is the occurrence and function of the Exodus in the book of Amos. What is the function of the Exodus theme in the book of Amos? To answer this question, the main thrust of the Exodus narrative in Exodus 1-15 will first be determined and briefly summarized.
Following on that, the central question put in this paper will be answered via a careful reading of the relevant texts, taking the literary and historical contexts of the texts into consideration.

A brief overview of the Exodus event as recorded in Exodus 1-15

The Exodus continues the story of Joseph (Gen 37-50), who arrived in Egypt as a result of his brothers' conspiracy to get rid of him. The Israelites living in Egypt became a threat to a new pharaoh because they "were fruitful and multiplied greatly and became exceedingly numerous" (Ex 1:6). The pharaoh decided to put them to forced labour in an effort to control their numbers. In another, more extreme effort the pharaoh issued an order that every Israelite boy that is born should be thrown into the river Nile (Ex 1:22). YHWH heard and saw the terrible plight of his people (Ex 2:24-25) and decided to do something about this dire situation. He revealed himself to Moses as YHWH, the God of Abraham, Jacob and Isaac, and commissioned Moses "to bring my people the Israelites out of Egypt" (Ex 3:10). Eventually Moses, with the assistance of his brother Aaron, confronted the king of Egypt in a series of meetings where the king together with his people had to suffer "signs and wonders" (Ex 7:3). Zimmerli (1978:22) remarked in this regard "that there is no other event in the entire history of Israel so surrounded by a plethora of miraculous interventions on the part of YHWH as the event of the deliverance from Egypt". The last wonder was the death of the Egyptian firstborn, including the firstborn of the pharaoh himself. As a result of this he summoned Moses and Aaron to allow the people of Israel to go and worship YHWH (Ex 12:31) as they had initially requested. The tenth wonder is closely linked with the Passover festival, celebrating the fact that YHWH "passes over" the Israelites and did not strike down the firstborn of the Israelites, due to the blood smeared onto the sides and tops of the doorframes of the houses where they ate the slaughtered lamb (Ex 12). The Exodus narrative culminated in the passing of the Israelites through the Reed Sea, while the Egyptian military forces were defeated by YHWH, who swept them into the sea so that no one of the entire army of Egypt survived (Ex 14:27-28). This saving event was summarized in a victory song in Exodus 15. Miriam, the sister of Moses and Aaron, sang a song: "Sing to the Lord, for he is highly exalted. The horse and its rider he cast into the sea" (Ex 15:21). God has led his people out of Egypt "and can derive from this the hope of helping, saving, and forgiving action in the future" (Rendtorff 2005:47). Two of the relevant texts in Amos (Amos 8:8 and 9:5) cannot be related to the Exodus event and will therefore be excluded from this investigation.

"Shall not the land tremble on this account, and everyone mourns who dwells in it, and all of it rise like the Nile, and be tossed about and sink again, like the Nile of Egypt?"

Amos 8:8 forms part of a pericope where the prophet utters once again a prophecy of doom upon Israel because of the social injustices prevailing in society. The judgement will be experienced in the form of an earthquake. The coming earthquake is compared to the seasonal rising and falling of the river Nile in Egypt.

"The Lord, GOD of hosts, he who touches the earth and it melts, and all who dwell in it mourn, and all of it rises like the Nile, and sinks again, like the Nile of Egypt."

Amos 9:5 is an almost verbatim repetition of Amos 8:8 and forms part of the third doxology in the book (Amos 4:13; 5:8-9).
It alludes to an earthquake and likens the movement of the earth associated with an earthquake to the movement of the water of the river Nile in Egypt. The mentioning of Egypt in both Amos 8:8 and 9:5 cannot be regarded as references to the Exodus tradition telling the event of Israel's deliverance from the hardships of Egypt to the freedom of their own land. In both verses eretz (ארץ) is used as an indication of the land where Israel as the people of God make a living. It is thus rather the tradition of the land, or perhaps creation traditions, at work here rather than the exodus tradition. Both these verses refer to a geographical phenomenon in the form of an earthquake that will occur. The ebb and flow of the river Nile in Egypt is used as an example to illustrate the point the author wishes to make.

Exploring the function of the Exodus theme in the book of Amos

Three different functions for making use of the Exodus theme in the book of Amos have been determined.

The Exodus as the motive for the prophecies directed at Israel

Amos 2:10

Amos 2:10 forms part of the prophecy directed at Israel (Amos 2:6-16). In the first part of the prophecy (verses 6-8) directed to Israel, YHWH speaks about Israel and her sins. The sins mentioned are sins committed against their fellow Israelites. All the sins mentioned are related to social injustices done to fellow Israelites. In verses 9-11 YHWH's redemptive deeds in the history of the people are mentioned. Verse 9 mentioned the granting of the land, and it was reiterated again in the second part of verse 10. YHWH also sent the people prophets and Nazirites to guide the people in the ways YHWH wanted them to follow. In verse 10 YHWH addresses Israel, changing to the second person singular form ("I brought you up from Egypt"), thereby addressing Israel directly (Eidevall 2017:117). What YHWH did for his people in the past is contrasted with the acts of the people now living in the land. YHWH's act in the Exodus event is in fact rejected by Israel (Hadjiev 2009:58). In this prophecy Israel is reminded of YHWH's acts of deliverance in the history of the people. YHWH brought (עלה) his people up out of the land of Egypt. It is interesting and important to note that this is the only verb used in the book of Amos to describe the deliverance from Egypt (Amos 3:1; 9:7). The same verb is used in Exodus 3:8, 17 to describe this saving act of YHWH. The verb suggests a geographical connotation: moving from a lower place to a higher, mountainous one (Brueggemann 1997:176; Niehaus 1992:369; Van Leeuwen 1985:88). It indicates not only a movement from the land of Egypt to the promised land, but also, metaphorically speaking, a movement from a situation of slavery to one of freedom. YHWH exalted Israel from a condition where they were slaves in a foreign country to a promised land where they would no longer be slaves but could enjoy freedom. The people of Israel were brought out of the bondage of Egypt to live in the promised land. Israel is reminded that they were an oppressed people serving as slaves to a foreign nation (Amos 2:10), but now they have become the oppressors, not of another foreign nation but of their very own people (Amos 2:6-9), in the land granted to them. Instead of remaining a group of slaves, it is YHWH who made Israel into a people by delivering them from Egypt (Rudolph 1971:146). Garrett (2003:66) notes that the Amorites were mentioned twice, creating an "inclusion structure framed by reference to the expulsion of the Amorites. The implication is that Israel, too, could be expelled".
In other words, just as the Amorites were once destroyed by YHWH to make room for his people to occupy and make a living in the promised land, the same may now happen to Israel (Amos 2:13-16).

"Hear this word that the Lord has spoken against you, O people of Israel, against the whole family which I brought up out of the land of Egypt." (Amos 3:1)

Amos 3:1-2 introduces a new section in the book and can be seen as the introduction or prologue to the second part of the book (Amos 3-6). The unit commences with the familiar "Hear this word" (שמעו את-הדבר הזה) that is also found in Amos 4:1 and 5:1. The people are called "children of Israel", and then this is further explicated as "the whole family" (or clan), indicating the close relationship the Israelites have with one another as well as the close relationship with YHWH (Eidevall 2017:123). In Amos 3:1 the people are reminded that YHWH brought them out of Egypt. The same verb (עלה) is once again used to describe the deliverance from Egypt. The fact that the deliverance from Egypt is repeated here serves as a link with the preceding unit (2:6-16). In this case YHWH's act of salvation is used as the motivation for the prophecies uttered and the punishment announced upon the people in this part of the book. The deliverance from Egypt was an act of YHWH's saving power, grace, and kindness to them (Paul 1991:101; Niehaus 1992:375). It is because YHWH rescued his people that they are addressed about their iniquities. As is the case in Amos 2:10, YHWH's acts in the past on behalf of his people stand in stark contrast to their current behaviour.

"Proclaim to the strongholds in Assyria, and to the strongholds in the land of Egypt, and say, 'Assemble yourselves upon the mountains of Samaria, and see the great tumults within her, and the oppressions in her midst.'" (Amos 3:9)

Amos 3:9 serves as the introduction to a prophetic oracle stretching to verse 15, proclaiming devastation to Israel because of their neglect in terms of social justice. In Amos 3:9 the land of Egypt is mentioned together with Ashdod, known as a Philistine city. It seems odd to call upon a country (Egypt) and a city (Ashdod), with the city mentioned first, to witness the unrest and oppression prevailing in Samaria as the capital of the Northern Kingdom. It is also the only instance in the Old Testament/Hebrew Bible where Egypt and Ashdod are mentioned together. It must also be noted that the land of Egypt is called upon, but with no direct mentioning of the exodus from Egypt as was the case in Amos 2:10 and 3:1. The question to consider then is whether mentioning the land of Egypt in Amos 3:9 can be interpreted as a reference to the exodus tradition. To answer this question, one has to pay brief attention to the mentioning of Ashdod. It is interesting to note that the LXX has a different reading in this regard. The LXX reads "Assyria" instead of Ashdod, probably because Assyria would serve as a better parallel to Egypt as a political power than Ashdod as a city-state of the Philistines. Although there is support for the reading of the LXX as the preferred reading (Barthélemy 1992:647), the reading of the MT has to be retained (Wolff 1977:189; Van Leeuwen 1985:124; Paul 1991:115-116; Eidevall 2017:130). To mention Ashdod is a subtle allusion to the land promise and the eventual granting of the land. In Joshua 11:22; 13:3 and 15:47 Ashdod is especially mentioned in connection with the conquest of the land.
In Amos 3:9-15 the prophet announces the unthinkable possibility that Israel may lose the land they once occupied. In verse 11 it is said in no uncertain terms that "an adversary shall surround the land and strip you of your defence and your strongholds shall be plundered" (NRSV). The mentioning of Ashdod serves as a subtle reminder of the initial conquest of the land, in stark contrast to the threat of losing the land. Different answers have been given to the question as to why Egypt was called upon as a witness "to see the great unrest within her and the oppression among her people" (Amos 3:9).

• Some scholars (Mays 1976:63; Van Leeuwen 1985:124) are of the opinion that Ashdod and Egypt were neighbouring states and therefore they are mentioned together.
• Carroll (1992:193) thinks that the mentioning of Ashdod and Egypt has to do with an idea of some moral consensus, with the implication that even these two (pagan) nations, known for their acts of violence and injustice, would be shocked by the conditions they are about to observe in Samaria.
• Other scholars (Rudolph 1971:163; Paul 1991:115) think that Ashdod and Egypt were summoned because of the stipulation in Israelite law that requires at least two witnesses in a lawsuit as proof of reliable evidence (Deut 17:6; 19:15).
• Another solution to the problem was to view the call to these entities as a rhetorical device rather than a historical indication (Rudolph 1971:163; Deissler 1981:105).

It has already been argued that in both Amos 2:10 and 3:1 the deliverance from Egypt is mentioned. Keeping in mind that the exodus tradition was an important theological tradition associated especially with the Northern Kingdom of Israel, it is quite possible that mentioning Egypt in Amos 3:9 is also meant as a reminder of the miraculous exodus from Egypt. Mentioning Egypt here is also perfectly in line with the use of the exodus tradition in the two preceding passages where the Exodus from Egypt is mentioned. Israel, who would be well aware of the hardships of oppression by a foreign power, are now oppressing their own people (Snyman 1994:561). What makes it worse is that the foreign power who once acted as the oppressor is now summoned to witness the oppression executed by Israel on its own people. While the fortresses of Egypt and Ashdod will remain safe, the fortresses of Israel will be plundered (Amos 3:11), because the very fortresses mentioned were used to store up the gains gathered by violent means. In other words, the oppressed people were redeemed from oppression only to become oppressors themselves. Recently, Eidevall (2017:132) came to the same solution but added that Ashdod and Egypt were regarded as Israel's archenemies par excellence, and that will then be the reason why Ashdod and Egypt in particular were selected. The mentioning of Ashdod and Egypt brought to mind two prominent traditions. Ashdod recalls the conquest of the land, while Egypt served as a reminder of the exodus from Egypt as YHWH's act of salvation par excellence in the history of his people.

The Exodus turned against Israel

Amos 4:10

"I sent among you a pestilence after the manner of Egypt; I slew your young men with the sword; I carried away your horses; and I made the stench of your camp go up into your nostrils; yet you did not return to me," says the Lord.

Amos 4:10 is part of Amos 4:6-12, which is a prophecy of doom upon Israel delivered as a "parody of a priestly Torah" (Wolff 1977:211).
The prophecy is carefully structured in five strophes (verses 6, 7-8, 9, 10, 11), each of them starting with a first person singular verb introducing YHWH as the one speaking and ending with "Yet you have not returned to me, declares the LORD" (ולא-שבתם עדי נאם-יהוה). The prophecy pertains to different agricultural catastrophes that will be experienced: a famine (verse 6), a severe drought (verses 7-8), crop failure due to all kinds of pests (verse 9), a plague or pestilence (verse 10), and a national disaster of some sort comparable to what happened to Sodom and Gomorrah (verse 11). The land promised to the people of God will be a good land, a land flowing with milk and honey (Ex 3:8, 17; 13:5; 33:3; Num 13:27; Deut 6:3; 11:9; 26:9; Josh 5:6). It will be a land with abundant pasture for animals to provide milk and an equally abundant yield of produce to be harvested as a result of the fruitful soil. In Deuteronomy 8:7-10 the land is described as a land with more than enough water, where wheat and barley, vines and fig trees, pomegranates, olive oil and honey will be available and where nobody will lack anything. Indeed, during the time of Amos's activity as prophet, Israel experienced a time of peace and prosperity. There was little threat from foreign powers, and trade routes passing up and down Transjordan through Israelite territory contributed to a time of economic prosperity. Evidence of this era of economic bloom can be seen in the book of Amos itself. Some Israelites could afford both a summer and a winter house (Amos 3:15). Houses were luxuriously furnished and decorated with ivory. Moreover, they could afford to enjoy the finest quality food available (Amos 6:1-7). In Amos 4:10 YHWH says that he sent "a plague like/according to Egypt" (דבר בדרך מצרים). "Deber" or plague is not the only term used in the Exodus narrative to describe YHWH's actions against pharaoh and the Egyptians. YHWH will act with signs and wonders (אות ומופת), and indeed that was the case when the different wonders are described as wonders. The death of the firstborn is described with another word (נגע). Neither of these terms is used by Amos. It is a matter of dispute what exactly is meant by "a plague like/according to Egypt" (דבר בדרך מצרים). Does the term refer to the fifth wonder, the death of the livestock of the Egyptians? Or is it possible that this formulation has the tenth plague (the death of the firstborn) in mind? Another possibility is that it is a reference to the plagues in general. Scholarly opinion on this matter gave different answers to the problem of how this phrase has to be understood, and at least six possible solutions were proposed:

• Wolff (1977:221) thinks of Exodus 9:3-7, reminding the people of the wonder that killed all the livestock of Egypt.
• Paul (1991:147) opines that the phrase refers to both the disaster that struck the livestock (Ex 9:3-7) and the one that struck the population of Egypt (Ex 9:15).
• Rudolph (1971:179) is convinced that the phrase refers only to the death of the firstborn (Ex 12:29) and not to the fifth wonder.
• A fourth possibility is that "a plague like/according to Egypt" should be understood as a reference to the plagues in general and not to a specific plague, as was suggested by other scholars.
• Eidevall (2017:147) stated that the "passage alludes to the narrative in its entirety as a long series of calamities (including pestilence) followed by a defeat that wiped out the Egyptian army (Ex 7-15)". Nogalski (2011:303) agrees with this point of view when he states that verse 10 alludes "to the plagues YHWH used against the Egyptians to force Pharaoh to release Israel to Moses" (Ex 7-12).
• Niehaus (1992:401) does not relate this verse to the Exodus events in particular but interprets the verse against a covenantal background, referring to Deuteronomy 28:27, 60.

The fact that Egypt is mentioned specifically serves as an undeniable link to the Exodus tradition in Exodus 1-15. In Exodus 5:3 "deber" (דבר) is used in the speech made by Moses and Aaron to the Pharaoh. In this speech they stated that they wish to offer sacrifices to YHWH "or he may strike us with plagues (דבר) or with the sword". It is important to note that "deber" (דבר) and the sword are mentioned together here, as is the case in Amos 4:10. With the announcement of the death of the livestock of the Egyptians as the fifth plague, "deber" (דבר) is used again to describe the plague. In Exodus 9:15 "deber" (דבר) is used to indicate a plague in general when YHWH says: "For by now I could have stretched out my hand and struck you and your people with a plague (דבר) that would have wiped you off the earth". The term "deber" (דבר) is also used in Leviticus 26:25 with no mention of the Exodus, but "deber" (דבר) is once again connected to the sword (cf. Ex 5:3; Amos 4:10). In Numbers 14:12 the people are threatened with a plague ("deber", דבר) that will destroy them. The reason for this action from YHWH is given in the previous verse, where YHWH accuses his people of not believing in him even in spite of all the miraculous signs (האתות) YHWH performed among them. The term "otot" (אתות) is also used to describe the wonders experienced in the plagues that were brought upon Egypt. In Deuteronomy 28:21 the people are threatened with a "deber" (דבר) that will destroy them should they be disobedient in the promised land. Amos 4:10 is not only linked to the Exodus; it is more specifically linked to the so-called plague narrative of Exodus. The term "deber" (דבר) used in Amos 4:10 does not refer to the fifth plague or the death of the firstborn, but should be interpreted as referring to the total event of the plague narrative. In the prelude to the signs and wonders to be played out in ten so-called plagues, the plagues and the sword are foreseen in Exodus 5:3. Amos 4:10 recalls the general reference to plagues and the sword as mentioned in Exodus 5:3 rather than a reference to specific plagues. A powerful contrast is created by the mentioning of the plagues in the Exodus tradition. In the Exodus tradition the plagues were aimed at the Egyptians. Time and again in the plague narrative it is said that the plagues did not harm the Israelites (Ex 8:22; 9:7; 9:26; 12:12-13). In Amos 4:10 the plagues are directed at the Israelites, the people of God. The people who were once guarded from the devastating effects of the plagues will now suffer from plagues. Israel, as the people of God, is punished in the same way as the Egyptians, a foreign nation. In short, the exodus is reversed (Hubbard 1989). Secondly, the plagues were witnessed by the Israelites on foreign soil in Egypt. Now they will experience the plagues in the promised land (Snyman 2006:141).
The promised land was supposed to be a land of bounty and fertility. Exactly the opposite will happen now, where the people will suffer all kinds of agricultural catastrophes comparable to what happened once in Egypt. Nogalski (2011:303) is right when he observes that YHWH's past actions of salvation are now turned against Israel. Instead of salvation brought about as a result of the plagues, devastation will now be the result of the plagues. In the plague narrative it is repeatedly said that the Pharaoh and the Egyptians had to suffer the plagues brought upon them because of their disobedience to YHWH in refusing to let his people go and worship him. Israel will suffer the plagues because they were also disobedient to the voice of God, as is implied in the refrain no less than five times in this unit: "you have not returned to me, says the LORD" (Amos 4:6, 8, 9, 10, 11).

Amos 9:7 is part of the second-to-last unit in the book, consisting of Amos 9:7-10. Amos 9:7 mentions the exodus of Israel from the land of Egypt. As was noted earlier, the only verb (עלה) used in the book of Amos to refer to the Exodus event is used again, as was the case in Amos 2:10 and 3:1. YHWH brought his people up from Egypt to the promised land. What is new in this verse is that the Exodus event is put in relief alongside other similar migrations of other nations. Three different nations are mentioned: the children of Cush, the Philistines, and the Arameans. Mentioning these nations gave rise to the question of why these three nations in particular were mentioned.

The Exodus event not unique to Israel

Viewing the text from a literary perspective, it is interesting to note that Cush is mentioned first, followed by Israel, while the Philistines and the Arameans are mentioned last. Israel is sandwiched in between the other nations mentioned. The verse displays a "kind of envelope structure", starting with a foreign nation (Cush) and concluding with two other foreign nations, with Israel at the centre (Strawn 2013:112). Strawn (2013:115), in his thorough poetic analysis of 9:7, concluded that a significant number of poetic devices indicate that the three lines of this verse belong together. The only verb used in this verse (עלה) "stands quite literally at the centre of the unit" (Strawn 2013:115) and performs "a triple duty having Israel, the Philistines and Aram as direct objects" (Niehaus 1992:486). It is also interesting to note that although Cush is not mentioned in the judgement speeches at the beginning of the book, the Philistines are addressed in the second judgement speech in Amos 1:6-8, while the Arameans are addressed in the first judgement speech in Amos 1:3-5. Considering the possibility that Amos 9:11-15 is most likely a later addition to the book, a chiastic structure emerges from the mentioning of the Philistines and Arameans at the end of the book: (A) Arameans (1:3-5); (B) Philistines (1:6-8); (B') Philistines (9:7); (A') Arameans (9:7). Why Cush is mentioned remains an intriguing question for scholars (Holter 2015:306-318). Smith (1994:36-37) listed no less than ten different answers to this question. Viewing the three nations from a geographical point of view, a universal perspective is created: Israel is the Northern Kingdom, Cush is to be found south of Egypt, Caphtor lies to the west and Kir to the east. YHWH acted on a worldwide scale, directing the movement of different nations. YHWH indeed rescued his people from Egypt, but he guided and directed the histories of other nations as well (Smith 1994:47).
The people of Cush were held in high regard during the time of the ministry of Amos. The Cushites were able to subdue the Egyptians during the same period in which Amos was a prophet to the Northern Kingdom (Strawn 2013:116-121). The Philistines and the Arameans are known as archenemies of Israel (Wolff 1977:347; Van Leeuwen 1985:333). The Arameans, according to 9:7, were once brought from Kir, like the Israelites were brought from Egypt. According to Amos 1:5 they will be exiled to Kir. It seems that Aram will be returned by military force to their place of origin, from where they were once brought by the power of YHWH. Eidevall (2017:236) mentions the possibility that this might be a hint to Israel/Judah that YHWH just might be prepared to reverse the Israelite exodus tradition. This passage brings together two vital aspects of who God is in the book of Amos. On the one hand there is the firm conviction that the God of Israel is not only a local God served by the Israelites. The God of Israel is also the universal God. This can be seen in Amos 1 and 2, where the moral behaviour of foreign nations does concern YHWH. Social injustices committed by foreign people matter to YHWH. The idea of God as the universal God is expressed again at the end of the book in 9:7, where YHWH's action is extended from a local to a universal perspective (Thang 2011:187; Wood 2002:90). On the other hand, it was YHWH who brought his people from Egypt to the promised land. The all-important saving event in the history of God's people is relativized as simply one event amongst other similar events. YHWH delivered his people from Egypt, but he also brought other people to their respective countries. It is noteworthy that the verb used to describe the deliverance of Israel from Egypt is applied to the Philistines and the Arameans as well. It is equally important to note that Amos 9:7 does not say that YHWH entered into a special relationship with the three nations mentioned, as is the case in Amos 3:1-2, where it is said that he only knew (ידע) Israel "of all the families of the earth". In a novel way Amos reinterpreted the Exodus events and applied them to a different situation sometime during the eighth century BC/E. The way he did it was to relativize this all-important event by comparing the Exodus event with other similar events that happened in the history of other nations in and through YHWH's guidance. What is even more disturbing is the fact that while no judgement is pronounced upon the three nations, Israel, however, is described as "the sinful kingdom" that will be destroyed (Amos 9:8).

The historical context of the book of Amos

Determining the historical background of a prophetic book is a risky undertaking. Prophetic books have been edited and updated, so that a book seldom reflects a single historical time. For the purposes of this paper, the historical background reflected in the book is taken as that of the 8th century BC/E in Israel, the Northern Kingdom. Eidevall (2017:16) confirms this consensus among scholars when he states that "With few exceptions they concentrate on one particular period in the history of the kingdom of Israel, namely the last decades of the reign of Jeroboam II (ca 787-747 BCE)". In Amos 1:1 it is stated that the prophecies in this book were delivered during the time of Jeroboam II, son of Joash, the king of Israel (787/6-747/6 BC/E). The prophetic activity of the prophet can be dated toward the end of Jeroboam's reign, somewhere around 750 BC/E.
There was little threat from foreign nations, and consequently it was a time of relative peace for both the Northern Kingdom and Judah, paving the way for economic prosperity (Smith 1994:39-40). Evidence of this time of prosperity can be seen in Amos 3:15, where it is said that some Israelites had both a summer and a winter house, and that these houses were luxuriously furnished and decorated with ivory, while their owners were feasting on fine quality food (Amos 6:1-7). However, having said this, it is also evident that the book was addressed to Judah at a later stage in history. Amos 1:2 states for instance that YHWH roars from Zion (see also Amos 6:1), a tradition more associated with Judah than with Israel. The last part of the book (Amos 9:11-15) speaks of the fallen tent of David that will be restored, once again a tradition that is more at home in a Judean context of exilic and/or even post-exilic times. Recently, Eidevall (2017:18-20) suggested no less than six possible historical contexts for the book of Amos.

Conclusion

The Exodus tradition in the book of Amos is utilized in different ways. The investigation yielded the following results: First, the Exodus from Egypt is used as the motive for the prophecies directed at Israel. YHWH's redemptive actions in the past are put in juxtaposition to the current conditions of the people living in the promised land. Secondly, the Exodus is turned against Israel. What happened once to the Egyptians will now happen to the people of God. Just as the Egyptians had to suffer plagues, the Israelites will now suffer plagues. Thirdly, the Exodus event is relativized as an event not unique to Israel. The Exodus is relativized in the sense that it "was not a unique historical-theological event, but rather a divine routine, to transfer nations from one land to another" (Hoffman 1989:181). Barton (2012:72) noted in this regard that "precisely because YHWH is a universal God, all the movements of the nations come about through his devising". While it is true that YHWH has caused the exodus, it is also true that "he is responsible for all the movements of peoples on the face of the earth" (Barton 2010:191). The three ways in which the exodus tradition features in the book of Amos reveal a "Steigerung", an intensification: at first the exodus tradition is used as the motivation for the prophecies directed to the people. The second use intensifies the use of the exodus tradition in the sense that the exodus tradition is now turned against Israel, while it is in actual fact one of the major redemptive acts of YHWH in the history of Israel/Judah. In the last instance the intensification is increased even further. The exodus was, after all, not a unique event peculiar only to the people of YHWH. In fact, what seemed to be the unique saving event in the history of the people is now relativized by the mentioning of similar events that happened to other people as well. The book of Amos is well aware of the Exodus tradition. The Exodus tradition is utilized to confirm the major redemptive act of YHWH in the history of Israel, and it is then contrasted with the current situation in the promised land. The book does not only make use of the tradition, it also interprets the tradition and applies it to a new situation. In the plague narrative Egypt was the victim, but in the book of Amos the people of God will suffer the fate previously reserved for the Egyptians.
Rather than viewing Amos 4:10 as a possible allusion to the exodus "but not indicative of Amos's view on this tradition" (Hoffman 1989:177), it seems better to see the reference to the plagues as an interpretation of the plague narrative that applies it to a new situation. In a similar way, the Exodus, which was thought of as a unique event that only happened to Israel, is now put in a broader perspective to reveal the surprising insight that other nations also experienced what may be called exodus events. To claim that the exodus tradition was rejected in the book of Amos (Hoffman 1989:181) is to press the matter a bit too far. It rather seems that the book affirms the exodus tradition so well known in the Northern Kingdom, reminding the people of this famous event, and then utilizes it in an innovative way for a new era. The deliverance from the land of Egypt is contrasted with living in the promised land. The familiar event of the miraculous exodus from Egypt gained an unexpected and disturbing message in the book of Amos.
2021-09-01T15:05:37.215Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "8608653db7a95fb910026869d939dd92db828dde", "oa_license": "CCBY", "oa_url": "https://ojs.reformedjournals.co.za/stj/article/download/2235/3091", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1bc8fb4f63bf39514a55c237d35134aff90eb2b4", "s2fieldsofstudy": [ "History" ], "extfieldsofstudy": [ "History" ] }
246240534
pes2o/s2orc
v3-fos-license
Well-posedness for a Navier-Stokes-Cahn-Hilliard System for Incompressible Two-phase Flows with Surfactant

We investigate a diffuse-interface model that describes the dynamics of incompressible two-phase viscous flows with surfactant. The resulting system of partial differential equations consists of a sixth-order Cahn-Hilliard equation for the difference of the local concentrations of the binary fluid mixture coupled with a fourth-order Cahn-Hilliard equation for the local concentration of the surfactant. The former has a smooth potential, while the latter has a singular potential. Both equations are coupled with a Navier-Stokes system for the (volume averaged) fluid velocity. The evolution system is endowed with suitable initial conditions, a no-slip boundary condition for the velocity field and homogeneous Neumann boundary conditions for the phase functions as well as for the chemical potentials. We first prove the existence of a global weak solution, which turns out to be unique in two dimensions. Stronger regularity assumptions on the initial data allow us to prove the existence of a unique global (resp. local) strong solution in two (resp. three) dimensions. In the two dimensional case, we can derive a continuous dependence estimate with respect to the norms controlled by the total energy. Then we establish instantaneous regularization properties of global weak solutions for $t>0$. In particular, we show that the surfactant concentration stays uniformly away from the pure states $0$ and $1$ after some positive time.

Surfactant-induced gradients of surface tension may also produce Marangoni flows, which are phenomenologically different from the temperature-driven ones. The rich phenomena induced by surfactants have been exploited extensively in science and have led to many applications in Engineering (see, e.g., [31]). The dynamics of a binary fluid mixture in the presence of a surfactant can be effectively modeled through the so-called diffuse-interface (or phase-field) approach [3]. Within this framework, various models have been proposed in the literature, which account for the rich microstructures in the mixture as well as for complicated morphological changes of the interfaces. Possibly the first one among them, although neglecting hydrodynamical effects, dates back to the work by Laradji et al. [27] (see also [26]), where the authors investigated the dynamics of phase separation using an evolution system derived from a suitable Ginzburg-Landau free energy functional depending on two order parameters: one for the difference in the local concentrations of the two immiscible components (denoted by $\phi$) and the other for the local concentration of the surfactant (denoted by $\rho$). The resulting system consists of two (weakly) coupled Allen-Cahn type equations subject to thermal noises. However, in the past years, the structure of the free energy functional has been debated and refined, leading to a variety of descriptions. In order to motivate our choice, we present a brief review of a number of models, at first without considering hydrodynamical effects.

Let $\Omega \subset \mathbb{R}^d$, $d = 2, 3$, be a bounded domain with smooth boundary $\partial\Omega$. The starting point is a coarse-grained model based on a two-component Ginzburg-Landau free energy functional of possibly the simplest form:

$$E(\phi,\rho) = \int_\Omega \Big[ \frac{k_1}{2}|\nabla\phi|^2 + \frac{k_2}{2}|\nabla\rho|^2 + F_0(\phi) + F_1(\rho) + F_{\mathrm{int}}(\phi,\rho,\nabla\phi) \Big]\,dx,$$

where $k_1, k_2 \ge 0$. In [27], the parameter $k_2$ was taken to be zero as a physically reasonable approximation, since the energy cost of the fluid-surfactant attachment is small (see also [25]).
Besides, the potential energy densities $F_0$ and $F_1$ are modeled by some double-well polynomial functions of the concentrations, while the interaction energy density is given by

$$F_{\mathrm{int}}(\phi,\rho,\nabla\phi) = -\frac{\theta}{2}\,\rho|\nabla\phi|^2 + p(\phi,\rho),$$

where $\theta > 0$ is a given phenomenological parameter and $p$ is a bivariate polynomial. The first coupling term favors the surfactant to reside at the free interfaces between the two fluids, while $p$ is suitably chosen to penalize the presence of free surfactant in the domain. Nonetheless, as mentioned in [20] (see also [25]), the energy functional proposed in [27] may not be well defined, since it is not bounded from below for large values of the surfactant concentration $\rho$ at the interfaces. For this reason, in [25] the authors proposed a slight modification of the energy $E$ (with $k_2 = 0$) by including a regularizing term, namely the density $k_3|\Delta\phi|^2$ with $k_3 > 0$, while $F_0$ and $F_{\mathrm{int}}$ are the same as those in [27], except that $p \equiv 0$. The additional term $k_3|\Delta\phi|^2$ corresponds to the second-order term in the expansion of a free energy density in the region of nonuniform composition for a binary mixture (see, e.g., [7]). In particular, the potential $F_1$ takes the form $F_1(\rho) = \rho^2(\rho-1)^2$, where the minimum state $\rho = 0$ means that the interfacial layer is occupied by the two-component mixture and there is no surfactant in the local volume, while the normalised state $\rho = 1$ indicates that the interface is fully saturated with the surfactant. On the other hand, in [37], the authors did not add any higher-order regularizing term in the energy functional, but for the surfactant they chose an entropy term of the form $\rho\ln\rho + (1-\rho)\ln(1-\rho)$. Besides, the potentials $F_0$ and $F_{\mathrm{int}}$ were kept almost unchanged with respect to [27], with $p(\phi,\rho) = \frac{1}{2}W\rho\phi^2$ (for some $W > 0$), which counteracts the occurrence of free surfactant and serves as an enthalpic contribution for numerical reasons. The entropy term has the advantage that it guarantees that the order parameter $\rho$ for the surfactant takes its values in the physically relevant interval $[0,1]$. However, it has been pointed out in [10] that, when $k_2 = 0$, there may exist a relevant set of initial data for which the resulting problem is ill-posed. Moreover, therein the authors suggested replacing the entropy term by a Flory-Huggins type potential (see, e.g., [11,23]). Therefore $F_1$ becomes a logarithmic potential of Flory-Huggins type, depending on constants $c_2 \in \mathbb{R}$ and $c_3 > 0$. We note that the choice $c_3 > 0$ is equivalent to assuming $k_2 > 0$. This term fits with the classical diffuse-interface description of binary mixtures [7,8,32]. The phase-field model can further include hydrodynamical effects through a suitable coupling with a system of Navier-Stokes equations for the (volume) averaged velocity $u$ of the fluid mixture. To this end, one can add a term related to the kinetic energy, $\int_\Omega k_4|u|^2\,dx$ for some $k_4 > 0$, to the energy functional $E(\phi,\rho)$. Then the full hydrodynamical coupled system of evolutionary partial differential equations can be derived via a variational method. It consists of two convective Cahn-Hilliard type equations and the Navier-Stokes system subject to capillary forces; see, for instance, [33] and also [10] for the case $k_3 = 0$.

In light of the above considerations, throughout this paper we shall work with the following energy functional for the binary fluid-surfactant system:

$$E(\phi,\rho) = \int_\Omega \Big( \frac{\alpha}{2}|\Delta\phi|^2 + \frac{1}{2}|\nabla\phi|^2 + S_\phi(\phi) - \frac{\theta}{2}\,\rho|\nabla\phi|^2 + \frac{\beta}{2}|\nabla\rho|^2 + S_\rho(\rho) \Big)\,dx, \tag{1.1}$$

where $\alpha, \beta, \theta$ are positive constants. The potential function $S_\phi$ for $\phi$ is assumed to be a regular one with double-well structure, whose typical form is

$$S_\phi(s) = \frac{1}{4}(s^2-1)^2, \tag{1.2}$$

while the surfactant potential $S_\rho$ is assumed to be a singular one. For instance, it can be the Flory-Huggins potential

$$S_\rho(s) = \theta_1\big[s\ln s + (1-s)\ln(1-s)\big] + \theta_2\, s(1-s), \qquad s \in (0,1), \tag{1.3}$$

where $\theta_1 > 0$ and $\theta_2 \in \mathbb{R}$.
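To see concretely why (1.1) remains bounded from below once $\rho \in [0,1]$, the following minimal Python sketch evaluates the (reconstructed) energy density of (1.1)-(1.3) pointwise on randomly sampled admissible states. All constants, sampling ranges and function names are illustrative assumptions, not values or code from the paper:

```python
import numpy as np

# Illustrative model constants (not taken from the paper); note theta < 1.
alpha, beta, theta = 1.0, 1.0, 0.5
theta1, theta2 = 0.3, 1.0

def S_phi(phi):
    # Regular double-well potential, typical form (1.2).
    return 0.25 * (phi**2 - 1.0)**2

def S_rho(rho):
    # Flory-Huggins potential (1.3); finite only for 0 < rho < 1.
    return theta1 * (rho*np.log(rho) + (1 - rho)*np.log(1 - rho)) + theta2 * rho*(1 - rho)

def energy_density(lap_phi, grad_phi2, phi, grad_rho2, rho):
    # Pointwise integrand of E(phi, rho) in (1.1).
    return (0.5*alpha*lap_phi**2 + 0.5*grad_phi2 + S_phi(phi)
            - 0.5*theta*rho*grad_phi2 + 0.5*beta*grad_rho2 + S_rho(rho))

# Sample admissible states: rho strictly inside (0, 1), arbitrary phi and gradients.
rng = np.random.default_rng(0)
n = 10**5
rho  = rng.uniform(1e-3, 1 - 1e-3, n)
phi  = rng.uniform(-2.0, 2.0, n)
gph2 = rng.uniform(0.0, 10.0, n)
grh2 = rng.uniform(0.0, 10.0, n)
vals = energy_density(0.0, gph2, phi, grh2, rho)
print("min sampled energy density:", vals.min())  # stays bounded from below
```

With $\theta < 1$, the negative coupling $-\frac{\theta}{2}\rho|\nabla\phi|^2$ is absorbed by the gradient term $\frac{1}{2}|\nabla\phi|^2$ precisely because $0 \le \rho \le 1$, which is the mechanism exploited in Remark 3.1 below.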
In (1.1), we simply set the coupling polynomial $p$ to zero since, from the mathematical point of view, the physically interesting cases considered in [10,27,37] can be easily controlled by the potential functions $S_\phi$ and $S_\rho$. The double-well regular potential (1.2) is a well-known approximation of the Flory-Huggins potential. This does not ensure that $\phi$ takes values in its physical range $[-1,1]$, due to the loss of the maximum principle for the higher-order parabolic equation. Yet, in models as well as in numerical simulations of immiscible fluids, this approximation is easy to handle and has been widely used. Then a natural question is: why do we not assume a Flory-Huggins type potential for $\phi$ as well? Our consideration is as follows. Observe that the evolution of $\phi$ is described by a sixth-order Cahn-Hilliard equation (see (1.4)). However, this kind of equation with a singular potential is rather difficult to handle. Indeed, even the existence of a weak solution in the usual sense remains an open problem (see Remark 3.3). Only the existence of a weaker solution has been established, by replacing the equation with a suitable variational inequality [28] (see also [35,36] for the analysis of some other sixth-order Cahn-Hilliard type equations with singular potentials). On the other hand, one might think of taking $\alpha = 0$ and using a singular potential for $\phi$. Then the problem is to take care of the nonlinear coupling due to the term $\frac{\theta}{2}\rho|\nabla\phi|^2$, which is highly nontrivial (see Remark 2.2 and [34] for a related problem). The system with a singular potential for $\phi$ is interesting and will be the subject of a further investigation. Therefore, in this work, we confine ourselves to the case of a regular potential for $\phi$, which seems a reasonable choice in order to prove a number of theoretical results (see also [37] for remarks about modeling). All the phase-field models mentioned above have gained particular interest as far as numerical simulations are concerned. For example, the models with regular potentials have been numerically investigated in [41,42,43,44]. However, it was also noted in [41] that even this modification does not simplify a rigorous proof that the resulting energy functional is bounded from below. This problem is left unanswered there, and the authors chose to introduce an artificial modification of the regularizing higher-order term, in order to provide a simple, yet rigorous, proof that the energy functional is bounded from below (provided that some solution exists). Instead, as we shall prove rigorously in this paper, the fact that $\rho$ takes its values within the physical range $[0,1]$ guarantees the boundedness from below of the energy functional (1.1).

On account of (1.1), assuming the two-phase flow to be isothermal and incompressible with matched densities, the system we want to analyze here, on some time interval $[0,T]$, $T > 0$, is the following:

$$
\begin{aligned}
&\partial_t u + u\cdot\nabla u - \nabla\cdot\big(2\nu(\phi,\rho)Du\big) + \nabla\pi = \mu\nabla\phi + \psi\nabla\rho, \qquad \nabla\cdot u = 0 && \text{in } \Omega\times(0,T),\\
&\partial_t\phi + u\cdot\nabla\phi = \Delta\mu && \text{in } \Omega\times(0,T),\\
&\mu = \alpha\Delta^2\phi - \Delta\phi + S'_\phi(\phi) + \theta\nabla\cdot(\rho\nabla\phi) && \text{in } \Omega\times(0,T),\\
&\partial_t\rho + u\cdot\nabla\rho = \Delta\psi && \text{in } \Omega\times(0,T),\\
&\psi = -\beta\Delta\rho + S'_\rho(\rho) - \tfrac{\theta}{2}|\nabla\phi|^2 && \text{in } \Omega\times(0,T).
\end{aligned} \tag{1.4}
$$

System (1.4) is subject to the following boundary and initial conditions:

$$
\begin{cases}
u = 0 & \text{on } \partial\Omega\times(0,T),\\
\partial_n\phi = \partial_n\Delta\phi = \partial_n\mu = 0 & \text{on } \partial\Omega\times(0,T),\\
\partial_n\rho = \partial_n\psi = 0 & \text{on } \partial\Omega\times(0,T),\\
u|_{t=0} = u_0, \quad \phi|_{t=0} = \phi_0, \quad \rho|_{t=0} = \rho_0 & \text{in } \Omega,
\end{cases} \tag{1.5}
$$

where the vector $n = n(x)$ denotes the unit outer normal to $\partial\Omega$. We recall that $\phi = \phi(x,t)$ stands for the difference in the local concentrations of the two immiscible fluid components, while $\rho = \rho(x,t)$ denotes the local surfactant concentration.
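To get a feeling for the dynamics encoded in (1.4), here is a toy, velocity-free one-dimensional analogue with periodic boundary conditions, integrated by a semi-implicit pseudo-spectral scheme. This is only a sketch under strong simplifying assumptions: the paper works with Neumann conditions on a bounded domain, retains the Navier-Stokes coupling, and never clips $\rho$ (the clipping below crudely mimics the regularization $S_{\rho,\varepsilon}$ introduced later); all parameter values are illustrative.

```python
import numpy as np

# Toy 1D, periodic, velocity-free version of the two Cahn-Hilliard equations in (1.4).
N, L, dt, steps = 256, 2*np.pi, 1e-4, 2000
alpha, beta, theta, th1, th2 = 1e-2, 1e-2, 0.5, 0.3, 1.0
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, L/N) * 2*np.pi
ik, k2 = 1j*k, k**2

phi = 0.1*np.cos(x) + 0.05*np.cos(3*x)   # order parameter
rho = 0.5 + 0.05*np.cos(2*x)             # surfactant concentration in (0, 1)

def dSrho(r):
    # Derivative of the Flory-Huggins potential; the clipping crudely plays the
    # role of the Taylor regularization S_{rho,eps} used in the existence proof.
    r = np.clip(r, 1e-6, 1 - 1e-6)
    return th1*(np.log(r) - np.log(1 - r)) + th2*(1 - 2*r)

for _ in range(steps):
    ph, rh = np.fft.fft(phi), np.fft.fft(rho)
    phix = np.fft.ifft(ik*ph).real
    # Nonlinear parts of the chemical potentials mu and psi
    Nphi = phi**3 - phi + theta*np.fft.ifft(ik*np.fft.fft(rho*phix)).real
    Nrho = dSrho(rho) - 0.5*theta*phix**2
    # Semi-implicit step: stiff linear terms (orders six and four) treated implicitly
    ph = (ph - dt*k2*np.fft.fft(Nphi)) / (1 + dt*(alpha*k2**3 + k2**2))
    rh = (rh - dt*k2*np.fft.fft(Nrho)) / (1 + dt*beta*k2**2)
    phi, rho = np.fft.ifft(ph).real, np.fft.ifft(rh).real

print("means of phi, rho (conserved):", phi.mean(), rho.mean())
```

The stiff linear operators of order six (for $\phi$) and four (for $\rho$) are inverted exactly in Fourier space, which is what keeps the sixth-order equation tractable while treating only the nonlinearities explicitly.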
The velocity field $u = u(x,t)$ is taken as the volume-averaged velocity of the binary fluid mixture, which is equivalent to the mass-averaged velocity, since we only consider the case of matched densities here. The symmetric tensor $Du = \frac{1}{2}(\nabla u + (\nabla u)^T)$ denotes the strain rate, and the scalar function $\pi = \pi(x,t)$ stands for the (modified) pressure. The latter can be viewed as a Lagrange multiplier corresponding to the incompressibility condition $\nabla\cdot u = 0$ for the fluid. The chemical potentials corresponding to $\phi$ and $\rho$ are denoted by $\mu = \mu(x,t)$ and $\psi = \psi(x,t)$, respectively; they can be obtained as variational derivatives of the free energy functional. We note that when the parameter $\theta \neq 0$, the homogeneous Neumann boundary condition for $\mu$ (resp. $\psi$) is not equivalent to $\partial_n\Delta^2\phi = 0$ (resp. $\partial_n\Delta\rho = 0$) on $\partial\Omega$. For the sake of simplicity, the density as well as the mobilities and other physical constants are assumed to be equal to one, but we allow the binary fluid mixture to have an unmatched kinematic viscosity $\nu = \nu(\phi,\rho)$. As we shall see, even if the potential $S_\phi$ is regular, the higher-order regularizing term in the energy functional entails the global boundedness of $\phi$ (albeit not necessarily by $1$).

Our goal is to provide a first-step theoretical analysis of the initial boundary value problem (1.4)-(1.5). More precisely, we first prove the existence of a global weak solution in both two and three dimensions, and this solution is indeed unique in dimension two (see Theorem 2.1). Then we establish the existence of a (unique) strong solution, which is local in time in dimension three and global in time in dimension two (see Theorem 2.2). Further results can be obtained in dimension two. First, we derive a continuous dependence estimate for strong solutions $(u,\phi,\rho)$ with respect to the norms in $\mathbf{L}^2(\Omega)\times H^2(\Omega)\times H^1(\Omega)$, which correspond to the energy norms associated with (1.1) (see Theorem 2.3). Next, we show that every global weak solution regularizes in finite time and that the strict separation property holds for the surfactant concentration $\rho$ (see Theorem 2.4). The latter implies that $\rho$ stays uniformly away from the pure states $0$ and $1$ for positive times (cf. [17,30], see also [18,19,22]). This also holds for the strong solution on $[0,+\infty)$ (see Proposition 5.1). For the proofs, we shall take advantage of what has been done in [19] for the Navier-Stokes-Cahn-Hilliard system with singular potential (see also [1,22]). Nevertheless, extra efforts are required to overcome the mathematical difficulties due to the complicated nonlinear coupling structure of problem (1.4)-(1.5). Theoretical analyses of fluid-surfactant type systems (yet different from the one of interest in this paper) have been conducted, starting from sharp interface models (see, e.g., [16,38]), typically investigating only the existence of weak solutions (see [2] and references therein). It is also worth mentioning that other phase-field models for mixtures with surfactant have been analyzed theoretically or numerically (see, for instance, [12] for a stationary model and [45,46] for hydrodynamic problems involving moving contact lines and non-constant density).
Besides the possibility of considering a Flory-Huggins potential also for $\phi$ (see above), other interesting future issues include, for instance, the long-time behavior of global weak/strong solutions (existence of global/exponential attractors and convergence to a single equilibrium as $t \to +\infty$), the rigorous mathematical analysis of extended systems with non-constant (or even degenerate) mobilities, dynamic boundary conditions (moving contact lines), as well as non-constant densities. Also, suitable optimal control problems could be formulated and analyzed.

Plan of the paper. In Section 2, we first introduce some notation and the functional setting. Subsection 2.2 is devoted to illustrating the weak formulation of problem (1.4)-(1.5) and to stating the main results. Proofs of well-posedness results are given in Section 3 (existence of global weak solutions and uniqueness of weak solutions in dimension two) and Section 4 (existence and uniqueness of strong solutions). In the final Section 5, when $d = 2$, we derive a continuous dependence estimate for strong solutions in the norms controlled by (1.1) and then establish the regularization property for weak solutions. In particular, we show the validity of the strict separation property for $\rho$.

Preliminaries

We first introduce the function spaces and recall some known results in functional analysis. Let $X$ be a (real) Banach space. Its dual space is denoted by $X^*$, and the duality pairing between $X$ and $X^*$ will be denoted by $\langle\cdot,\cdot\rangle_{X^*,X}$. Given an interval $I$ of $\mathbb{R}$, we introduce the function space $L^p(I;X)$ with $p \in [1,+\infty]$, which consists of Bochner measurable $p$-integrable functions with values in the Banach space $X$. The boldface letter $\mathbf{X}$ denotes the vector-valued (resp. matrix-valued) space $X^d$ (resp. $X^{d\times d}$), endowed with the corresponding norms. For the standard Lebesgue and Sobolev spaces, we use the notations $L^p := L^p(\Omega)$ and $W^{k,p} := W^{k,p}(\Omega)$ for any $p \in [1,+\infty]$ and $k > 0$, equipped with the norms $\|\cdot\|_{L^p(\Omega)}$ and $\|\cdot\|_{W^{k,p}(\Omega)}$. When $p = 2$, we denote $H^k(\Omega) := W^{k,2}(\Omega)$ with norm $\|\cdot\|_{H^k(\Omega)}$. The norm and inner product on $L^2(\Omega)$ are simply denoted by $\|\cdot\|$ and $(\cdot,\cdot)$, respectively. The spaces $H^2_N(\Omega)$ and $H^4_N(\Omega)$, consisting of functions subject to homogeneous Neumann boundary conditions, are defined as

$$H^2_N(\Omega) = \big\{u \in H^2(\Omega) : \partial_n u = 0 \text{ a.e. on } \partial\Omega\big\}, \qquad H^4_N(\Omega) = \big\{u \in H^4(\Omega) : \partial_n u = \partial_n\Delta u = 0 \text{ a.e. on } \partial\Omega\big\}.$$

For every $f \in (H^1(\Omega))^*$, we denote by $\overline{f}$ its generalized mean value over $\Omega$, that is, $\overline{f} = |\Omega|^{-1}\langle f,1\rangle_{(H^1)^*,H^1}$. If $f \in L^1(\Omega)$, then its mean value is simply given by $\overline{f} = |\Omega|^{-1}\int_\Omega f\,dx$. In the subsequent analysis, we will use the well-known Poincaré-Wirtinger inequality

$$\|f - \overline{f}\| \le C_P\|\nabla f\| \qquad \forall\, f \in H^1(\Omega),$$

where $C_P$ is a constant depending only on $\Omega$. We introduce the spaces $L^2_0(\Omega) := \{f \in L^2(\Omega) : \overline{f} = 0\}$ and $V_0 := H^1(\Omega)\cap L^2_0(\Omega)$. Then we see that $f \mapsto (\|\nabla f\|^2 + |\overline{f}|^2)^{1/2}$ is an equivalent norm on $H^1(\Omega)$, while $f \mapsto \|\nabla f\|$ is an equivalent norm on $V_0$. Besides, we recall the following elliptic estimates: if $\Omega$ is a bounded domain with a $C^4$-boundary, then

$$\|u\|_{H^2(\Omega)} \le C\big(\|\Delta u\| + \|u\|\big) \quad \forall\, u \in H^2_N(\Omega), \qquad \|u\|_{H^4(\Omega)} \le C\big(\|\Delta^2 u\| + \|u\|\big) \quad \forall\, u \in H^4_N(\Omega).$$

In all cases, the constant $C > 0$ only depends on $\Omega$ and $d$, but is independent of $u$. Consider now the realization of $-\Delta$ with homogeneous Neumann boundary condition, that is, the linear operator $A_N \in \mathcal{L}(H^1(\Omega),H^1(\Omega)^*)$ defined by

$$\langle A_N u, v\rangle := (\nabla u, \nabla v) \qquad \forall\, u,v \in H^1(\Omega).$$

Then the restriction of $A_N$ from the linear space $V_0$ onto $V_0^*$ is an isomorphism. In particular, $A_N$ is positive definite on $V_0$ and self-adjoint.
We denote its inverse map by $\mathcal{N} := A_N^{-1} : V_0^* \to V_0$. It is straightforward to verify that

$$\langle A_N u, \mathcal{N}g\rangle = \langle g, u\rangle, \qquad \langle g, \mathcal{N}h\rangle = \langle h, \mathcal{N}g\rangle = (\nabla\mathcal{N}g, \nabla\mathcal{N}h) \qquad \forall\, u \in V_0,\ \forall\, g,h \in V_0^*.$$

Also, we have the chain rule

$$\frac{d}{dt}\|f\|_{V_0^*}^2 = 2\,\big\langle \partial_t f, \mathcal{N}f\big\rangle \qquad \text{for a.e. } t \in (0,T),$$

for any zero-mean $f \in H^1(0,T;V_0^*)$. Moreover, $f \mapsto \|\nabla\mathcal{N}f\|$ and $f \mapsto \big(\|\nabla\mathcal{N}(f-\overline{f})\|^2 + |\overline{f}|^2\big)^{1/2}$ are equivalent norms on $V_0^*$ and $(H^1(\Omega))^*$, respectively. Concerning the Navier-Stokes equations, we introduce the spaces (see, for instance, [15])

$$\mathbf{H}_\sigma := \overline{\{u \in \mathbf{C}_0^\infty(\Omega) : \nabla\cdot u = 0\}}^{\,\mathbf{L}^2(\Omega)}, \qquad \mathbf{V}_\sigma := \overline{\{u \in \mathbf{C}_0^\infty(\Omega) : \nabla\cdot u = 0\}}^{\,\mathbf{H}^1(\Omega)},$$

endowing the former with the $\mathbf{L}^2(\Omega)$-Hilbert structure, whereas for the latter we set $(u,v)_{\mathbf{V}_\sigma} := (\nabla u, \nabla v)$ and $\|u\|_{\mathbf{V}_\sigma} := (\nabla u, \nabla u)^{1/2}$. The latter is a norm equivalent to the canonical one because of Korn's inequality. Next, we consider the Stokes operator $A : \mathbf{V}_\sigma \to \mathbf{V}_\sigma^*$, which is the Riesz isomorphism between $\mathbf{V}_\sigma$ and its topological dual, that is,

$$\langle Au, v\rangle_{\mathbf{V}_\sigma^*,\mathbf{V}_\sigma} := (\nabla u, \nabla v) = \int_\Omega \nabla u : \nabla v\,dx \qquad \forall\, u,v \in \mathbf{V}_\sigma.$$

Here, we have adopted the notation $M_1 : M_2 := \sum_{i,j=1}^d (M_1)_{ij}(M_2)_{ij}$ for matrices. In a similar fashion to what has been carried out for the operator $A_N$, we can define the equivalent norm $\|u\|_{\mathbf{V}_\sigma^*} := \|\nabla A^{-1}u\|$ on $\mathbf{V}_\sigma^*$. Besides, the following chain rule holds:

$$\frac{d}{dt}\|f\|_{\mathbf{V}_\sigma^*}^2 = 2\,\big\langle \partial_t f, A^{-1}f\big\rangle,$$

for any $f \in H^1(0,T;\mathbf{V}_\sigma^*)$. Next, we define the space $\mathbf{W}_\sigma := \mathbf{H}^2(\Omega)\cap\mathbf{V}_\sigma$ and recall the following regularity result for the Stokes operator (see, e.g., [19, Appendix B]): if $f \in \mathbf{H}_\sigma$ and $u = A^{-1}f$, then

$$\|u\|_{\mathbf{H}^2(\Omega)} \le C\|f\|,$$

where $C$ is a positive constant that may depend on $\Omega$ and $d$, but is independent of $f$. Then it follows that the norm $\|u\|_{\mathbf{W}_\sigma} := \|Au\|$ is equivalent to the standard $\mathbf{H}^2$-norm on $\mathbf{W}_\sigma$. For the sake of convenience, below we report the Ladyzhenskaya and Agmon inequalities (see, e.g., [39]),

$$\|f\|_{L^4(\Omega)} \le C\|f\|^{1/2}\|f\|_{H^1(\Omega)}^{1/2} \quad (d=2), \qquad \|f\|_{L^4(\Omega)} \le C\|f\|^{1/4}\|f\|_{H^1(\Omega)}^{3/4} \quad (d=3), \tag{2.2}-(2.3)$$

$$\|f\|_{L^\infty(\Omega)} \le C\|f\|^{1/2}\|f\|_{H^2(\Omega)}^{1/2} \quad (d=2), \tag{2.4}$$

and the Gagliardo-Nirenberg inequality

$$\|D^j f\|_{L^p(\Omega)} \le C\,\|D^m f\|_{L^r(\Omega)}^a\,\|f\|_{L^q(\Omega)}^{1-a}, \tag{2.5}$$

where $D^j f$ denotes the $j$-th order weak partial derivatives of $f$, $j, m$ are arbitrary integers satisfying $0 \le j < m$ and $j/m \le a \le 1$, and $1 \le q, r \le +\infty$ are such that

$$\frac{1}{p} = \frac{j}{n} + a\Big(\frac{1}{r} - \frac{m}{n}\Big) + \frac{1-a}{q},$$

with $n$ the space dimension. If $1 < r < +\infty$ and $m - j - n/r$ is a nonnegative integer, then the above inequality holds only for $j/m \le a < 1$. The above inequalities will be frequently used in the subsequent analysis. In the remaining part of this paper, the letters $C$, $C_i$ will denote generic positive constants, possibly depending on the domain $\Omega$, the coefficients of the system, as well as on the boundary and initial data at most. These constants may vary even within the same line in the subsequent estimates, and their special dependence will be pointed out explicitly in the text, if necessary.

Main results

We first state the assumptions, referred to as (H1)-(H4) below, that will be needed in our analysis, and we introduce the notion of finite energy weak solution to the initial boundary value problem (1.4)-(1.5) (Definition 2.1, whose weak formulation is labelled (2.6)).

Remark 2.4. On account of the global boundedness of $\rho$ and the $L^\infty(0,T;H^2(\Omega))$-regularity of $\phi$, it holds that weak solutions satisfy $\phi, \rho \in L^\infty(0,T;L^p(\Omega))$ for every $p \ge 1$. In particular, the mapping $t \mapsto \|\rho(t)\|_{L^\infty}$ is measurable and essentially bounded (see [14, Remark 3.3]).

Now we are in a position to state the main results of this paper. The first result (Theorem 2.1) concerns the existence of a global weak solution. In the two dimensional case we are able to say more. To this end, let us introduce a further assumption on the singular potential. This property is fulfilled by the mixing entropy term in (1.3). It enables us to derive estimates for the singular terms $S'_\rho(\rho)$ as well as $S''_\rho(\rho)$, which further entail higher-order regularity of the solution $\rho$. Besides, it plays a role in establishing the strict separation property for $\rho$ in dimension two (see [17, Section 5], see also [30]).

Proof of Theorem 2.1

The proof of Theorem 2.1 consists of several steps. The first ingredient is the following proposition, collecting the conservation of the spatial averages (3.1), namely $\overline{\phi}(t) = \overline{\phi_0}$ and $\overline{\rho}(t) = \overline{\rho_0}$, together with the energy identity (3.2).

Proof. To deduce (3.1), we simply test the Cahn-Hilliard type equations in (1.4) by $1$ and integrate over $\Omega$. Then (3.1) follows through an integration by parts, thanks to the homogeneous Neumann boundary conditions for $\mu$, $\psi$, the no-slip boundary condition for $u$ and the incompressibility condition $\nabla\cdot u = 0$.
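Since dual norms built from $\mathcal{N} = A_N^{-1}$ drive the uniqueness argument below, a small numerical illustration may help. On the model domain $(0,1)$, the following Python sketch expands a zero-mean function in the cosine eigenbasis of the Neumann Laplacian and compares $\|f\|$ with the weaker norm $\|\nabla\mathcal{N}f\| = \|f\|_{V_0^*}$; the grid, truncation and test function are illustrative choices, not part of the paper:

```python
import numpy as np

# Eigenbasis of the Neumann Laplacian on (0,1): e_k = sqrt(2) cos(k pi x), k >= 1,
# with eigenvalues (k pi)^2; N f then divides each coefficient by (k pi)^2.
M, K = 4096, 400
x = (np.arange(M) + 0.5) / M                      # midpoint quadrature grid
f = np.cos(3*np.pi*x) + 0.2*np.cos(7*np.pi*x)     # zero-mean sample datum

ks = np.arange(1, K + 1)
c = np.sqrt(2) * (f[None, :] * np.cos(np.outer(ks*np.pi, x))).mean(axis=1)

L2_norm   = np.sqrt((c**2).sum())                  # ||f||, by Parseval
dual_norm = np.sqrt((c**2 / (ks*np.pi)**2).sum())  # ||grad N f|| = ||f||_{V0*}
print(L2_norm, dual_norm)  # dual norm is weaker: mode k is damped by 1/(k pi)
```

The $1/(k\pi)$ damping of high modes is why differences of solutions are first estimated in these dual norms: the norm forgives oscillations that the nonlinear terms produce.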
The energy identity (3.2) can be obtained by testing the first, third, fourth, fifth and sixth equations in (1.4) by $u$, $\mu$, $\partial_t\phi$, $\psi$ and $\partial_t\rho$, respectively, integrating over $\Omega$ and using the incompressibility condition as well as the boundary conditions for $(u,\phi,\rho)$. We thus obtain one identity for each tested equation; collecting them together, we easily conclude (3.2).

Remark 3.1. Integrating (3.1) and (3.2) with respect to time (see (3.3)), one sees that, physically as well as mathematically, $E_{\mathrm{tot}}$ (the sum of the kinetic energy $\frac{1}{2}\|u\|^2$ and $E(\phi,\rho)$) must be bounded from below. The singular potential $S_\rho$ (formally) implies that

$$0 \le \rho(x,t) \le 1 \qquad \text{a.e. in } \Omega\times(0,T). \tag{3.4}$$

Thus, we can directly infer from (3.4) that

$$-\frac{\theta}{2}\int_\Omega \rho|\nabla\phi|^2\,dx \ \ge\ -\frac{\theta}{2}\|\nabla\phi\|^2 \ =\ \frac{\theta}{2}(\phi,\Delta\phi) \ \ge\ -\frac{\theta}{4}\big(\varepsilon\|\Delta\phi\|^2 + \varepsilon^{-1}\|\phi\|^2\big). \tag{3.5}$$

The first term can be easily controlled by the higher-order term $\frac{\alpha}{2}\|\Delta\phi\|^2$ in $E_{\mathrm{tot}}$. Concerning the second term, without making any assumption on the size of the positive parameters $\alpha$, $\theta$, it can still be handled thanks to the coercivity of $S_\phi$: indeed, from (H2) and Young's inequality, $\|\phi\|^2$ is controlled by $\int_\Omega S_\phi(\phi)\,dx$ up to an additive constant. Therefore, the bound (3.4) of $\rho$ plays a crucial role. However, when we prove the existence of weak solutions to problem (1.4)-(1.5), we need to introduce a suitable regularization of the singular potential $S_\rho$ (see (3.9) below) and (3.4) can no longer be guaranteed, due to the lack of a maximum principle for the fourth-order Cahn-Hilliard equation. This is the reason why we have to introduce a further penalization term in the approximating problem (see (3.6) below; see also [41] for a similar strategy in a numerical context). Note that the presence of the second-order term in the energy $E_{\mathrm{tot}}$ would not play any role in proving its boundedness from below, provided that $\theta \in (0,1)$ (see Remark 3.3).

A regularized problem

In view of Remark 3.1, we consider the penalized energy $E_\omega$, obtained by adding to the total energy a nonnegative penalization term (3.6) weighted by a given parameter $\omega \in (0,1]$. Correspondingly, the initial boundary value problem associated with the perturbed energy functional $E_\omega$ is the system (3.7), posed in $\Omega\times(0,T)$ and subject to the boundary and initial conditions (3.8) in $\Omega$. To prove the existence of a global weak solution to problem (3.7)-(3.8), we introduce a suitable approximation of the singular potential $S_\rho$, dependent on some (small) parameter $\varepsilon > 0$, in such a way that the original potential is recovered in the limit $\varepsilon \to 0^+$. More precisely, following [13] (see also [9]), we consider a family of regular potentials based upon the second-order Taylor expansion of $S_\rho$. Recalling (H3), for any sufficiently small $\varepsilon \in (0,\epsilon_1)$, let $S_{\rho,\varepsilon} : \mathbb{R} \to \mathbb{R}$ be a globally defined approximation of $S_\rho$, given in (3.9)-(3.10) by extending $S_\rho$ quadratically outside $[\varepsilon, 1-\varepsilon]$:

$$S_{\rho,\varepsilon}(s) = \begin{cases} S_\rho(\varepsilon) + S'_\rho(\varepsilon)(s-\varepsilon) + \frac{1}{2}S''_\rho(\varepsilon)(s-\varepsilon)^2, & s < \varepsilon,\\[1mm] S_\rho(s), & \varepsilon \le s \le 1-\varepsilon,\\[1mm] S_\rho(1-\varepsilon) + S'_\rho(1-\varepsilon)(s-1+\varepsilon) + \frac{1}{2}S''_\rho(1-\varepsilon)(s-1+\varepsilon)^2, & s > 1-\varepsilon. \end{cases}$$

Then for any $\varepsilon \in (0,\epsilon_1)$ there exist constants $\gamma_1, \gamma_2, \gamma_3 > 0$ such that $S_{\rho,\varepsilon}(s) \ge \gamma_1 s^2 - \gamma_2$ for all $s \in \mathbb{R}$ and $\sup_{s\in\mathbb{R}}|S''_{\rho,\varepsilon}(s)| \le \gamma_3$, where $\gamma_1, \gamma_2$ are independent of $\varepsilon$, while the upper bound $\gamma_3$ may depend on $\varepsilon$. Our strategy is as follows. We first find a global weak solution to a regularized system of problem (3.7)-(3.8) with $S_\rho$ replaced by the regularized potential (3.10). Then we derive uniform estimates and pass to the limit first as $\varepsilon \to 0^+$ to obtain a solution to the penalized problem (3.7)-(3.8). Finally, we get rid of the penalization term by letting also $\omega \to 0^+$.

The Galerkin scheme

Let us consider the regularized system (3.12), that is, system (3.7) with $S_\rho$ replaced by $S_{\rho,\varepsilon}$, subject to the boundary and initial conditions (3.8). Its weak formulation reads essentially the same as (2.6) in Definition 2.1, with obvious modifications. Observe that system (3.12) is associated with the energy functional (3.13), namely $E_\omega$ with $S_\rho$ replaced by $S_{\rho,\varepsilon}$. The existence of a global weak solution to the regularized problem (3.12) with (3.8) can be proved by using a suitable Galerkin approximation scheme.
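The second-order Taylor regularization is straightforward to implement. The sketch below codes $S_{\rho,\varepsilon}$ for the Flory-Huggins potential (1.3), extending it quadratically outside $[\varepsilon, 1-\varepsilon]$; the constants $\theta_1$, $\theta_2$ and the value of $\varepsilon$ are illustrative assumptions:

```python
import numpy as np

# Flory-Huggins potential (1.3) and its first two derivatives (illustrative constants).
th1, th2 = 0.3, 1.0
def S(r):   return th1*(r*np.log(r) + (1 - r)*np.log(1 - r)) + th2*r*(1 - r)
def dS(r):  return th1*(np.log(r) - np.log(1 - r)) + th2*(1 - 2*r)
def d2S(r): return th1*(1/r + 1/(1 - r)) - 2*th2

def S_eps(s, eps=1e-2):
    """Second-order Taylor regularization S_{rho,eps}: quadratic outside [eps, 1-eps]."""
    s = np.asarray(s, dtype=float)
    out = np.empty_like(s)
    lo, hi = s < eps, s > 1 - eps
    out[~(lo | hi)] = S(s[~(lo | hi)])
    for mask, a in ((lo, eps), (hi, 1 - eps)):   # quadratic extensions at both ends
        d = s[mask] - a
        out[mask] = S(a) + dS(a)*d + 0.5*d2S(a)*d**2
    return out

print(S_eps(np.linspace(-0.5, 1.5, 9)))  # finite everywhere, quadratic growth outside (0,1)
```

Note that the quadratic extensions keep $S_{\rho,\varepsilon} \in C^2(\mathbb{R})$ with a second derivative bounded by a constant that blows up as $\varepsilon \to 0^+$, which is exactly why $\gamma_3$ may depend on $\varepsilon$ while the lower bounds do not.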
Recall the countably many eigencouples of the (negative) Neumann-Laplace operator, denoted by $(\eta_n, w_n) \in \mathbb{R}\times L^2(\Omega)$, $n \in \mathbb{Z}^+$. We note that $\{w_n\}$ forms an orthonormal basis of $L^2(\Omega)$ and is also an orthogonal basis of $H^2_N(\Omega)$. Analogously, we set $(\zeta_n, \mathbf{w}_n) \in \mathbb{R}\times\mathbf{H}_\sigma$ to be the countably many eigencouples of the Stokes operator $A$; $\{\mathbf{w}_n\}$ forms an orthonormal basis of $\mathbf{H}_\sigma$ and also an orthogonal basis of $\mathbf{W}_\sigma$. We set

$$W_n := \mathrm{span}\{w_1,\dots,w_n\} \subset H^2_N(\Omega), \qquad \mathbf{W}_n := \mathrm{span}\{\mathbf{w}_1,\dots,\mathbf{w}_n\} \subset \mathbf{W}_\sigma,$$

with corresponding orthogonal projections $\Pi_n : L^2(\Omega) \to W_n$ (with respect to the inner product in $L^2(\Omega)$) and $\mathbf{P}_n : \mathbf{H}_\sigma \to \mathbf{W}_n$ (with respect to the inner product in $\mathbf{H}_\sigma$). Then we consider the following Galerkin scheme, which depends on three approximating parameters $n$, $\varepsilon$ and $\omega$. Namely, for $\omega \in (0,1]$, $\varepsilon \in (0,\epsilon_1)$ and $n \in \mathbb{Z}^+$, we look for functions $(u^{n,\varepsilon}_\omega, \phi^{n,\varepsilon}_\omega, \rho^{n,\varepsilon}_\omega, \mu^{n,\varepsilon}_\omega, \psi^{n,\varepsilon}_\omega)$, with values in $\mathbf{W}_n$ and $W_n$ respectively, which solve the projected problem (3.14) in $\Omega$. Inserting the expansions of those approximate solutions in the eigenbases into the above weak formulation, we arrive at a system consisting of $5n$ ordinary differential equations in the unknown coefficients. Recalling the assumptions (H1)-(H4), an application of the Cauchy-Lipschitz theorem entails

Proposition 3.2. For any positive integer $n$, there exists $T_n \in (0,T]$ such that problem (3.14) admits a unique local solution $(u^{n,\varepsilon}_\omega, \phi^{n,\varepsilon}_\omega, \rho^{n,\varepsilon}_\omega, \mu^{n,\varepsilon}_\omega, \psi^{n,\varepsilon}_\omega)$ on $[0,T_n]$, given by the corresponding Galerkin expansions.

Uniform estimates

Here we proceed to derive some bounds on the local approximating solutions that are uniform with respect to $n$, $\varepsilon$ and $\omega$. The first one is an energy estimate (cf. Proposition 3.1), in which $C_1 > 0$ is independent of $n$ and $\omega$, while $C_2 > 0$ is independent of $n$, $\varepsilon$ and $\omega$.

Proof. Arguing as in the proof of Proposition 3.1, in (3.14) we can take the test functions $v = u^{n,\varepsilon}_\omega$, $v = \mu^{n,\varepsilon}_\omega$ and $v = \psi^{n,\varepsilon}_\omega$ in the equations for $u^{n,\varepsilon}_\omega$, $\phi^{n,\varepsilon}_\omega$ and $\rho^{n,\varepsilon}_\omega$, respectively, while multiplying the equations for the chemical potentials by $\partial_t\phi^{n,\varepsilon}_\omega$ and $\partial_t\rho^{n,\varepsilon}_\omega$ accordingly. Combining all the resulting equalities, we find a differential identity of energy type. For any $t \in (0,T_n]$, integrating this identity over $[0,t]$, we obtain the desired estimate up to a bound on the initial energy. Concerning the initial energy, we easily obtain the bounds on the kinetic and gradient parts. Moreover, we notice that, since $\phi_0^n \to \phi_0$ in $H^2(\Omega)$, there exists $n_* \in \mathbb{N}$ such that $\|\Delta\phi_0^n\| \le C$ for all $n > n_*$, where $C$ is independent of $n$. Also, we infer from (H3) and (3.9) a bound for the term involving $S_{\rho,\varepsilon}(\rho_0^n)$, where $C(\varepsilon) > 0$ may depend on $\varepsilon$ and $C_R > 0$ is a constant only depending on $R_\rho(0)$, $R'_\rho(0)$ and $L_1$. On account of (H2), we deduce a bound for the term involving $S_\phi(\phi_0^n)$, where we have also used the Sobolev embedding $H^2(\Omega) \hookrightarrow L^\infty(\Omega)$; the corresponding constant depends on the initial data and $\Omega$, but not on $n$, $\omega$, $\varepsilon$, while $C(\varepsilon)$ is independent of $n$ and $\omega$. The remaining two terms in the approximate initial energy are treated by using the Cauchy-Schwarz and Young inequalities, where we have also used the Sobolev embedding $H^2(\Omega) \hookrightarrow W^{1,4}(\Omega)$ ($d = 2,3$). Collecting the above estimates, we obtain the required upper bound by choosing a suitable constant $C_1 > 0$ depending on $\varepsilon$, $\|u_0\|$, $\|\phi_0\|_{H^2(\Omega)}$, $\|\rho_0\|_{H^1(\Omega)}$ and $\Omega$, but independent of $n$ and $\omega$.

The next bounds are independent of $\omega$ and $n$, but may depend on $\varepsilon$.

Proof. It follows from Lemma 3.1 that the quantities controlled by the energy are uniformly bounded. Besides, we note that the averages of $\phi^{n,\varepsilon}_\omega$ and $\rho^{n,\varepsilon}_\omega$ are independent of $n$ and $t$ (by orthogonality of the eigenvectors), being equal to the averages of the initial data. Therefore, by the triangle inequality and the Poincaré-Wirtinger inequality, we control the full $H^1$-norm, since $\phi^{n,\varepsilon}_\omega \in H^1(\Omega)$. Thus, a uniform bound for $\phi^{n,\varepsilon}_\omega$ in $L^\infty(0,T_n;H^1(\Omega))$ is obtained.
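The Galerkin reduction above is the classical spectral one and is easy to prototype. The following sketch is illustrative only — one space dimension, a single fourth-order Cahn-Hilliard equation with polynomial potential instead of the five coupled equations of (3.14) — but it shows the two essential points: the projection $\Pi_n$ onto the span of Neumann-Laplacian eigenfunctions turns the PDE into an ODE system amenable to Cauchy-Lipschitz-type arguments, and the $k = 0$ coefficient (the spatial mean) is conserved, mirroring (3.1):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Galerkin projection of phi_t = Lap(mu), mu = -Lap(phi) + S'(phi) on (0,1), Neumann BCs.
n, M = 16, 512
x = (np.arange(M) + 0.5) / M
ks = np.arange(n)                                   # k = 0 is the constant mode
lam = (ks*np.pi)**2                                 # Neumann-Laplacian eigenvalues
W = np.where(ks[:, None] == 0, 1.0,
             np.sqrt(2)*np.cos(np.outer(ks*np.pi, x)))  # orthonormal eigenfunctions

def project(g):
    # L2-orthogonal projection Pi_n, computed by midpoint quadrature
    return (W * g[None, :]).mean(axis=1)

def rhs(t, a):
    phi = a @ W                                     # reconstruct phi on the grid
    mu_k = lam*a + project(phi**3 - phi)            # coefficients of mu = -Lap phi + S'(phi)
    return -lam * mu_k                              # d/dt a_k = -lam_k * mu_k

a0 = project(0.2*np.cos(np.pi*x) + 0.1*np.cos(3*np.pi*x))
sol = solve_ivp(rhs, (0.0, 1.0), a0, method="BDF", rtol=1e-6)
print("mean conserved:", sol.y[0, 0], "->", sol.y[0, -1])  # k = 0 mode stays constant
```

In the paper the same idea is applied simultaneously to the velocity (with the Stokes eigenbasis and $\mathbf{P}_n$) and to the two order parameters, whence the $5n$ scalar ODEs.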
The $L^\infty(0,T_n;H^2(\Omega))$-bound then follows from standard elliptic regularity theory. A similar argument yields a bound for $\rho^{n,\varepsilon}_\omega$ in $L^\infty(0,T_n;H^1(\Omega))$. Concerning the velocity field $u^{n,\varepsilon}_\omega$, the corresponding bound follows from (3.17), Lemma 3.1 and Korn's inequality. The proof is complete.

We now prove a priori bounds for the chemical potentials. After that, we derive some uniform estimates on the time derivatives of the approximate solutions, in order to apply a compactness argument.

Proof. Consider the first equation in (3.14) and let $w \in \mathbf{V}_\sigma$ be such that $\|w\|_{\mathbf{V}_\sigma} = 1$. On account of Lemmas 3.2 and 3.3, we obtain a bound for the duality pairing of $\partial_t u^{n,\varepsilon}_\omega$ with $w$. We now perform a comparison argument in the second and fourth equations in (3.14). Let $w \in H^1(\Omega)$ with $\|w\|_{H^1(\Omega)} = 1$ be given. Using Lemmas 3.2 and 3.3, we estimate the pairing of $\partial_t\phi^{n,\varepsilon}_\omega$ with $w$, where we have used an interpolation estimate for the convective term, and a similar one for $\rho^{n,\varepsilon}_\omega$. Taking the supremum over all such functions $w$, squaring the inequality and integrating over $[0,T]$, we arrive at the desired bound thanks to Lemma 3.2. The proof is complete.

Existence of weak solutions for the penalized problem

We can now prove the existence of a global weak solution to the penalized problem (3.7)-(3.8) on $[0,T]$: letting $n \to +\infty$ in the Galerkin scheme, the above uniform estimates and a compactness argument yield Lemma 3.6, and the proof is complete. On account of Lemma 3.6, we can now argue as in the proofs of Lemmas 3.2-3.5 to deduce a series of uniform estimates with respect to $\varepsilon$ for the approximate solution $(u^\varepsilon_\omega, \phi^\varepsilon_\omega, \rho^\varepsilon_\omega, \mu^\varepsilon_\omega, \psi^\varepsilon_\omega)$. The main novelty with respect to the Galerkin scheme is an estimate on the singular term $S'_{\rho,\varepsilon}(\rho^\varepsilon_\omega)$, for some $C > 0$ independent of $\varepsilon$, which can be deduced from (3.7)₆ (see, for instance, [29]). These bounds combined with compactness arguments (see, e.g., [29]) allow us to find $(u_\omega, \phi_\omega, \rho_\omega, \mu_\omega, \psi_\omega)$, which is a global weak (or finite energy) solution to the penalized problem (3.7)-(3.8).

Existence of weak solutions to the original problem

The final step is based on uniform estimates that are independent of $\omega \in (0,1]$. First, we have the following energy bounds.

Lemma 3.7. For every $\omega \in (0,1]$, let $(u_\omega, \phi_\omega, \rho_\omega, \mu_\omega, \psi_\omega)$ be a global weak solution of the penalized problem (3.7)-(3.8). Then, for every $t \in (0,T]$, the penalized energy is bounded from above and from below, with constants $C_6, C_7 > 0$ that do not depend on $\omega$ and $T$.

Proof. The upper bound is straightforward (cf. the proof of Lemma 3.6). Concerning the lower bound, we shall essentially make use of the estimate $\|\rho_\omega\|_{L^\infty(0,T;L^\infty(\Omega))} \le 1$. Recalling (H2), (H3) and arguing as in Remark 3.1, we can achieve the conclusion by noting that the perturbation involving $\omega$ is nonnegative. The proof is complete.

Finally, through a semicontinuity argument applied to (3.26), we can also recover the energy inequality (2.7). If $d = 2$, the regularity of weak solutions allows us to derive an energy equality by arguing as in the proof of Proposition 3.1 (see also [1]). The existence part of Theorem 2.1 is now proved.

Remark 3.3. As we mentioned in the Introduction, it would be physically reasonable to take a Flory-Huggins potential for $\phi$ as well. From the mathematical point of view, this case is highly non-trivial, since $\phi$ satisfies a sixth-order Cahn-Hilliard type equation with a singular potential (cf. [28]). In the approximation scheme, the essential bound (3.27) (see also (3.25)) cannot be recovered anymore because of the fourth-order term in the chemical potential. Thus it is not clear how to establish the existence of a weak solution in the usual sense. On the one hand, maybe one could show the existence of a weaker solution like the one obtained for a single Cahn-Hilliard equation in [28].
See also [35, 36] for alternative approaches to handle singular potentials. On the other hand, one may want to consider a standard fourth-order Cahn–Hilliard equation for $\phi$ (i.e., taking $\alpha = 0$ in (1.1)). In this case, the existence of a weak solution might be provable provided that $\theta \in (0, 1)$ (see (3.5)). However, other results (e.g., uniqueness and regularity in two dimensions) could be rather challenging because of the couplings between the two Cahn–Hilliard equations.

Like in Step 1, we compute the last scalar product using the equation for $\psi$ and obtain an identity whose remainder terms include
\[
I_5 := (\rho\, \boldsymbol{u}_1, \nabla \mathcal{N}\rho) + (\rho_2\, \boldsymbol{u}, \nabla \mathcal{N}\rho).
\]
Recalling (H3), we easily obtain the first bounds. Arguing as for $I_1$, we estimate the next term. Then, Sobolev embeddings and the Poincaré–Wirtinger inequality give the remaining controls. Thus, we can conclude from estimates (3.39)–(3.40) and (3.37) the desired inequality for this step.

Step 3. Now we consider the Navier–Stokes system. For the sake of convenience, we make use of the vectorial identity $(\boldsymbol{u}_i \cdot \nabla)\boldsymbol{u}_i = \nabla \cdot (\boldsymbol{u}_i \otimes \boldsymbol{u}_i)$, which holds thanks to $\nabla \cdot \boldsymbol{u}_i = 0$. Besides, we recast the Korteweg forces by using the equations for $\mu_i$, $\psi_i$, $i = 1, 2$. In this way, we get rid of the chemical potentials by considering extra pressure terms. After introducing these modifications, we test the equation for $\boldsymbol{u}$ by $\mathbf{A}^{-1}\boldsymbol{u} \in \mathbf{W}_\sigma$, which yields an identity in which we set, using integrations by parts and adding/subtracting suitable quantities, the remainder terms $I_j$. We analyze the remainder terms on the left-hand side of (3.42) by using the argument in [19]. Since $\nabla \cdot (\nabla \boldsymbol{u})^{T} = \nabla(\nabla \cdot \boldsymbol{u}) = 0$, we deduce the first simplification. From the definition of the Stokes operator, we find that there exists $q \in L^2(0, T; H^1(\Omega))$ satisfying $-\Delta \mathbf{A}^{-1}\boldsymbol{u} + \nabla q = \boldsymbol{u}$ almost everywhere in $\Omega \times (0, T)$ (cf. Lemma 2.2), and a corresponding estimate for $q$ holds. Therefore, the second term on the right-hand side of (3.43) can be estimated accordingly. With this notation in place, we then recast (3.42). Next, we estimate all the $I_j$ terms defined above. Using the Ladyzhenskaya inequality (2.2) and Young's inequality, we can deduce the first group of bounds (see [19]). The estimate for $I_{10}$ is slightly more involved: using Sobolev embeddings, and recalling that $\mathbf{V}_\sigma \hookrightarrow \boldsymbol{L}^r(\Omega)$ for every $r > 0$, we find the required control; the remaining terms are handled similarly. Collecting the estimates (3.43)–(3.54), we infer the desired inequality from (3.45), with the quantities involved defined accordingly.

Step 4. Collecting (3.36), (3.41) and (3.55), we arrive at the differential inequality (3.57) for a quantity $Y$ with $Y(0) = 0$, where the function $H$ is given by (3.56) with a suitably enlarged $C > 0$. We now analyze the logarithmic term on the right-hand side by using the fact that on any interval $(0, M]$ the function $s \mapsto s \ln\frac{C}{s}$ is increasing provided that $C > eM$. Recalling that $\|\boldsymbol{u}\|_{L^\infty(0,T;\mathbf{H}_\sigma)}$, $\|\phi\|_{L^\infty(0,T;H^2(\Omega))}$ and $\|\rho\|_{L^\infty(0,T;H^1(\Omega))}$ are bounded, we obtain a bound in which the constant $K_1 > 0$ depends on norms of the initial data, $\Omega$, $T$, and coefficients of the system. Let $K_2$ be a sufficiently large constant that may depend on $K_1$. Then we deduce the logarithmic differential inequality, where, again, we possibly enlarge $C$ in $H$. Integrating (3.57) on $[0, t] \subset [0, T]$, we get (3.58). Since $H \in L^1(0, T)$ and $Y(0) = 0$, from (3.58) and using the Osgood lemma, after taking the double exponential, we find that $Y$ vanishes identically. Nevertheless, in the above argument, we should assume that either the initial data for $\rho$ have the same mean value, or take $\mathcal{N}(\rho - \overline{\rho})$ as a test function in Step 2.

Proof of Theorem 2.2

In this section, we prove the existence of strong solutions to problem (1.4)–(1.5). Following the approach devised in [19], we first construct a proper approximation of the initial datum $\rho_0$ (which is indeed not necessary for the logarithmic potential (1.3) when $d = 2$, as pointed out in [22]).
Then, using the same approximating scheme as in Section 3, we derive higher-order uniform bounds which allow us to pass to the limit with respect to the approximation parameters.

Uniform estimates

We now show uniform estimates with respect to the approximating parameters $\omega$, $k$, $\varepsilon$ and $n$. We can follow line by line all the proofs of Lemmas 3.2–3.5 to derive uniform estimates for the approximate solutions with respect to $\omega$, $n$, $k$ and $\varepsilon$. In particular, we have $T_n = T$, and $\mu^{n,\varepsilon}_\omega$ and $\psi^{n,\varepsilon}_\omega$ are uniformly bounded in $L^2(0, T; H^1(\Omega))$. The existence of strong solutions depends on higher-order estimates. The situation is different according to the spatial dimension.

Higher-order estimates in two dimensions. The corresponding proof consists of several steps. Therefore, we see that the solution $(\boldsymbol{u}, \phi, \rho, \mu, \psi)$ is indeed a strong one, which satisfies the equations almost everywhere. In addition, using the same argument as in the proof of Theorem 2.1, we can deduce that $\rho \in L^\infty(0, T; W^{2,p}(\Omega))$ for every $p \ge 2$ if $d = 2$, and $\phi \in L^\infty(0, T; H^5(\Omega))$. The pressure $\pi \in L^\infty(0, T; H^1(\Omega))$ can be recovered, up to a constant, through the De Rham theorem (see, for instance, [6, 40]). Existence of strong solutions in the three-dimensional case can be proved by arguing as in the two-dimensional case. The only difference is that the higher-order estimates in Lemma 4.3 are only local in time, so that the strong solution is local in time as well, as expected. Besides, we can show $\rho \in L^\infty(0, T^*; W^{2,p}(\Omega))$ only for $p \in [2, 6]$. The existence part of Theorem 2.2 is now proved.

Uniqueness of strong solutions

On account of Theorem 2.1, we only need to consider the three-dimensional case. Below we present some easy modifications of the argument in Subsection 3.6, taking full advantage of the higher-order regularity properties of the strong solution. To this end, suppose that $(\boldsymbol{u}_0, \phi_0, \rho_0)$ is an admissible set of initial data in the statement of Theorem 2.2. Then let $(\boldsymbol{u}_1, \phi_1, \rho_1)$ and $(\boldsymbol{u}_2, \phi_2, \rho_2)$ be two local strong solutions to problem (1.4)–(1.5) defined on some time interval $[0, T^*]$ and both originating from $(\boldsymbol{u}_0, \phi_0, \rho_0)$. Denoting by $\mu_i$, $\psi_i$, $\pi_i$, $i = 1, 2$, the corresponding chemical potentials and pressures, we set the differences $(\boldsymbol{u}, \phi, \rho) := (\boldsymbol{u}_1 - \boldsymbol{u}_2, \phi_1 - \phi_2, \rho_1 - \rho_2)$. Using the Gagliardo–Nirenberg inequality in three dimensions (see (2.5)), we modify the estimate for $I_3$ (cf. (3.35)); the final result still reads as (3.36). In a similar manner, we can derive the analogue of (3.41). The estimate for $I_7$ is revised accordingly (cf. (3.46)), while for $I_{13}$, we argue as in (3.52). Collecting (4.51), (4.52) and (4.53), we get the differential inequality for $Y$. Since $Y(0) = 0$ and $H \in L^1(0, T^*)$, an application of Gronwall's lemma easily implies that $Y(t) \equiv 0$ for all $t \in [0, T^*]$. That is, the local strong solution to problem (1.4)–(1.5) in three dimensions is unique. The proof of Theorem 2.2 is complete.

Taking advantage of the higher regularity of strong solutions (recall Theorem 2.2), we proceed to estimate their difference in stronger norms (cf. the argument in Subsection 3.6). Besides, in the present case we have to take care of the fact that the initial data are no longer null. The proof of Theorem 2.4 is complete.
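For the reader's convenience, two standard ingredients quoted in the uniqueness arguments can be recorded explicitly; the exact form of (2.5) is not visible here, so the Gagliardo–Nirenberg inequality below is the standard three-dimensional instance we assume is meant, and the closing Gronwall and Osgood steps are generic sketches under the stated assumptions. First,
\[
\|f\|_{L^3(\Omega)} \le C\, \|f\|_{L^2(\Omega)}^{1/2}\, \|f\|_{H^1(\Omega)}^{1/2} \qquad (d = 3).
\]
For strong solutions, from $\frac{d}{dt} Y(t) \le H(t)\, Y(t)$ with $H \in L^1(0, T^*)$ and $Y(0) = 0$, Gronwall's lemma gives
\[
Y(t) \le Y(0)\, \exp\Big( \int_0^{t} H(\tau)\, \mathrm{d}\tau \Big) = 0 \qquad \text{for all } t \in [0, T^*].
\]
For the logarithmic inequality $Y'(t) \le H(t)\, Y(t) \ln\frac{C}{Y(t)}$ appearing in the weak-solution case, setting $Z = \ln(C/Y)$ yields $Z'(t) \ge -H(t)\, Z(t)$, hence $Z(t) \ge Z(0)\, e^{-\int_0^t H(\tau)\,\mathrm{d}\tau}$ and
\[
Y(t) \le C \left( \frac{Y(0)}{C} \right)^{\exp\left(-\int_0^t H(\tau)\,\mathrm{d}\tau\right)},
\]
which is the double-exponential bound; letting $Y(0) \to 0$ gives $Y \equiv 0$.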
2022-01-25T02:16:04.620Z
2022-01-22T00:00:00.000
{ "year": 2022, "sha1": "455d06af3d8c87c1cdf50dbde65fa54571b618d5", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2201.09022", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "455d06af3d8c87c1cdf50dbde65fa54571b618d5", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
199388123
pes2o/s2orc
v3-fos-license
A portable epigenetic switch for bistable gene expression in bacteria

We describe a portable epigenetic switch based on opvAB, a Salmonella enterica operon that undergoes bistable expression under DNA methylation control. A DNA fragment containing the opvAB promoter and the opvAB upstream regulatory region confers bistability to heterologous genes, yielding OFF and ON subpopulations. Bistable expression under opvAB control is reproducible in Escherichia coli, showing that the opvAB switch can be functional in a heterologous host. Subpopulations of different sizes can be produced at will using engineered opvAB variants. Controlled formation of antibiotic-resistant and antibiotic-susceptible subpopulations may allow use of the opvAB switch in the study of bacterial heteroresistance to antibiotics.

Biosensors able to detect environmental signals are made of a sensor that detects a given input and a reader that responds to the input, generating a detectable signal in a quantitative or semi-quantitative fashion 1 . Classical sensors employ enzymes or whole cells. Enzyme-based biosensors present the advantage of high selectivity, but the need for purification can be a drawback due to technical difficulties and high cost. In contrast, whole-cell sensors are often easy to use and inexpensive, especially if microbial strains are used 2 . A common type of microbial biosensor is an engineered strain that responds to physical or chemical inputs by generating electrochemical or optical signals. Sensors of this type often employ a promoter sensitive to a specific input and a reporter gene that produces a detectable signal 1,3 . The literature contains multiple examples of sensors that detect electrochemical and optical signals, and use of fluorescent proteins has become widespread in the last decade 4 . An alternative to genetic circuits able to process information in living cells is the design of epigenetic switches. This approach has received special attention to develop diagnostic tests for human diseases 5-7 , while synthetic biology based on bacterial epigenetics remains largely unexplored. A relevant exception is the recent development of biosensors based on DNA adenine methylation using Escherichia coli as host 8 . In this study, we describe the construction and application of an epigenetic switch that drives gene expression in a bistable fashion. Bistability generates bacterial subpopulations that differ in a specific phenotypic trait (e.g., antibiotic resistance) and have defined sizes. The switch is based on opvAB, a bacterial operon subjected to epigenetic control by DNA adenine methylation 9-11 . Transcription of opvAB is bistable, with concomitant formation of OpvAB OFF and OpvAB ON cells 9 .
Bistability is controlled by binding of the OxyR transcription factor to a regulatory region upstream of the opvAB promoter (Fig. 1A) 10 . This region contains four sites for OxyR binding and four GATC motifs. OpvAB OFF and OpvAB ON cell lineages display alternative patterns of OxyR binding, which in turn cause alternative patterns of GATC methylation: in the OFF state, GATC 2 and GATC 4 are methylated; in the ON state, GATC 1 and GATC 3 are methylated 10 . Here, we show that a cassette of 689 nucleotides containing the opvAB promoter and the upstream regulatory region confers bistability to heterologous genes, and describe examples of opvAB-based constructs that produce bacterial subpopulations with distinct phenotypes. One of the examples involves formation of an antibiotic-resistant subpopulation upon cloning of an antibiotic resistance gene downstream of the opvAB promoter. This construct may provide an experimental system to study bacterial heteroresistance (HR) to antibiotics under highly controlled conditions 12 . HR is a phenotype where a bacterial isolate is characterized by the presence of a main susceptible population and a subpopulation with higher antibiotic resistance. Increasing evidence suggests that heteroresistance can lead to treatment failure 12-17 . Yet, little is known regarding the characteristics of the heteroresistance phenotypes (i.e., the size of the resistant subpopulation or its level of resistance) that are linked to treatment failure. Animal experiments, where infections are started with bacterial cultures that carry an antibiotic resistance gene under control of the opvAB switch, would allow control of the frequency of the resistant subpopulation and determination of how different ratios of resistant:susceptible bacteria influence treatment outcome 17 . Other potential uses of the opvAB switch in synthetic biology are discussed.

Results

Bistable expression of lacZY under opvAB transcriptional control. The ability of the opvAB epigenetic switch to confer bistable expression to a heterologous locus was tested by engineering a strain that harbored the E. coli lacZY operon downstream of the opvAB promoter and its upstream regulatory region (P opvAB ; Fig. 1A,B). To avoid cell-to-cell heterogeneity associated with variations in plasmid copy number, the construct was engineered in the S. enterica chromosome. Construction involved replacement of the opvAB coding region with a promoterless lacZY operon, leaving the opvAB promoter and upstream regulatory region intact. The construct harbored the opvA ribosome binding site (RBS). Plating of the engineered strain on LB containing X-gal yielded Lac + (blue) and Lac − (white) colonies, thus revealing bistable expression (Lac OFF or Lac ON ) of the heterologous lacZY operon (Fig. 1C). Streaking of either Lac + or Lac − colonies on X-gal agar yielded a mixture of Lac + and Lac − colonies, thus indicating the occurrence of reversible bistability ("phase variation") as previously described for the native opvAB locus 9 . Calculation of phase variation frequencies indicated a frequency of (1.1 ± 0.3) × 10 −4 per cell and generation for the OFF→ON transition, and (3.4 ± 0.1) × 10 −2 per cell and generation for the ON→OFF transition.
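For concreteness, the frequency calculation described in the Methods ("Calculation of phase variation rates"), (M/N)/g, can be sketched in a few lines of Python. This is an illustrative sketch only: the colony counts below are hypothetical, and approximating g as log2(N) is our assumption (the text defines g simply as the number of generations that gave rise to the colony).

    import math

    def phase_variation_frequency(switched, total):
        # (M/N)/g: M switched colonies out of N screened, over g generations
        # of colony growth (here approximated as log2(N)).
        generations = math.log2(total)
        return (switched / total) / generations

    # Hypothetical counts giving an OFF->ON rate near the reported value:
    print(phase_variation_frequency(switched=2_200, total=1_000_000))  # ~1.1e-04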
The 300-fold difference between switching rates was two-fold lower than in the native opvAB locus (OFF→ON, (6.1 ± 1.7) × 10 −5 ; ON→OFF, (3.7 ± 0.1) × 10 −2 ; 600-fold difference between switching rates) 9 . The increased size of the Lac ON subpopulation may result from multiple factors including potential differences in mRNA stability and codon usage constraints. Variants of the P opvAB ::lacZY construct were engineered to further explore the ability of opvAB-driven transcription to confer bistable expression to a heterologous locus. One such variant involved the use of a mutant opvAB regulatory region lacking GATC sites 1 and 2 (GATC 1,2 ), previously shown to increase the size of the OpvAB ON subpopulation 10 . As expected, a higher proportion of Lac + colonies was detected (Fig. 1C). Another variant, used as control, lacked all opvAB GATC sites (GATC-less) and locked lacZY transcription in the ON state (Fig. 1C), as previously described for the native opvAB operon 9 . Variants carrying a green fluorescent protein gene (gfp) downstream of the lacZY operon were also engineered, and assessment of subpopulation sizes by flow cytometry confirmed that the Lac ON subpopulation formed by the wild type opvAB switch was smaller than that formed by the GATC 1,2 variant (Fig. 1D). Furthermore, only cells in the Lac ON state were detected in the strain that harbored the GATC-less construct, and subpopulation formation was abolished as above (Fig. 1D). The ability of the opvAB switch to permit selection of one of the subpopulations was examined by testing the ability of strains carrying P opvAB ::lacZY::gfp and P opvAB GATC 1,2 ::lacZY::gfp constructs to grow in minimal medium with lactose as sole carbon source. As above, a strain carrying the GATC-less P opvAB ::lacZY::gfp construct was included as a control. Assessment of the growth patterns of these strains revealed that the time required for culture saturation was dependent on the size of the Lac ON subpopulation present at the start of the culture (Fig. 1E). Reversibility of the Lac ON state was confirmed by growth on NCE-glucose (Fig. 1F).

Bistable expression of the chimaeric opvAB::lacZY operon in a heterologous host, E. coli. The functionality of the opvAB switch in a heterologous host was tested in E. coli. For this purpose, the P opvAB ::lacZY::gfp construct and its GATC 1,2 and GATC-less variants were introduced into the chromosome of E. coli DR3 (ΔlacZY). Strains carrying the P opvAB ::lacZY::gfp and P opvAB GATC 1,2 ::lacZY::gfp constructs (DR22 and DR23, respectively) formed Lac + and Lac − colonies on X-gal agar, and the number of Lac + colonies was higher in the strain carrying the P opvAB GATC 1,2 ::lacZY::gfp construct. The strain carrying the GATC-less construct (DR24) formed Lac + colonies only (Fig. 2A). Flow cytometry assessment of GFP expression upon growth in LB confirmed the occurrence of subpopulations of Lac OFF and Lac ON cells in the strains carrying the P opvAB ::lacZY::gfp and P opvAB GATC 1,2 ::lacZY::gfp constructs but not in the strain carrying the GATC-less construct (Fig. 2B). As above, growth pattern assessment revealed that the time required for culture saturation was dependent on the initial size of the Lac ON subpopulation (Fig. 2C). Altogether, these observations indicated that the opvAB switch is functional in E. coli.

Bistable expression of antibiotic resistance genes under opvAB control.
An additional test of the ability of the opvAB bistable switch to generate bacterial subpopulations was performed by cloning antibiotic resistance genes downstream of the opvAB promoter in the S. enterica chromosome. The antibiotic resistance genes chosen for these experiments were aac3-Ib (henceforth, aac3) and aac(6′)-Ib-cr (henceforth, aac6), which encode aminoglycoside acetyl transferases 18 , and bla CTX-M-15 (henceforth, ctxM), which encodes an extended-spectrum β-lactamase 19 . In these constructs, the native ribosome binding sites were replaced with a stronger RBS, named BI 20 , to adjust the sensitivity of the switch to a level that could permit unambiguous detection of the antibiotic resistance phenotype under study, thus facilitating discrimination between OFF and ON cells. Experiments with strains carrying P opvAB ::aac6::gfp and P opvAB ::ctxM::gfp fusions (strains SV9703 and SV9706, respectively) yielded bacterial subpopulations resistant to kanamycin and to cefotaxime, respectively (Fig. 3A). Controls using strains that constitutively expressed aac6 and ctxM (SV9705 and SV9707, respectively) showed that the concentrations of antibiotics used permitted growth (Fig. 3A). The wild type strain ATCC 14028 failed to grow under such conditions, confirming that the concentrations of antibiotics used were bactericidal. Flow cytometry analysis confirmed that growth in the presence of kanamycin and cefotaxime was a consequence of subpopulation selection (Fig. 3B), excluding the idea that growth might result from selection of mutants present in the inoculum. This conclusion was further strengthened by the observation that growth in LB restored the initial sizes of ON and OFF subpopulations (Fig. 3B).

Use of the opvAB synthetic switch in generating antibiotic heteroresistance. As a proof of concept, we examined the utility of the OpvAB switch to address antibiotic heteroresistance and the question of what proportions of resistant subpopulations might lead to clinical treatment failure. Specifically, we tested whether the OpvAB switch could generate, in a susceptible main population, defined subpopulations of cells with increased antibiotic resistance. For this purpose, we used a S. enterica strain harboring a P opvAB ::BI-aac3::gfp construct (SV9776). Expression of aac3 (Aac3 ON ) leads to kanamycin resistance (Km r ). The frequency of Km r cells formed by a pure culture of SV9776 was 1 × 10 −2 (Fig. 4A), similar to the frequency of ON cells detected when gfp was cloned behind the opvAB promoter (1.1%: Fig. 1D). To obtain smaller subpopulation sizes without altering other phenotypic traits of the strain, SV9776 was mixed with an isogenic strain that expressed P opvAB ::gfp (SV9777) and did not produce any Km r subpopulation. Mixtures of cells were prepared from overnight cultures in Mueller-Hinton (MH) broth at proportions 1:10, 1:100, 1:1,000, 1:10,000 and 0:1. Population analysis profile (PAP) tests were then performed by plating on MH agar containing increasing concentrations of kanamycin. After overnight incubation, the number of resistant cells and total number of cells were determined to allow calculation of the fraction of resistant cells. The numbers of Km r colonies detected in the PAP tests were proportional to the amounts of the Aac3 ON subpopulations present in each mixture, and ranged from 1 × 10 −2 to 1 × 10 −6 (Fig. 4A).
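The proportionality between the mixing ratio and the detected Km r fraction can be made explicit with a one-line calculation. The 1 × 10 −2 ON-state frequency is taken from the text above; the function name and the arithmetic are ours, a sketch rather than the authors' analysis code.

    def expected_resistant_fraction(sv9776_proportion, on_fraction=1e-2):
        # Fraction of Km-resistant (Aac3 ON) cells in an SV9776:SV9777 mixture,
        # where only the SV9776 fraction can switch to the resistant ON state.
        return sv9776_proportion * on_fraction

    for proportion in (1.0, 0.1, 0.01, 0.001, 0.0001):
        print(f"{proportion:g} -> expected Km-r fraction "
              f"{expected_resistant_fraction(proportion):.0e}")
    # Output spans 1e-02 down to 1e-06, matching the range reported above.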
Epsilometer tests (Etests) further confirmed that the size of the Km r subpopulation decreased in a manner proportional to dilution (Fig. 4B).

Discussion

In its native host, the opvAB operon undergoes bistable transcription, which generates OpvAB ON and OpvAB OFF subpopulations 9 . Bistability is reversible ("phase-variable") and the switching rate is skewed to OFF in the wild type 9,11 . In this study, we show that a 689 bp DNA fragment containing the opvAB promoter and the opvAB upstream activating sequence (UAS) confers bistability to genes cloned downstream. For instance, an engineered P opvAB ::lacZY operon produces Lac OFF and Lac ON subpopulations (Fig. 1C), and addition of a gfp reporter gene permits discrimination of Lac OFF and Lac ON cells by flow cytometry (Fig. 1D). Utilization of lactose sustains growth of Lac ON cells (Fig. 1E), thereby producing increased fluorescence. However, because the opvAB switch is reversible, in the absence of lactose the system slowly returns to its initial state, with a strong predominance of Lac OFF cells (Fig. 1F). The fact that the opvAB cassette is functional in E. coli (Fig. 2) suggests that the switch can be used to generate bistability in other heterologous hosts. However, the need for both Dam methylation and OxyR may be an obvious limitation. Aside from this caveat, the versatility of the switch is reinforced by an additional example of subpopulation formation presented in Fig. 3: P opvAB -driven bistable expression of kanamycin and cefotaxime resistance genes permitted selection of antibiotic-resistant subpopulations in a reversible fashion. Introduction of mutations in the upstream regulatory region of the native opvAB operon alters the switching rate, yielding OpvAB ON and OpvAB OFF subpopulation sizes that are different from those of the wild type 10,11 . Hence, variants of the opvAB switch can be engineered to modulate subpopulation sizes at will. For instance, a variant (GATC 1,2 ) that lacks two of the four GATC sites present in the wild type increases the initial size of the ON subpopulation (Figs 1 and 2). Additional UAS variants that yield subpopulations of different sizes have been described 10 , and their use may allow choice of other switching frequencies. Modification of the ribosome-binding site of genes under P opvAB control can also help adjust the sensitivity of the switch, facilitating detection of the phenotype under study. For instance, use of the BI ribosome binding site 20 permitted unambiguous detection of aac3-mediated kanamycin resistance, thereby facilitating discrimination of Km r cells (Fig. 4). As a proof of concept, we have used the opvAB switch to produce antibiotic-resistant and antibiotic-susceptible bacterial subpopulations of predetermined sizes. The aim of these experiments was to mimic under laboratory conditions bacterial heteroresistance to antibiotics, a phenomenon where small subpopulations of cells show higher antibiotic resistance than the main population 12 . Heteroresistance is difficult to detect and study in clinical samples 12 , and accurate assessment of the frequencies of subpopulation formation and of their antibiotic resistance levels may improve our understanding of heteroresistance as a cause of clinical treatment failure 15 . Experiments shown in Fig. 4 provide evidence that subpopulation formation under opvAB control allows accurate modulation of the number of resistant cells present in a population.
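How the OFF→ON and ON→OFF switching rates translate into subpopulation sizes can be illustrated with a minimal two-state model iterated per generation. This sketch assumes equal growth rates of OFF and ON cells and no selection, so it is an idealization rather than a reproduction of the measured subpopulation sizes.

    def on_fraction_after(generations, k_on=1.1e-4, k_off=3.4e-2, p_on=0.0):
        # Per-generation update of the ON-cell fraction under phase variation.
        for _ in range(generations):
            p_on = p_on + k_on * (1 - p_on) - k_off * p_on
        return p_on

    # Starting from an all-OFF culture, the ON fraction approaches the
    # steady state k_on / (k_on + k_off), about 0.3% under these assumptions.
    print(on_fraction_after(500))
    print(1.1e-4 / (1.1e-4 + 3.4e-2))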
In principle, the method should be applicable to any antibiotic resistance gene. Because we were able to specifically vary the frequency of resistant bacteria in the population, this approach provides a proof of concept to study how different frequencies of resistant subpopulations may affect the outcome of antimicrobial treatment in vivo (e.g., in a murine model). In theory, mixing constitutively resistant and susceptible strains that are otherwise isogenic would also lead to bacterial cultures with pre-defined amounts of resistant bacteria. However, to reach specific frequencies of resistant bacteria our OpvAB-based approach requires mixing bacteria at frequencies 100-fold lower (e.g., to reach frequencies of 1 × 10 −6 Km r bacteria, the P opvAB ::BI-aac3 strain was mixed at a frequency of 1 × 10 −4 ). Thus, an advantage of our opvAB switch-based approach is that it can be expected to be less affected by infection bottlenecks that could otherwise eliminate very small subpopulations of bacteria present in the inoculum 21 . For example, one such bottleneck is observed during cecum colonization by Salmonella in mice 2-4 days after oral infection, and is dependent on the inflammatory response induced by S. enterica invading epithelial cells 22,23 . Additional applications of the opvAB switch can be envisaged, including the design of bistable biosensors. For instance, a strain harboring a P opvAB ::gfp fusion might be useful to detect bacteriophages in environmental samples using flow cytometry 24,25 , and to identify DNA methylation inhibitors in screens for novel antimicrobial drugs 26,27 . Sensors of this kind can be expected to be selective, as growth will occur under specific circumstances only. Furthermore, use of fluorescence to monitor growth of ON cells can be expected to be sensitive and rapid, and constitutive expression may contribute to robustness, avoiding the problem of instability of transcription-based gene circuits 28 . Besides biosensor design, formation of phenotypic subpopulations under epigenetic control might have additional applications in synthetic biology: for instance, division of labour between subpopulations performing distinct segments of a catabolic pathway might optimize biodegradation processes 29 .

Methods

Strains and strain construction. Strains of Salmonella enterica serovar Typhimurium and Escherichia coli used in this study are listed in Table 1. Strain construction by targeted gene disruption was achieved using plasmids pKD3, pKD4 or pKD13 as templates to generate PCR products for homologous recombination 30 . Antibiotic resistance cassettes introduced during strain construction were excised by recombination with plasmid pCP20 30 . Primers used in strain construction are shown in Table 2. For the construction of translational lac fusions on the S. enterica chromosome, FRT sites generated by excision of Km r cassettes were used to integrate plasmid pCE40 31 . For construction of fluorescent fusions, a DNA fragment containing a promoterless green fluorescent protein (gfp) gene and a chloramphenicol resistance cassette was PCR-amplified from plasmid pZEP07 32 , and the resulting PCR product was integrated into the chromosome of each strain. For construction of strains that carry antibiotic resistance genes under P opvAB control, a counterselectable cassette containing sacB and Ap R genes was amplified from strain DA52596 using the oligos opvAB-ampsacB-F and ampsacB-gfp-R.
The PCR product was integrated into the chromosome of SV6727 and SV6729, generating the intermediate strains MN441 and MN442, respectively. Antibiotic resistance genes were introduced into these strains by targeted gene disruption 30 , and transformants in which the ampicillin-sacB cassette had been excised were selected on minimal plates containing sucrose. Transductional crosses using phage P22 HT 105/1 int201 33 were used for transfer of chromosomal markers between S. enterica strains 34 . To obtain phage-free isolates, transductants were purified by streaking on green plates 35 . Phage sensitivity was tested by cross-streaking with P22 H5. Directed construction of point mutations was achieved using the QuikChange® Site-Directed Mutagenesis Kit (Stratagene) and the suicide plasmid pDMS197 36 , propagated in E. coli CC118 λ pir. Plasmids derived from pDMS197 (pIZ2224 and pIZ2234) were transformed into E. coli S17-1 λ pir. The resulting strains were used as donors in matings with S. enterica SV9700, selecting tetracycline-resistant transconjugants on minimal plates. One transconjugant from each mating was propagated, yielding strains SV9701 and SV9702.

Culture media and growth conditions. Strains were grown in Bertani's lysogeny broth (LB) 37 and in NCE minimal medium with the indicated carbon sources.

Growth curves. Plates were incubated at 37 °C with shaking on an automated microplate reader (Synergy HTX Multi-Mode Reader, Biotek), and the absorbance at 600 nm for each well was measured every 30 min. Each sample was assayed in triplicate. Growth of strains SV9700, SV9701, SV9703, DR22, DR23 and DR24 was monitored in NCE-lactose and NCE-glucose. Growth of SV9704, SV9705, SV9706 and SV9707 was monitored in LB broth with and without antibiotics.

Calculation of phase variation rates. Phase variation rates were estimated as described by Eisenstein 40 . Briefly, a strain harboring a lacZY fusion was plated on LB + X-gal, and colonies displaying an ON or OFF phenotype after 16 h growth at 37 °C were selected, resuspended in PBS and re-spread on fresh LB + X-gal plates. Phase variation frequencies were calculated using the formula (M/N)/g, where M is the number of cells that underwent phase variation, N the total number of cells, and g the total number of generations that gave rise to the colony.

Epsilometer (E) tests of antibiotic resistance. Etest strips were purchased from bioMérieux. Mixtures of overnight cultures of bacteria grown in MH broth were diluted 1:25 in phosphate buffered saline (PBS) to reach cell densities of 0.5 McFarland, or about 1.5 × 10 8 CFU/mL. Bacteria were plated onto MH agar plates using sterile cotton swabs dipped in the cell suspensions, and an Etest strip was applied on top. Plates were incubated 18 h at 37 °C before reading the results and taking pictures.

Population analysis profile (PAP) tests. PAP tests were performed on MH agar plates supplemented with increasing amounts of kanamycin (Sigma Aldrich) as described previously 15 . Five µl of overnight cultures in MH broth (containing approx. 3 × 10 9 cells/ml) and serial dilutions (down to 10 −6 ) were spread on MH plates containing no antibiotics (for total CFU determination) or different concentrations of kanamycin. The plates were incubated overnight and the colonies were counted.
Colony numbers were plotted in a graph to determine if the PAP fulfilled the criteria for heteroresistance (at least 8-fold difference in antibiotic concentration between the highest non-inhibitory concentration and the highest inhibitory concentration). To prepare mixtures of resistant and susceptible cells, three isolated colonies of SV9776 (P opvAB ::BI-aac3::gfp, kanamycin resistant in the ON state) and SV9777 (P opvAB ::gfp, always kanamycin susceptible) were grown overnight in 2 mL MH broth at 37 °C under shaking. Pure overnight cultures of each strain, or three independent sets of SV9776:SV9777 mixtures at proportions ranging from 1:10 to 1:10,000, were used for PAP tests.
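To illustrate how a PAP test of this kind is read out, the sketch below computes the surviving fraction at each kanamycin concentration and applies the 8-fold criterion quoted above. The CFU counts are hypothetical, and treating "non-inhibitory" as at least half of the antibiotic-free count is a simplification of ours.

    # Hypothetical CFU counts on MH agar with increasing kanamycin (mg/L);
    # the plate without antibiotic (key 0) gives the total count.
    counts = {0: 3e9, 2: 2.8e9, 4: 2.9e9, 8: 3e7, 16: 3e7, 32: 3e7, 64: 0}

    total = counts[0]
    fractions = {c: n / total for c, n in counts.items() if c > 0}
    print(fractions)  # here the resistant subpopulation plateaus at ~1e-2

    highest_non_inhibitory = max(c for c, n in counts.items() if n >= 0.5 * total)
    highest_with_growth = max(c for c, n in counts.items() if n > 0)
    print(highest_with_growth >= 8 * highest_non_inhibitory)  # True -> heteroresistant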
2019-08-04T13:02:26.468Z
2019-07-22T00:00:00.000
{ "year": 2019, "sha1": "f5a2fb0ebf7ea3434fc1765b25c17954e388691d", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-47650-2.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bd32a52b29775c94d7d38ad5319b3941e4ac51e6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
235326344
pes2o/s2orc
v3-fos-license
Children’s Pronoun Interpretation Problems Are Related to Theory of Mind and Inhibition, But Not Working Memory In several languages, including English and Dutch, children’s acquisition of the interpretation of object pronouns (e.g., him) is delayed compared to that of reflexives (e.g., himself). Various syntactic and pragmatic explanations have been proposed to account for this delay in children’s acquisition of pronoun interpretation. This study aims to provide more insight into this delay by investigating potential cognitive mechanisms underlying this delay. Dutch-speaking children between 6 and 12 years old with autism spectrum disorder (ASD; n = 47), attention-deficit/hyperactivity disorder (ADHD; n = 36) or typical development (TD; n = 38) were tested on their interpretation and production of object pronouns and reflexives and on theory of mind, working memory, and response inhibition. It was found that all three groups of children had difficulty with pronoun interpretation and that their performance on pronoun interpretation was associated with theory of mind and inhibition. These findings support an explanation of object pronoun interpretation in terms of perspective taking, according to which listeners need to consider the speaker’s perspective in order to block coreference between the object pronoun and the subject of the same sentence. Unlike what is predicted by alternative theoretical accounts, performance on pronoun interpretation was not associated with working memory, and the children made virtually no errors in their production of object pronouns. As the difficulties with pronoun interpretation were similar for children with ASD, children with ADHD and typically developing children, this suggests that certain types of perspective taking are unaffected in children with ASD and ADHD. INTRODUCTION A fundamental aspect of children's language acquisition is learning what the linguistic expressions in their language refer to. Proper names (e.g., John) generally have a fixed reference. In contrast, personal pronouns (e.g., he, she, him, her) and reflexives (e.g., himself, herself ) depend on other words in the sentence or the discourse for their interpretation. For instance, in the sentence "Paul got upset when John accidentally hit him" the object pronoun him refers back to the subject of the previous clause, Paul. The fact that him cannot refer back to the subject of the same clause, John, indicates that not only the linguistic discourse, but also grammatical principles play a role. These grammatical principles also apply to reflexives, such as himself, which must refer back to the subject of the same clause and cannot refer back to the subject of a previous clause. The patterns of use and interpretation of pronouns and reflexives have been the focus of much theoretical work in linguistics, including Chomsky's syntactic binding theory (Chomsky, 1981), later revisions of binding theory such as Reinhart and Reuland's reflexivity account (Reinhart and Reuland, 1993) and Reuland's primitives of binding account (Reuland, 2011), and pragmatic alternatives to binding theory such as Levinson (1991Levinson ( , 2000. Already early on it was realized that language acquisition research can inform linguistic theorizing (e.g., Chien and Wexler, 1990). 
In Chomsky's original conception of binding theory, the use and interpretation of pronouns and reflexives is governed by two related principles of the grammar: roughly speaking, Principle A requires reflexives in simple transitive sentences to refer to the same referent as the subject (resulting in a so-called coreferential interpretation), and Principle B requires pronouns not to corefer with the subject. It is thus expected that children would show mastery of pronouns and reflexives at more or less the same moment in their language development. However, language acquisition research revealed that children's interpretation of object pronouns in English is delayed in comparison to their interpretation of reflexives (e.g., Chien and Wexler, 1990; Grimshaw and Rosen, 1990). For example, in a situation in which two referents are present in the discourse, children until the age of 6 incorrectly allow the object pronoun in sentences like (1) to corefer with the subject. At the same time, from age 4 they correctly interpret reflexives such as in (2) as coreferring with the subject.

(1) The elephant is hitting him.
(2) The elephant is hitting himself.

This phenomenon is known as the Delay of Principle B Effect, or Pronoun Interpretation Problem. Only around the age of 10 or 11 does children's performance on object pronoun interpretation become adult-like (Philip and Coopmans, 1996; Başkent et al., 2013). In English, the pronoun him and the reflexive himself are quite similar in form. In contrast, the Dutch pronoun hem ('him') and the Dutch reflexive zichzelf ('himself/herself') are clearly distinct forms. Nevertheless, the Pronoun Interpretation Problem is also observed in Dutch (e.g., Philip and Coopmans, 1996; Spenader et al., 2009; van Rij et al., 2010). This indicates that the Pronoun Interpretation Problem is not caused by children's confusion of the two forms. Whereas the Pronoun Interpretation Problem occurs in children's typical acquisition of English, Dutch and several other languages, it does not occur in all languages and is, for example, absent in Romance languages. Thus, the Pronoun Interpretation Problem is not a universal phenomenon in language acquisition but rather appears to depend on certain grammatical properties of the language. As yet, no satisfactory explanation has been given for this cross-linguistic variation, since it is not clear which properties are shared by the languages that show a Pronoun Interpretation Problem and absent from those that do not. For example, whereas English and Dutch show the Pronoun Interpretation Problem, the closely related language German does not (Ruigendijk, 2008; see also Ruigendijk et al., 2010); as such, German patterns with the Romance languages, which differ from German in that they have clitic pronouns. Although this cross-linguistic variation in the Pronoun Interpretation Problem is relevant for generalizing the findings of the present study, the present study focuses on Dutch, with the aim to shed more light on the interaction between grammar and cognitive processes in pronoun interpretation in languages that show a Pronoun Interpretation Problem.

Explaining the Pronoun Interpretation Problem

In the linguistic literature, various explanations have been put forward for the Pronoun Interpretation Problem. The three explanations most relevant for the current study are discussed below, namely the pragmatic explanation, the working memory explanation, and the perspective taking explanation.
These explanations all assume that the interpretation of reflexives is fully determined by the grammar, but that the interpretation of pronouns requires some additional process: pragmatics, reference-set computation, or bidirectional optimization. Chien and Wexler (1990) argue that children possess the relevant grammatical knowledge of the binding principles required for a mature interpretation of object pronouns and reflexives (cf. Chomsky, 1981), but still lack the pragmatic skills for their mature usage in context (cf. Thornton and Wexler, 1999). Chien and Wexler's (1990) pragmatic explanation is based on a distinction between syntactic binding (e.g., the relation between the reflexive himself and the quantified subject every elephant in the sentence "Every elephant is hitting himself") and pragmatic coreference (e.g., the relation between the pronoun he and its non-local referential antecedent an elephant in the sentence pair "There is an elephant. He is large"). According to their explanation, children have knowledge of the restrictions on syntactic binding but have difficulty with the restrictions on pragmatic coreference. In particular, Chien and Wexler refer to so-called 'accidental coreference' as a source of confusion for children. Accidental coreference occurs when the object pronoun and the referential subject of the sentence accidentally refer to the same individual (as he and him do in "That must be John. At least he looks like him"), despite the fact that this is disallowed by the binding principles. Accidental coreference is only possible in certain (rare) contexts. To explain why English-speaking children show a Pronoun Interpretation Problem, Chien and Wexler (1990) argue that children have pragmatic difficulty with distinguishing between contexts in which accidental coreference is permitted and contexts in which it is not. Crucially, accidental coreference is not allowed in sentences like (1), but children may not yet have knowledge of this pragmatic restriction. Under the view that children's errors with object pronouns are due to their confusion about accidental coreference, children should also show problems in pronoun production and use object pronouns to express a coreferential meaning in all contexts, including contexts in which accidental coreference is not allowed. However, English-speaking children between 2;3 and 3;1 years old already produce object pronouns correctly in their spontaneous speech (Bloom et al., 1994), and English- and Dutch-speaking children's production of object pronouns in an experimental setting was found to be adult-like from age 4;6 (for English: De Villiers et al., 2006; Matthews et al., 2009; for Dutch: Spenader et al., 2009). This makes an explanation in terms of lack of pragmatic skills unlikely. Additionally, the distinction Chien and Wexler (1990) found between children's pronoun interpretation in syntactic binding environments and pragmatic coreference environments has been questioned by later studies as an artifact of their experimental materials (e.g., Elbourne, 2005; Conroy et al., 2009). More recent explanations of the Pronoun Interpretation Problem attribute this problem to children's limited processing resources (e.g., Reinhart, 2006, 2011; Ruigendijk et al., 2011). For example, Reinhart (2011) argues that the Pronoun Interpretation Problem results from children's insufficient working memory capacity (see also Grodzinsky and Reinhart, 1993; Montgomery and Evans, 2009).
According to Reinhart (2011), there are two means by which object pronouns can be interpreted: by syntactic binding and by pragmatic coreference. If the grammar allows two interpretational possibilities, the process of reference-set computation is required (Reinhart, 2006, 2011). Reference-set computation compares the different structures and their interpretations, and discards an interpretation if there is a more economical way to obtain that interpretation. Adults use reference-set computation to block pragmatic coreference between an object pronoun and the local subject, as pragmatic coreference is assumed to be a less economical way to express a coreferential interpretation than syntactic binding. Reinhart claims that children have insufficient working memory to perform this costly computation and therefore resort to guessing in their interpretation of object pronouns (Reinhart, 2011). Reference-set computation does not apply in production, since speakers already know which meaning they want to express. Therefore, children's production of object pronouns is predicted to be adult-like (Reinhart, 2006). Another explanation of the Pronoun Interpretation Problem linking this problem to children's cognitive limitations is proposed by Hendriks and Spenader (2006). They argue that the Pronoun Interpretation Problem is caused by core properties of the grammar itself. Instead of formulating their account in terms of universally valid syntactic principles, as Chien and Wexler (1990) and Reinhart (2006, 2011) do, they formulate their account in terms of violable constraints that differ in strength, as in Optimality Theory (Prince and Smolensky, 2004). The optimal form or meaning is the form or meaning that satisfies the constraints of the grammar best. The constraints determine, for a given input, what is the optimal output for that input in production (when the input is a meaning and the output is a form) or comprehension (when the input is a form and the output is a meaning). As the constraints of the grammar are sensitive to whether they evaluate forms or meanings, they may yield a different form-meaning mapping in comprehension than in production (Smolensky, 1996). To achieve communicative success in spite of these potentially different outcomes in production and comprehension, it has been argued that production and comprehension must be taken into account simultaneously in determining the mature pattern of forms and meanings, through a procedure known as bidirectional optimization (e.g., de Hoop and Krämer, 2006; Legendre et al., 2016). This procedure of bidirectional optimization can be seen as the formalization, within the grammar, of the process of perspective taking (Hendriks, 2014). In Hendriks and Spenader's (2006) constraint-based account, the constraints of the grammar select both a coreferential and a non-coreferential interpretation as the optimal meaning for an object pronoun, resulting in ambiguity for this pronoun. When encountering an object pronoun, adult listeners are able to block the coreferential interpretation for the pronoun by taking into account the perspective of the speaker: if the speaker had wanted to express a coreferential interpretation, the speaker would have used a reflexive instead of a pronoun. Since the speaker did not use a reflexive, the speaker must have intended to express a non-coreferential interpretation.
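The blocking logic just described can be made concrete with a toy model. The sketch below is our own schematic rendering of bidirectional optimization over two forms and two meanings, not code from the cited work; the preference functions are simplified stand-ins for the actual constraint rankings.

    def candidate_meanings(form):
        # The constraints leave the object pronoun ambiguous, while the
        # reflexive only receives the coreferential reading.
        return ("coreferential", "disjoint") if form == "pronoun" else ("coreferential",)

    def speaker_best_form(meaning):
        # Production: the reflexive is the winning form for coreference;
        # only the pronoun expresses a disjoint (non-coreferential) meaning.
        return "reflexive" if meaning == "coreferential" else "pronoun"

    def bidirectional_interpretation(form):
        # Comprehension with perspective taking: keep only the meanings the
        # speaker would actually have expressed with this form.
        return [m for m in candidate_meanings(form) if speaker_best_form(m) == form]

    print(candidate_meanings("pronoun"))             # child pattern: ambiguous
    print(bidirectional_interpretation("pronoun"))   # adult pattern: ['disjoint']
    print(bidirectional_interpretation("reflexive")) # ['coreferential']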
Young children are argued to not yet be able to take into account the perspective of the speaker in their interpretation of object pronouns in a consistent way. Hence, they consider pronouns to be ambiguous, thus showing the Pronoun Interpretation Problem. Such perspective taking is expected to require theory of mind abilities (Hendriks, 2014). Indeed, first-order theory of mind is generally acquired well before children show adult-like performance on pronoun interpretation (De Villiers et al., 2006). Furthermore, perspective taking may also require inhibition skills, since the listener must suppress the coreferential meaning in order to select the correct non-coreferential meaning for the pronoun. In Hendriks and Spenader's constraint-based account, the same constraints giving rise to ambiguity of object pronouns in comprehension result in the correct interpretation of reflexives in comprehension and the correct selection of a pronoun or reflexive in production. Thus, children's production of object pronouns is predicted to be adult-like. The role of inhibition is not only compatible with the perspective-taking explanation, but in principle follows from all accounts of pronoun processing that assume several potential antecedents for the pronoun to be activated during initial stages of processing and assume the grammatical antecedent to compete with antecedents that are incompatible with binding theory (e.g., Badecker and Straub, 2002; Clackson et al., 2011). Inhibition is needed to subsequently suppress the antecedent that is incompatible with the binding principles. This contrasts with so-called initial-filter models of pronoun processing, which assume that the principles of binding theory are applied early during sentence processing and act as an initial filter, immediately ruling out antecedents that are not compatible with the binding principles (Nicol and Swinney, 1989). In sum, while the pragmatic explanation attributes children's pronoun interpretation problems to their lack of pragmatic knowledge and predicts that children also make errors with pronouns in production, Reinhart's explanation based on reference-set computation predicts that errors in pronoun interpretation are caused by insufficient working memory, and Hendriks and Spenader's explanation based on bidirectional optimization predicts that these errors result from a failure to take into account the speaker's perspective, which requires theory of mind abilities and inhibition skills.

Language in Children With ASD and Children With ADHD

The present study aims to clarify how children acquire object pronoun interpretation and production by investigating the role of three possible underlying cognitive mechanisms in pronoun interpretation and production, namely working memory, theory of mind, and inhibition. We designed our study in such a way that we maximized the variation in cognitive mechanisms as well as outcome measures by including children with autism spectrum disorder (ASD), children with attention-deficit/hyperactivity disorder (ADHD) and a group of typically developing (TD) children in our sample. Children with ASD are known to have difficulties in social interaction and communication and show restricted, repetitive behaviors and interests (DSM-5, American Psychiatric Association, 2013). Children with ADHD show a persistent pattern of inattention and/or hyperactivity-impulsivity (DSM-5, American Psychiatric Association, 2013).
Besides difficulties with theory of mind, working memory, and inhibition, both children with ASD and children with ADHD exhibit problems with language and communication. Pragmatic problems are among the core deficits of ASD (DSM-5, American Psychiatric Association, 2013). While the pragmatic deficits in ASD are well documented, less is known about problems in ASD with the structural, or morphosyntactic, properties of language. Some studies did not find morphosyntactic impairments in children with ASD (Bartolucci et al., 1976; Tager-Flusberg, 1981). In contrast, other studies found evidence for morphosyntactic impairments or delays in (subgroups of) children with ASD (Kjelgaard and Tager-Flusberg, 2001; Eigsti et al., 2007; Durrleman et al., 2017). These results indicate that there is considerable heterogeneity in language impairments in ASD (for an overview, see Boucher, 2012). In ADHD, language deficits are not part of the diagnosis. However, recent studies using parental and teacher questionnaires suggest that in children with ADHD pragmatic use of language is often impaired (for an overview, see Green et al., 2014). Most studies investigating language impairments in ADHD did not find morphosyntactic impairments in children with ADHD (e.g., Kim and Kaiser, 2000; Geurts et al., 2004a; Geurts and Embrechts, 2008; Helland et al., 2012), but some did (Oram et al., 1999; Papaeliou et al., 2015). The language and communication problems of children with ADHD may therefore partly overlap with those observed in children with ASD (e.g., Geurts and Embrechts, 2008). Although the findings on morphosyntactic impairments of children with ASD and ADHD are equivocal, it may well be that children with ASD or ADHD experience a greater delay in object pronoun interpretation than typically developing children, due to cognitive deficits. Perovic et al. (2013), however, found that high-functioning children with ASD and TD children demonstrated similar difficulties in their comprehension of object pronouns in English. To our knowledge, object pronoun interpretation has not been investigated yet in children with ADHD. The production of object pronouns has been studied in ASD, but mainly in languages such as French and Greek that have clitic pronouns occurring in a special position to the immediate left of the verb (e.g., Terzi et al., 2014; Tuller et al., 2017; Prévost et al., 2018). This contributes to the complexity of the construction and may explain the difficulty these children have with the production of clitic object pronouns. Thus, in addition to our main aim of investigating possible cognitive mechanisms underlying the Pronoun Interpretation Problem, our study will also yield further insight into the relation between pronoun comprehension and pronoun production in children with ASD and children with ADHD. In our study we focus on children in the age range of 6-12 years, as in this age range the Pronoun Interpretation Problem gradually decreases in TD children (Başkent et al., 2013). Therefore, we expect most variation in object pronoun interpretation performance in this age range. To investigate possible cognitive mechanisms underlying the interpretation of object pronouns, we administer a theory of mind task, a working memory task, and an inhibition task. Following Hendriks and Spenader (2006), object pronoun interpretation is expected to be associated with theory of mind and inhibition.
Alternatively, following Reinhart's (2011) account, object pronoun interpretation is hypothesized to be associated with working memory.

Children With ASD

Children in the ASD group were diagnosed with Autistic Disorder (n = 10), PDD-NOS (n = 34) or Asperger's Disorder (n = 7) by independent clinicians on the basis of the DSM-IV-TR criteria (American Psychiatric Association, 2000). Additional inclusion criteria were that the children had a Full Scale Intelligence Quotient (FSIQ) above 75 and verbal communication skills. Furthermore, both the Autism Diagnostic Interview Revised (ADI-R: Rutter et al., 2003) and the Autism Diagnostic Observation Schema (ADOS: Lord et al., 1999) were administered by certified psychologists. Children in this study were included in the ASD group if they met the ADOS criteria for autism or ASD and/or the ADI-R criteria for autism or ASD (cf. Risi et al.'s ASD2 criteria, Risi et al., 2006). Three children from the ASD group were excluded from further analysis because they did not meet these criteria. One more child was excluded later because he finished neither the pronoun and reflexive comprehension task nor the production task (see section "Procedure"), leaving 47 children in the ASD group. To document the extent to which ADHD symptoms were present, the Parent Interview for Child Symptoms (PICS: Ickowicz et al., 2006) was administered. Seven children in the ASD group scored above the ADHD cut-offs on the PICS (see Table 1). In line with their clinical ASD diagnosis, we included these children in the ASD group.

Children With ADHD

Children in the ADHD group were diagnosed with Combined type (n = 19), Predominantly Hyperactive-Impulsive type (n = 12) or Predominantly Inattentive type (n = 6) by independent clinicians on the basis of the DSM-IV-TR criteria (American Psychiatric Association, 2000). Furthermore, both the Parent Interview for Child Symptoms (PICS: Ickowicz et al., 2006) and the Teacher Telephone Interview-IV (TTI-IV: Tannock et al., 2002) were administered by trained clinicians. Six children with ADHD lacked TTI information. Four of them already scored above the cut-off for ADHD based on parent information alone. The remaining two children scored 1 point below the cut-off for ADHD. Since these children scored comparably on the PICS to the other children in the ADHD group (for whom TTI scores combined with their PICS scores reached the cut-off), we included them in the analyses. Seven children in the ADHD group scored within ASD criteria on the ADOS or ADI-R (see Table 1). In line with their clinical diagnosis, we included these children in the ADHD group. One child was excluded later for task-related reasons (see section "Procedure"), leaving 36 children in the ADHD group.

TABLE 1 | Mean scores (standard deviations) of age, clinical interviews, WISC-III, PPVT, False Belief task, n-back task, and stop task for the ASD (n = 47), ADHD (n = 36) and TD (n = 38) groups, with group differences from Bonferroni-corrected post hoc analyses (*p < 0.05; **p < 0.01; ***p < 0.001; n.s., non-significant). Table notes: (1) Five children in the ADHD group scored on the ADI-R above the cut-off for ASD (on the basis of Risi et al.'s criteria, Risi et al., 2006). (2) Two children in the ADHD group scored above the ADOS criteria for ASD. (3) Seven children in the ASD group scored within our criteria for ADHD on the PICS (above or one point below the cut-off on the PICS).

TD Children

Children in the TD group had not been diagnosed with ASD or ADHD.
The ADOS, ADI-R and PICS were administered by trained clinicians in this group as well. None of the children scored above the cut-offs for ASD or ADHD described above.

Background Variables

IQ of the children was assessed by two subtests (Vocabulary and Block Design) of the Dutch Wechsler Intelligence Scale for Children (WISC-III NL: Kort et al., 2002). Verbal ability was assessed by the Dutch version of the Peabody Picture Vocabulary Test-III (PPVT: Dunn and Dunn, 1997; Schlichting, 2005). Group means and standard deviations for age, IQ, PPVT, and clinical interviews can be found in Table 1.

Pronoun and Reflexive Comprehension Task

To test the comprehension of object pronouns and reflexives, we carried out a Picture Verification Task. Children saw one picture at a time. The picture showed two animals engaged in an other-oriented action (Figure 1) or a self-oriented action (Figure 2). At the same time, the child heard an introductory sentence, followed by a test sentence with either an object pronoun or a reflexive [see examples (3) and (4)].

(3) Introductory sentence: Een krokodil en een olifant zijn op de stoep. 'An alligator and an elephant are on the sidewalk.'
(4) Test sentence: De olifant slaat hem/zichzelf. 'The elephant is hitting him/himself.'

The materials were based on the materials of Spenader et al. (2009) and van Rij et al. (2010). The transitive verbs that were used in the test sentences were the Dutch translations of to tickle, to hit, to bite, to point to, to draw, to paint, to tie, to make up, and to dress. The child was asked whether or not the recorded sentence matched the picture. Children had to respond by pressing the yes-key when the sentence matched the picture, and by pressing the no-key when the sentence did not match the picture. On trials for which the children decided that the sentence did not match the picture, they were asked to explain why. A second tester noted these justifications. The task started with two practice items to determine whether the children understood the task. The comprehension task consisted of 34 items: 2 practice items, 16 test items (eight items in the reflexive condition and eight items in the pronoun condition), and 16 control items without an object pronoun or reflexive. The control items were included to measure children's general understanding of the task. In half of the items the sentence matched the picture (match condition). In the other half of the items the sentence and the picture did not match (mismatch condition). Mismatch items contained either a picture of an other-oriented action in combination with a sentence with a reflexive, or a picture of a self-oriented action in combination with a sentence with a pronoun. We expect children exhibiting the Pronoun Interpretation Problem to make more errors in the pronoun mismatch condition than in the pronoun match condition (cf. Chien and Wexler, 1990; van Rij et al., 2010). Because these children are expected to allow both interpretations of the object pronoun, they will correctly accept the non-coreferential interpretation in the match condition, but also incorrectly accept the coreferential interpretation in the mismatch condition, leading to lower performance on the mismatch condition than on the match condition. Furthermore, we expect these children not to make errors in the reflexive condition if they are not impaired in their syntactic abilities.
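To make this predicted response pattern explicit, the following sketch scores hypothetical yes/no responses by condition; the trial tuples and the fifty-percent acceptance rate on pronoun-mismatch trials are illustrative assumptions of ours, not data from the study.

    from collections import defaultdict

    # (form, condition, child_said_yes); a child with the Pronoun
    # Interpretation Problem accepts both readings of the pronoun, so 'yes'
    # also appears on some pronoun-mismatch trials.
    trials = [
        ("pronoun", "match", True), ("pronoun", "match", True),
        ("pronoun", "mismatch", True), ("pronoun", "mismatch", False),
        ("reflexive", "match", True), ("reflexive", "match", True),
        ("reflexive", "mismatch", False), ("reflexive", "mismatch", False),
    ]

    accuracy = defaultdict(list)
    for form, condition, said_yes in trials:
        correct = said_yes == (condition == "match")  # 'yes' is correct on match trials
        accuracy[(form, condition)].append(correct)

    for key, values in sorted(accuracy.items()):
        print(key, sum(values) / len(values))
    # Expected: pronoun-mismatch accuracy below pronoun-match accuracy,
    # with both reflexive conditions at ceiling.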
The reflexive conditions (match and mismatch) thus serve as control conditions to measure children's mastery of the syntactic knowledge required for interpreting the test sentences. Pronoun and Reflexive Production Task To check whether children's production of object pronouns and reflexives is adult-like in the same sentence context that is used in the comprehension task, we carried out a sentence elicitation task. This production task was designed to be similar to the comprehension task, as it is well-known from the literature on linguistic reference that pronoun interpretation and use is highly dependent on contextual features such as the structure of the linguistic discourse and visual information. This also holds for object pronouns in simple transitive sentences, which were used in the comprehension task. For example, Dutch-speaking children's as well as adults' online processing of object pronouns in simple transitive sentences is influenced by the linear order in which the potential antecedents of the pronoun are mentioned in the preceding sentence (van Rij et al., 2016). To rule out the possibility that observed differences between production and comprehension outcomes are caused by subtle differences in verbal or visual materials or task instructions, we kept the two tasks as similar as possible. Thus, the production task allows us to test whether the children obey the binding principles in production. The visual materials of the production task were based on the materials of Spenader et al. (2009). Pictures that were used in the production task were similar to those in the comprehension task. When a picture with an other-oriented action was used in the comprehension task, the corresponding picture with the self-directed action was used in the production task and vice versa. In this way, no picture was shown in both comprehension and production, to avoid possible priming effects. The production task consisted of 16 items in total: two practice items and 14 test items. No filler items were used. Half of the items displayed an other-oriented action, the other half a self-oriented action. Children saw one picture at a time. They were instructed to first introduce both animals and then to describe the action, leading to sentences like "I see an elephant and a crocodile. The elephant is hitting himself." The production task started with two practice items to determine whether the children understood the task, before they were presented with the test items. Theory of Mind To test theory of mind, we used a second-order False Belief task adopted from Hollebrandse et al. (2014). False Belief tasks require one to understand that another person has his or her own beliefs and that these can be different from one's own beliefs (e.g., Baron-Cohen et al., 1985). The task measured both first-order False Belief (FB) (involving the belief of another person) and second-order FB (involving the belief of another person about someone else's belief). We used Hollebrandse et al.'s (2014) verbal rather than their low-verbal second-order FB task for this study, as their low-verbal task turned out to be much more demanding for children in the age range tested than their verbal task, for reasons unrelated to theory of mind abilities (see Hollebrandse et al., 2014, for discussion).
As most typically developing children pass first-order FB tasks around age 4 (see the meta-analysis of Wellman et al., 2001), and our participant group is between 6 and 12 years old, we expect ceiling performance on the first-order FB questions. Therefore, of specific interest to our study is children's performance on the second-order FB questions. Each story in the FB task starts with an initial belief that is shared by the two main characters in the story (e.g., Sam and Maria both believe that they are selling cookies at the bake sale). This belief changes in the middle of the story for the first character without the second character knowing about this (e.g., while Maria has gone out to buy cookies, Sam hears that they are selling apple pie instead), and next changes for the second character without the first character knowing about this (e.g., Maria finds out at the bake sale that they are only selling waffles, without Sam knowing about this). As a result, the story involves three distinct beliefs: the second character's true belief about the actual situation and two false beliefs. The first-order FB question asks about the first character's false belief about the situation (e.g., what does Sam think they are selling at the bake sale?). The second-order FB question asks about the second character's false belief about the first character's belief, and is broken down into two separate questions to avoid asking syntactically too complex questions (e.g., Maria is asked what Sam thinks they are selling at the bake sale, and then the child is asked what Maria will answer, thus effectively asking what Maria thinks that Sam thinks they are selling at the bake sale). See Hollebrandse et al. (2014, Appendix 1) for a sample item. The task consisted of eight stories read to the child by the experimenter. Each story was accompanied by four pictures that were presented one by one on a computer screen. The task was divided into two blocks with a short break in between. The order of stories was counterbalanced across participants. Each story contained one second-order FB question and two first-order FB questions. The first first-order FB question was asked in the middle of the story, when the first false belief was introduced. At the end of the story, the second-order FB question was asked, followed by the first-order FB question. The first-order FB question was asked again at the end of the story in order to check whether children had difficulties with the length and complexity of the story. One item was removed from further analysis since item analysis showed that the response on this item differed from the other seven items: on the second first-order FB question, mean accuracy on this item was only 0.48, while mean accuracy on other items varied between 0.79 and 0.92. Additionally, on this item, mean accuracy on the second-order FB question was higher (0.80) than on the easier first-order FB question (0.48). Inspection of this item revealed that its content differed from the other items in that an extra belief had inadvertently been introduced, which made the correct first-order FB answer less plausible. Two dependent measures were calculated: mean accuracy on the first first-order FB question (FB1) and mean accuracy on the second-order FB question (FB2). Working Memory Working memory is the ability to temporarily maintain and manipulate information (Baddeley and Hitch, 1974). It can be operationalized in different ways.
Because of the known language and communication difficulties of children with ASD and children with ADHD, we wanted to reduce the verbal load of the working memory task by using a visual task, rather than a verbal task such as a listening span task or digit span task. Specifically, we operationalized working memory by the n-back task (Owen et al., 2005). The n-back task is a continuous performance task to measure working memory capacity. The task is commonly used in psychology and cognitive neuroscience (e.g., Williams et al., 2005; Cui et al., 2010; Chatham et al., 2011; de Vries and Geurts, 2014) and requires sustained maintenance and updating of information in working memory. The n-back task in our study included three experimental conditions: 0-back (baseline), 1-back, and 2-back. In each condition, pictures were presented on a computer screen with a stimulus duration of 1000 milliseconds, followed by an interstimulus interval of 1500 milliseconds. In the 0-back condition, participants were instructed to press the yes-button when they saw a picture of a car, and to press the no-button when another picture appeared. In the 1-back condition, participants had to press the yes-button when the picture matched the picture immediately preceding it, and otherwise press the no-button. In the 2-back condition, participants had to press the yes-button when the picture matched the picture that appeared two pictures back, and otherwise press the no-button. Studies have shown that 2-back tasks seem suitable for children in our age range (e.g., Schleepen and Jonkman, 2010). The task was divided into different blocks, which were presented in random order. Each block started with 0-back, followed by 1-back and then 2-back. In this way, children got used to the task and were able to understand the more difficult 2-back condition. Participants started with a practice session of 15 trials per condition (0-, 1-, and 2-back), followed by the test session consisting of four blocks of 15 trials per condition (resulting in a total of 60 trials per condition). The total number correct on the 2-back condition was calculated as a measure of working memory.
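To make the response rule concrete, the sketch below shows the n-back decision logic in Python; the stimulus labels, trial list, and function names are our own illustrative assumptions, not the task's actual implementation.

```python
# A minimal sketch of the n-back scoring logic described above.
from typing import List

def correct_answers(stimuli: List[str], n: int, target: str = "car") -> List[bool]:
    """Return the expected yes/no response for each trial.

    In the 0-back condition (n == 0), 'yes' is correct whenever the
    target picture appears; in the 1- and 2-back conditions, 'yes' is
    correct when the current picture matches the one n trials back.
    """
    answers = []
    for i, stim in enumerate(stimuli):
        if n == 0:
            answers.append(stim == target)
        else:
            answers.append(i >= n and stim == stimuli[i - n])
    return answers

def score_2back(stimuli: List[str], responses: List[bool]) -> int:
    """Total number correct on the 2-back condition, the working
    memory measure used in the study."""
    expected = correct_answers(stimuli, n=2)
    return sum(r == e for r, e in zip(responses, expected))

# Example: in a 2-back block, the third picture matches the first.
stims = ["tree", "car", "tree", "door", "car"]
print(correct_answers(stims, n=2))  # [False, False, True, False, False]
```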
Response Inhibition The study also included a task to measure response inhibition. Response inhibition is the capacity to suppress an ongoing motor response that is no longer relevant. To capture response inhibition, we used a stop task, which is considered a relatively pure, reliable and valid measure of prepotent response inhibition (Tannock et al., 1989; Kindlon et al., 1995; de Vries and Geurts, 2014). Like the n-back task, the stop task is often used in psychology and cognitive neuroscience, and measures individual, clinical and developmental differences in the inhibition of responses. In this study we adopted the stop task from van den Wildenberg and Christoffels (2010). This is a non-verbal response inhibition task, which we preferred over a verbal task for the same reason as mentioned for the n-back task. In this stop task, simple drawings of a tree and a door were presented on the computer screen. During go-trials, participants were asked to press the button corresponding with the picture on a two-button box. In 30% of the trials, a visual stop-signal was presented: a red square frame surrounding the picture border. When confronted with the stop-signal, participants had to inhibit the go-response by not pressing the button. The interval between the onset of the go-picture and the onset of the stop-signal (stop-signal delay) was set at 200 ms on the first stop-trial. An online tracking algorithm adjusted the stop-signal delay as a function of individual stopping performance (Levitt, 1971). If the participant was able to stop, the stop-signal delay increased by 50 ms, thereby decreasing the chances of successful inhibition on the next stop-trial. After a failed-inhibition trial, the stop-signal delay decreased by 50 ms. This adaptive algorithm ensured successful inhibition on about 50% of the stop-trials, a procedure that yields reliable estimates of the Stop Signal Reaction Time (SSRT: Band et al., 2003). SSRT was calculated as a measure of response inhibition.
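A minimal sketch of this one-up/one-down tracking rule is given below, with a simulated participant whose stopping success declines as the delay grows. The SSRT line uses the common mean-method estimate (mean go reaction time minus mean stop-signal delay); since the exact estimator is not spelled out in the text, this is shown only as one standard possibility.

```python
# Sketch of the adaptive stop-signal delay (SSD) tracking rule
# described above; the simulated participant is an assumption.
import random

def run_stop_trials(n_stop_trials: int = 30, start_ssd: int = 200,
                    step: int = 50) -> list:
    """Track SSD across stop-trials: +50 ms after a successful stop,
    -50 ms after a failed stop, converging on ~50% inhibition."""
    ssd = start_ssd
    ssds = []
    for _ in range(n_stop_trials):
        ssds.append(ssd)
        # Placeholder participant: longer delays make stopping harder,
        # so the probability of a successful stop decreases with SSD.
        stopped = random.random() < max(0.05, 1 - ssd / 600)
        ssd = ssd + step if stopped else max(0, ssd - step)
    return ssds

def estimate_ssrt(mean_go_rt: float, ssds: list) -> float:
    """One common (mean-method) SSRT estimate: mean go RT minus mean SSD."""
    return mean_go_rt - sum(ssds) / len(ssds)

ssds = run_stop_trials()
print(round(estimate_ssrt(mean_go_rt=450.0, ssds=ssds), 1))
```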
Procedure Children and their parents were recruited by brochures at schools and in outpatient clinics for child and adolescent psychiatry in Groningen. They took part in a larger study on language and communication in ASD and ADHD (Kuijper, 2016). The study was reviewed and approved by the research ethics committee CETO of the University of Groningen. Parents of all child participants gave written informed consent prior to participation in the study. Children and parents came to the lab together. Children were tested individually on a single day in a quiet testing room with two experimenters present. After every task children had a short break. Two participants were excluded from further analysis: one (ASD) because he finished neither the comprehension task nor the production task, leaving 47 children in the ASD group, and the other (ADHD) because he scored below 0.75 on the control items in the comprehension task, leaving 36 children in the ADHD group. Furthermore, one child (ASD) conducted only half of the False Belief task and was removed from analyses involving this task. One child (ASD) did not finish the n-back task and was removed from analyses involving the n-back task. Another child (ADHD) did not complete the stop task and consequently was excluded from analyses involving this task. Finally, one child (ADHD) finished neither the n-back nor the stop task and was excluded from analyses including these tasks. Coding of Production Data Children's answers on the production task were voice-recorded. Only active transitive sentences containing a subject and an object that referred to one of the two animals in the picture were included in analyses (93.1% of all items). In the production task, more answers are acceptable than only object pronouns or reflexives. For pictures showing an other-oriented action, the use of a full noun phrase (e.g., "the elephant is hitting the crocodile") to describe such actions is compatible with the binding principles. In fact, such a choice is pragmatically felicitous as well, as adults produce mainly full noun phrases in this sentence context (see Spenader et al., 2009). Both the use of object pronouns (e.g., "the elephant is hitting him") and the use of full noun phrases were therefore coded as correct responses in this condition. For pictures showing a self-oriented action, only the use of a reflexive (e.g., "the elephant is hitting himself") was treated as accurate. All items were scored independently by two coders, who were blind to the participant's diagnosis. The coders scored the grammatical form of the object (pronoun, reflexive, or full noun phrase). Inter-scorer agreement was high (Cohen's κ = 0.95). Data Analysis The data were analyzed using Generalized Linear Mixed Models (GLMM). A logit link was used to accommodate the repeatedly measured binary outcome variable (i.e., accuracy of pronoun interpretation, denoted below as Accuracy) (Jaeger, 2008; Heck et al., 2012). Compound symmetry was used as covariance matrix. First we tested for differences between groups in pronoun comprehension. Contrasts between diagnostic groups and controls (ASD vs. TD and ADHD vs. TD) were dummy-coded and included as fixed factors in the analysis. Whether the sentence matched the picture (coded as 0) or not (coded as 1) was additionally included as a fixed factor. This last factor was included because previous studies showed clear differences between match and mismatch conditions, likely caused by a yes-bias (see also Chien and Wexler, 1990; van Rij et al., 2010). In addition to these three main effects (denoted as ASD, ADHD, and Match) we included two two-way interactions (ASD * Match, ADHD * Match) in the model. A two-way interaction or main effect that had no effect on Accuracy (p > 0.05) was removed from the model. Next, we examined possible cognitive mechanisms underlying object pronoun interpretation by including the relevant parameters derived from the False Belief task (FB1 and FB2), the n-back task (working memory, or WM), and the stop task (SSRT), respectively. All four were mean-centered around a value of zero and were included, in four separate analyses, as fixed factors in the aforementioned model. Interactions that had no effect on Accuracy (p > 0.05) were removed from the model. Finally, we tested whether found associations held up when all main and interaction effects with a significance value of p ≤ 0.05 were examined simultaneously in a multiple GLMM analysis.
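To illustrate this model specification, the sketch below fits a comparable repeated-measures logit model in Python. The data frame and column names are hypothetical, and GEE with an exchangeable working correlation is used here only as an approximation of a GLMM with compound-symmetry covariance, not as the software actually used in the study.

```python
# Sketch of the repeated-measures logistic model described above.
# Column names (subject, accuracy, asd, adhd, match) are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("comprehension_trials.csv")  # one row per trial

model = smf.gee(
    "accuracy ~ asd * match + adhd * match",   # dummy-coded contrasts
    groups="subject",                          # repeated measures
    data=df,
    family=sm.families.Binomial(),             # logit link
    cov_struct=sm.cov_struct.Exchangeable(),   # compound-symmetry analogue
)
result = model.fit()
print(result.summary())

# Non-significant interactions (p > 0.05) would then be dropped and
# the model refitted, mirroring the stepwise procedure in the text.
```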
Pronoun and Reflexive Comprehension Task In line with our expectations, neither the reflexive match condition nor the reflexive mismatch condition yielded a substantial number of errors (see Table 2). Therefore we did not statistically test for differences in reflexive interpretation between the groups. Below, our focus is on the two object pronoun conditions. Despite the rather small differences in performance in the object pronoun conditions, there was enough variance to build a meaningful GLMM. Clinical Groups As expected (Chien and Wexler, 1990; van Rij et al., 2010), a significant effect of Match was found (see Table 3), indicating that more errors were made in the object pronoun mismatch condition than in the object pronoun match condition. Interactions of ASD or ADHD with Match did not contribute significantly to participants' scores on the comprehension task (all p-values > 0.05), showing that this effect held for all groups. In addition, the main effects of ASD and ADHD did not significantly contribute to Accuracy. With no differences among the groups, we conclude that errors in object pronoun interpretation are not explained by the presence of ASD or ADHD. In subsequent analyses, main and interaction effects related to diagnostic group were removed, leaving a model that included two main effects (Mechanism and Match) and one interaction effect (Mechanism * Match). Because the TD group differs from the ASD and ADHD group in mean IQ-score and the TD group differs from the ADHD group in mean PPVT-score (see Table 1), we checked post hoc if group differences in pronoun interpretation between ASD, ADHD and TD emerge, by (i) selecting part of our TD group (n = 27) to match the IQs of both other groups, and by (ii) selecting part of our TD group (n = 34) to match the PPVT of the ADHD group. No group differences in pronoun interpretation emerge when we use the subgroups matched on IQ or verbal ability in the two post hoc analyses (see Table 4). Mechanisms No interaction effect of Match with any of the cognitive mechanisms was found (all p-values > 0.05). Therefore, in the final model only the main effects of each of the cognitive mechanisms and Match were included, first separately, and next in the multiple GLMM. We found a main effect of FB2 (see Table 5). Lower scores on second-order False Belief questions were associated with lower Accuracy scores in both the object pronoun match and the object pronoun mismatch condition. We also found a significant main effect of SSRT. Higher SSRT scores (indicating lower inhibition) were associated with more errors in the object pronoun conditions. No significant effects of FB1 or working memory were found. In all four analyses, the main effect of Match remained significant: more errors were made in the object pronoun mismatch condition than in the object pronoun match condition. FB2, SSRT and Match were included in a multiple GLMM (Table 6). All aforementioned associations remained significant. Thus, when adjusted for the effect of SSRT, lower scores on FB2 questions were still associated with lower Accuracy scores in the object pronoun conditions. Vice versa, when adjusted for the effect of FB2, higher SSRT scores were still associated with lower Accuracy scores in the object pronoun conditions. Furthermore, a main effect for Match remained: adjusted for the effects of FB2 and SSRT, children still performed worse in the object pronoun mismatch condition than in the object pronoun match condition. In a post hoc analysis we added age to our model. In our study we focused on children in the age range of 6-12 years, during which the Pronoun Interpretation Problem gradually disappears. With age being associated with FB2 and SSRT, age was added to our model to study the extent to which age would subsume the effects of FB2 and SSRT. Table 7 shows that the effects of FB2 and SSRT were attenuated when age was included, confirming that children's pronoun interpretation errors decrease with age and indicating that age is more strongly linked to object pronoun interpretation than theory of mind and inhibition. The main effect of Match remained significant: children made more pronoun interpretation errors in the mismatch condition than in the match condition. Pronoun and Reflexive Production Task In production, consistent with our expectations, children hardly made any mistakes (see Table 8). With all three groups performing at ceiling, we did not test for group differences in production accuracy. Recall that, for the other-oriented action, both the use of an object pronoun and the use of a full noun phrase were scored as correct responses. Only in 5% of the cases was an object pronoun used. In the remaining 95% of the cases a full noun phrase was used. This corresponds with the pattern of production displayed by Dutch adults, who also mainly used full noun phrases to describe an other-oriented action in a similar experiment (Spenader et al., 2009). Importantly, children hardly ever incorrectly used an object pronoun [4 out of 769 scorable sentences, produced by three children (two ADHD and one ASD)] or a reflexive [4 out of 794 scorable sentences, produced by only one child (ASD)].
We tested, post hoc, if children with ASD or ADHD differed from TD children in their use of full noun phrases and object pronouns. A GLMM was performed on all items in the other-oriented condition, with full noun phrase (yes or no) as binary dependent variable and two dummy-coded contrasts between diagnostic groups and controls (ASD vs. TD and ADHD vs. TD) as fixed factors. No significant differences between the groups were found (all p-values > 0.05): children with ASD used a full noun phrase in 96% of the cases, children with ADHD in 95% of the cases and TD children in 94% of the cases. This indicates that children with ASD and children with ADHD use the same linguistic forms as TD children to express other-oriented and self-oriented actions. DISCUSSION The aim of this study was to clarify how children acquire object pronoun interpretation and production by investigating the possible cognitive mechanisms underlying the Pronoun Interpretation Problem, as different theoretical accounts see a role for different cognitive mechanisms. We found that both second-order False Belief performance and Stop Signal Reaction Time were associated with performance on the object pronoun interpretation task. These results suggest that theory of mind and inhibition are necessary for object pronoun interpretation. This finding is compatible with the perspective taking account of the Pronoun Interpretation Problem by Hendriks and Spenader (2006). According to Hendriks and Spenader's account, object pronouns are potentially ambiguous and listeners must consider the perspective of the speaker to block the incorrect coreferential interpretation for the object pronoun. The results of this study suggest that the Pronoun Interpretation Problem arises if children fail to consider the perspective of the speaker because of insufficient theory of mind abilities, or fail to suppress the incorrect interpretation of the pronoun because of poor inhibition skills. We did not find a relation between working memory and performance on object pronoun interpretation and thus found no support for Reinhart's (2006, 2011) claim that sufficient working memory is necessary for the costly operation of reference-set computation that is needed for object pronoun interpretation. The absence of a relation with working memory corroborates the results of Perovic and Wexler (2018). They found that children with Williams Syndrome, who are generally reported not to have memory deficits, nevertheless showed difficulties with pronoun interpretation in simple transitive sentences in English. However, these children did not receive a working memory task to confirm that they did not have memory deficits. Contrasting with these findings, in children with Developmental Language Disorders (DLD), Montgomery and Evans (2009) found a relation between working memory, as measured by a listening span task, and performance on the interpretation of complex sentences, including embedded pronominal sentences such as "Bugs Bunny says Daffy Duck is hugging him." However, performance on different sentence types was combined in this study and also included performance on embedded reflexive sentences and passive sentences.
Additionally, the embedded pronominal sentences in this study were more complex than the simple transitive pronominal sentences in the current study (see also Ladányi et al., 2017, who found a relation in children with DLD between performance on the n-back task and the interpretation of pronouns and reflexives in embedded sentences in Hungarian). Because of these differences with the current study, it is possible that the relation with working memory reported in previous studies with children with DLD is due to other features of the linguistic materials than the presence of object pronouns, for example the syntactic complexity of the test sentences used. This explanation is supported by the close link found between working memory capacities and complex syntax in children's comprehension of language, as measured with different working memory tasks and different syntactic constructions (Delage and Frauenfelder, 2019). Regarding children's production of pronouns and reflexives, we did not find support for Chien and Wexler's (1990) pragmatic explanation of the Pronoun Interpretation Problem, as the children hardly made any binding errors in their production of object pronouns or reflexives. That is, they rarely produced a reflexive when a pronoun or full noun phrase was the correct form to use (which would constitute a violation of Principle A), and they rarely produced a pronoun when a reflexive was the correct form to use (which would constitute a violation of Principle B, in generative syntactic terms). Thus, the children observed the constraints of the grammar in their production of these forms. This finding is in line with previous experiments with typically developing children, showing that children produce object pronouns in an adult-like way from a young age (De Villiers et al., 2006; Matthews et al., 2009; Spenader et al., 2009). Because of their known difficulties with theory of mind, working memory, and inhibition, we had expected children with ASD and children with ADHD to have more problems with object pronoun interpretation than TD children. However, we did not find any differences in object pronoun interpretation between children with ASD, children with ADHD, and TD children: all three groups made errors in object pronoun interpretation. As expected, we also found that all three groups performed at ceiling on the reflexive conditions and on the production task. That is, the TD children as well as the children with ASD or ADHD in our study only had problems with the interpretation of object pronouns (particularly emerging in the mismatch condition), and did not have difficulty with the interpretation of reflexives (either in the match condition or in the mismatch condition) or with the production of pronouns and reflexives. That children with ASD and TD children show a similar Pronoun Interpretation Problem corroborates the findings by Perovic et al. (2013). Perovic et al. (2013) consider the Pronoun Interpretation Problem to be pragmatic in nature (cf. Chien and Wexler, 1990). At first glance, this leaves unexplained why they did not find differences between children with ASD and TD children in object pronoun interpretation. After all, if the Pronoun Interpretation Problem is a pragmatic problem, why would children with ASD, who are known for their pragmatic deficits, not make more errors in object pronoun interpretation than TD children? Perovic et al.
(2013) argue that there may be different kinds of pragmatics: a kind of pragmatics related to social rules and a kind of pragmatics more directly related to language (cf. Schaeffer, 2003). This latter so-called "linguistic pragmatics" may not be affected in ASD, according to Perovic et al. (2013). We propose an alternative explanation of these findings. Rather than positing two types of pragmatics, one of which is unaffected in ASD, we propose that perspective taking need not be a pragmatic process but can also be part of the grammar. According to Hendriks and Spenader's (2006) account of the Pronoun Interpretation Problem, the interpretation of an object pronoun requires listeners to take into account the perspective of a hypothetical speaker in order to determine the interpretation of the pronoun (see also Hendriks, 2014). That is, listeners must apply the relevant constraints of the grammar to determine the optimal meaning of the pronoun, and must additionally place themselves in the perspective of a hypothetical speaker and apply the same constraints to determine the optimal form for this optimal meaning. In a final step, the listener must check whether the input form in comprehension and the output form in production match, or in other words: whether a speaker would have used a pronoun to express the selected interpretation. If so, the selected interpretation is considered to be correct, but if not, the selected interpretation must be suppressed and another interpretation must be checked.
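As a toy illustration of this comprehend-then-verify loop, the sketch below encodes it under heavily simplified assumptions (two candidate meanings, and a speaker grammar that always prefers reflexives for coreferential meanings); the function names and candidate sets are ours, not part of Hendriks and Spenader's (2006) formal model.

```python
# Toy sketch of the "check against a hypothetical speaker" procedure
# described above, under strongly simplified assumptions.
def speaker_form(meaning: str) -> str:
    # A speaker would use a reflexive for a coreferential meaning and
    # a pronoun (or full noun phrase) otherwise.
    return "reflexive" if meaning == "coreferential" else "pronoun"

def interpret(form: str) -> str:
    """Listener: consider candidate meanings for the input form and
    keep only a meaning that a hypothetical speaker would have
    expressed with that same form."""
    candidates = ["coreferential", "disjoint"]  # fixed order for illustration
    for meaning in candidates:
        if speaker_form(meaning) == form:
            return meaning  # produced form matches the input: accept
        # otherwise suppress this meaning and check the next candidate
    return "no interpretation"

print(interpret("pronoun"))    # -> 'disjoint'
print(interpret("reflexive"))  # -> 'coreferential'
```

On this view, failing the suppression step (returning the first candidate regardless of the check) would yield exactly the acceptance of both pronoun interpretations that characterizes the Pronoun Interpretation Problem.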
This process of "grammaticalized perspective taking," which requires listeners to take the perspective of a hypothetical speaker and express a particular meaning as if they were the speaker, may be different from taking the perspective of an actual speaker, who may or may not be sitting in front of the listener. The latter form of perspective taking is much more challenging for listeners, since it differs per speaker and per situation. In contrast, grammaticalized perspective taking may be less demanding, as it does not vary per situation and therefore could be gradually automatized (as is shown in computational cognitive simulations to be psychologically plausible, see van Rij et al., 2010; Vogelzang et al., 2021). Such an automatized process can be understood as being part of the grammar of a mature native speaker. This view of object pronoun interpretation as a process of grammaticalized perspective taking is supported by the finding of similar difficulties with pronoun interpretation in non-advanced second-language learners as in children acquiring their native language. The finding of a Pronoun Interpretation Problem in second-language learners has been put forward as evidence in favor of Reinhart's costly operation of reference-set computation and against an explanation in terms of lack of linguistic knowledge (Slabakova et al., 2017), but is also consistent with the proposed computationally complex process of perspective taking. These results may thus provide support for the claim that this grammaticalized perspective taking is unaffected in children with ASD and ADHD. This corroborates previous findings of similar linguistic performance in ASD children, ADHD children, and TD children (Kim and Kaiser, 2000; Geurts et al., 2004a; Geurts and Embrechts, 2008; Helland et al., 2012). In contrast, taking the perspective of an actual speaker may be involved in pragmatic skills such as turn-taking and conversational rapport, both of which are found to be impaired in ASD and ADHD (e.g., Geurts et al., 2004a; Green et al., 2014). Most of the ASD children in our study could be classified as "language normal" (based on their PPVT scores and the vocabulary subtest of the WISC-III, cf. Kjelgaard and Tager-Flusberg, 2001). Perovic et al. (2013) found that the linguistic performance of ASD children with language impairment differed from the linguistic performance of ASD children without language impairment. However, they only found differences in the interpretation of reflexives, while both groups of ASD children performed similarly on the interpretation of object pronouns. A crucial difference between the study of Perovic et al. (2013) and the present study is the type of task that is used. Perovic et al. (2013) used a Picture Selection Task, which tests for preference of interpretation, whereas our study used a Picture Verification Task, which tests for acceptability of interpretation. On the basis of the study of Perovic et al. (2013) it can be concluded that ASD children with language impairment have a preference for a non-coreferential interpretation for object pronouns and reflexives. However, it is not clear whether these children would incorrectly accept a coreferential interpretation for pronouns, which is what the Pronoun Interpretation Problem entails. To further unravel differences between ASD children with and without language impairment, such children could be tested on their interpretation of object pronouns and reflexives using a Picture Verification Task or some other task testing acceptability rather than preference for one of the two relevant interpretations. A finding of our study that may at first sight be surprising is the fact that we found an effect of second-order False Belief understanding, but no effect of first-order False Belief understanding. The absence of an association between first-order False Belief understanding and object pronoun interpretation is probably due to ceiling performance in first-order False Belief understanding (see Table 1). First-order False Belief understanding is generally mastered around age 4, so at least 2 years before object pronouns are understood correctly (De Villiers et al., 2006). Because we expected a ceiling effect for first-order False Belief understanding in our 6- to 12-year-old children, we included a task that also measured second-order False Belief understanding. Since accurate second-order False Belief understanding is dependent on accurate first-order False Belief understanding, a slower development of first-order theory of mind is expected to result in a slower development of second-order theory of mind as well, thus allowing us to investigate the relation between pronoun interpretation and theory of mind by looking at second-order False Belief understanding. Second-order False Belief understanding was found to relate to object pronoun interpretation, which indicates that perspective taking is important in interpreting object pronouns. The False Belief task used in this study is a highly verbal task, which also depends on general language skills. Therefore, it could be argued that the observed relation between False Belief understanding and performance on pronoun comprehension merely reflects children's general language abilities.
However, if true, we would expect this to also be reflected in children's performance on reflexive comprehension. The reflexive condition can be considered a control condition, assessing children's general language comprehension abilities and, more specifically, their syntactic abilities. Since the children in our study did not have any problems in the reflexive condition, their general language comprehension abilities appear to be intact. Previous studies found significant relations between various aspects of language and False Belief understanding (for an overview, see Milligan et al., 2007). Our study adds to this the observation of a relation between object pronoun interpretation and False Belief understanding. Yet it would be worthwhile to examine the relation between object pronoun interpretation and theory of mind using other theory of mind tasks, for example low-verbal theory of mind tasks or the (more natural) strange stories task (Happé, 1994). Although we did not find an association between working memory and performance in object pronoun interpretation, it should be kept in mind that working memory is a broad concept and many different tasks for its measurement have been developed. In our study, an n-back task with non-verbal stimuli (pictures) was used. It is possible that working memory tasks with verbal stimuli are associated with object pronoun interpretation. However, meta-analyses show that both working memory tasks with verbal stimuli and with non-verbal stimuli relate to general language comprehension (Daneman and Merikle, 1996) and that both give rise to similar activation patterns in neuroimaging studies (Owen et al., 2005). Additionally, in a related study with the same children (Kuijper et al., 2015), a relation was found between performance on the n-back task with non-verbal stimuli and performance on another linguistic task than the one reported on here. This other linguistic task tested speakers' referential choice between using a pronominal subject and using a full noun phrase subject in production, which is dependent on how well the speaker can keep track of the different referents mentioned in the preceding linguistic discourse. This indicates that the n-back task used in this study relates to at least some aspect of linguistic performance that requires working memory. Since no association was found between the n-back task and performance on object pronoun interpretation in the present study, this strongly suggests that object pronoun interpretation and working memory are unrelated. In contrast to working memory, inhibition was found to be associated with object pronoun interpretation in our study. In our study, we used a stop task to measure prepotent response inhibition. Yet, it may be worthwhile to also investigate the relation between pronoun interpretation and other types of inhibition, in particular interference control (i.e., cognitive inhibition). A final consideration with regard to the cognitive processes that were studied here pertains to the role of age. In a post hoc analysis we added age to our final model, leading to attenuation of the effects of inhibition and theory of mind. Age, as the umbrella variable, was more strongly linked to object pronoun interpretation than the specific effects of theory of mind and inhibition. The effect of age shows that, in addition to theory of mind and inhibition, other cognitive factors are likely involved in pronoun interpretation which also develop with age and which we have not included in this study.
These cognitive factors (i.e., the included as well as non-included ones) are all subsumed by the overarching factor of age. Although the theoretical literature on object pronoun interpretation is not explicit about this, possibly cognitive flexibility (to switch from the incorrect interpretation to the alternative correct interpretation, cf. Kissine, 2012) or focused attention (to process speech in real-time, see, e.g., Wolfgramm et al., 2016) play a role too. In summary, the current study provides insight into the Pronoun Interpretation Problem and the cognitive mechanisms underlying this comprehension delay in children's language development. We found that both theory of mind and inhibition skills were associated with performance on object pronoun interpretation. This provides support for Hendriks and Spenader's (2006) perspective taking account of object pronoun interpretation, which holds that listeners must take into account the perspective of a hypothetical speaker and thus block the incorrect interpretation for the pronoun. Furthermore, our study showed that the performance of children with ASD or ADHD was comparable to that of TD children: the three groups demonstrated similar difficulties in their interpretation of object pronouns and none of the groups showed difficulties in the production of object pronouns and reflexives. This suggests that children with ASD and children with ADHD do not have more problems than TD children in taking into account the grammatical perspective of a hypothetical speaker, despite their possible difficulties in perspective taking with actual conversational partners. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT This study, involving human participants, was reviewed and approved by the Commissie Ethische Toetsing Onderzoek (CETO) of the University of Groningen. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. AUTHOR CONTRIBUTIONS SK, CH, and PH designed the study. SK carried out the experiment. SK, CH, and PH analyzed the results and wrote the manuscript. All the authors read and approved the final version of the manuscript. FUNDING This investigation was supported by a grant from the Netherlands Organisation for Scientific Research (NWO), awarded to PH. ACKNOWLEDGMENTS We are grateful to the children and their parents who participated in this study. We thank Accare University Center for Child and Adolescent Psychiatry Groningen, Autisme Team Noord-Nederland and all research assistants for their help in the data collection. Special thanks go to Jessica Overweg and Gisi Cannizzaro for their assistance and useful suggestions. The content of the manuscript previously appeared as part of the Ph.D. thesis of Sanne Kuijper (2016).
Characterizing microbiota and metabolomics analysis to identify candidate biomarkers in lung cancer Background Lung cancer is the leading malignant disease and cause of cancer-related death worldwide. Most patients with lung cancer have inconspicuous early symptoms, so most are diagnosed at an advanced stage. In addition to factors such as smoking and pollution, the lung microbiome and its metabolites play vital roles in the development of lung cancer. However, the interaction between the lung microbiota and carcinogenesis has not been systematically characterized and remains controversial. Therefore, the purpose of this study was to excavate the features of the lung microbiota and metabolites in patients and verify potential biomarkers for lung cancer diagnosis. Methods Lung tissue flushing solutions and bronchoalveolar lavage fluid samples came from patients with and without lung cancer. The composition and variations of the microbiota and metabolites in samples were explored using multi-omics technologies including 16S rRNA amplicon sequencing, metagenomics and metabolomics. Results The metabolomics analysis indicated that 40 different metabolites, such as 9,10-DHOME, sphingosine, and cysteinyl-valine, were statistically significant between the two groups (VIP > 1 and P < 0.05). These metabolites were significantly enriched in 11 signaling pathways, including the sphingolipid, autophagy and apoptosis signaling pathways (P < 0.05). The analysis of the lung microbiota showed significant changes in lung cancer patients: decreased microbial diversity, an altered distribution of microbial taxa, and variability in the correlation networks of the lung microbiota. In particular, we found that oral commensal microbiota and multiple probiotics might be connected with the occurrence and progression of lung cancer. Moreover, our study found 3 metabolites and 9 species with significant differences, which might be regarded as potential clinical diagnostic markers associated with lung cancer. Conclusions The lung microbiota and metabolites might play important roles in the pathogenesis of lung cancer, and the altered metabolites and microbiota might have the potential to be clinical diagnostic markers and therapeutic targets associated with lung cancer. Introduction Lung cancer (LC) is the main cause of death from cancer worldwide, and its incidence has continued to rise in recent years. Each year, more than 2.2 million people are diagnosed with the disease and 1.79 million die from it (1). The incidence is still rising in China, compared with declines in some western countries, which constitutes a major public health problem and causes a huge social burden (2). In contrast to small cell lung cancer (SCLC), which has been declining in many countries, non-small cell lung cancer (NSCLC) currently accounts for the largest proportion of LC (80%-90%) (3). For now, common diagnostic methods include X-ray film, positron emission tomography (PET), computed tomography (CT), and computer-aided detection and diagnosis (CAD) (4). However, these common techniques, which are conveniently used by medical staff, lack specificity and accuracy. Biomarkers were found to have the potential to assist in the early diagnosis of LC. The most widely used and reliable biomarkers are protein biomarkers found in blood and bronchoalveolar lavage fluid (5).
Combining biomarkers, imaging omics and artificial intelligence to constitute an integrated model for LC screening and diagnosis might be the direction of progress for improving LC prediction in the future. The crucial risk factors of LC include tobacco smoking, environmental and occupational pollution exposure, chronic lung disease, and lifestyle factors (6). Emerging studies have indicated that the lung microbiota and metabolites could affect pulmonary health and diseases of the lungs. The lungs were considered a sterile environment for a long time due to the limitations of culture-based techniques. However, the use of 16S ribosomal RNA (rRNA) amplicon sequencing has increased interest in the lung microbiota (7). Numerous studies have shown that the lung microbiota might play a crucial part in the pathogenesis of pulmonary diseases. Liu et al. showed that the lung microbial composition and community structures of smokers with LC were distinct from those of emphysema-only patients: the abundance of Proteobacteria in the lungs of patients with LC was significantly lower and the abundances of Streptococcus and Prevotella were higher compared to patients with emphysema only (8). Tsay et al. found that Streptococcus and Veillonella were up-regulated in the lower airways of LC patients, which was related to the promotion of the ERK and PI3K signaling pathways (9). Moreover, studies have demonstrated a lower alpha diversity of the lung microbiota in LC patients compared with that in patients with non-lung cancer (10). In general, compared with the studies on the gut, there have been fewer studies on the correlation between the lung microbiota and pulmonary homeostasis and diseases. Recently, the relationships between the progression of chronic inflammatory diseases and the variations of microbiota have been gradually discovered, and the lung diseases involved include cystic fibrosis, asthma and chronic obstructive pulmonary disease (COPD) (11). The relationship between smoking, airflow obstruction, and LC is well recognized. A previous study showed that COPD was an important factor in LC risk in smokers: smokers with COPD had a 3- to 10-fold increased risk of developing LC compared to smokers without emphysema or significant airway obstruction (12). Prolonged exposure to environmental pollutants could stimulate inflammatory factors, promote the formation of an environment suitable for the survival of pathogenic bacteria, and lead to dysbiosis of the lung microbiota; the dysbiosis could further induce inflammation and tissue damage, ultimately leading to an accelerated decline in lung function (13). However, while current research has focused on the association of the lung microbiota with chronic inflammatory diseases of the lungs, there have been fewer studies on LC. In recent years, several studies have generated interest in the relationships between metabolites of the lung microbiota and lung health. Microbial components can contribute to the progression of pulmonary diseases by producing metabolites with oncogenic potential. Gao et al. showed that the metabolites produced by Pseudomonas aeruginosa might be related to the pathogenesis of cystic fibrosis (14). In addition, the lung microbiota and metabolites contribute to the maintenance of the balance of the host lung immune system, which is an important contributor to defense against infection. Steed et al.
found that desaminotyrosine (DAT), a microbiota-associated metabolite, helped the host defend against influenza by positively stimulating type I IFN (15). However, current LC-related metabolomics studies have mostly targeted metabolites such as plasma proteins, which might not clearly characterize the metabolism of the lung microenvironment. Despite recent emerging studies on the correlation between the lung microbiota and metabolites associated with LC, the mechanisms still need to be further clarified. In addition, few studies have considered both the lung microbiota and metabolites to explore their possible associations and their roles in the pathogenesis of LC. Therefore, in our study, the differences in lung microbiota and metabolites between LC patients and patients with non-lung cancer were explored by 16S rRNA amplicon sequencing and metagenomics. Moreover, we used samples of lung tissue flushing solutions, which could more clearly characterize the metabolic changes of the lung microenvironment, for the metabolomics analysis to explore their effects on the development of LC. The results indicated that the lung microbiota and metabolites might play key roles in the development of LC, and the altered metabolites and microbiota might have the potential to be clinical diagnostic markers and therapeutic targets associated with LC. Materials and methods Participants From 2020 to 2021, patients with LC were recruited in the Zibo Municipal Hospital. The exclusion criteria included the use of antibiotics, corticoids, probiotics, prebiotics or immunosuppressive drugs in the past 3 months; hypertension; diabetes; previous airway surgery; preoperative radiotherapy and chemotherapy; and atomization treatment. Non-lung cancer patients were set as the control group for the metabolomics analysis, and the exclusion criteria were the same as those in the LC group. This study was approved by the Ethics Committee of Zibo Municipal Hospital (No. 20201102), and each subject signed voluntary informed consent before the study. The clinical information is summarized in Supplementary Tables S1-S3. Sample collection Nine LC patients with unilateral tumors were selected from patients examined by bronchoscopy for the 16S rRNA amplicon sequencing and metagenomics tests. All patients underwent routine examinations before the operation, including electrocardiography, pulmonary function tests, and routine blood tests. Sterile saline samples of bilateral lungs were obtained by bronchoscopy in patients with LC. Paired samples of bronchoalveolar lavage fluid (BALF) included one from the cancerous lobe and the other from the contralateral noncancerous lobe. Thirty LC patients with lung tumors and thirteen patients with non-lung cancer who underwent lobectomy were selected for the metabolomics test. A whole tumor of 1 cm³ and healthy tissue located 5 cm from the tumor in the same pulmonary region were extracted for each patient. The removed tumors and tissues were immediately flushed with sterile normal saline and collected in sampling tubes. All samples were immediately stored at -80°C until DNA extraction was performed. Non-targeted metabolomics profiling Metabolites were extracted from the lung tissue flushing solutions and analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The metabolomics analysis was performed on a UHPLC-Q Exactive HF-X system with an ACQUITY UPLC HSS T3 column (Waters, Milford, USA). The temperature of the column was set to 40°C and the injection volume was 2 μL.
The flow rate of the helium carrier gas was 0.4 mL/min, and the MS scanning range was m/z 70-1050. Progenesis QI (Waters Corporation, Milford, USA) was used to preprocess the MS raw data, and the obtained data matrix included retention time (RT), mass-to-charge ratio (m/z) and peak intensity. Principal component analysis (PCA) and orthogonal partial least squares-discriminant analysis (OPLS-DA) were used to explore whether all samples could be significantly clustered into different groups. The variable importance in projection (VIP) values of OPLS-DA and the P-values (Wilcoxon rank-sum test) were calculated to identify the metabolites with statistically significant differences between the two groups (16). Metabolites with P-values below 0.05 and VIP values above 1.00 were identified as differentially expressed metabolites. Metabolic pathway analysis was carried out to recognize the enriched pathways based on the altered metabolites. Altered metabolites were annotated through the KEGG database (https://www.kegg.jp/kegg/pathway.html) and the Python package SciPy was used for the pathway enrichment analysis. P-values were corrected by false discovery rate (FDR) with FDR ≤ 0.01 as the threshold.
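A minimal sketch of this two-criterion screen (Wilcoxon rank-sum P < 0.05 and VIP > 1) is shown below in Python, with a Benjamini-Hochberg FDR step included for illustration; the file layout, column names, and the source of the VIP scores are hypothetical assumptions, not the study's actual pipeline.

```python
# Sketch of the differential-metabolite screen described above.
import pandas as pd
from scipy.stats import ranksums
from statsmodels.stats.multitest import multipletests

# intensities: rows = samples, columns = metabolites (hypothetical files)
intensities = pd.read_csv("metabolite_matrix.csv", index_col=0)
groups = pd.read_csv("groups.csv", index_col=0)["group"]  # "LC"/"control"
vip = pd.read_csv("oplsda_vip.csv", index_col=0)["VIP"]   # from OPLS-DA

# Wilcoxon rank-sum test per metabolite.
pvals = {
    m: ranksums(intensities.loc[groups == "LC", m],
                intensities.loc[groups == "control", m]).pvalue
    for m in intensities.columns
}
pv = pd.Series(pvals)

# Benjamini-Hochberg FDR-adjusted P-values.
fdr = pd.Series(multipletests(pv.values, method="fdr_bh")[1], index=pv.index)

# Keep metabolites meeting both thresholds used in the text.
significant = pv.index[(pv < 0.05) & (vip.reindex(pv.index) > 1.0)]
print(f"{len(significant)} differentially expressed metabolites")
```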
The three diversity indices (Shannon, Chao, Ace) of the samples were calculated and averaged to assess the level of alpha diversity in the different groups; these were obtained by Mothur and visualized by R. The beta diversity was analyzed by weighted UniFrac phylogenetic distance matrices, visualized in non-metric multidimensional scaling analysis (NMDS) plots and tested for statistical significance by Partial Least Squares Discriminant Analysis (PLS-DA). The effect of species abundance on the discrepancy between groups was estimated using linear discriminant analysis and tabulated (LDA > 2.0, P < 0.05). The Wilcoxon rank-sum test was carried out to compare species differences between groups (P < 0.05). Correlation networks were used to show changes in the interactions between microbial communities. Degree (DC), closeness (CC), and betweenness centrality (BC) were used to describe the characteristics of the networks. Metagenomics analysis Microbial DNA was obtained from BALF samples using the FastDNA Spin Kit (MP Biomedicals, Shanghai, China) and tested for DNA purity using a NanoDrop microspectrophotometer (Nanodrop 2000, Thermo Fisher Scientific, America). Finally, DNA integrity was determined using agarose gel electrophoresis. DNA was fragmented to an average size of approximately 400 bp using a Covaris M220 (Gene Company Limited, China) for paired-end library construction. DNA libraries were subsequently constructed and assessed using the NEXTFLEX Rapid DNA-Seq kit (Bioo Scientific, USA). The metagenomics sequencing was carried out on an Illumina NovaSeq/HiSeq X Ten (Illumina, USA; data available at Sequence Read Archive: PRJNA858501). The raw sequence reads were trimmed, and the clean reads were assembled via MEGAHIT. Gene prediction was performed using MetaGene (http://metagene.cb.k.u-tokyo.ac.jp/), and CD-HIT software (version 4.6.1, http://www.bioinformatics.org/cd-hit/) was used for gene sequence clustering. Non-redundant gene sets were constructed using the longest sequence of each cluster as the representative. DIAMOND (https://github.com/bbuchfink/diamond) was employed to compare the sequences of the non-redundant gene sets with the EggNOG database (http://eggnog.embl.de/) to obtain the Clusters of Orthologous Groups (COG) functions corresponding to genes, and the relative abundance of each COG was calculated as the sum of the abundances of the genes corresponding to that COG. Linear regression analysis was carried out to estimate the consistency between species and function. Significant differences in COG categories between groups were detected by the Wilcoxon rank-sum test (P < 0.01). Biomarker identification Biomarker identification was performed by MetaboAnalyst (https://www.metaboanalyst.ca/) (17). Based on the differential metabolites and microbiota obtained by the above analyses, receiver operating characteristic (ROC) curve analysis was used to generate curves and calculate the area under the curve (AUC). In addition, we combined the obtained biomarkers to further explore the predictive ability of the model.
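To illustrate this step, the sketch below computes single-marker and combined-model AUCs in Python, with scikit-learn standing in for MetaboAnalyst; the file names and column names are hypothetical.

```python
# Sketch of the single- and combined-marker ROC analysis described above.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("biomarker_levels.csv")   # one row per patient (hypothetical)
y = df["lung_cancer"]                      # 1 = LC, 0 = control
markers = ["cysteinyl_valine", "chlorobenzoic_acid_3", "dihydroxyphenylethanol_34"]

# AUC of each metabolite on its own.
for m in markers:
    print(m, round(roc_auc_score(y, df[m]), 4))

# Combined model: logistic regression over the three metabolites,
# scored by the AUC of its predicted probabilities.
clf = LogisticRegression().fit(df[markers], y)
probs = clf.predict_proba(df[markers])[:, 1]
print("combined", round(roc_auc_score(y, probs), 4))
```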
Metabolomics profiles change in LC patients

The samples of lung tissue flushing solutions were used for the metabolomics analysis. After processing of the raw data, peak areas were used for quantification. In positive (ESI+) mode, 8,650 positive peaks were detected, 428 metabolites were identified, and 125 metabolites were annotated against the KEGG database. In negative (ESI-) mode, 5,580 negative peaks were detected, 178 metabolites were identified, and 55 metabolites were annotated against the KEGG database. The data were normalized, and the relative standard deviation (RSD) was used to exclude features with poor stability during the experiment. The results indicated favorable stability of the samples in both positive and negative modes (Supplementary Figures S1A, B). PCA revealed that the QC samples clustered closely, indicating that the metabolomics datasets had satisfactory stability and repeatability (Supplementary Figures S1C, D). OPLS-DA analysis showed a clear separation of the metabolite profiles of the two groups (Figure 1A). Metabolite features that distinguished LC patients from controls were selected based on a log2 fold change cutoff of 1 and VIP scores determined by OPLS-DA (VIP > 1, P < 0.05, Supplementary Table S4). We obtained 40 metabolites with significant differences in relative abundance between LC patients and controls (Figures 1B, C), which included 4 organic oxygen compounds, 4 fatty acyls, 3 organoheterocyclic compounds, 3 prenol lipids, 10 glycerophospholipids, 4 benzene and substituted derivatives, 2 carboxylic acids and derivatives, 1 benzenoid, 2 lipids and lipid-like molecules, 2 organic acids and derivatives, 1 purine nucleoside, and 4 other compounds (Supplementary Figure S1E). Overall, 14 and 26 metabolites were significantly up-regulated and down-regulated in LC patients, respectively. KEGG pathway enrichment analysis was performed to explore the metabolic pathways associated with the differential metabolites in LC patients and controls. A total of 130 metabolic pathways were identified, among which 24 differed significantly, including ABC transporters, protein digestion and absorption, and central carbon metabolism in cancer (P < 0.01, Supplementary Figure S1F). The significantly different metabolites were enriched in a total of 15 signaling pathways, of which 11 were markedly changed in LC patients, comprising autophagy, apoptosis, necroptosis and the sphingolipid signaling pathway (P < 0.05, Figure 1D).

Altered composition of the lung microbiota in LC patients

BALF samples were used for 16S rRNA amplicon sequencing to explore the changes of the lung microbiota in LC patients; a total of 16 samples passed quality control and were included in the study. According to Usearch statistics, in the raw data of 16S rDNA sequencing using primers 338F and 806R, the total number of reads obtained was 888,409 pairs. The original data were filtered by QIIME software and then merged by FLASH software to generate tag sequences. A total of 16 qualified samples were obtained by BALF sample sequencing, with an average read length of 425 bases. Finally, UPARSE software was used to cluster the merged sequences into OTUs at 97% similarity, and the total number of OTUs obtained was 1,711. Species accumulation and rarefaction curves at the OTU level indicated that the vast majority of microbial diversity was captured in all samples (Supplementary Figures S2A, B). A Venn diagram was used to show the variation in OTUs between the two groups (Supplementary Figure S2C). Overall, 453 OTUs were shared between groups, and there were more unique OTUs in controls (973) than in LC patients (285). The results of PLS-DA model analysis, reflecting the clustering of the two groups, showed that the separation between LC patients and controls was clear (Figure 2A). The alpha diversities of the two groups did not show a significant difference (Supplementary Figure S2D). NMDS analysis on the basis of Bray-Curtis dissimilarity indicated that the two groups were apart from each other on the ordination (stress < 0.2, Figure 2B).
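The Bray-Curtis/NMDS ordination step can be reproduced in a few lines of Python. This is a sketch with a randomly generated OTU table standing in for the real data; note that scikit-learn reports raw stress, so comparing against the common "stress < 0.2" rule of thumb requires Kruskal's normalized stress:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
# hypothetical OTU table: 16 samples x 200 OTUs (counts)
otu = rng.poisson(lam=5.0, size=(16, 200)).astype(float)
rel = otu / otu.sum(axis=1, keepdims=True)    # relative abundances

# Bray-Curtis dissimilarity matrix between samples
d = squareform(pdist(rel, metric="braycurtis"))

# non-metric MDS on the precomputed dissimilarities
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           random_state=0, n_init=10)
coords = nmds.fit_transform(d)
print("2-D ordination:", coords.shape, "raw stress:", round(nmds.stress_, 4))
```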
A taxonomic analysis of the sequences revealed that the most prevalent phylum in the lung microbial community was Proteobacteria, and variations of microbial composition at the genus level between individuals could be seen (Figure 2C, Supplementary Table S5). We relied on LEfSe analysis and a two-sided Welch's t-test to identify the major taxa that drove the differences between the two groups. LEfSe analysis recognized 26 genera with discrepant abundances between the two groups (LDA > 2.0, P < 0.05). In LC patients, an enrichment in Chloroflexi taxa was observed, while Lactobacillus, Massilia, Lactococcus, Oscillospirales and Christensenellaceae were significantly more abundant in controls (Figure 2D). Additionally, the results of the two-sided Welch's t-test showed that Lactobacillus delbrueckii subsp. bulgaricus, Massilia timonae and Lactobacillus reuteri were more abundant in controls at the species level (P < 0.05, Figure 2E). Taken together, we identified microbiota and metabolites that differed between the two groups, and their changes may be correlated. Therefore, a heat map shows the association between the 20 differential genera and 40 differential metabolites closely related to the progression of LC (Supplementary Figure S2E). Then, we used metagenomics analysis to predict gene functions of the lung microbiota; a total of 12 samples met the criteria after quality inspection. Based on the construction of non-redundant gene sets, we obtained 12,671 genes with a total sequence length of 6,216,664 bp and an average sequence length of 490.62 bp. 179 different COG functional categories were identified (P < 0.05, Supplementary Table S6), and 5 functional categories showed significant differences (P < 0.01, Figure 2F), including the K+-transporting ATPase, DNA polymerase III, the PAS domain, the membrane-associated protease RseP and the predicted flavoprotein YhiN. Linear regression analysis of the relationship between the similarity in the functional attributes of the community and the community composition indicated a prominent correlation between the two (R² > 0.8, P < 0.01, Supplementary Figure S2F).

Microbial interaction networks in non-lung cancer and lung cancer patients

To identify the interactions of the lung microbiota in patients with or without lung cancer, we constructed correlation networks at the genus level. The networks showed different bacterial interactions in the two groups; in particular, the network of LC patients was more complex than that of the controls. Given the distinct microbial composition of the two groups, we compared the topology of the networks in each group. The mean degree and transitivity were higher in the LC patients (mean degree, 4.9; transitivity, 0.64) than in the controls (mean degree, 3.6; transitivity, 0.58), suggesting that the genera enriched in LC patients were more strongly correlated with each other than those in controls. The results indicated that patient-enriched species affected the host by interacting and exerting similar effects. Furthermore, degree centrality (DC), closeness centrality (CC) and betweenness centrality (BC) were used to screen the influential microbiota in each network (DC > 0.1, CC > 0.2, BC > 0.1). In LC patients, Campylobacter, Atopobium, Haemophilus and Streptococcus acted as network hubs and were important to the alteration of the lung microbial community of the LC patients (Figure 3A). Results in controls showed that Bacillus, Fusobacterium, Alloprevotella, Klebsiella and Kroppenstedtia were the most influential (Figure 3B). We constructed a correlation network combining the significantly different genera obtained by 16S rRNA amplicon sequencing. Lactobacillus, Brevundimonas, Massilia and the Christensenellaceae R-7 group, which were enriched in controls, were positively correlated with each other, and they were negatively correlated with Veillonella, Atopobium, Haemophilus and Fusobacterium, which were enriched in LC patients (Figure 3C). Based on the measurement indexes characterizing the properties of the networks (DC > 0.1, CC > 0.2, BC > 0.1), Brevundimonas, Bacillus, Veillonella, Klebsiella and Pseudomonas were identified.
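The network construction and hub screening described here can be sketched with networkx. The abundance table, genus names, and the correlation threshold of 0.6 are hypothetical; the centrality thresholds are those quoted in the text:

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
genera = [f"genus_{k}" for k in range(20)]
abund = rng.lognormal(size=(16, 20))      # samples x genera (hypothetical)

rho, _ = spearmanr(abund)                 # 20 x 20 Spearman correlation matrix
G = nx.Graph()
G.add_nodes_from(genera)
for a in range(20):
    for b in range(a + 1, 20):
        if abs(rho[a, b]) > 0.6:          # assumed correlation threshold
            G.add_edge(genera[a], genera[b], weight=rho[a, b])

# topology metrics of the kind compared between the two groups in the text
degrees = [d for _, d in G.degree()]
print("mean degree:", np.mean(degrees), "transitivity:", nx.transitivity(G))

# screen influential taxa with the thresholds quoted in the text
dc = nx.degree_centrality(G)
cc = nx.closeness_centrality(G)
bc = nx.betweenness_centrality(G)
hubs = [n for n in G if dc[n] > 0.1 and cc[n] > 0.2 and bc[n] > 0.1]
print("candidate hub genera:", hubs)
```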
Identifying biomarkers in LC patients

ROC curves are widely used to evaluate the predictive ability of models, and in this study ROC analysis was used to assess representative differential features for the diagnosis of LC. As indicated by the results, the AUCs of cysteinyl-valine, 3-chlorobenzoic acid and 3,4-dihydroxyphenyl ethanol were 0.8692, 0.859 and 0.8103, respectively (Supplementary Table S7), and these metabolites might be useful in identifying patients with LC (Figures 4A, B). To improve the accuracy of the biomarkers, the three metabolites were combined for ROC analysis, which showed a more striking diagnostic capability for LC (AUC = 0.91, Figure 4C). LEfSe analysis based on 16S rRNA amplicon sequencing revealed 14 significantly different species (LDA > 2.0, P < 0.01), from which nine species were screened by LASSO (Supplementary Figure S3, Supplementary Table S8). ROC analysis based on the combination of the nine species demonstrated that LC could be assessed by representative differential lung microbiota (Figure 4D).

Discussion

The LC tumor microenvironment is colonized by microbiota, which can interact with the host, and new studies have indicated that this might be a potential factor affecting LC. Generally speaking, the normal tissue microenvironment protects the lungs, while the tumor microenvironment promotes cancer progression. Therefore, we used samples that characterize changes in the lung microenvironment to explore the effects of the lung microbiota and metabolites on the progression of LC. This study suggested that the microbiota and metabolites that were altered between the patients with or without lung cancer might play pivotal roles in LC pathogenesis. In the metabolomics analysis of flushing fluid samples, multiple fatty acyls were significantly up-regulated in LC patients and glycerophospholipids accounted for the largest proportion in controls, which indicated that lipid metabolism changed in LC patients. Increasing evidence suggests that lipid metabolism could assist in determining tumor metastasis, improving therapeutic efficacy and developing new therapeutic targets (18). Lipids are components of cell membranes and serve not only in energy storage but also as messengers in signaling. In addition, the disorder of lipid metabolism in cancer cells affects cell proliferation, differentiation and other processes (19). As the main components of pulmonary surfactant, which is a complex of phospholipids (85% phosphatidylcholine) and surfactant proteins, lipids have been shown to play essential roles in the pathogenesis of LC (20,21). Pulmonary surfactant is synthesized and secreted by alveolar type II cells, a type of lung stem cell that was shown in previous studies to transform into monoclonal lung tumors upon activating KRAS mutation (22). Various studies have shown that the destruction of pulmonary surfactant and changes in alveolar type II cell homeostasis are connected with the pathogenesis of LC (23). In particular, we found that sphingosine, a metabolite enriched in the sphingolipid signaling pathway, was significantly decreased in LC patients. Sphingolipids are bioactive membrane lipids that act as first or second messengers (24,25). The first sphingolipid detected was sphingosine, which can regulate various physiological processes such as the cell cycle and apoptosis (26). Sphingosine, as a regulator that inhibits cell proliferation, can affect cell growth and apoptosis (27). Notably, sphingosine is an important substance that helps protect the respiratory tract against bacterial pathogens (28). Sphingosine has been found to inhibit multiple pathogens, including Staphylococcus aureus, Acinetobacter baumannii, Haemophilus influenzae, Escherichia coli, Fusobacterium nucleatum and Streptococcus sanguinis (29). As the heat map showed, the bactericidal effect of sphingosine may be related to the lower abundance of Haemophilus and Streptococcus in the controls of our study. Moreover, the pathways of ABC transporters, protein digestion and absorption, and central carbon metabolism were changed in patients. Decreased levels of ABC transporter-related metabolites, including betaine, L-arginine and taurine, were found in LC patients. Betaine is widely regarded as an antioxidant and has beneficial actions in several human diseases, such as obesity, diabetes and cancer (30). Tang et al.
reported that the choline-betaine pathway contributed to hyperosmotic and lethal stress resistance in Pseudomonas protegens SN15-2 (31), which could be related to the enrichment of Pseudomonas in the controls of our study. Arginine is a precursor of various organic compounds such as nitric oxide (NO), ornithine and myosine, which have huge impacts on immune cell biology, especially macrophage, dendritic cell and T cell immunobiology (32,33). Kim et al. reported that arginine-induced changes in the gut microbiota enhanced host lung immunity to nontuberculous mycobacterial infection, indicating that arginine might play a protective role in the lungs (34). Taurine, a conditionally essential amino acid in humans, has multiple physiological functions, including regulating neural conduction, participating in endocrine activities, enhancing immunity, and strengthening the antioxidant capacity of the cell membrane (35). Taurine was found to inhibit the proliferation of lung cancer cells, significantly boost the apoptosis rate, and reduce the expression of the migration factors matrix metallopeptidase 9 (MMP-9) and vascular endothelial growth factor (VEGF) (36,37). Previous studies have shown that a taurine ABC transporter protein has been identified in Lactobacillus, which could be related to the enrichment of Lactobacillus in the controls of our study (38). The up-regulation of betaine, arginine and taurine in controls might contribute to enhanced immunity and to the antioxidant capacity of cells. The pathways of protein digestion and absorption and of central carbon metabolism in cancer involved a variety of amino acids, such as L-tryptophan, whose relative abundance was decreased in LC patients. Tryptophan is an essential amino acid and plays essential roles in various physiological processes. Down-regulated tryptophan concentrations have been detected in patients with colorectal cancer, malignant melanoma and LC, and studies showed that tryptophan metabolites could drive the motility and migration of cancer cells (39). In addition, the linoleic acid metabolism pathway, which involves the metabolite 9,10-DHOME, was up-regulated in LC. The level of 9,10-DHOME, the epoxide hydrolase metabolite of the leukotoxin 9,10-EpOME, was found to be increased in disease. 9,10-DHOME activates the NF-κB and AP-1 transcription factors of endothelial cells to mediate inflammatory responses (40). Moreover, many studies showed that DiHOMEs might be part of the inflammatory response to environmental insults in the lungs (41). In this research, we explored the microbial changes in BALF samples using 16S rRNA amplicon sequencing. The results showed that the microbiota constitution in LC patients was different from that of controls and that the microbiota differed in terms of beta-diversity. The microbial dysbiosis of LC patients was represented by decreasing microbial diversity and increasing Streptococcus, Prevotella, Veillonella and Haemophilus, which is in accordance with existing results (9,42). Elevated abundances of Streptococcus, Prevotella and Veillonella were previously found in tumor tissues from LC patients, and the changes of these genera were related to the up-regulation of the ERK and PI3K signaling pathways in LC patients (9). We also found that Fusobacterium was up-regulated in LC patients. The promoting effect of Fusobacterium on tumor cells is mainly achieved by inhibiting host immunity and inducing a pro-inflammatory microenvironment (43).
The available studies demonstrated that Fusobacterium acts as an inducer in various cancers, such as breast, colon and oral cancer (44)(45)(46). Several studies on the mechanism of Fusobacterium in promoting tumor development have provided different results. High levels of Fusobacterium promoted the activity of NF-κB and various pro-inflammatory factors, and the FadA virulence factor of Fusobacterium affected cell growth by regulating the β-catenin signaling pathway (47,48). LEfSe analysis showed that potential probiotics, including Lactobacillus, Lactococcus, Oscillospirales and Christensenellaceae, were down-regulated in LC patients. Probiotics were found to have the ability to achieve anticancer effects by promoting apoptosis of cancer cells and improving resistance to oxidative stress (49,50). Multiple common microorganisms in the human gut have probiotic effects, such as Bifidobacterium, Lactobacillus and Lactococcus. In particular, many lactic acid bacteria (LAB) have essential beneficial impacts on the host, such as anti-oxidation and anti-inflammation (51). The antioxidant capacity of LAB is based on their high catalase and α,α-diphenyl-β-picrylhydrazyl (DPPH) free radical scavenging activity; the anti-inflammatory property is achieved by promoting anti-inflammatory cytokines (IL-10) and decreasing pro-inflammatory cytokines (IL-6) (43). Oscillospirales is believed to produce short-chain fatty acids, and its level was also found to be decreased in disease (52). Christensenellaceae have been found in the human body and play an important role in human health (53). The correlation networks showed that multiple oral bacteria were enriched in the lungs, with strong correlations between them, such as Veillonella, TM7x, Capnocytophaga, Parvimonas and Granulicatella. There has been increasing interest in detecting the connection between the oral microbiota and the occurrence of respiratory tract infections. Associations between the oral microbiota and several respiratory infections have been reported previously (54). A previous study found that oral commensal microbiota was enriched in the lower airway of LC patients, and the connections between the lower airway microbiota and host immunity in healthy subjects have also been explored (9). Previous studies confirmed that distinct oral commensal microbiota change during the development of cancers such as pancreatic cancer, breast cancer or LC (55)(56)(57). However, none of them clearly elucidated the relationships between oral commensal microbiota and the pathogenesis of multiple cancers. Our study also has some limitations: it did not consider tumor stage or histological subtype, and it lacks clinical validation. Many studies have found differences in the characteristics of the microbiota between different tumor stages and histological subtypes in other cancers (58,59). In subsequent studies, we will expand the sample size and evaluate the potential markers in a larger cohort. We hope to verify the diagnostic value of the biomarkers and explore the molecular mechanisms by which the lung microbiota and metabolites affect LC. Moreover, the relationship between lung microbiota and metabolites in different tumor stages and histological subtypes will be considered.
Conclusions

In this study, the differences in lung microbiota and metabolites between LC patients and patients with non-lung cancer were explored by 16S rRNA amplicon sequencing, metagenomics and metabolomics. The results suggested that lung microbiota and metabolites might play critical roles in the progression of LC. The composition of the lung metabolites was significantly different between the LC patients and controls, which indicated that lipid metabolism, especially the sphingolipid signaling pathway, changed in LC patients. The microbiota in LC patients differed from those in controls, with multiple probiotics down-regulated in LC patients. Moreover, we found that oral commensal microbiota might be related to the development and progression of LC. Finally, we identified 3 metabolites and 9 species with significant differences, which might have the potential to serve as clinical diagnostic markers and therapeutic targets associated with LC.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ncbi.nlm.nih.gov/, PRJNA858501; https://www.ncbi.nlm.nih.gov/, PRJNA858534.

Ethics statement

The studies involving human participants were reviewed and approved by the Ethics Committee of Zibo Municipal Hospital (20201102). The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author contributions

MS received the research grant, conceived and designed this study, and revised the manuscript. BL and YL analyzed clinical and microbiome data and wrote the manuscript. LS enrolled the study participants and revised the manuscript. WZ, RW, HC, JL, XY, SX and WW performed the experiments. LD and SL supervised and provided continuous guidance for the experiments. All authors approved the final version of the manuscript before submission.
Fluctuation persistent current in small superconducting rings

We extend previous theoretical studies of the contribution of fluctuating Cooper pairs to the persistent current in superconducting rings subjected to a magnetic field. For sufficiently small rings, in which the coherence length ξ exceeds the radius R, mean field theory predicts the emergence of a flux-tuned quantum critical point separating metallic and superconducting phases near half-integer flux through the ring. For larger rings with R ≳ ξ, the transition temperature is periodically reduced, but superconductivity prevails at very low temperatures. We calculate the fluctuation persistent current in different regions of the metallic phase for both types of rings. Particular attention is devoted to the interplay of the angular momentum modes of the fluctuating order parameter field. We discuss the possibility of using a combination of different pair-breaking mechanisms to simplify the observation of the flux-tuned transition in rings with ξ > R.

I. INTRODUCTION

The study of superconducting fluctuations already has a long history; for a comprehensive review see Ref. 1. When approaching the superconducting phase from the metallic side, for example by lowering the temperature T, precursors of superconductivity reveal themselves long before the superconducting state is fully established. In this regime, electrons form Cooper pairs only for a limited time. Being charged objects themselves, the Cooper pairs participate in charge transport. At the same time the density of states of the unpaired electrons is reduced. These simple qualitative arguments already indicate that superconducting fluctuations can affect both transport and thermodynamic properties of the metal outside the superconducting phase. Detailed studies of these effects have been conducted in different contexts.1,2 It is well known, for example, that fluctuation effects are more pronounced when the effective dimensionality of the superconductor is reduced or in the presence of disorder. In bulk superconductors the transition temperature Tc can be partially or even completely suppressed by various pair-breaking mechanisms, most notably by applying a magnetic field or introducing magnetic impurities. An additional pair-breaking mechanism can become effective in doubly connected superconductors like superconducting rings or cylinders, when they are threaded by a magnetic flux φ.
In this case one observes so-called Little-Parks oscillations:3 the transition temperature Tc is periodically reduced as a function of φ. Due to the periodicity, it is immediately evident that this effect is qualitatively different from the mere suppression of superconductivity by a magnetic field in bulk superconductors. The period of the oscillations is equal to 1 as a function of the reduced flux ϕ = φ/φ0, where the superconducting flux quantum is φ0 = π/e,4 see Fig. 1.

[FIG. 1 caption: The superconducting phase for small rings with effective radius r = R/ξ < 0.6 is shown in dark gray. Mean field theory predicts a full reduction of Tc for fluxes between ϕc0 ≈ 0.83 r and 1 − ϕc0 near ϕ = 1/2. In this article we focus on fluctuations in the normal phase for these small rings. The superconducting phase for larger rings with r ≳ 1 is shown in light gray. Tc is periodically reduced as a function of the flux ϕ, but superconductivity prevails at low temperatures. The dotted lines give Tnϕ, defined below Eq. (2.4), for n ∈ {0, 1, 2}. For r ≳ 1, Tcϕ is well approximated by the formula Tcϕ ≈ Tc0(1 − ϕ²/r²). The phase diagram is periodic in ϕ with period 1 for vanishing ring thickness.]

The maximal Tc reduction occurs when ϕ takes half-integer values. The magnitude of the Tc reduction is size-dependent. It is convenient to measure the ring radius R in units of the zero-temperature coherence length ξ and to define r = R/ξ. A representative mean field phase diagram is displayed in Fig. 1 for two rings of different size. As we see in Fig. 1, mean field theory predicts a moderate Tc reduction for moderately small rings with r ≳ 1. Most strikingly, it also shows that for very small rings or cylinders with r < 0.6 the transition temperature is expected to be equal to zero in a finite interval close to half-integer fluxes (this regime is sometimes called the destructive regime). Correspondingly, a flux-tuned quantum phase transition is expected to occur in these rings or cylinders at a critical flux ϕc0. The mean field transition line can be found from Eq. (3.4), to be discussed below. As is well known, superconducting rings threaded by a magnetic flux φ support a dissipationless persistent current.2,5 In this article, we study theoretically the fluctuation persistent current in different regions of the phase diagram, both for rings with r ≳ 1 and moderate Tc reduction and for rings with r ≲ 0.6 and strong Tc suppression. In particular, we will study in detail the large fluctuation persistent current I which occurs even at fluxes for which Tc is reduced to zero while the system has a finite resistance. A short account of the most important results of this study has already been presented in Ref. 6. Here we extend our study to different regions of the phase diagram, discuss the results in a broader context, and include details of the derivations. This study is motivated by recent experiments that are significant to our understanding of fluctuation phenomena in superconductors with doubly-connected geometry. Koshnick et al.7 measured the persistent current in small superconducting rings with r ≳ 1; for the smallest rings under study Tc was reduced by approximately 6%. Superconducting fluctuations in rings of this size are by now well understood, both experimentally and theoretically.[6][7][8][9] This is not so for smaller rings with r < 0.6.
Strong Little-Parks oscillations for cylinders with r < 0.6, for which Tc is reduced to zero near half-integer flux, have been observed in a transport measurement on superconducting cylinders.10 It has so far, however, not been possible to measure the persistent current in superconducting rings close to the flux-tuned quantum critical point. The difficulty in accessing the destructive regime for rings is that the experiments require a high sensitivity of the measurement device as well as low temperatures. At the same time, the magnetic field should be strong enough to produce a sufficiently large flux penetrating the small ring area. We address this issue in this manuscript by discussing the possibility that a combination of different pair-breaking mechanisms can lead to progress in this direction. More specifically, we consider the combined effects of the magnetic flux through the ring's center on the one hand and of magnetic impurities and/or the magnetic field passing through the bulk material of the ring on the other hand. By direct calculation, we further explore how the presence of the quantum critical point influences the persistent current away from the quantum critical point, e.g., for temperatures of the order of Tc0 ≡ Tc(ϕ = 0). The literature on superconductivity in systems with doubly connected geometry is extensive. We would like to point out a number of works where related phenomena have been discussed. The possibility of finding complete suppression of superconductivity near half-integer flux in small superconducting rings was pointed out by de Gennes.11,12 The phase diagram of superconducting cylinders was considered in Ref. 13, taking into account the interplay of pair-breaking effects caused by the flux on the one hand and the magnetic field penetrating the walls (of finite width) on the other hand. We will use their results when we discuss the influence of finite-width effects on the phase diagram. A detailed study of the fluctuation persistent current in rings for which Tc is reduced to zero by magnetic impurities (at any value of the flux) can be found in Refs. 14 and 15. These works address a long-standing puzzle related to the observation of an unexpectedly large persistent current in copper rings.16 It is suggested that these rings contain a finite amount of magnetic impurities, which suppress superconductivity and cause the rings to remain in the normal state even at low temperatures. Denoting the scattering rate on the magnetic impurities by 1/τs, there is a critical rate 1/τsc and an associated quantum critical point that separates the superconducting from the normal phase. If the measurements are performed on rings with a scattering rate that is larger but close to 1/τsc, the corresponding fluctuations can lead to large currents in the rings. In contrast, in the parallel work of Ref. 6 and in the present manuscript we consider the opposite case, in which the phase transition is primarily tuned by the magnetic flux, so that for vanishing flux and low temperatures the ring is in the superconducting state. We examine the influence of additional weak pair-breaking effects on the phase diagram and how they can help to experimentally observe the flux-tuned quantum phase transition. In the experiment of Ref. 10 on cylinders it was observed that near half-integer flux the resistance R along the cylinder drops as T decreases and then saturates for the lowest temperatures.
In a later experiment,17 regular step-like features were additionally observed in the R-T diagram and interpreted as being due to a separation into normal and superconducting regions along the cylinder. A number of theoretical works addresses the issue of transport in small superconducting cylinders. In Refs. 18 and 19 the perturbative fluctuation contribution to the conductivity of long superconducting cylinders near a flux-tuned quantum critical point was discussed as a particular example for transport near a pair-breaking transition. In a broad sense, the general approach is similar to ours, but the considered system has a different dimensionality and the work discusses transport, while we study a thermodynamic property. It has been suggested in Ref. 18 that the observable regime in the experiment10 is dominated by thermal fluctuations and that at even lower T an upturn of R could be expected. To the best of our knowledge, so far no detailed comparison between theory and experiment is available. The observed saturation in the experiment10 corresponds to a strong reduction of the normal resistance and as such lies outside the region of validity of the perturbative approach. The role of inhomogeneities along the cylinder axis has been further emphasized in Ref. 20. In Ref. 21 a mean field model was proposed that takes into account inhomogeneities caused by a variation of parameters like the mean free path or the width along the cylinder axis, and a specific profile was found that would quantitatively fit the experimental phase diagram, both as far as the saturation and the step-like features are concerned. Fluctuation effects, on the other hand, were neglected. It seems likely that a complete description would have to include both inhomogeneities and fluctuations, which is very demanding. In this respect, the situation with rings is more advantageous. Since the typical size of inhomogeneities is larger than or of the order of the coherence length, they are unlikely to play a role for the superconducting rings under study here, and we may focus on fluctuation effects only. We make detailed predictions for the flux and temperature dependence of the resulting fluctuation persistent current. As mentioned before, due to experimental difficulties, no measurements of the destructive regime are available for rings yet (to the best of our knowledge). Fluctuation effects in moderately small superconducting rings (r ≳ 1) were studied in Refs. 9 and 22 with particular emphasis on the regime of strong fluctuations near the transition. These works introduce the idea that the strong fluctuation regime near the thermal transition can be described in terms of one or two coupled angular momentum modes of the order parameter field in the classical GL functional. We will make use of this idea and derive more detailed results for the persistent current and susceptibility close to integer and half-integer fluxes. Ref. 8 studies superconducting fluctuations in rings near the thermal transition using a numerical approach based on the mapping of the classical GL functional onto the problem of solving an effective Schrödinger equation.23 The results of this approach agree well with the experiment of Koshnick et al.24 This approach works nicely in cases where many angular momentum modes of the order parameter field give a sizeable contribution to the persistent current, but can become cumbersome in the opposite limit (see Ref. 24).
In this sense, it is complementary to the approach used in this manuscript (as well as in Refs. 6, 9 and 22), which is well suited for rings that are so small that only one or two angular momentum modes are important. The role of the back-action effects caused by the self-induction of cylinders and rings was discussed in this context in Refs. 25 and 26. While we will discuss finite thickness effects, we will generally assume that the self-induction and the persistent currents of the rings under study in this article are sufficiently small so that back-action effects can be neglected. The article is organized as follows. Section II is devoted to the persistent current in rings with r ≳ 1. Some calculational details are presented in appendix A. In section III we study in detail the fluctuation persistent current in different regions of the phase diagram for rings with r ≲ 0.6, including the vicinity of the quantum critical point. Details of the calculation are relegated to appendix B. In section IV we discuss different ways to reduce Tc to zero in rings with finite width or by introducing magnetic impurities.

II. THERMAL TRANSITION FOR RINGS WITH r ≳ 1

In this section we will discuss the description of rings with only a moderate suppression of Tc, i.e., rings for which r ≳ 1. A condensed discussion has already been presented in Ref. 6. Here we take the opportunity to provide additional information. For rings with r ≳ 1 the superconducting transition occurs at a finite temperature and fluctuations can be described with the help of the classical GL functional, in which the order parameter field is static. In the imaginary time formalism this amounts to neglecting order parameter field components with finite Matsubara frequency in the functional1 (a formal justification will be given in the first paragraph of Sec. III C below). This simplified description is valid close to the transition line in the Tc-ϕ phase diagram.

A. Ginzburg-Landau functional

The starting point for our discussion of superconducting fluctuations in rings with r ≳ 1 is the classical GL functional given in Eq. (2.1).27 Here, FN describes the normal (non-superconducting) part of the free energy. It gives rise to a normal component of the persistent current.28 Since we are mainly interested in a regime close to the transition, however, the fluctuation contribution is much larger and we will not discuss FN further. At the mean field level, the sign change of the quadratic form in ψ signals the onset of the superconducting phase, which motivates the parametrization a = αTc0 ε, with the reduced temperature ε = (T − Tc0)/Tc0. A characteristic length scale, the (zero temperature) coherence length, can be identified, ξ = 1/√(4mαTc0). The microscopic theory for disordered superconductors29 provides explicit relations for these coefficients in terms of the diffusion coefficient D. The normalization of ψ allows for a certain arbitrariness; this is why only the ratio α²/b is fixed. The quartic part of the functional stabilizes the system once it is tuned below the transition temperature. Above this temperature, the quartic term gives only a small contribution to thermal averages, except in the very vicinity of the transition, the so-called Ginzburg region. As long as one stays outside of this region on the metallic side, one can restrict oneself to a quadratic (Gaussian) theory (i.e., neglect the quartic term), which is much easier to handle theoretically but becomes unreliable close to the transition, where the quadratic theory becomes unstable.
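A GL functional of the standard form, consistent with the coefficients a and b, the mass m, and the Cooper pair charge 2e used below, is sketched here for orientation. This is a reconstruction under standard conventions, and the numerical factor in the kinetic term may differ from the one used in Eq. (2.1):

```latex
\mathcal{F}[\psi] = \int d\mathbf{r}
\left[\, a\,|\psi|^{2} \;+\; \frac{b}{2}\,|\psi|^{4}
\;+\; \frac{1}{4m}\left|\left(-i\nabla - 2e\mathbf{A}\right)\psi\right|^{2} \right]
\;+\; \mathcal{F}_{N},
\qquad a = \alpha T_{c}^{0}\,\varepsilon,
\quad \varepsilon = \frac{T - T_{c}^{0}}{T_{c}^{0}} .
```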
Importantly, even above the transition temperature, in the normal region of the mean field phase diagram, the average of |ψ|² with respect to the functional F is finite. After this preparation, we turn to the description of superconducting rings. When the superconducting coherence length ξ(T) = ξ/√ε and the magnetic penetration depth λ(T) are much larger than the ring thickness, the system is well described by a one-dimensional order parameter field ψ, albeit with periodic boundary conditions.5 In order to account for these boundary conditions, it is convenient to introduce angular momentum modes, ψ = V^(−1/2) Σn ψn e^(inφ), where V = 2πR S⊥ is the volume of the ring and S⊥ the cross-section of the wire forming the ring. The vector potential can be chosen as A = B × r/2, the integration as ∫dr → S⊥R ∫dφ. Then, the free energy functional takes the form of Eq. (2.4), where anϕ = a + (n − ϕ)²/2mR². Let us make three important observations. First, the free energy functional is flux dependent. As a consequence, a persistent current can flow in the ring. Second, the functional is periodic in the reduced flux ϕ with period 1. Correspondingly, the same is true for all thermodynamic quantities derived from the GL functional. This property holds, strictly speaking, only in the idealized limit of a one-dimensional ring. In reality, the external magnetic field also penetrates the superconductor and provides an additional mechanism for the suppression of superconductivity. We will come back to this point in section IV. The third observation is that now the kinetic energy of the Cooper pairs vanishes only when the reduced flux ϕ takes integer values. Otherwise it gives a finite contribution to the quadratic part of the functional, and therefore the transition takes place at a temperature Tc(ϕ) that is in general reduced with respect to Tc0 of the bulk material. Let us parameterize anϕ = αTc0 εnϕ. Then εnϕ = (T − Tnϕ)/Tc0, and anϕ changes sign at the temperature Tnϕ = Tc0[1 − (n − ϕ)²/r²]. This temperature can loosely be interpreted as the transition temperature of the nth angular momentum mode ψn. The mean field transition for the ring occurs at Tcϕ, which is equal to the maximal Tnϕ for given ϕ, i.e., at the point where the first mode becomes superconducting when lowering the temperature (cf. Fig. 1). In the subsequent discussion, the parameter Λ = 1/r²Gi will play a crucial role. Its relevance is now easily understood. 1/r² is a measure of the typical spacing between the transition temperatures Tn of different modes: at ϕ = 0, for example, T0 − T1 = Tc0/r² (cf. Fig. 1). This spacing can be compared to the typical width of the non-Gaussian fluctuation region, Gi. Only in this region are fluctuations strong. The zero-dimensional Ginzburg parameter Gi of relevance here1 scales as (νV Tc0)^(−1/2), where ν is the density of states at the Fermi level and V is the volume of the ring. The parameter Λ = 1/r²Gi determines whether a description in terms of a few modes only is a good approximation in the critical regime (for Λ ≫ 1) or not. Defining the dimensionless conductance of the ring as g = RQ/R◦ = 2e²νD RQ V/(2πR)², with RQ = π/e², one can find an alternative expression, Λ ≈ 5√g/r. We state this alternative (but equivalent) expression for Λ because it might be more convenient for estimates when performing an experiment. An analytic computation of the functional integral necessary to obtain Z is in general not possible and one has to resort to approximation schemes.
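Before discussing these approximation schemes, the mode spectrum itself is easy to evaluate numerically. The following sketch uses the mode transition temperatures Tnϕ = Tc0[1 − (n − ϕ)²/r²] quoted above and constructs the Little-Parks line Tcϕ as their maximum; parameter values are illustrative:

```python
import numpy as np

def T_n(phi, n, r):
    """Transition temperature of mode n (in units of Tc0),
    using T_{n,phi} = Tc0 * [1 - (n - phi)^2 / r^2] quoted above."""
    return 1.0 - (n - phi) ** 2 / r ** 2

def little_parks_line(phi, r, nmax=5):
    """Mean field Tc(phi): the largest T_n over the angular momentum modes."""
    ns = np.arange(-nmax, nmax + 1)
    return max(T_n(phi, n, r) for n in ns)

r = 2.0                                  # moderately small ring, r >~ 1
for phi in (0.0, 0.25, 0.5):
    print(phi, round(little_parks_line(phi, r), 4))
# at phi = 0.5 the modes n = 0 and n = 1 are degenerate, giving
# Tc = Tc0 * (1 - 1/(4 r^2)), consistent with T_{c,1/2} quoted below
```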
If the spacing is large (Λ ≫ 1) and one is interested in the region of strong fluctuations close to the transition temperature Tc(ϕ), an effective theory including only one angular momentum mode ψn for n ∼ ϕ is applicable. This is so since in this case the temperature T ∼ Tc(ϕ) lies far above the individual transition temperatures Tm(ϕ) (m ≠ n) of all other modes ψm, and they will give only a small contribution when calculating observables. This is very convenient, because in this case one comes to the zero-dimensional limit of the GL functional.1,30 The partition function based on this free energy functional can be calculated exactly and all thermodynamic quantities derived from it. Clearly, for half-integer values of the flux the spacing between two adjacent modes always goes to zero, and due to this degeneracy at least two modes are required for the description. Let us choose for definiteness the example 0 < ϕ < 1; then one may work with the two modes ψ0 and ψ1, Eq. (2.8). In this situation the parameter √g/r ≈ 1/(5r²Gi) is still useful, because if it is large, additional modes need not be taken into account and a two-mode description is valid. If √g/r is not exceedingly large, the small contribution of the remaining modes can easily be accounted for by using the Gaussian approximation for them. One should, however, not forget that the presence of the dominant mode(s) can influence the effective transition temperatures of all others via the quartic term. As an example, Eq. (2.9) gives the resulting effective action for the modes with n ≠ 0, assuming that ϕ ∼ 0 and that the mode with n = 0 is the dominant one. Essentially the same argument was first presented in Ref. 9. If temperatures are sufficiently high, T ≳ Tc0(1 + Gi), the quartic term may be dropped altogether and one may work with a purely Gaussian theory, F ≈ Σn anϕ|ψn|².

B. Persistent current and susceptibility

The persistent current I is found from the free energy F = −T ln Z by differentiation, I = −∂F/∂φ. The normalized current is given by Eq. (2.10); the averaging is performed with respect to the functional F in Eq. (2.4). Just as the free energy functional F, the persistent current i is periodic in the flux ϕ with period one. Since it is also an odd function of ϕ, the persistent current vanishes when ϕ takes integer or half-integer values. a. Case ϕ ≈ n, T ≈ Tc: As pointed out above, the most important contribution in the regime of non-Gaussian fluctuations close to integer fluxes comes from the angular momentum mode ψn with the highest transition temperature Tnϕ. One may then approximate Eq. (2.4) by a single mode and calculate with Fn = an|ψn|² + (b/2V)|ψn|⁴. This is the 0d limit of the GL functional30 already introduced above. In this limit, Eq. (2.10) gives the result displayed in Eq. (2.11). Here xn = εn/Gi, and the function f is defined with the help of the complementary error function.31 All persistent current measurements will fall on the same curve if the persistent current, measured in suitable units i = I/(Tc0/φ0), and the reduced temperature εϕ = (T − Tcϕ)/Tc0 are scaled as in Eq. (2.13). This relation can serve as a valuable guide in characterizing different rings in experiments.
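The single-mode 0d average can be checked numerically. In suitably rescaled variables, with u standing in for |ψn|² and a scaling variable x that plays the role of εn/Gi up to conventions, the 0d Boltzmann weight is exp(−xu − u²); the denominator integral equals (√π/2) exp(x²/4) erfc(x/2), which is how the complementary error function enters the scaling function. A sketch, with the rescaling conventions as an assumption:

```python
import numpy as np
from scipy.integrate import quad

def avg_u(x):
    """<u> for the reduced 0d weight exp(-x*u - u^2), u ~ rescaled |psi_n|^2."""
    num, _ = quad(lambda u: u * np.exp(-x * u - u * u), 0.0, np.inf)
    den, _ = quad(lambda u: np.exp(-x * u - u * u), 0.0, np.inf)
    return num / den

for x in (10.0, 3.0, 0.0, -3.0, -10.0):
    print(x, round(avg_u(x), 4))
# limits discussed in the text: Gaussian regime <u> -> 1/x for x >> 1,
# mean field regime <u> -> |x|/2 for x << -1; the single-mode current
# is proportional to (n - phi) * <u>
```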
b. Gaussian theory for T ≫ Tc0, estimate for T ≪ Tc0: It is possible to make contact with the Gaussian and the mean field results using the asymptotic expansion of the complementary error function and the limit erfc(x) → 2 for x → −∞. Far above Tc one obtains as a limiting case the Gaussian result for a single mode, in ≈ 2(n − ϕ)/r²εnϕ, which can also be obtained directly by neglecting the quartic term in the GL functional. In this form, however, it is of limited use, since for the temperatures in question one should sum the contributions of all modes. Indeed, in this case one can use the relation ⟨|ψn|²⟩ ∼ Tc0/an (an was defined below Eq. (2.4)) and perform the sum in Eq. (2.10) to obtain a result that is valid at arbitrary fluxes, Eq. (2.15).32 One of the main features of this result, besides the periodicity in ϕ, is the exponential decay of the persistent current as a function of temperature for ε ≳ 1/(2πr)², which is due to a mutual cancelation of the contributions of many modes to the persistent current. Turning back to Eq. (2.11), we see that far below Tc one recovers the mean field result, which gives an estimate for the persistent current in the superconducting regime. In this approximation the current grows linearly with |T − Tc|, and as soon as |ε| ≫ Gi + (n − ϕ)²/r² (i.e., |xn| ≫ 1) one expects a sawtooth-like behavior as a function of the flux (i.e., a linear dependence from ϕ = n − 1/2 to ϕ = n + 1/2 passing through zero at integer n) with a discontinuous jump at half-integer ϕ. Both the Gaussian and the mean field results are reliable only outside the region of strong fluctuations; the persistent current in in Eq. (2.11) covers this region and interpolates smoothly between them. c. Case ϕ ≈ n + 1/2, T ≈ Tc: At half-integer values of ϕ, the transition temperatures of two modes become equal. In the vicinity of this point in the phase diagram the two dominant modes influence each other; their coupling becomes crucial. We discuss the case ϕ ≈ 1/2 for definiteness and use the form of the free energy functional already displayed in Eq. (2.8). Calculation of the persistent current in the presence of the coupling requires a generalization of the approach used for the single-mode case.9 Explicit formulas for the persistent current are derived and displayed in appendix A for the sake of completeness. For a graphical illustration see Fig. 2 of Ref. 6. We define the susceptibility as χ = −∂I/∂φ. Differentiating the expression for the two-mode current (see appendix A) one obtains Eq. (2.17), where33 the variable x measures the distance to the transition: x = 0 at the mean field transition for ϕ = 1/2, i.e., the condition x = 0 defines Tc,1/2 = Tc0(1 − 1/(4r²)). The dimensionless smooth functions gn are defined in Eq. (2.18). Exact results can be given for the functions gn at the transition, i.e., for x = 0, and these give a useful estimate for the magnitude inside the fluctuation region. For √g/r ≫ 1 one can neglect the first term in Eq. (2.17); then one obtains the simplified expression of Eq. (2.20). For the susceptibility close to integer flux one easily obtains χ0 = 4Λf(x0) from Eq. (2.11). Comparing to the expression for χ1/2, we find the ratio given in Eq. (2.21). Experimentally, a strong enhancement of the magnetic susceptibility near ϕ = 1/2 compared to ϕ ≈ 0 was observed,24 and Eq. (2.21) demonstrates that it is controlled by the parameter √g/r. If it is large, the current rapidly changes sign as a function of the flux at half-integer flux, leading to a saw-tooth like shape of iϕ. The full T dependence of χ at ϕ = 1/2 is given in Eq. (2.17).
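The two-mode competition near ϕ = 1/2 can be explored with a direct numerical integration. The sketch below assumes εn = ε + (n − ϕ)²/r², a cross-coupling term 4uv from angular averaging of the quartic term, and drops the overall prefactor (Λ etc.); the current is therefore in arbitrary units, but the saw-tooth and the suppression of the subdominant mode are visible:

```python
import numpy as np

def two_mode_current(phi, eps, Gi=0.01, r=1.0, umax=30.0, npts=400):
    """Persistent current (arbitrary units) from the coupled modes n = 0, 1."""
    x0 = (eps + (0.0 - phi) ** 2 / r ** 2) / Gi
    x1 = (eps + (1.0 - phi) ** 2 / r ** 2) / Gi
    u = np.linspace(0.0, umax, npts)      # u ~ rescaled |psi_0|^2
    v = np.linspace(0.0, umax, npts)      # v ~ rescaled |psi_1|^2
    U, V = np.meshgrid(u, v, indexing="ij")
    # assumed two-mode weight: quartic self-terms plus 4*U*V cross-coupling
    logw = -(x0 * U + x1 * V + U ** 2 + V ** 2 + 4.0 * U * V)
    w = np.exp(logw - logw.max())         # stabilize the exponentials
    du = u[1] - u[0]
    Z = w.sum() * du * du
    mean_u = (U * w).sum() * du * du / Z  # ~ <|psi_0|^2>
    mean_v = (V * w).sum() * du * du / Z  # ~ <|psi_1|^2>
    # i ~ sum_n (n - phi) <|psi_n|^2>, overall prefactor dropped
    return (0.0 - phi) * mean_u + (1.0 - phi) * mean_v

for phi in (0.40, 0.45, 0.49, 0.51, 0.55, 0.60):
    print(phi, round(two_mode_current(phi, eps=-0.3), 4))
# slightly below phi = 1/2 the mode psi_0 dominates and suppresses psi_1
# through the cross-coupling; just above 1/2 the current jumps sign,
# producing the saw-tooth discussed in the text
```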
The shape of the persistent current as a function of the flux has been discussed in more detail in Ref. 6. Let us just stress the main physical mechanism at work here. To be specific, we discuss the vicinity of ϕ = 1/2. Close to 1/2 there is a competition of two angular momentum modes, ψ0 and ψ1, that are almost degenerate. If one tunes to a slightly smaller flux, say, then the mode ψ0 is dominant, because a0 is smaller than a1. The effect of the coupling term in this case is to further weaken the mode ψ1 (this can be seen from an effective action in the form displayed in Eq. (2.9)); for a repulsive interaction the dominant mode suppresses the subdominant mode. Since the contributions to the persistent current of these two modes are opposite in sign, the result is an almost saw-tooth like shape of the persistent current.6 Similar effects related to the competition of two order parameter fields have been discussed in the past, see, e.g., Ref. 34. d. Summary: To summarize, in this section we argued that for Λ ≈ 5√g/r ≫ 1 one can use simple approximation schemes to calculate the persistent current and susceptibility near Tc. The significance of the parameter Λ is as follows. If Λ is large, the statistical weight of one angular momentum mode in the strong fluctuation region by far exceeds the weight of all other modes, unless two modes become degenerate. The degeneracy points are ϕ = n + 1/2. Our calculations were based on formula (2.10), which is a general expression for the fluctuation persistent current in terms of the thermal averages ⟨|ψn|²⟩. Near integer flux (ϕ ≈ n) and for T ≈ Tc it is sufficient to include only one mode, which leads to formula (2.11). The periodicity of the phase diagram is not crucial here. Instead, the relevant free energy (Eq. (2.7)) has the same form as that of a small superconducting grain in the zero-dimensional limit.30 This is a drastic simplification compared to the original problem and implies a high degree of universality. With the appropriate scaling given in Eq. (2.13), i(ϕ) data measured for rings with various parameters should fall on the same curve. As the temperature increases or the flux comes closer to the degeneracy points, the restriction to only one mode is no longer a good approximation. Formula (2.15), which is valid for high temperatures, includes the Gaussian contribution of formula (2.11) and all the other modes. It is therefore applicable for arbitrary fluxes. The shape of i(ϕ) becomes more and more sinusoidal as the temperature increases; this is the result of the combined contribution of many modes. Returning to the vicinity of Tc(ϕ), in formula (2.17) we calculate the contribution of two modes to the magnetic susceptibility at half-integer fluxes (ϕ ≈ n + 1/2). Formula (2.21) compares the magnetic susceptibility at half-integer and integer fluxes and reflects the large enhancement of the susceptibility near ϕ ≈ n + 1/2, which is controlled by two fluctuating angular momentum modes. We interpret this enhancement as being due to the suppression of the subdominant by the dominant mode as a result of their interaction.34 This interaction is induced by the quartic term of the GL functional. In table I we summarize the expressions and analytical approximations for the fluctuation persistent current near the thermal transition together with their range of validity.

III. GAUSSIAN FLUCTUATIONS FOR RINGS WITH r < 0.6

So far we have discussed the limit of moderately small rings with r = R/ξ > 1. For these rings the transition temperature is only weakly suppressed at finite flux and the phase transition occurs at a finite temperature. We will now discuss even smaller rings with r < 1. For such rings the theoretical description based on the classical GL functional we used so far is not valid in large regions of the phase diagram, as it is applicable only in a relatively small temperature interval close to Tc0. For rings with r < 1, however, the transition near half-integer flux can occur at temperatures far below Tc0, or even at vanishing temperatures. The theoretical description for these rings can be developed in close analogy to the general theory of pair-breaking transitions. In this section we will first determine the mean field transition line and then calculate the contribution of Gaussian fluctuations to the persistent current outside the superconducting regime.
For such rings the theoretical description based on the classical GL functional we used so far is not valid in large regions of the phase diagram, as it is applicable only in a relatively small temperature interval close to T 0 c . For rings with r < 1, however, the transition near half-integer flux can occur at temperatures far below T 0 c , or even at vanishing temperatures. The theoretical description for these rings can be developed in close analogy to the general theory of pair-breaking transitions. In this section we will first determine the mean field transition line and then calculate the contribution of Gaussian fluctuations to the The range of validity for the analytical approximations is also estimated. Tc is the (flux dependent) critical temperature, ϕ = φ/(h/2e) is the dimensionless flux through the interior of the ring and Gi is the 0d-Ginzburg parameter defined in Eq. (2.6). The dimensionless parameter Λ has been introduced after Eq. (2.6) and can be much larger than 1 for small rings. persistent current outside the superconducting regime. A. Mean field transition line The partition function is conveniently formulated in terms of an integral over a complex order parameter field ∆ as Z = D(∆, ∆ * ) exp(−S), where The field ∆ -unlike the field ψ in the classical GL functional of Eq. 2.1 -is dynamical, ω m = 2πmT is a bosonic Matsubara frequency. For a derivation of Eq. 3.1 see, e.g., Chapter 6 of Ref. 1. Let us note here that the field ψ 0 is proportional to the static component ∆(ω n = 0). A neglect of fields with nonzero Matsubara frequencies can be justified for the thermal transition, where it leads to the classical GL functional, but is not justified near the quantum phase transition. In full analogy to previous considerations, ∆ has been expanded in terms of angular momentum modes. The fluctuation propagator L fulfills Here we introduced the pair-breaking parameter for the problem under consideration and ν is the density of states at the Fermi energy. We use the notation ε T = D/R 2 and ψ is the Digamma function. 31 Since we are interested in the mean field transition line and in the Gaussian fluctuations on the normal side of the transition, the quartic and higher order terms in the action of Eq. 3.1 may be safely neglected (see, e.g., the book 1 ). As usual, the mean field transition occurs when L −1 (n, ω) changes sign first for arbitrary n and ω. Assuming that |ϕ| < 0.5, this happens for n = 0, ω = 0, so that the condition for the mean field transition reads The larger α 0 , the lower values of T c (ϕ) are required to fulfill this relation. Eventually, for the transition temperature T c (ϕ) vanishes, i.e. one reaches a (flux-tuned) quantum critical point. This happens for the critical flux 4 Due to the flux-periodicity of the phase diagram, a quantum transition can only be observed in the ring geometry if ϕ c < 1/2, which implies r < √ 2γ E /π ≈ 0.6 (compare Fig. 1). Notice that this critical value of r = R/ξ is less restrictive than a naive application of the quadratic approximation valid for r 1 would suggest. The latter would give 1 − (1/2r) 2 = 0 ⇒ r = 1/2. B. Persistent Current The formula for the persistent current I = T φ0 ∂ ϕ ln Z in the Gaussian approximation reads where we used Eq. 3.1. 
B. Persistent Current

The formula for the persistent current I = (T/φ0) ∂ϕ ln Z in the Gaussian approximation reads as in Eq. (3.7), where we used Eq. (3.1). In order to gain a good qualitative and quantitative understanding of the flux and temperature dependence, and of the magnitude and relevance of the scales involved in the problem, we will in the following derive simpler expressions for the persistent current and discuss various limiting cases. A particular emphasis will be put on the analysis of the persistent current in the vicinity of the quantum critical point. This approach will be corroborated by a direct numerical evaluation of Eq. (3.7). When doing so, some care needs to be exercised in order to correctly deal with the slow convergence properties for large values of |n| and |ω|. As before, we find it convenient to consider the dimensionless quantity i = I/(Tc0/φ0) and for definiteness analyze fluxes ϕ in the interval (0, 0.5). Since the persistent current is periodic, i(ϕ + 1) = i(ϕ), and odd, i(ϕ) = −i(−ϕ), this is sufficient to infer the persistent current for arbitrary fluxes. As is shown in appendix B, the persistent current can be written as the sum of two contributions, i = i_s + i_ns. The rationale behind this decomposition is the following: the singular part i_s diverges on the transition line ϕc(T). Outside the Ginzburg region on the normal side of the transition, i_s is still strongly flux and temperature dependent. It describes by far the dominant contribution to the fluctuation persistent current in the entire normal part of the phase diagram. The nonsingular part i_ns, on the other hand, displays a smooth flux dependence and is much smaller. i_s and i_ns have opposite signs. We will mostly discuss i_s; for explicit formulas for i_ns we refer to appendix B. The expression for i_s is given in Eq. (3.8). Here, z_s is defined as the unique positive solution of Eq. (3.9) for which the argument of the digamma function on the right-hand side is positive. The formula for i_s in Eq. (3.8) can be viewed as a generalization of Eq. (2.15) to the entire normal part of the phase diagram (outside the Ginzburg regime). In order to find a more convenient expression for z_s it is useful to define the function αc(T) describing the phase boundary in the α0-T phase diagram, i.e., αc(T) = α0(ϕc(T)), which by Eq. (3.4) satisfies

ln(T/Tc0) = ψ(1/2) − ψ(1/2 + αc(T)/2πT). (3.10)

One immediately reads off the corresponding representation of z_s. Here, αc(T) in Eq. (3.10) is allowed to become negative as soon as T > Tc0. Note that z_s is always real for T > Tc0. This is no longer true for T < Tc0. In this regime it is instructive to use the fact that αc(T) and the temperature dependent critical flux ϕc(T) are closely related, εT ϕc²(T) = 2αc(T). As a result, z_s is real for ω > 2αc(T) but becomes purely imaginary for ω < 2αc(T). The fact that z_s becomes imaginary for small |ω| and T < Tc0, but not for T > Tc0, is intimately related to the occurrence of the phase transition. Indeed, the denominator in the expression (3.8) for i_s vanishes for ϕ = ϕc(T) at ω = 0, signaling the onset of the superconducting regime. For reference, recall that αc0 ≡ αc(0) = πTc0/2γE ≈ 0.88 Tc0.

C. Fluctuation persistent current for T > Tc0

Let us first make contact with the Gaussian result for larger rings, r ≳ 1, stated before in Eq. (2.15). For T ∼ Tc0, when the logarithm on the left-hand side of Eq. (3.10) is small, one can expand the digamma function on the right-hand side. When keeping only the most dominant term in the sum, the term with vanishing Matsubara frequency, one easily finds z_s ∼ √ε r and in this way reproduces the result of Eq. (2.15) after setting t ≈ 1.
The restriction to ω = 0 is justified in this case, because z_s(ω_n) ∼ √(|ω_n|/ε_T) = πr√n/2 is real and larger than one for finite Matsubara frequencies, and the denominator in the expression for i_s becomes large, thereby strongly suppressing the contributions with n ≠ 0. Now we turn to the smaller rings with r < 0.6, for which a quantum phase transition takes place at zero temperature. We first analyze the flux dependence of i for a given temperature T ≳ T_c^0. As long as ϕ_c0, and correspondingly r, do not become very small, the same argument concerning the importance of the ω = 0 component that was used for rings with r ≫ 1 in the previous paragraph is applicable here. For an estimate, z_s(ω_1) ∼ √(|ω_1|/ε_T) = πr/2 becomes equal to 1/2 only for r < 1/π, which corresponds to ϕ_c0 ≈ 0.27. Interestingly, the same parameter πr/2 determines the relevance of the nonsingular contribution in this case (see Appendix B). As long as this parameter does not become considerably smaller than one, it is therefore safe to concentrate on the ω = 0 term of the singular contribution only. Whenever it is justified to use only this term near T_c^0, it is also justified at larger temperatures (as can be seen by comparing z_s to z_0 defined in Appendix B). To summarize this somewhat technical discussion, for not too small rings with ϕ_c0 ≳ 0.3 we obtain a very good description for the entire temperature range T > T_c^0 + Gi by retaining only the ω = 0 term of i_s (Ref. 33). The formula given above remains valid for T < T_c^0 for fluxes for which the ring is in the normal regime. In Fig. 2 we display the flux dependence of the persistent current for different temperatures. At small fluxes, the persistent current is proportional to ϕ. For temperatures close to T_c^0, this leads to a rapid increase of |i| for small ϕ. As the flux increases, however, T_c(ϕ) decreases rapidly for the small rings under consideration here. Therefore the distance to the critical line in the phase diagram grows as ϕ increases while the temperature is kept constant. This is why the current subsequently drops. For larger temperatures, the situation is different. As the temperature increases, cosh(2πz_s) ≈ exp(2πz_s) grows and the flux dependence is determined by the numerator in the expression for i_s, Eq. (3.8). As a consequence the shape becomes more sinusoidal. For an estimate we can use that for T > T_c^0, z_s ∼ √ε r, see Eq. (2.15). The exponential decay of i and the transition to a sinusoidal shape therefore start at ε ≈ 1/(2πr)². The persistent current at these high temperatures results from a combined effect of many angular momentum modes and is therefore much less sensitive to the rapid drop of the transition line than for temperatures T ∼ T_c^0.

D. Fluctuation persistent current near the quantum critical point and at intermediate temperatures for rings with r < 0.6

The situation is quite different in the low-temperature limit T ≪ T_c^0 for rings with r < 0.6, to which we will turn now. Indeed, for very low temperatures a restriction to the thermal fluctuations, namely those with ω = 0, is not justified, as we will see now. We discuss the vicinity of the critical line for T ≪ T_c^0. Here one can expand the denominator in the general expression (3.8) for i_s in small ∆ϕ_T as well as in small |ω|/α_c(T). In this way one obtains the approximate relation (3.15), i_s ∝ h(∆ϕ_T, t) (Ref. 33), where the dimensionless function h is defined in Eq. (3.16). The same expression can be obtained directly from the initial formula for i (i = I/(T_c^0/φ_0), with I given in Eq.
3.7), if one identifies ϕ ≈ ϕ_c(T) and considers only the most singular angular momentum mode n = 0. It is worth noting, however, that for fixed n the sum in ω is ultraviolet divergent (even before expanding in |ω|/α_c(T)). When proceeding in this way, a cutoff therefore has to be introduced by hand. In contrast, our formula (Eq. (3.8)) for i_s immediately reveals that terms in the sum with ω > 2α_c(T) are suppressed, since z_s becomes real and cosh(2πz_s) grows rapidly for larger Matsubara frequencies. We can therefore perform the sum with logarithmic accuracy and choose ω = 2α_c(T) as the upper cutoff. Put in different words, the upper limit for the |ω|-summation is effectively provided by mutual cancellations between different angular momentum modes. The result of the described procedure is Eq. (3.17). The first term in the expression for h is the classical ω = 0 contribution to the sum. These thermal fluctuations are proportional to the temperature and correspondingly vanish for T → 0. This does not, however, imply the vanishing of the persistent current in this limit. In order to see this more clearly, let us display the asymptotic behavior of the function h: even at vanishing temperatures, a flux-dependent contribution to the persistent current, i_s ∝ ln(1/∆ϕ), remains. Close to the critical mean field line (see Fig. 1) there is a parametrically large enhancement of the persistent current due to quantum fluctuations that decays slowly when moving away from that line. In order to put this result into perspective, it is instructive to compare to the persistent current in normal metal rings, whose magnitude is given in Refs. 33 and 35. The asymptotic behavior of h for t ≪ ∆ϕ_T ≪ 1 implies that the persistent current due to pair fluctuations near ϕ_c0 is parametrically larger; at low T ≪ T_c^0 it is given by Eq. (3.21), where ∆ϕ ≡ (ϕ − ϕ_c0)/ϕ_c0 measures the distance to the critical flux ϕ_c0. Since r⁻¹ = ξ/R is a number of order 1 and D/R² = (8/π) T_c^0 r⁻² for a weakly disordered superconductor, we find an enhancement factor of log(g) log(1/∆ϕ). When increasing the flux at fixed temperature, the persistent current decays logarithmically away from the transition. It should be kept in mind, however, that the persistent current vanishes at ϕ = 0.5 due to symmetry reasons (see the discussion below Eq. (3.7)). The validity of the approximations leading to Eq. (3.17) is restricted to small ∆ϕ_T; for larger ∆ϕ_T the full expression for i_s should be used (see Fig. 3). Turning to finite temperatures next, it is important that ∆ϕ_T is itself T-dependent; therefore, in order to reveal the full T-dependence of i_s, one should first find the transition line α_c(T). We display the persistent current near the quantum critical point in Fig. 3, also comparing the different approximations and showing the contribution of the classical zero-frequency part in the sum of Eq. (3.16).

[Fig. 3 caption: The persistent current decreases when moving away from the quantum critical point. It displays a pronounced maximum at low but finite temperatures. Thermal fluctuations grow with increasing temperature, but for fixed flux the system moves further away from the critical line. At vanishing temperatures the persistent current is still large and entirely caused by quantum fluctuations. Also shown is i_s (of Eq. (3.8)) in blue, which provides a very good approximation. The small difference between i and i_s is i_ns (Eq. B10). In this temperature regime i_ns can be obtained from Eq. (B12).]
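The interplay just described can be illustrated with a toy Matsubara sum. The sketch below takes i_s ∝ −t Σ_ω H(z_s(ω), ϕ), with H(z, ϕ) = sin(2πϕ)/(cosh(2πz) − cos(2πϕ)) as quoted in the text and z_s from Eq. (3.12); the overall prefactor (set to 1 here) and the precise form of the numerator are our assumptions, so only the qualitative behavior, saturation as T → 0 and a maximum of |i_s| at finite T, should be read off.

```python
# Hedged toy evaluation (prefactor and numerator form are assumptions):
#   i_s(phi, t) ~ -t * sum_m H(z_s(omega_m), phi),  omega_m = 2*pi*m*t,
# with H(z, phi) = sin(2*pi*phi)/(cosh(2*pi*z) - cos(2*pi*phi)) and
# z_s = sqrt((|omega| - 2*alpha_c(t))/eps_T) as in Eq. (3.12).
# Units: Tc0 = 1; only meaningful in the normal region phi > phi_c(t).
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def alpha_c(t):
    g = lambda a: np.log(t) - digamma(0.5) + digamma(0.5 + a / (2 * np.pi * t))
    return brentq(g, -np.pi * t + 1e-9, 50.0)

def i_s_toy(phi, t, r, mmax=4000):
    eps_T = 8.0 / (np.pi * r**2)
    ac = alpha_c(t)
    total = 0.0
    for m in range(-mmax, mmax + 1):
        z = np.sqrt((abs(2 * np.pi * m * t) - 2 * ac) / eps_T + 0j)
        den = np.real(np.cosh(2 * np.pi * z)) - np.cos(2 * np.pi * phi)
        total += np.sin(2 * np.pi * phi) / den
    return -t * total

r, phi = 0.5, 0.45          # phi_c0 = pi*r/(2*sqrt(2*gamma_E)) ~ 0.42 < phi
for t in (0.02, 0.05, 0.1, 0.2, 0.4, 0.8):
    print(f"t = {t:4.2f}:  |i_s| ~ {abs(i_s_toy(phi, t, r)):.3f}")
```

In this toy model, as t → 0 the number of contributing Matsubara frequencies grows like 1/t while each carries the explicit factor t, so the current saturates at a finite, purely quantum value, consistent with the discussion above.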
The maximum of |i| at finite T is a result of two competing mechanisms. As T grows from zero, thermal fluctuations become stronger. At the same time the distance to the critical line becomes larger for fixed ϕ, which eventually leads to a decrease of |i|. With the help of the analytic approximation given in Eq. (3.22), one can obtain an estimate, Eq. (3.23), for the position of the maximum ϕ_m(T) in the ϕ-T phase diagram. In Appendix B it is shown that at low temperatures T ≪ T_c^0 the nonsingular contribution to the persistent current i_ns can be written as in Eq. (3.24), where H(x, ϕ) = sin(2πϕ)/(cosh(2πx) − cos(2πϕ)). One can perform the integration in ω at zero temperature, Eq. (3.25). It is obvious that i_s and i_ns have opposite signs. For a comparison with the singular contribution near the critical point it is instructive to calculate the zero-temperature value of i_ns right at the critical flux: i_ns(ϕ_c0 = 0.45) = 0.17 and i_ns(ϕ_c0 = 0.4) = 0.43. Since i_ns is also a monotonically decreasing function of ϕ and vanishes at ϕ = 0.5, we conclude that it is numerically small for all fluxes of our interest, |i_ns| ≪ |i_s|. The same remains true at finite temperatures; see Figs. 3 and 4. We display the temperature dependence of the persistent current in the entire temperature interval 0 < T < T_c^0 in Fig. 4. For comparison, the thermal ω = 0 contribution is also shown. One can see that at low temperatures nonzero Matsubara frequencies give a sizable contribution to the persistent current. When increasing the temperature, the thermal ω = 0 contribution becomes increasingly important. In Fig. 4 we compare the numerically obtained current i to the approximation i_s. Obviously, it provides a very good approximation in the entire temperature range. In summary, in this section we analyzed the persistent current near the flux-tuned quantum critical point and then extended the discussion to the temperature regime 0 < T < T_c^0. Even at vanishingly small temperatures quantum fluctuations lead to a persistent current that is parametrically larger than the normal persistent current. When increasing the flux starting from ϕ_c, the persistent current decreases slowly and vanishes for ϕ = 0.5. At this point the contributions from all angular momentum modes precisely cancel. When increasing the temperature from zero at fixed flux, the persistent current initially increases, since thermal fluctuations set in. This can be seen particularly well in Fig. 4 (dashed line). On the other hand, the distance to the critical line increases for a fixed flux when increasing the temperature, which reduces fluctuations. The competition of these two mechanisms leads to a maximum in the persistent current at finite temperature. The maximum is more pronounced the closer the flux is to the critical flux ϕ_c. We derived an approximate expression for the position of the maximum in the ϕ-T phase diagram, Eq. (3.23). It can be expected that the general features, such as the saturation at low T and the appearance of a maximum at finite T, remain intact for non-ideal rings, e.g., if the rings have a finite width, because they are mainly determined by a single dominant angular momentum mode. In contrast, the critical flux ϕ_c may vary (see the next section), and the sign change of the current may occur at a flux that is not equal to a half-integer (since many modes are involved and the phase diagram is no longer perfectly periodic in ϕ).

IV. DISCUSSION

Our results are obtained for the case in which the flux acts as a pair-breaking mechanism.
Other pair-breaking mechanisms, e.g., magnetic impurities or a magnetic field penetrating the ring itself, will lead to similar results. They cause a reduction of T_c to zero; the pair fluctuations, however, lead to a parametric enhancement of the persistent current in the normal state. Ref. 14 suggests that a similar mechanism due to magnetic impurities is related to the unexpectedly large persistent current in noble metal rings (Refs. 36, 37). As mentioned previously, the maximal reduction of T_c at finite flux in the experiment of Ref. 7 was about 6%. It has so far not been possible to measure the persistent current close to the quantum phase transition. For this type of experiment one would need both sufficiently small rings [in order to fulfill the condition r < 0.6] and a measurement device that allows measuring the persistent current at the comparatively strong magnetic fields necessary to generate a flux of φ ≈ φ_0/2 threading the small area πR². (In the experiment at Yale (Ref. 38), the use of a cantilever required a strong magnetic field that most probably suppresses the contributions of pair fluctuations almost entirely.) We suggest two possible strategies to relax these conditions. If the size of the ring is the main problem, one can try to use wider rings, because in this case the magnetic field penetrating the annulus of the ring helps to suppress superconductivity. A first consequence is that the critical flux ϕ_c0 is reduced, and correspondingly the condition ϕ_c0 < 0.5 for observing the quantum phase transition in the ring geometry is less restrictive; the ring radius R is allowed to be larger. This effect, however, is rather small, as we will see below. A second effect is that the phase diagram is no longer periodic. It can then be advantageous to consider the transition at φ ≈ 1.5φ_0 or even higher fluxes, because then the effect of the magnetic field itself (as opposed to the flux) is stronger (see Fig. 5 below). This approach requires measurements at high magnetic fields. If in turn the main problem lies in measuring at high magnetic fields, then the addition of magnetic impurities can help. Magnetic impurities reduce T_c^0 itself and thereby also reduce ϕ_c0. We will now discuss the two mentioned effects in more detail.

Rings of finite width: So far we considered the idealized case for which the width w of the ring (in the radial direction) is vanishingly small. Next we discuss corrections to this result, resulting from a finite width. While doing so, we will still assume that the width is much smaller than both the coherence length ξ and the penetration depth λ. The first assumption implies that the order parameter field does not vary appreciably as a function of the radius r; the second assumption implies that the magnetic field is almost constant as a function of r. For a ring of finite width the pair-breaking parameter acquires a correction, α_0n = α_0n^(0) + α_0n^(1). The relation ξ² = πD/(8T_c^0) can be used to express the result through T_c^0 and r. Most importantly, the width-dependent correction α_0n^(1) is not a function of n − φ/φ_0. Correspondingly, the phase diagram is no longer periodic in ϕ. The interpretation of this result is simple. Since superconductivity is already weakened by the magnetic field, T_c can be suppressed at a smaller flux compared to a ring of vanishing width. For the sake of brevity, we will write only the leading correction in the following. Let us examine some of the consequences. For the transition line we should now solve an equation analogous to Eq.
(3.4), where now α_0(ϕ) should be replaced by α_0,m = min_n(α_0n). Let us first consider the regime of small suppression, r ≫ 1, and small fluxes, ϕ ≪ 1. Then we can use T_c(ϕ) ∼ T_c^0 and α_0,m = α_0,0, and for α_0,0(ϕ) ≪ T_c(ϕ) one can approximately calculate the reduction of the transition temperature. At vanishing flux, there is no magnetic field and T_c(ϕ = 0) = T_c^0 is unchanged; at small but finite flux, however, there is a correction. For small fluxes the T_c reduction is slightly stronger than for vanishing width. The condition for a suppression of T_c^0 to zero close to ϕ ∼ 1/2 can also be found. As for the case w = 0, the critical value of α for which T_c vanishes is α_c0 = πT_c^0/(2γ_E). The critical flux, however, should now be found by equating α_c0 to α_0,0(ϕ), whose leading behavior is (4/π)(T_c^0/r²)ϕ² supplemented by the width-dependent correction. As expected, the resulting critical flux for a ring of finite width is reduced. In Fig. 5 we show the mean field transition line for two rings with the same radius r = 0.66, but different widths. The ring with vanishingly small width has a flux-periodic phase diagram and does not show a quantum phase transition, since r > 0.6. The other ring has a width of w = R/3. One observes three main changes. The maxima in T_c at finite flux are reduced compared to T_c^0. They are shifted towards smaller flux, i.e., they no longer occur at integer values of ϕ. Finally, the ring exhibits a quantum phase transition close to ϕ = 1.5.

Magnetic impurities: In this article we have so far discussed the role of an external magnetic field as the origin of the pair-breaking mechanism. In principle, there are other effects that may cause pair breaking. Among them are the proximity effect, exchange fields, magnetic impurities, and the interaction with the electromagnetic environment. Each pair-breaking mechanism will have its own pair-breaking parameter, and to a good approximation the total pair-breaking parameter α_0,tot is the sum of the individual ones. The effects of these pair-breaking mechanisms can be obtained formally by substituting α_0 by α_0,tot in the formulas discussed above. In particular, the transition temperature T_c(ϕ) can be obtained from Eq. (3.4) after the replacement α_0(ϕ) → α_0,tot(ϕ). The condition for the quantum critical point reads α_0,tot(ϕ) = πT_c^0/(2γ_E).

[Fig. 5 caption: It can be seen that for the ring with finite width a quantum phase transition occurs near ϕ = 1.5, while for the ideal one-dimensional ring there is no quantum critical point due to the periodicity of the phase diagram. In the presence of magnetic impurities the phase diagram for the ring with w = 0 remains periodic, but now quantum transitions can be found close to ϕ = n + 1/2 for any integer n.]

[Fig. 6 caption fragment: Compared to the case ϕ = 0.5, the width dependence is much stronger for ϕ = 1.5. The addition of a small amount of magnetic impurities reduces the minimal width for which T_c vanishes. It is noteworthy that for a larger scattering rate than considered in this figure, a quantum transition can be induced near ϕ = 0.5 even in the limit of vanishing width (see Fig. 5).]

Of particular interest is the case of magnetic impurities, discussed by Bary-Soroker, Entin-Wohlman and Imry (Refs. 14, 15), for which α_0,mi = 1/τ_s is equal to the scattering rate caused by the magnetic impurities and independent of the flux. A sufficiently large concentration of magnetic impurities will reduce T_c to zero [even at vanishing flux], and a large persistent current is obtained due to the pair fluctuations. Ref.
14 suggests that the large persistent current observed in copper rings (Ref. 36) may be attributed to such pairing fluctuations.

[Fig. 6 caption: The susceptibility is plotted as a function of the width w/R for three values of the spin scattering rate, 1/(T_c^0 τ_s) = 0.0 for (a), 0.1 for (b), and 0.2 for (c). The radius of the ring is r = 2/3. Also shown is the width dependence of the mean field transition temperature T_c/T_c^0 at ϕ = 1.5 (dashed lines) for two values of the spin scattering rate, 1/(T_c^0 τ_s) = 0 for (α) and 1/(T_c^0 τ_s) = 0.1 for (β). For 1/(T_c^0 τ_s) = 0.2, T_c is equal to zero for arbitrary width. The susceptibility at ϕ = 1.5 is entirely caused by fluctuations, because the mean field transition temperature is smaller than 0.5T_c^0, even in the limit of vanishing width. We see that the susceptibility is a smooth function of the width. In particular, it remains large even after passing the threshold values of w/R for which T_c vanishes.]

As emphasized earlier, in our case the addition of magnetic impurities may push the system to the quantum critical point at a smaller external magnetic field, since it provides an additional flux-independent pair-breaking mechanism. If the main experimental difficulty is to perform sensitive measurements at high magnetic fields, introducing magnetic impurities might therefore make the experimental observation of the flux-tuned quantum critical point possible; see Fig. 6.

In the notation of Appendix A, i_2 is expressed through ∆ϕ = ϕ − 1/2 = 4mR²T_c^0 A_−. Let us note that the variable x_n = A_n/(2√B) = ε_n/Gi appearing there coincides with the variable x_n used in the main text. After combining these results one finds an expression involving M_± = x_± ∓ (1/P)[exp(x_0²) erfc(x_0) ± exp(x_1²) erfc(x_1)] (A6) and P = 4 ∫_0^∞ dz exp(3z² + 2(2x_1 − x_0)z + x_1²) erfc(2z + x_1) (A7), where x_± = (x_0 ± x_1)/2. An analogous formula has been given in Ref. 9. Most interesting for us is the quantity χ(1/2) = −∂i_2/∂ϕ|_{ϕ=1/2}. It is worth noting that for ϕ = 1/2 one finds A_− = x_− = 0 (i.e., x_0 = x_1), and the formulas simplify considerably. We can obtain the formula for χ(1/2) directly by differentiating the result for i_2, or by first differentiating Eq. (A3) and then using the relations in Eq. (A4). In the latter case one obtains as an intermediate step a relation for χ(1/2). In general, the poles z_n need to be determined numerically. The derived representation is particularly useful in two limiting cases, either for very high or for very low temperatures. For very high temperatures z_s is real and the spacing between consecutive z_s, z_n, z̄_n is large. Since H(z, ϕ) decays fast as a function of z, one can approximate the result well by considering just the smallest of the z_s, z_n, z̄_n, and in this way obtain relatively simple formulas. This approximation was utilized in Ref. 15, where a similar representation was derived for temperatures T > T_c(ϕ = 0) (in the presence of magnetic impurities), when all poles of R(z, ω) are real. Another useful limit is the limit of very low temperatures, when z_s is possibly imaginary but the other z_n, z̄_n are closely spaced on the real axis. Then, calculating the residues for i_ns is very similar to an integration along a branch cut, as we will describe now. In this low-temperature limit one can write an approximate representation of the sum; following the same analysis presented above with this approximate representation, i_s remains unchanged. For i_ns we have to perform an integration along a branch cut appearing for z > z_0 ∼ √(|ω|/ε_T), which is related to the branch cut of the logarithm.
It gives the result i_ns = 2πt Σ_ω ∫_0^∞ dy [y(ln²y + π²)]⁻¹ H(ϕ_c(T)√(y + |ω|/(2α_c(T))), ϕ) (B12). A simple substitution gives the formula Eq. (3.24) stated in the main text. In the limit T → 0, it is more convenient to use the relation [y(ln²y + π²)]⁻¹ = (1/π) ∂_y arctan(ln(y)/π), perform a partial integration in y, subsequently use ∂_y f(y + |ω|/(2α_c(T))) = 2α_c(T) ∂_|ω| f(y + |ω|/(2α_c(T))) to perform the integral in ω, and combine the result with the boundary term. The result is Eq. (3.25). Finally, let us make contact with the analysis of Ref. 15. In this paper, fluctuations were analyzed for high temperatures T > T_c(ϕ = 0) (in the presence of magnetic impurities), and the flux harmonics i_m of the persistent current i = Σ_m i_m sin(2πmϕ) were calculated. In this situation one may use the Poisson summation formula to perform the sum over angular momentum modes, since the flux dependence of i is smooth for T > T_c(ϕ = 0). [This is not so for T < T_c(ϕ = 0) due to the phase transition to the superconducting state, or, in mathematical language, due to the presence of the pole at purely imaginary z_s.] A comparable situation arises for T > T_c^0 in the absence of magnetic impurities: all poles of R(z, ω) (including z_s) are real, and one can use Eqs.
2010-10-09T12:38:44.000Z
2010-10-09T00:00:00.000
{ "year": 2010, "sha1": "e7e3d62d321eb9767dba2066cb698eb23458e19d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1010.1841", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e7e3d62d321eb9767dba2066cb698eb23458e19d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
17578626
pes2o/s2orc
v3-fos-license
Improving Rational Treatment of Malaria: Perceptions and Influence of RDTs on Prescribing Behaviour of Health Workers in Southeast Nigeria

Introduction
Developments in rapid diagnostic tests (RDTs) have opened new possibilities for improved remote malaria diagnosis that is independent of microscopic diagnosis. Studies in some settings have tried to assess the influence of RDTs on the prescribing behaviour of health workers, but such information is generally lacking in Nigeria and many parts of sub-Saharan Africa. This study analysed health workers' perceptions of RDTs and their potential influence on their prescribing and treatment practices after their introduction.

Methods
The study was conducted in four health centers in the Enugu East local government of Enugu State, Nigeria. All 32 health workers in the health centers where RDTs were deployed were interviewed by field workers. Information was sought on their perception of symptoms-based, RDT-based, and microscopy-based malaria diagnoses. In addition, prescription analysis was carried out on 400 prescriptions before and 12 months after RDT deployment.

Results
The majority of the health workers perceived RDTs to be more effective for malaria diagnosis than microscopy and clinical diagnosis. They also felt that the benefits of RDTs included increased use of RDTs in the facilities and the tendency to prescribe more Artemisinin-based combination therapies (ACTs) and less chloroquine and SP. Some of the health workers experienced some difficulties in the process of using RDT kits. ACTs were prescribed in 74% of RDT-negative results.

Conclusions/Significance
RDT-supported malaria diagnosis may have led to the overprescription of ACTs, with the drug being prescribed to people with RDT-negative results. However, the prescription of other antimalarial drugs that are not first-line drugs has been reduced. Efforts should be made to encourage health workers to trust RDT results and prescribe ACTs only to those with positive RDT results. In-depth studies are needed to determine why health workers continue to prescribe ACTs in RDT-negative results.

Introduction
Malaria has remained a major public health problem in Nigeria. It causes more than 50% of the disease burden [1] and almost 50% of all-cause health expenditure [2]. Also, 20% of all hospital admissions, 30% of outpatient visits, and 10% of hospital deaths are attributable to malaria, while half of Nigeria's population is exposed to at least one episode of malaria every year [3]. Worldwide, it kills more than one million people each year; between 20 and 40% of outpatient visits and 10 to 15% of hospital admissions in Africa are attributed to malaria [3][4][5]. Prompt parasitological confirmation by microscopy or with a rapid diagnostic test (RDT) is recommended for all patients with suspected malaria before treatment is started [6], and confirmed cases of uncomplicated Plasmodium falciparum malaria should be treated with artemisinin-based combination therapy (ACT). However, the microscopic diagnosis of malaria is time-consuming, labour-intensive, and costly [7,8]. There is also a lack of reliable microscopy in the majority of peripheral health centers. On the other hand, clinical diagnosis based on malaria symptoms has proven to be unspecific [9][10][11]. These shortcomings of microscopy and clinical diagnosis have favoured the deployment and use of RDTs, which allows diagnosis even in health settings lacking any laboratory facility.
RDTs have been found to be cost-effective both in Nigeria and elsewhere [12][13][14][15][16], and they generally cost less than a full course of ACT. Therefore, their introduction should not only improve malaria management but should also limit malaria treatment costs [17]. However, new knowledge is needed about health workers' perceptions of RDTs and their potential influence on their malaria treatment practices. This is important because the use of RDTs to diagnose malaria without recourse to laboratory and clinical approaches is a new experience for health workers. Therefore, the nature of health workers' perceptions of the usefulness of RDTs and their influence on their drug prescription patterns will provide useful information for the effective and sustained scaled-up use of RDTs in health centers. Predictors of health workers' prescribing practice have been explored in a number of developing country settings [18][19][20][21][22], where patient (age, complaint), consultation (time of day, duration), and health worker (level of training) factors have been identified as influencing decisions in prescribing antimalarials. These studies have focused on health workers prescribing correct antimalarials to non-severe febrile children. Some authors have also assessed the influence of RDTs on the treatment practices of providers in other settings [17,23], but studies have not yet sought to identify the quality of these prescriptions, especially after an intervention to improve the rapid diagnosis of malaria by health workers in Nigeria and many parts of sub-Saharan Africa. At this time, ACT had been introduced in Nigeria as a first-line antimalarial drug [1] as a result of extensive resistance to chloroquine and sulphadoxine-pyrimethamine (SP), and following WHO's recommendation that a combination of antimalarials be used to treat malaria caused by P. falciparum [24]. However, chloroquine was still being used by health workers in Nigeria. This study therefore investigated the antimalarial prescription patterns of health workers before and after the introduction of RDTs. It also measured the perceptions and usefulness of RDTs before their introduction and assessed the problems that health workers had with using RDTs to diagnose malaria after the tests were introduced.

Study Area
The study was undertaken in the Enugu East Local Government Area (LGA) in Enugu State, southeast Nigeria. The Enugu East LGA had a population of 279,089 in 2006 [25]. It has 12 public health centres, and 30 private clinics and hospitals. The health centres are stratified into three groups with high, medium, and low levels of infrastructure based on the number of staff, availability of relevant facilities, such as maternity beds, and utilization rates. All the centres have drug-dispensing units but no laboratory facilities. The 4 health centers with high-level infrastructure were purposively selected for the study. This was to enhance the recruitment of patients, as malaria cases were more likely to be seen there than in the low- or medium-level health centers. There is a high transmission rate of malaria year-round in the study area, with an average monthly malaria incidence of 6% [26].

Study Design
The study was conducted from 2005 to 2008 as a component of a larger study that lasted for 30 months [27]. This component of the study compared the prescription practices of health workers before the introduction of the RDTs in the larger study and 12 months after their introduction in 4 health centers.
The study also examined the perceptions of health workers towards different diagnostic methods before the introduction of RDTs. The study was conducted when ACTs were newly introduced in the country as a first-line antimalarial drug as a result of extensive resistance to chloroquine and sulphadoxine-pyrimethamine (SP), and following WHO's recommendation that a combination of antimalarials be used to treat malaria caused by P. falciparum. The drug resistance level to SP and chloroquine at the time of the study was more than 70% [1].

Introduction of RDTs and ACTs
Before the introduction of RDTs and ACTs, as a supplement to the manufacturer's instruction, the health workers in the larger study were trained for three days by the research team on how to use RDTs and read the results. Parasitological tests for malaria were then undertaken using an RDT-ICT Malaria Combo Cassette Test (ML02) (ICT Diagnostics, Cape Town, South Africa). The sensitivity and specificity of the test in another setting have been found to be 96% and 95%, respectively [28]. The researchers observed how the health workers performed and interpreted the tests by paying them visits twice every week for 2 months. Correction was provided where it was observed that the health workers were having trouble with the products. At the same time, an ACT (dihydroartemisinin/piperaquine) was introduced to complement the RDTs.

Data Collection
All the 32 health workers in the 4 health centers were interviewed 12 months after the introduction of RDTs. Through open-ended questions, information was sought on their perceptions of symptoms-based, RDT-based, and microscopy-based malaria diagnoses, and how these have affected their diagnosis and treatment of malaria. Information was also collected on the problems they encountered while using the test, and its perceived effectiveness (ease of use) and diagnostic accuracy. They were also asked to suggest how RDT use by health workers could be improved. Although some health facilities did not have microscopes, all the health workers knew about microscopy for malaria diagnosis and may have been exposed to it in the past. For the prescription analysis, the sample size was determined according to the WHO manual on how to investigate drug use in health facilities [29]. In addition to following the WHO-recommended sample size of a minimum of 20 prescriptions per facility, we decided to increase the number of prescriptions per facility to 50 to control for the design effect of clustering of multiple patients seen by the same health worker. Thus, a total of 200 prescriptions before and 200 after the RDT introduction were collected, observed, and recorded. The prescription analysis was only for the first visit, and no re-attendance visits were included. Therefore, in each of the four health centers the average number of drugs (any drugs) per prescription, and the percentage of prescriptions with antibiotics, injections (any injections), and 3 antimalarial drugs (ACT, SP, and chloroquine) were determined. Also, data on the number of RDT-negative and RDT-positive patients who received ACTs were collected. Before the RDT introduction, the prescription data were collected from the outpatient cards of the last 50 patients who presented with fever. After the RDT introduction, the prescription and diagnostic data were collected from the outpatient cards of the last 50 patients who presented with fever and which contained RDT results.
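The adjustment for clustering mentioned above can be made concrete with a standard design-effect calculation. A minimal sketch follows; the intracluster correlation value is purely an assumed illustration, not a figure reported by the study.

```python
# Illustrative design-effect calculation (ICC = 0.05 is our assumption; the
# study reports no ICC). DEFF = 1 + (m - 1) * ICC for clusters of size m,
# and the effective sample size is n / DEFF.
def effective_sample_size(n_total, cluster_size, icc):
    deff = 1.0 + (cluster_size - 1.0) * icc
    return n_total / deff, deff

for m in (20, 50):                     # prescriptions sampled per facility
    n = 4 * m                          # four health centers
    n_eff, deff = effective_sample_size(n, m, icc=0.05)
    print(f"m = {m}: n = {n}, DEFF = {deff:.2f}, effective n = {n_eff:.0f}")
```

Under this assumed ICC, raising the per-facility count from 20 to 50 increases the effective sample size from roughly 41 to 58, which is one way to motivate the larger sample the authors chose.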
The cards containing RDT results and the prescriptions given were routinely retained at each facility. The prescription data were collected before the introduction of RDT and 12 months thereafter to allow enough time to elapse so as to obtain reliable information about the impact, influence, and perception of RDTs from health workers, as it may not have been possible to get accurate information if the evaluation was made within a short period.

Data Analysis
Data entry was done using Epi Info version 3.5, while analysis was done using SPSS version 11. The frequency distribution of the health workers' responses was computed. The data on prescribing patterns in the pre- and post-RDT introduction periods were compared. The key variables that were compared between the two periods were: the average number of drugs (any drugs) prescribed per encounter, the average number and type of antimalarials prescribed per encounter, the average percentage of prescriptions with one or more antibiotics, and the average percentage of prescriptions with any injections. In addition, the number of RDT-negative and RDT-positive cases that received ACTs as well as the non-antimalarial drugs prescribed for RDT-positive and RDT-negative results were analysed. Student's t-test was used to analyse continuous variables and the chi-square test for categorical variables. All tests of significance were done based on a p-level of 0.05. The design effect was accounted for in the final statistical analyses by multilevel modeling.

Ethical Aspects
This research was approved by the Medical Research Ethics Committee, University of Nigeria Teaching Hospital, Enugu. Individual written and signed informed consent was obtained from all participants following verbal and written explanations of the study aims and procedures.

Characteristics of the respondents
As shown in Table 1, most of the respondents, 30 (93.7%), were female. Nineteen (59.4%) of the health workers were community health extension workers, 8 (25%) were staff nurses/midwives, and only 4 (12.5%) and 1 (3.1%), respectively, were pharmacy technicians and a doctor. The majority of the respondents, 27 (84.4%), had been working in the health centers for more than a year and therefore were there when the RDT was introduced. Twenty-eight (87.5%) respondents had received RDT training. Table 2 shows that the majority of the health workers (21, 65.6%) were of the opinion that the RDT is more effective for malaria diagnosis than microscopy (8, 25.0%) and clinical diagnosis (3, 9.4%). They also felt that the benefits of RDTs included increased use of RDTs in the facilities (24, 75.0%) and the tendency to prescribe more ACTs (25, 78.1%) and less chloroquine (13, 40.6%). All the health workers felt that RDTs led to a fast diagnosis.

Use of RDTs by health workers
Some of the health workers experienced difficulties in the process of using the RDT kits. These problems included difficulty in collecting blood from the finger and transferring it to the test device, and in timing the duration of the test before the results are read. Misinterpreting faint positive test lines as negative, and unsafe handling and disposal of sharps also posed challenges to some of the health workers. To improve their use of RDTs in the facilities, the health workers suggested additional training on RDT use, the provision of pictorial job aids, reminders to always look at the instructions to prompt oneself about certain steps, and performing a repeat test when the result is negative.
Antimalarial prescription practices of health workers
As shown in Table 3, the average numbers of drugs per prescription were 6.2 and 3.3, respectively, for the period before and after the RDT introduction (p < 0.05). The average percentages of prescriptions with injections were 43.5 and 11.5%, respectively, for the period before and after the RDT introduction (p < 0.05). Also, the average percentages of prescriptions with one or more antibiotics were 75 and 62%, respectively, for the period before and after the RDT introduction (p < 0.05). Also, in the period before the RDT introduction, the average percentages of prescriptions with ACT, SP, and chloroquine were 1.5, 19.5, and 79%, respectively, compared with 86.0, 2.5, and 11.5%, respectively, in the post-RDT era. There were statistically significant differences in SP, chloroquine, and ACT prescriptions between the pre- and post-RDT eras. Table 4 shows that in the post-RDT intervention period, a total of 92 (46.0%) prescriptions contained RDT-positive results and 108 (54.0%) had RDT-negative results. All patients with RDT-positive results were prescribed ACTs. However, of those with RDT-negative results, 80 (74.0%) were prescribed ACTs. On the whole, a total of 172 (86%) patient cards had ACT prescriptions.

Prescribing of non-antimalarial drugs by RDT test results
As shown in Table 5, in the post-RDT intervention period, when the prescribing of non-antimalarial drugs was stratified by RDT test results, the numbers of drugs per prescription for those with RDT-positive and RDT-negative results were 2.1 and 3.8, respectively. Also, the average percentages of RDT-positive and RDT-negative prescriptions with injections were 17.4% and 82.6%, respectively. The average percentages of RDT-positive and RDT-negative prescriptions with one or two antibiotics were 7.3 and 92.7%, respectively. The differences in prescription between RDT-positive and RDT-negative results were statistically significant.

Discussion
The introduction of RDTs led to a reduction in the prescription of antimalarial drugs (chloroquine and SP). Conversely, the prescription of ACTs increased after the introduction of RDTs. However, health center workers gave ACTs to patients with negative RDT results, and this was quite high. Studies [8,30] in other settings have also confirmed this trend, in which clinicians were reluctant to refrain from treating malaria even after a negative RDT test. In another study, 80 to 85% of RDT-negative febrile patients were treated for malaria [31], while still another study reported a level as low as 16% [17]. This non-compliance with the test results in our study may be associated with the fact that 12 months after the introduction of RDTs in the study area, no effort had been made by the government to monitor and supervise the health workers on the use of RDTs, except for the supervision done by the research team in the first two months. Thus, the initial zeal to adhere to RDT results experienced in the early phase of the RDT introduction may have waned. Not all the health workers agreed that the benefit of RDTs included increased usage, because making RDTs available does not always translate to use, as some may have felt it was not necessary. The use of RDT is expected to reduce the overuse of antimalarial drugs, especially the expensive ACTs, by ensuring that treatment is targeted to patients suffering from malaria as opposed to treating all patients with fever, which was the case when chloroquine was the first-line drug.
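As a hedged sketch of the kind of chi-square comparison described under Data Analysis, the snippet below applies the test to the ACT figures just reported (1.5% of 200 prescriptions before vs. 86% of 200 after, i.e., 3 vs. 172 prescriptions containing an ACT); the counts are reconstructed from the published percentages, and this is not the authors' code.

```python
# Illustrative chi-square comparison of ACT prescribing before vs. after the
# RDT introduction, using counts implied by the reported percentages.
from scipy.stats import chi2_contingency

#            with ACT, without ACT
table = [[3, 197],     # before RDT introduction (n = 200)
         [172, 28]]    # after RDT introduction  (n = 200)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # p < 0.05 -> significant
```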
It has been noted that health workers' adherence to a test-based strategy is a key factor in determining whether the strategy is effective in improving management and curtailing costs [32,33]; the argument has been that, if the result of a test is not going to influence management, then doing the test is a waste of money [32]. Therefore, there is a need for the government to engage in supportive supervision of health workers and to constantly remind them that the cost of treating all individuals regardless of the test results is enormous, both for the patient and for the health services. In contrast to our results, high adherence to RDT results by health workers was reported in Tanzania [34] and Zanzibar [23]. However, in the Zanzibar study, supervision and incentives were provided to the nurses, which may have resulted in the high adherence. Also, the prescribers themselves were the research assistants and thus had to record their own prescribing behaviour, which may have influenced their decisions. In addition, the assessments in these studies were done immediately after the intervention, in contrast to our assessment, which was carried out 12 months after the RDT introduction. Thus, with the lack of supervision and incentives in our study area, the health workers were unlikely to adhere to RDT-negative results. Interestingly, the risk of a false-negative test and its potential consequences have recently been evaluated in Uganda [35] and Tanzania [36], and the safety of not treating malaria-negative children was confirmed.

[Table 3. Prescription practices of health workers before and after RDT introduction. Columns: Indicators, Before RDT, After RDT, P value.]

If RDTs are to be effective in malaria programmes, the need to manage RDT-negative results should be addressed; otherwise, health workers will continue to treat many cases of non-malarial fever with ACTs, and the potential benefit of malaria RDTs in improving the early management of non-malarial febrile illness through early diagnosis and exclusion of malaria as a cause would be lost. One option is to develop and provide management algorithms for the appropriate management of RDT-negative cases and to train health workers on their use. This algorithm should include a pathway that will enable the health workers to always perform a repeat test when the result is negative, if they strongly feel that the patient has malaria, as well as a pathway for the appropriate referral and follow-up of RDT-negative patients. The community members should be empowered through appropriate sensitization on the importance of parasite-based diagnosis and the need to insist on taking ACTs only when the RDT result is positive. However, changing the behaviour of health workers on this matter has presented a major challenge to the program in Madagascar [37]. The results of this study indicate that health center workers perceive the RDT to be more effective for malaria diagnosis than microscopy and clinical diagnosis, and that it has a lot of benefits. This is despite the fact that microscopy is regarded as the gold standard for malaria diagnosis. This is a positive development, as it shows the acceptability of RDTs among the health workers. Although the reasons for this perception were not explored, the fact that RDTs were readily available, the results were known within a short time, and treatment was given immediately may have contributed to this perception. The health workers identified some potential concerns regarding the use of RDTs.
The first relates to the technique of performing the finger prick, and collecting and transferring blood to the test device. Although health workers are used to collecting blood from patients with needles and syringes for other laboratory investigations, most of them had never taken a finger prick blood sample before the study and had some difficulties with their initial attempts. Some of them claimed that at times they stab too lightly and at other times too deeply. In both cases, there is a tendency to collect too little or too much blood. It has been noted that an inadequate blood volume can reduce sensitivity, while an excessive volume may cause background staining and obscure faint results [38]. Collecting too little blood might also cause the health workers to repeat the finger prick, which might scare patients away. In addition, some of the health workers said they read the test results too soon. This problem has been noted elsewhere [39], and it has been suggested that the reason for this might be that the package instructions give insufficient emphasis to the importance of waiting. Hence, the RDT manufacturer might do well to lay emphasis on the waiting time. Another concern raised by the health workers was the incorrect interpretation of test results; they said that on some occasions, they read faint positive or invalid tests as negative. It is known that the strength of the test line can vary significantly depending on the level of parasitaemia, blood viscosity, volume of blood, and other factors [40]. The inability to correctly interpret this might be due to improper or absent training (as some of the health workers were posted to the health centers after the introduction of RDTs), or visual acuity problems. Training health workers, and any users of RDTs in the community for that matter, to recognize faint results is likely to enhance their performance and confidence in reading the test results. The results also show that health workers are usually not very happy when the test shows no malaria, especially if they believe the patient has the disease, as they are likely to lose a client. This was buttressed by the results of the prescription analysis, which showed excessive prescription of ACTs for RDT-negative results. Evidence has shown that most of the illnesses treated in health centers in Nigeria are due to presumptive malaria [41]. The smaller number of drugs, antibiotics, and injections per prescription post-RDT intervention may be a reflection of an increased knowledge of prescribing practices among health workers as a result of the evidence of the presence or absence of malaria produced by RDTs and the availability of ACTs in the facilities. Polypharmacy is likely to be encouraged by the absence of a diagnostic method. In Nigeria evidence exists that health personnel tend to engage in polypharmacy in their attempts to treat a number of possible diseases simultaneously in the absence of a definitive diagnosis [42]. In some settings the prescription rate of antibiotics was found to be higher after RDTs were introduced; the authors have suggested that RDT-negative results led nurses to consider and treat alternative causes of fever [17]. The reverse was the case in our study, suggesting that the health workers may have been restrained from engaging in polypharmacy, in which providers prescribe additional unnecessary drugs, and may have considered the overprescription of ACTs in the event of RDT-negative results.
In both cases, the prescription was irrational, and the losses from irrational drug prescriptions have been estimated to reduce drug availability by 50% [29]. However, in the pre- and post-evaluation periods of this study, all the assessed drugs were present in the facilities, although in different quantities; this difference may have also affected the prescriptions. We acknowledge that there may have been some bias in the results due to historical evolution or concurrent unknown interventions that took place in the study area in the intervening period. Hence, because there were no comparative data, it would not have been possible to detect the occurrence and effects of such concurrent unknown interventions. However, the authors were not aware of any other RDT-related intervention in the study area after ours.

Conclusions
The introduction of RDTs increased the prescription of ACTs but also increased its overprescription, with the drug being prescribed to people with RDT-negative results. However, this conclusion is tempered by the fact that the study had no control arm to detect the effects of other concurrent interventions. Therefore, in order to improve rational prescribing and tap the gains of the concurrent introduction of ACTs and RDTs, the overprescription of ACTs should be tackled by encouraging health workers to trust the RDT results and prescribe ACTs only to those with positive RDT results. Because the health workers may have become so accustomed to the clinical approach to malaria diagnosis, there is a need to remind them regularly of the potential savings and reductions in suffering that come from more rational antimalarial drug use. Malaria programme officers and national malaria control programmes will therefore need to follow up on the training and retraining of health workers on the use of RDTs and correct some of the problems the health workers encountered while using RDTs.
2016-05-12T22:15:10.714Z
2011-01-31T00:00:00.000
{ "year": 2011, "sha1": "162c86ef75fac39eca98906311cc9ba9bb55f25f", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0014627&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "162c86ef75fac39eca98906311cc9ba9bb55f25f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259903502
pes2o/s2orc
v3-fos-license
Experiences of Diverse Introductory Computer Science Students Moving to Online Classes in a Pandemic

Research question: For students enrolling in introductory computer science classes at community colleges, how did they experience the class in an emergency remote teaching environment, particularly in contrast to in-person instruction at the start of the semester?

Methods: Semi-structured interviews were conducted with 18 students from diverse backgrounds who were enrolled in introductory computer science at a community college in California during the first semester of online classes due to the COVID-19 pandemic. Grounded theory data analysis was conducted on the interview data.

Results: Students’ overall educational trajectories were largely unchanged by the shift to emergency remote teaching. However, one crucial factor in many students’ learning experiences was the lack of a physical transition to the campus and a corresponding transition into a school or studying mode supported by physically gathering with other students and away from distractions at home. Experiences in the classroom were found less engaging by many, and virtual interactions were sometimes awkward. Students struggled to get individualized help from instructors and campus resources and to interact with peers.

Conclusions/Contributions: Instructors and administrators in community colleges need to be aware that the loss of college campus spaces and embodied peer interactions may pose an especially large barrier to success for the population they serve. An important takeaway for instructors is that the modalities and tools employed in emergency remote teaching are experienced quite differently by different students, and that additional supports, such as videotaped classes and flexibility in due dates, can be key for students’ success.

The pervasiveness of technology in the workplace and an ongoing shortage of educated workers to fill jobs requiring programming skills have elevated the importance of increasing the numbers of college graduates with computer science (CS) degrees (NRC, 2010). While many interventions and research projects have focused on the recruitment and retention of students from groups such as women and Latinx in computing at bachelor's degree-granting institutions, far less work has focused on the context of community colleges, even though individuals from these groups attend community colleges in disproportionate numbers (American Association of Community Colleges, 2023). Community colleges are not only widely diverse in demographic categories such as race/ethnicity and socioeconomic status, but also include large categories of students less frequently represented in 4-year institutions, including older students, veterans, first generation college attenders, and parents of small children (American Association of Community Colleges, 2023). Community colleges are thus poised to be key institutions in efforts to broaden participation in computing, yet very few community college students seeking a CS degree transfer and earn a bachelor's degree (Jaggars et al., 2016). One of the perceived advantages of community colleges for computer science students is the small class sizes and the faculty focus on teaching (Mitchell & Kerr, 2019), which allow for the possibility of more face-to-face, individualized attention to student needs.
Due to the COVID-19 pandemic in the spring of 2020, many colleges moved classes to a fully virtual, emergency remote teaching (ERT) format (Hodges et al., 2020), with unknown consequences for students. With no prior research on student experiences in an unprecedented ERT situation, we turn to research in online learning at community colleges and other postsecondary institutions for findings on student experiences in virtual classrooms and the impact on students of this increasingly popular format. Prior research conducted in Washington state, for example, found that, in a community college setting, online learning had a negative effect on students in terms of course persistence and course grade across subject matters. This contrasts with a smaller, single-institution study in the Midwest that found no significant difference in course completion based on most demographic characteristics; gender was an exception, with women completing courses at a higher rate than men (Aragon & Johnson, 2008). Other research suggested that the outcome gap between delivery modes may be stronger for certain less-advantaged populations such as ethnic minorities and students with below-average grade point averages (Figlio et al., 2013; Kaupp, 2012). In one study conducted at community colleges in California, short-term course success rates were found to be poor, with achievement gaps larger across ethnic groups (Johnson & Mejia, 2014). While this previous study listed performance gaps by subject area, it did not report results for computer science. Results from these studies of online learning are based overwhelmingly on asynchronous courses in which students access material at their own pace. In contrast, findings from our current study give us an early glimpse into the impact of synchronous online courses in which the instructor and students gather at the same time. As reported by faculty at the college, the onset of the pandemic forced a nearly instantaneous pivot to synchronous remote class sessions for all students. This situation has highlighted a need for research on synchronous online courses. In addition, while prior research on online courses may give some understanding of outcomes for postsecondary students, findings for computer science may be distinct from those in other disciplines, and research specifically focused on the field is warranted. A deeper understanding of student perspectives on supports and barriers allows college computer science departments to continue positive practices as in-person classes resume and the portion of online or hybrid classes declines, eliminating practices that were detrimental to students during an ERT college experience. In addition, this work can build an understanding of what college-level resources are essential for best student outcomes. This work addresses the research question: For students enrolling in introductory computer science classes at community colleges, how did they experience the class in an emergency remote teaching environment, particularly in contrast to in-person instruction at the start of the semester?

Conceptual Framework
According to Merriam (2009), a conceptual framework focuses and bounds a study by limiting the investigation to the main factors examined.
For this work, we took advantage of an emerging situation occurring in the spring of 2020, a worldwide COVID-19 pandemic, to investigate its consequences for a population of students enrolled in an introductory computer science class (CS1) at a community college during the semester that the pandemic hit the U.S. This situation provided a one-time opportunity to ask students to compare their emergency remote teaching class experiences to in-person experiences in CS1. Rather than relying on a fixed conceptual framework from prior literature, the emergent nature of the pandemic and educational institutions' necessarily rapid adaptation to it led us to create a semi-structured interview protocol centered around the move to ERT classes. We then pursued a grounded theory (Glaser & Strauss, 1967) data analysis to allow themes to emerge from student experiences.

Setting and Participants
Las Positas College (LPC) is a 2-year public Hispanic-serving institution (HSI), a community college located 40 miles southeast of San Francisco, California, that enrolls approximately 18,000 students annually. Both the college's enrollment as a whole and the diversity of its student population have substantially increased in recent years, including a share of Latinx students that exceeds 25% and is rapidly increasing. At Las Positas College, CS1 ("Computing Fundamentals I") is the first course required for the A.S. degree in computer science, as well as meeting requirements for several other degrees and certificates and college-wide General Education requirements for any A.S. or A.A. degree. CS1 is taught in C++ and is formally articulated to many 4-year college and university computer science department courses. CS1 has no prerequisites and is considered appropriate for students without prior programming experience. During the spring 2020 semester, all classes at the college moved to an ERT format in mid-March for the safety of college staff and students during the COVID-19 pandemic. This included all five sections of CS1 at the college taught by four instructors. In the computer science department, all instructors chose a synchronous format, with classes held on the Zoom platform during the scheduled days and times of the class. Students attended the classes virtually and could ask questions during the class. During class, small breakout sessions allowed for small group discussions, with instructors rotating through groups to answer student questions and check on student progress. Participants for this study were recruited from all sections of CS1. Students were emailed in batches with an invitation to participate, beginning with all students who had no prior experience in programming, based on student self-identification and course-taking history, followed by all females, Latinx, and Black students who had prior programming experience. These populations were specifically targeted in recruitment efforts because they include the group of students for whom an introductory CS course is explicitly intended, as well as groups typically underrepresented in the field, whose contributions we sought in order to illuminate a diversity of student experiences beyond the dominant groups in the field. Participants were emailed a $20 gift card as a thank-you after being interviewed. Participants varied in gender, self-reported race/ethnicity, and prior experience in programming (see Table 1).
Data Collection and Analysis

Data collection for this work consisted of 60-minute virtual interviews conducted by the first author in April and May 2020 that were audio recorded and later transcribed word-for-word. Questions were focused on the in-class, college, and outside differences pre- and post-pandemic for students, for example, "Describe the biggest differences for you between CS1 in-person and CS1 online," and "How has the Las Positas College campus closure affected your studies?" To answer the research question, data analysis followed a grounded theory approach (Glaser & Strauss, 1967). Coding was a multi-step process to move from the raw data to a synthesized and organized understanding of the themes emerging from the data. Three coders undertook this process. The primary researcher had a background in Learning Sciences, Feminist Studies, and Computer Science. The secondary researcher had a background in Computer Science, and the third coder was a research associate with a background in Applied Anthropology. After dividing all data by interview questions, the three coders separately read through the data corpus and divided the data into segments, giving each a descriptive name (Charmaz, 2006). We then met and discussed our initial open coding, creating a full list of named codes and discussing a preliminary organization into higher-level themes. A draft of the findings with supporting quotes was written up by the first author and reviewed. Another round of axial coding allowed for refinement of the findings, with emerging categories and subcategories suggesting an improved organization. This analysis method allowed for building consensus through understanding and exploring different perspectives on the data, as contrasted to inter-rater reliability, which is appropriate in more limited circumstances (Sweeney et al., 2013). This data analysis technique provided investigator triangulation of the data (Elliott et al., 1999) and allowed for a rich description and deep understanding of the data. In the tradition of qualitative work, our intention in the project was to analyze the data in such a way that we deeply described the experiences of students often marginalized in computer science. Our goal was not to draw generalizable conclusions based on demographic groups, as to do so would betray the nuance of individualized experience, but in the findings we note any suggestions we found of differences between groups.

Findings

Findings below are reported first by student plans and expectations, then by location, moving from small to large (in the classroom, the department, the college, the community). As the move to a synchronous ERT class format was due to a profound public health crisis, some of the findings reported below could be specific to such a situation and not generalizable to online courses more broadly. However, the findings reveal which parts of the college ecosystem are important to student success, an insight exposed by the unique situation and the nature of qualitative data. As reported below, participants felt a loss of a sense of community and engagement both in CS classes and in the larger college. Students also struggled with being at home all day and the accompanying issues of finding a quiet, private location, getting personal technology up and running for college attendance, and feeling connected. These issues, although possible consequences of online learning more generally, were exacerbated by the public health crisis.
Future Plans and Goals

We queried students about their future plans and goals to discover the extent and variety of consequences for students of moving to ERT classes, including effects on course-taking patterns, plans to transfer, and choice of a major. Class taking and transfer. Few students reported any changes to their future plans based on the move to ERT classes due to the pandemic. When asked "Has anything changed about your plans going forward?," Alejandro, for example, stated "No, it hasn't; everything's still on schedule." Similarly, Bethany said "I still have my main goals, which is get my bachelor's degree, transfer, get my transferable credits from Las Positas. All that, yeah, still remains the same." Emika had to change her summer plans due to the pandemic, saying that she "was planning on taking a summer session at the university that I was transferring to, but because most schools aren't going to be open for the summer, I'm going to have to do that online." This was the "one thing that I was kind of disappointed about; I just wanted to get, like, a head start in terms of making sure I have all my courses completed." Major in computer science. For two students in this study, the move to ERT classes led to questioning their choice of major. At the time of this study, Ashti was considering changing to a computer science major, and wondered aloud about the switch, since she had dropped a statistics engineering class required for her civil engineering major because it was too difficult online. On the other hand, Amida found that moving to ERT led to questioning her choice of computer science as a major. She reported that she enrolled in CS1 thinking that she might be interested in majoring in computer science and had found "it was great until it changed to online because, personally, for me, it's kind of hard to get everything in that two-hour class, and, like, understand everything quickly." This led to her reporting that "at this point, I'm thinking, 'Am I really for computer science if this keeps going online and if I had to study by myself?'" since "it's a lot to cover in smaller time. The thing is that, in computer science, I've never done it, so it's harder." This, in turn, led her to report that "I think my goals have changed a little, when it comes to self-studying. But I think if it would be in-person classes, it was much better for me. I'm that kind of learner. I've never taken online before." Amida was concerned that the move to ERT would delay her graduation, saying that she was "thinking that if it doesn't work out in fall that the school gets in-person, I might move my computer science to next semester after that." However, she noted that "if I do that, it will ruin my whole plan, and it would take me more than three years" to finish at the community college. In contrast, Zahra's experiences in CS1 cemented her choice of a CS major; she reported that "at the first, when I enrolled for CS1, I didn't know about CS2. I didn't know if I wanted to stay in programming and computering [sic] or no, but after taking class, I'm 100% 'yes! I'm going to take CS2! I'm going to study computer science.'" Future Plans and Goals Summary. The move to ERT classes did not affect the long-term goals of any participants during the time of this study, indicating that a forced move to ERT does not necessarily interrupt student progress.
Two of the traditionally aged (18- and 23-year-old) female college students, however, reported that the move to an online format for classes changed their intention to major in computer science. Their decisions were based on an assessment of the likelihood of their being able to be successful in courses in various fields in an online environment and a move to continue with a major in which they could complete the coursework with ease online.

Computer Science: In the Classroom

Perhaps the most obvious repercussion of campus closures is the upending of student experiences in the classroom. For this study, we were interested in how students, particularly women, Latinx and Black students, and those without prior programming experience, compared in-person versus ERT class experiences. We found that students struggled both with changed interactions with instructors and peers and with engagement in an ERT class. Engagement and comprehension. Amida "paid more attention" when the class was in-person, finding that "it was more fun. We can code right on our computer, when he's doing, so we were active. Like, 'oh, we can code with him' or something." This changed when the class moved to ERT, as she found "now it's, like, I just have to keep watching him. So it's hard to follow, because my computer is looking at his screen." Juan also found in-person class more engaging, saying "when you show up in-person, I'm much more in the mindset of 'boom, okay, I'm here, computer science, let's do this. Whereas in ERT, the lectures seem more dry. . .because I have more distractions and other stuff." Alejandro found that the "quality of teaching. . .dropped" when his CS1 class went to ERT, finding that the instructor "had a different dynamic, a different voice rhythm, just the whole demeanor kind of was just less up to par, you know. Yeah, I think we all kind of suffered." Chunhua, an emergent bilingual speaker, had found that in ERT it was "very hard to concentrate on the lectures. Yeah, I miss some important points of the teacher, and then I feel it's hard to catch up for the rest part of the lecture." Eleanor also found that she had "a lot of trouble focusing" during the ERT class because she "like[d] being at school and working there and having the professor teach in front of us rather than, like, on a camera." Ryan, the only contrast, had made the transition to ERT easily, saying that "it doesn't make a difference because [the instructor] still explains it." He went on to note that "if anything now, it's even easier because we see his screen. . .with the PowerPoint, and he'll just write notes on his own lecture slides. . .And then, we go back and forth between his screens and demonstrating code." Interaction with instructor and participation. When asked about class participation, both online and in-person, students in this study often spoke about how they asked questions, either to understand the content or to solicit assistance in debugging their programs, in many cases equating question-asking with participation, as we report below. Participants also reported changes to their class participation in terms of their individualized interactions with instructors in receiving help debugging programs. During class time, several participants in this study found that asking the instructors questions during class time became "awkward." Javier, for example, found participation "difficult since everybody randomly shows their faces, with the exception of the professor. Everybody's more or less closed on communication."
Before going to ERT, Javier would "occasionally raise my hand, and I would ask the professor questions," but at the time of the interview, he was asking less often, and he "assume[d] that everybody's confidence might've dwindled ever since the lockdown. So I'm sure that they probably lack any incentive to even ask a question at this point in time." Steve noted that he "participate[d] a little bit less because it's a lot harder to, like, for Zoom meetings, you can only have one person talking at a time." The contribution of participation points to grades forced Steve to continue to ask questions during class; he noted that "sometimes I may not even have a question, so that makes it a little harder. It's, like, what am I going to ask? A question that I already know? I have no questions, but I just ask a dumb question." For students who did not want to verbalize questions, Zahra explained that "there's a chat bar. We can type which question do we have. . .we can ask just the professor, and no one can see it, and maybe this brings more confidential time to asking questions." Even with confidential questions over chat, Amida found it "hard to ask questions, because everyone is asking really advanced questions." Alejandro found that using the question chat box was "kind of limited in the sense where you can ask questions. It's very different from doing it in in-class where you can raise your hand." Sophia also reported that typing questions was not ideal, as "people will talk in the chat, but [the instructor] doesn't see the chat all the time. So, sometimes people will turn on their mic and be, like, 'Hey, there's a question in the chat', and then he'll check it." Other students reported little difference in their question-asking behavior in the ERT class. Bethany noted that "we're graded in participation for a class, so we're supposed to ask questions," and, since she had "always been the type of person to ask questions," once the class went ERT, she "just put it in the chat, be, like, 'Hey, professor, could you explain this a little bit more?'" Emika noted that class participation to get help debugging programs was more difficult during ERT, as "within the actual class time, it's harder to get, like, one on one with the professor." She gave an example, saying that during the in-person class, when she was "running an IDE [Integrated Development Environment] and then there's, like, an issue, I could usually just raise my hand and [the instructor] would come and then tell me specifically what's wrong in that moment," but when she had a question during the ERT class, she made "sure I jot it down or remember and then ask a few hours later, which is more annoying and the fact that I might not be in the exact same mindset as I was before." Ashti had a similar story, noting that when she was in the room physically with the instructor, she would "have my teacher right here looking at my screen and he'll be, like, 'Oh, but you're doing that, and it's like 'Oh, yeah', quickly fix it," which was a timesaver. Raina had found that individualized help from the instructor was not necessary during the ERT class, as hands-on exercises were eliminated. During in-person classes, the instructor would ask students to "'try to code this'. And then, we were given a few minutes to try to do it. But now it's just all examples. . .so I don't really need to ask for help. I just watch him do it." Interaction with peers. 
After the pandemic began and classes were moved online, CS1 instructors created breakout rooms in Zoom and assigned students to a room to encourage peer interactions, with generally little success, according to participants. Ryan explained that the instructor "put us in our little rooms or whatever Zoom does, and the four or five of us, we'll discuss and we'll collaborate, and we'll work on our lab or whatever, and we'll discuss it." However, he had found that "sometimes no one says anything because we just go through the lab, and we just reference the example in the textbook. And then, sometimes, there's no conversation at all." Bethany reported that this system diminished working with her peers, saying that "the whole thing that we used to do at the beginning of the lecture where we would get into groups or rows is gone." She said that the instructor "puts us in breakout rooms on Zoom, and we're with four other people in a breakout room. If we do have questions, we can put them in the chat or we can ask, but it's not really the same." Not all instructors used the Zoom breakout rooms; Emika reported that "in other classes we do, like, breakout sessions. . .but not for CS." Before the pandemic, in-class peers were a source of assistance to students in CS1, an assistance no longer available when students were attending class from home. Sophia described the difference between in-person and ERT interactions during CS labs, which were: The second half of class [when] we'd break down and work into labs. And that was where we really got to, like, talking to people around us, engaging in conversation. And that's a little hard now because we're on Zoom and he'll put us in breakout groups. But, it's not around the people that we're sitting next to or the groups that we established in class. I feel people are shy in the breakout groups, and we don't really talk back and forth. I feel like I'm working on my own now whereas before I had, like, people that I was really familiar with and we would get through the labs together and go back and forth and bounce ideas off each other. I miss that part because I feel like it helps me understand better. And now that I'm working through on my own, but like not really. It's a little bit harder. She missed the "teamwork that you get in class and interacting with other students. I feel like that's extremely beneficial, especially ones that you have established that you enjoy working with." When Juan had a question during the in-person class, he would ask "one guy who sat next to me. He was pretty well versed in it, so I would talk to him real quick and then if he kind of couldn't explain it to me, then I'd ask the instructor." Ashti also reported that being physically close to other students' screens was helpful, saying she "want[s] to be able to show them with my finger, like, 'This is how you do it', using the mouse, 'This is how you do it'. Delete something on theirs, type it up, like, 'You could do it this way.'" Javier concurred, finding that "CS1 in-person was a lot more cooperative experience, since we're always together in-person. We would. . . help each other out, whenever we ran into a problem with our code." He reported how this affected him, saying that his confidence "dwindled given how I haven't been in much contact with other people, ever since the lockdown. So, that's been weighing me down a little bit," which, in turn, affected his performance. He reported that "in the beginning of the semester, I was doing really well.
But ever since the lockdown, I've noticed that my academic performance has started to decrease a little bit." Outside of the CS1 class, continued contact with other CS1 students after the pandemic hit was mixed. Emika was continuing to keep in touch with some classmates, saying that she "would say not as much as I did. . .there's only so much in terms of social media and, obviously, texting and FaceTime. . .but it's just not going to be the same as it is in-person." She still kept "in contact and we have a group chat and everything," but she "wouldn't say it's a hundred percent where it is, like, when it was in-person." Javier also worked less with his peers during ERT, saying that he and a classmate "used to work with each other and whenever we'd come across a problem, we would try to help each other out. It was great." However, "since the shelter in place, we haven't been able to. . .make contact with each other consistently since." Roberto noted that "two or three other classmates that we have our personal numbers that we'll share, like, 'do you think that's going to work?', or 'how did you do this?' And then we just work through it. We do have Zoom, it's not something that we use a lot. It's more just a FaceTime or texting." When CS1 was in-person, Margaret had "worked with [her] lab partner," but once the class moved to ERT, she "definitely [didn't] work with him anymore"; however, she saw an advantage in this, saying that she was "figuring out my own problems for myself, now, which is good for me."

Computer Science: Outside of the Classroom

Within the computer science department at the college, instructors offered resources to their students outside of the classroom meetings that students generally found positive. Some of these supports, such as videotaped class sessions, were newly introduced during the pandemic. Others, such as office hours, continued as before, but in a different format. Videotaped classes. Margaret was taking advantage of a new resource for ERT CS1: recorded lectures and labs. She had found that if she "need help later, I can look at the slides, and then I also do have the professor explain it to me via the recording, too, so it's very convenient." Zahra reported that this new resource was the only change for her with the move to ERT CS1, noting that "honestly, during class, I didn't understand much about C++ and but now there's one good point. Because he recorded every time and he posted. . .the lecture where it's accessible until end of semester. I can watch lecture video many times." She reported that she had reviewed the most recent class video "maybe more than four times." Eleanor reported a different use of videos in her CS1 section as a replacement for live lectures, saying that "the teacher teaches everything in video form now. So he records everything, like, all the concepts and then he uploads it onto YouTube. I'm not sure how I feel about that." She thought that this approach was possibly "more time efficient, since everything is just like in a five-minute video." After the move to ERT in the first classes, the instructor "was really inefficient with his time when he did everything live. So then I think he figured that he needed to do YouTube videos instead. . .I think time was used a lot more efficiently when he did that." She explained the format, saying that the instructor "goes over the class schedule for today and then he links all the videos that we're supposed to watch for the day.
And then he give us like maybe an hour to watch all the YouTube videos," following which "we could get back on to Zoom and then ask any questions that we might have." Class flexibility. Steve found that greater flexibility was an advantage of emergency remote schooling, which varied by "individual teachers. Some of them give you greater flexibility in turning in homework or assignments, because they understand that it's a lot harder" doing classes during ERT. Charlotte agreed, saying that "teachers definitely are going a little easier, more understanding. The due dates are a lot more flexible," although she found this a double-edged sword, noting that working at home was a struggle in "concentrating and getting my stuff done." She found that "at one point in my classes, [she] was four or five assignments behind. My teacher's, like, 'I understand. Please get it in'. And it's kind of hard to knock them all out when you're just at home." Office hours and tutoring sessions. Instructors continued to hold (virtual) office hours after moving to ERT classes. Margaret took advantage of this, going to office hours "all the time," which was "helpful." Bethany, in contrast, found that "when we moved online, it was office hours were really, like, hard to attend," and when she did attempt it, she "had to wait like an hour, an hour-and-a-half." The process created a queue, and the instructor would message that "there's five people remaining, four people remaining," and the wait times were long. She reported that she had not attended office hours in person before the pandemic "because, it was, thankfully, the first half of computer science was the easy part;. . .I didn't really need office hours, but the second half, yeah, it started getting a little bit challenging." Javier, in contrast, had taken advantage of in-person office hours, noting that "there was office hours that we would always meet the professor in time. And that's pretty much been non-existent since the lockdown." Zahra found that the Piazza platform negated her need to take advantage of professor office hours, saying the instructors "have office hours. I didn't go there because for CS1 because there's a Piazza and Piazza is open 24 hours every day a week and you can post easily but office hours sometimes maybe I forgot or I missed the office hours; that's why I prefer to post in Piazza." Emika noted that her instructor added "online tutoring sessions and office hours more so that we can get that one-on-one time" with the instructor and "to, like, help us, and if we want to get individual help, we can always ask him and so that's been helpful." Raina also noted the addition of professor tutoring sessions with the move to CS1 ERT, saying that the instructor "has, like, tutoring sessions that I sometimes attend, but they're not, like, one-on-one, because he has separate office hours for one-on-one, I think. But I attend the sessions where there's more people."

Computer Science: In and Outside the Classroom Summary

The move from in-person to synchronous remote classes during the semester highlighted for CS students the advantages of various classroom modalities and resources for instruction. Overwhelmingly, students preferred the synchronous in-person format, but also valued having a recording to review later as well as flexibility in assignment due dates. Only one student, an older male student, found the ERT format easier than the in-person class.
Students preferred asking questions in person, including during lab work when a common object such as code on a computer screen could be shared and pointed at, and during lectures as a rapid way to ask an all-class question. However, having an anonymous online way to ask questions was reported as beneficial by some participants in the ERT setting. Outside of the classroom, participants continued attempting to obtain individualized assistance from instructors, but most found it difficult or awkward due to the instructor's online queuing system and the need to wait in a virtual line of students. In describing interactions with peers, no participants reported finding a satisfactory way of connecting with others during the latter half of the semester in the ERT situation. They valued collaborative work and wanted to continue to do so in an authentic way but reported a disruption in working closely with other students both in and outside of the CS classroom that was not resolved by the end of the semester.

At the College

The larger college setting had resources that were critical to all students, including the participants in this study, and students lamented the campus closure in several areas. Library. Strikingly, when asked how the campus closure affected them, most participants in this study first and most strongly mentioned the loss of the library as a study location. Ashti noted that the campus closure affected her studies due to the closure of the library: I isolate myself to study, that's something that I learned of myself, a way to focus is isolating myself, being in a slightly uncomfortable environment helps me study, because, it's weird to say, but I always feel like someone's watching me, like when I'm in the library, at Starbucks, and they'll see me and I'm playing games on my laptop and like why are you playing games in a library? So it's forcing me to actually do work, my little paranoia has made me do work, and then I am in this uncomfortable environment and the only thing that I can do is go to the bathroom and come back and study, or take a five minute break on my phone. . . It made it easier being inside of the library, and then easy access to computers if my laptop dies, I don't have my charger, I have computers around me, I can print things out. I run into people in my classes, I can easily go to office hours. I have access to a big study room, why not take advantage of it? So typically I would stay after class, five, six hours. Zahra valued the atmosphere in the library, saying, "everyone are [sic] studying and that's bringing you more passion, more energy to sit down and study." Similarly, Steve echoed the opinion of many students, missing the library as "a change of setting, a place that was relatively quiet and you can focus." Other students added that scheduled time in the library helped them set aside concentrated study time. Charlotte, for example, stated that she "was actually on campus because I'm also involved in theater from 9:30 in the morning to 10:00 at night. So I'd be on campus all day, knocking out [homework] in the library." Javier went so far as to say that "it's difficult to maintain my academic performances since the lockdown, considering that I am not allowed to study in the public library, or the in-campus library-that's been affecting my academic performances." He found the campus closure had a "dramatic" impact on his studies, explaining that "there's been some notable changes in my academic performances, ever since they closed the campus. . .
I used to study within the campus library every once or twice per week, for maybe an hour or two." He found this useful "to process the amount of information that I just learned from the classes, and. . .to keep my focus and my concentration at its maximum when I'm trying to study." Student services. Participants in this study relied on student services before the pandemic, particularly the counseling office and the tutoring office, and found it more difficult to access these resources in the ERT environment due to the elimination of informal, drop-in access, and to slow response times to email. When on campus, Alejandro "would drop by to counselor sessions at the administration office. I had several counselors that I would check in with and just reach out and so forth." He "reached out to" the counselors when the campus went to ERT, and "it's been helpful. But it's very different from having community physical contact versus the online process, obviously." Raina also struggled with accessing counseling during ERT, saying that "I had questions regarding my major change, because I needed to talk to a counselor about my transfer plan for the classes I need to take to transfer." She knew that she "could still ask a counselor on their website, but I was confused about how that works, so I emailed them, but they never got back to me." Eleanor noted that she "need[ed] to talk to my counselor about what classes I need you to take for the following semester. And then the campus isn't even open. So I'm not sure what I'm going to do about that." Raina noted that counseling was "less convenient, and there's some forms that I need from the office where I have to look for it on the website instead." With the closure of the campus, Charlotte said that "the only problem I'm having is just connecting with some of the counseling department because they're not answering my emails." At the point in the semester when we talked, she was "trying to sign up for classes, and I have a prerequisite that they haven't passed, so I'm trying to get that through so I can sign up for a class." Tutoring services were also inaccessible during the ERT campus closure. Roberto lamented the difficulty in using student services, saying his biggest struggle was "probably the tutoring and the help." When going to the campus in person, he "would go to the tutoring center and get help, and I knew a couple of people there, too, that were friends and my tutor as well," and he grieved the loss of "the whole vibe and community, socializing and getting help from people that you know and are friendly with." Amida also missed "receiving help, honestly; I like the tutor center there." Emika bemoaned the closing of the campus tutoring center, saying, "I would like go into the tutoring center at Los Positas, like, the on-campus one, but that's not there. Obviously, we can't do really that online, so that's more difficult." Other students at the college. In addition to peers in class, participants in this study found a resource in other college students to help them with computer science and other classes. Alejandro lamented the closing of the campus, since he "was using almost every venue of support that I had in the college when I was at the college at a physical location. Basically, all that stuff has been taken away," including "community social interaction in the college." The closure caused "a lack of resources and the lack of just availability of connections.
The whole sense of community and connectedness to the college has just been diminished." Zahra received help from acquaintances on campus; she met a student in the professor's office and kept up the acquaintance through seeing him work at the library front desk, saying, "I know one student, CS20 and he's in library. I didn't ask him any question but he was available there. Every time we talk about how class is going and he said, 'Yeah, our class is hard too', but sometimes I think maybe if I had a question he was open to answering." At the college summary. Overwhelmingly, participants in this study reported using the physical transition of traveling from home to school to shift into a study mindset. They used the campus and other students to focus their attention on the completion of schoolwork. In some cases, participants actively sought out social learning with or from others, but in other cases, they simply used a physical place where other students were working as a reminder or motivator to study and focus on schoolwork. Although student services were still technically available during the ERT situation, participants in this study all reported they were more difficult to access.

Outside of College

Life outside of college significantly affected students in their pursuit of computer science and other classes. The only positive aspect cited by students was less time and money spent on commuting; staying focused and motivated at home was a struggle. Commute. With the closure of the community college campus, students cited the time and money saved by no longer needing to commute to campus as an advantage. Bethany stated that she "used to take the bus to go to campus. So that not having to wake up extremely early, having to get on the bus, wait for the bus, walk 45 minutes to the bus stop, that's the advantage" of the campus closure. Similarly, Raina noted that she "live[d] in Tracy, which can be, like, 45 minutes from campus with traffic and stuff, it's easier for me to just wake up and then go on my computer." In earlier semesters, "the person I carpool with has a morning class and my CS class is at 12:30. So I would have to get up at 5:00 and then drive there. But now it's, like, I don't have to wake up early anymore." Sophia appreciated that "it's nice not having to, get to class and potentially be late because I'm never late," although "that's the only advantage, because I really like being in class. I like being in-person, interacting with people. I think there's not very many positive aspects that came out of it." Before the closure, Alejandro "was commuting to the four different colleges before, that was very taxing on my person, obviously, because I was going to San Jose, I was going to Hayward, and I was going to Las Positas," but now "by doing online I think it's kind of helped to basically not be going out," although he "missed" being in-person, and did not think that the decreased commute time was "worth" the disadvantages to in-person college. Ashti noted that she didn't "have to pay for gas to go to campus," and was therefore "saving so much money." Similarly, Javier found that he had "save[d] a lot of gas in my car. That's one of the only advantages that I can think of right now. I don't have to drive to the campus and that's saved me a lot of money thus far." Juan found that "sitting in front of a computer can be tedious" as he was "stuck in my room all the time," yet "not having to drive a ton is awesome.
I'm saving a lot in gas, which is good because I don't work right now 'cause I lost my job during the whole thing." Job. Several students were not working at the time of the study. Others lost jobs during the pandemic, as Juan had, but found advantages to not working. Roberto reported having more time to attend to both his health and his homework without an outside job, saying that he had "slept more and started eating healthier because I can make home-cooked meals. And I've had a lot of more time to sit down and do more homework and study more. When everything was still normal, I had a job at the outlets next to the school." Ashti also was able to "focus so much more on my class" after losing her job. Ryan lost his job as a bartender, which meant that he could "dedicate enough time where I feel like I can have to sit down to really study the material" instead of studying for "an hour here, and then I work, and then come home at one in the morning after closing a bar, and then having to study again when it's time to sleep." Javier was unusual in that his work hours increased during the semester. He reported that he was working 20 hours per week, but when the pandemic started, he began to "work somewhere between 30 and 35 hours. . .It's been a lot, given how so many people have quit. And not only that, we're one of the few essential stores that are open at this moment." His workplace was busy on the weekends, which "in a way increased my risk of getting COVID-19, which is a scary thought. But at the same time, I try to maintain my composure and just went on with it regardless." Children. Ryan found that the campus closure was of benefit for "figuring out childcare. . .mom takes care of her, grandma takes care of her for an hour and 15 minutes during the block time I have segregated out for class time." Zahra, on the other hand, was struggling with childcare, since "my son was in kindergarten in Las Positas and they are closed," and "everything was easy being in [sic] campus for someone like me have a busy home with kids. Being in [sic] campus definitely is easy to study." Sophia found that, attending class during ERT, it was "very easy to get distracted," including by "my daughter yelling at me. . .It's harder to stay focused." Chunhua's "biggest struggle might be my kids. They have to stay at home, so I have very, very little time. . .for my studies," since her kids were five and two years old and "they're running around me all day long and night." Self-discipline and motivation. Amida had found an advantage in the campus closure in that she could "study on my own time. It's more comfortable. And I have my whiteboard here, so I can study," although she was grappling with "how to do everything by myself." Amida struggled in school, finding that "it's just hard to keep up, because it's all online, and I have five classes." She found that she worked more slowly with ERT classes and that she had to change how she studied, saying, "my plan was I study more after the class, because it's computer science. I need some kind of background information. So when he goes over it, I would try to revise it when I get home in the evening." Amida "studied extra during the weekend. . .I'm trying to figure out my program assignments, and then I have to do my labs. It takes me more time." Margaret took advantage of the campus closure to organize her studies, saying that "in the beginning it was just figuring out how to pace myself," finding that studying during ERT "taught me a lot of self-discipline.
I'm able to manage myself better. I'm more independent." She did this by "taking it more onto myself to learn the material instead of relying on the professor." Margaret went on to say that the move to ERT made it "easier" for her, "especially for coding, too, because you can do that all online. Then, I can just send my code to my professor if I need help." When classes were in-person, she "was going to the computer lab at school, pretty much, to write code," but since moving to ERT she had "been forced to figure out my situation at home, with my computer and everything. I have that up and running." Chunhua struggled to work in the ERT class, saying "I think it's easier to devote all your time and energy to the course when you're in the classroom. But when online, I think, I am worried that I'll get distracted." Charlotte similarly found that "it is sometimes hard to concentrate because the internet is just a click away. You could just minimize the video and then just watch a different video while you mute it." She also found that "it feels different in the way that. . . my body just doesn't want to get up or I'm not motivated to do anything," which meant that she "might fall asleep in class more than I have before. And there's nobody who's watching you to tell me not to, unfortunately." Hilario found that he was "always home, so I do get kind of stressed out," and that "exams are harder to do" in an ERT class; Eleanor also reported that "when I do test and stuff, it's really hard for me to focus for some reason. I really like doing tests in a school environment, like in a classroom environment for some reason." Ashti was working in her bedroom instead of a common area of her house, noting that "at least I have a table to sit on," but saying "I feel like I would get it done quicker or I would do better on it if I wasn't at home." She "had to drop one of my classes because it was a little more work on my part. I had lots of homework to do, I had to study a lot, and it made it a big struggle," but "thankfully," she "didn't have to drop this [computer science] class; it was a little easier to focus." Emika found that the campus closure caused her to struggle with the "social aspect outside of just, like, necessary classes has been really difficult, especially like with mental health and making sure that I get up and have a routine and not just stay in my pajamas the entire day." She noted that "in terms of academics. . .it's really hard to stay motivated." She was, however, taking advantage of the campus closure to enjoy "staying home with my family, especially because I was planning on moving out at the end of this year. Now I can spend more time with them since they're all home. That's, like, one silver lining." Resources at home. Ashti struggled with equipment when she had to start working at home, noting that she has a different integrated development environment (IDE) on her own laptop than the one she had built familiarity with during in-person class. In-person class was easier because "I have that system [in front of me]. I can follow the instructions on how to create a file and everything, like I see it, he's right in front of me. I could stop him, ask him question." After the move to ERT, Amida also struggled with installing the software needed for class: "Yeah, so I'm going to go to his office hours, and Eclipse [the IDE used for class] is not working very well on my computer, so that's the problem. Mostly for my labs, I use school . . .
my laptop is advanced, but I think installing the IDE, it's not working very well. . .for people with computers and getting new IDE in your computer installed, it's kind of difficult." Charlotte, on the other hand, found it simpler working on her own computer, but "it is easier to just do everything instead of doing everything on the other computers, which is what I was doing before, than downloading it to my computer. It is slightly easier to just do everything on my computer." Outside of college summary. The time and money saved from not commuting was a participant-wide benefit of the ERT situation as students attended classes from home. Other time and money consequences of the pandemic varied widely among participants, as a loss of jobs provided more study time but less income, while an increase in hours on the job created a new worry of contracting COVID-19. However, participants overwhelmingly reported the detrimental ramifications of other aspects of ERT, such as the distraction of children at home for the mothers in this study (but not the one father), the struggle to stay motivated in study mode at home without the daily move to campus, and struggles with technology at home. Only one student, a traditionally aged female, reported increased self-discipline and motivation with ERT as she was forced to become independent and set her own pace with her schoolwork. All students were making valiant attempts to continue succeeding in their classes while grappling with a scarcity of the support that had been available to them before the pandemic.

Discussion and Recommendations

When community colleges across the country launched their first full semester of exclusively ERT classes in fall 2020, institutions and instructors alike had no choice but to embrace an unprecedented challenge. As colleges and individual instructors continue forward in a new and evolving "normal," the reactions and experiences of students who have lived through this transition are a rich and important source of data. The findings of this investigation suggest key considerations for STEM teaching in a community college setting and for larger institutional planning. Given the evolving course of the pandemic, including accelerating vaccine distribution during the spring 2021 term, many campuses have faced complex decisions and possibilities about when and how to begin reopening. The situation reflected in our findings was not simply a product of ERT but of numerous domino effects of the pandemic itself. As such, the recommendations are informed by the totality of students' experiences during this time and the supports provided by the college that students rely on for success in the classroom. Some recommendations may apply more sensibly than others as institutions move on their own distinct paths to a post-pandemic future, but the most central recommendations supported by our findings focus on three areas: communal spaces; teaching modalities and resources; and student peer interaction.

Communal Peer Spaces

A crucial factor in many students' "pandemic learning" experience, the flip side of the almost universally appreciated benefit of not having to commute to school, was the lack of both a physical transition to the school day and physical spaces explicitly and implicitly structured as settings focused on doing school and being in study mode with fellow students. In addition, the lack of communal settings eliminated the proximity of knowledgeable peers and near peers that some students relied on for CS support.
In combination with students' pragmatic realities at home, including specific distractions, the need to share space and/or care for other household members, and challenges with motivation or mental health, the loss of college campus spaces and embodied peer groups poses a particularly large barrier to success for the populations that community colleges serve. Going forward, it will be essential for instructors to proactively create and structure virtual space available beyond class time. While doing so, instructors should attend to students' own peer relations and work to connect isolated students with others. This could be supported with relatively simple and optional additions to a course, such as instructor- or student-hosted online "study group" sessions at predictable times during the week, or setting up a course-specific group messaging server (e.g., Slack or Discord) with channels for students to both discuss class material and connect through other shared interests. In addition, as campuses begin broader re-opening, communal spaces like libraries or tutoring centers should be prioritized as crucial contributions to students' overall learning processes and success.

Teaching Modalities and Resources

The modalities and tools employed in ERT teaching are experienced quite differently by different students. For instance, having class video recordings available online to re-watch is a substantial benefit to students' learning and flexibility, but recordings as a replacement for live class sessions can be experienced as less engaging or harder to focus on. Offering in-person classes with the option to view videos is an accessible practice enabling all students to reap the benefits beyond ERT, consistent with principles of universal design for learning (CAST, 2018). In our study, a particular need this serves was noted by English-language learners like Chunhua, but keeping this practice, even during in-person classes, would benefit other students for a variety of reasons. For instance, the option to re-visit videos after a live class would also better serve students with difficulty in commuting/traveling and students with disabilities or cognitive differences (Pacansky-Brock et al., 2020). ERT courses have also added to the set of modalities for question-asking, and the availability of text chat in a synchronous ERT class meeting may be an excellent communication avenue for some students, particularly those who would feel less comfortable asking questions out loud or appreciate the option of a more confidential mode of question-asking. This was noted as specifically preferable and advantageous by the study participants; it is also consistent with existing literature on gender bias in online class discussion (Baker et al., 2018) and on the particular importance of psychological safety and a trustworthy course climate in online learning (Pacansky-Brock et al., 2020; Palacios & Wood, 2016). However, it is simultaneously a less effective or approachable form of the teacher-student relationship for others, for whom the loss of natural oral dialogue is a distinct barrier. Similarly, the use of videoconferencing platforms like Zoom works well for some students, particularly in combination with the ability to review recorded sessions afterward, but it is experienced as awkward and less engaging by many others. How, then, should instructors incorporate these lessons from the pandemic era into classroom practices and structures?
Our interest is particularly in lessons for those who design curricula and teach in community college settings, given the diversity of life paths among community college students and the large proportion of marginalized student populations in such institutions, including groups represented in this study such as first-generation college students and English-language learners. This is of particular concern in a Hispanic-serving institution (HSI) like that attended by our study participants, given the reality that the use of digital technologies in education often reproduces patterns of systemically different treatment of working-class and minoritized students (Benjamin, 2019), as well as the disproportionate difficulty with access to reliable hardware and internet service among Latinx students (California Community College Chancellor's Office, 2020). Because gaps between online and face-to-face outcomes may, similarly, be stronger among less-advantaged populations (Figlio et al., 2013; Kaupp, 2012), it is critical that online pedagogy, tools, and structures shift in response to our current crisis and the resulting lessons from students' experiences. Our findings suggest particular pedagogical and course design choices that may be especially effective in bridging this gap. First, instructors should proactively organize students into smaller groups, but also structure student participation in those groups to build a habit of engaging with peers in that small-group context. This may include explicit, required tasks and deliverables from group work. At the same time, it is important to provide the option for students to self-select group membership so they can choose to join with familiar peers as they would in a physical classroom. Second, within Zoom or equivalent tools, text chat should be both enabled and attended to regularly. Instructors need to build the habit of treating all channels of communication with equal weight, and an in-class chat message should receive the same urgent attention as a student speaking up or raising their hand in person. Similarly, during synchronous class sessions, it is important to provide a way for student questions and comments to be submitted anonymously if desired, to increase students' degree of comfort and safety and to counteract student hesitation. Systematic, classwide feedback such as that available in Zoom's polling feature can also help increase students' experience of relationship with the instructor and active participation in the community of the (virtual) classroom. Finally, instructors should routinely provide materials to be used in a particular class session (e.g., presentation slides) in advance, and make video recordings of live class sessions, with text captions, available to be viewed and reviewed afterward. This is consistent with universal design principles for accessibility, as well as being particularly beneficial to English language learners, one key population nested within achievement gaps found for online learning across ethnic groups (Johnson & Mejia, 2014).
The practice of holding synchronous sessions without a video/audio record available, as a form of incentive for class attendance, is an understandable reaction to some students' behavior, and some instructors may see this as a way to improve student engagement overall, but it ultimately adds disproportionate barriers to some students' effective learning by limiting available modalities and removing a major avenue for students to engage with material asynchronously and on their own time (CAST, 2018; Pacansky-Brock et al., 2020). Our findings especially underscore how crucial it is, in an era and situation that already exacerbates existing structural inequalities, to do everything possible to maximize the modalities and opportunities for students to access the material.

Peer Interactions

An issue interconnected with teaching practices and resources is students' own peer-to-peer interaction both in class and outside of class. The transition to ERT caused many students to have less contact with classmates, a distinct loss of both social and academic resources. This is a challenging problem to solve because no virtual equivalent can truly replace in-person gatherings and interactions. However, it will substantially benefit students for colleges to invest explicit effort in helping create opportunities for online peer socialization more broadly. Opportunities for peer social interaction hold particular importance in a community college context where students are so often commuters and live in many different locations and communities. As Stanford-Bowers (2008) puts it, "building community online is a crucial characteristic for influencing persistence" (p. 38). Moreover, it is instructors who are best positioned to facilitate students' experience of community online and to build supports for their managing of learning experiences together. One way instructors can help facilitate community online is simply to encourage students to keep their videos on as much as possible, providing a sense of connection to and familiarity with those who share the same virtual space. This is often a tricky topic for both students and faculty, with strong feelings and preferences on all sides, and it also encompasses concerns about student privacy that are important to accommodate. Moreover, in serving a diverse population of students, there is an important trade-off to consider between the potential benefits of students on video and the way that expectations or pressure for students to have cameras on will likely fall unequally and with more disruptive consequences to the learning of those students who are already marginalized (Bhallamudi et al., 2022), in the context of prevailing schooling structures using technology to shape discrepant outcomes for different populations of students (Puckett & Rafalow, 2020). One alternative in cases where students are uncomfortable or unable (e.g., due to connectivity issues) to remain on video is to incorporate asynchronous audiovisual content from a student with tools such as Flipgrid. Instructors can also communicate openly about why, for example, seeing facial expressions and body language is important to their goals for a class, expressing and reminding students of this preference without making it a requirement.
Another possible support for increased online community-building is to structure time, particularly during the earliest class sessions in a given term, devoted to activities that help students become familiar with their classmates and begin establishing connections and rapport with others. Similarly, instructors are in a position to pay attention to conversational and power dynamics in breakout rooms or other forms of virtual group work, and can proactively form or re-form groups to maximize the effectiveness and engagement of each group. This may include keeping well-functioning groups together over time, helping those students grow a relationship with a familiar subset of the class. Finally, instructors can identify stronger students who may be able and willing to serve as informal peer helpers. Students in an in-person classroom are accustomed to figuring out which peers they can reliably go to with questions or requests for assistance, and it may be beneficial to explicitly direct students to that kind of help in an online context.

Looking to the Future

By the time of publication, the pandemic situation in the United States has continued to evolve. The community college in this study shifted to a substantial proportion of classes offered in person in spring 2022, but it and colleges across the country continue to experiment and make changes to the modality of their course offerings. Students, similarly, continue forward from the fully remote era earlier in the pandemic with their own evolving needs and interests. Our findings are derived from students' experiences in the transition to emergency remote teaching, but they are relevant to online teaching and learning in general, as well as containing lessons for STEM teaching and broader institutional planning going forward as community college learning experiences continue to evolve. No matter what exactly the post-pandemic future holds, community colleges will almost certainly continue to combine in-person and online modalities, and the central themes and recommendations reported here will remain crucial considerations for effective and inclusive teaching across those institutions' characteristically vast array of student identities, experiences, and needs.
2023-07-16T05:15:59.873Z
2023-07-14T00:00:00.000
{ "year": 2023, "sha1": "3904b6cab56805c14618e14eac60c21a439b5352", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "3904b6cab56805c14618e14eac60c21a439b5352", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
54638137
pes2o/s2orc
v3-fos-license
Efficacy of dichlorvos + tetramethrin, Bougainvillea glabra, potassium chloride and Bacillus thuringiensis on Tribolium castaneum
The present research was based on determining the effect of dichlorvos + tetramethrin (3.1% emulsifiable concentrate, EC), Bougainvillea glabra, potassium chloride and Shaf I Bacillus thuringiensis, alone or in combination, against Tribolium castaneum (Jacquelin du Val) adults and larvae. Bt was isolated from samples, cultured on different media by applying microbial techniques, and the crystal protein was then isolated from the bacteria. The bio-toxicity data were analyzed with SPSS software and showed that larvae are more susceptible and gave significant results compared with adults. Separate bioassays revealed that Shaf I B. thuringiensis was the most toxic at all concentrations, followed by the insecticide DDVP+tetramethrin and the chemical potassium chloride, while the least toxic was B. glabra. In the synergistic tests, the highest mortality percentage was shown by the bioinsecticide B. thuringiensis. All the elements, whenever combined together, proved useful for the control of stored product pests.
INTRODUCTION
Bacillus thuringiensis crystal proteins are pore-forming toxins used as insecticides around the world; the bacterium is also known as a biological pathogen. Crystal toxin proteins from the Gram-positive, rod-shaped bacterium Bt are used extensively to control insect pests. These range from caterpillars (Lepidoptera) and beetles (Coleoptera) that infest crops to black flies and mosquitoes (Diptera) that transmit human diseases. The red flour beetle (Tribolium castaneum) (Coleoptera: Tenebrionidae) is a common pest of wheat flour and is found worldwide. B. thuringiensis has been used in developed countries to control target pest populations in medicine, forest areas and agriculture. Many types of standardized formulations based on spore-crystal combinations are available commercially. The development of formulations with biodegradable properties has replaced the use of chemical insecticides, which threaten the environment (Garcia and Ninfa, 2009). Tribolium castaneum, commonly known as the red flour beetle, attacks stored grain products such as nuts, beans, meal, flour, pasta, chocolate, spices, cereals, seeds, cake mix and also museum specimens (Via, 1999; Weston and Rattlingourd, 2000). Red flour beetles have mouthparts for chewing, but not for stinging or biting. These beetles may trigger allergic responses but do not spread disease (Alanko et al., 2000). The red flour beetle originates from the Indo-Australian region and is also found in temperate regions, where it will endure the winter in sheltered places with sufficient heat (Tripathi et al., 2001). The larvae and adults of T. castaneum prey on juvenile stages of the rice moth, Corcyra cephalonica. Larvae and adults are major predators of eggs and pupae; this predation improves adult reproduction and larval development, thereby lessening competition for their descendants (Alabi et al., 2008).
As a model system, T. castaneum has been used for the study of environmental relationships and infestation, and it offers large brood sizes and a short generation time. T. castaneum is therefore widely used as an experimental insect in bioassays (Bucher et al., 2005).
A synergistic agent allows an insecticide to be more effective.
T. castaneum collected from eight godowns and one silo was tested for susceptibility to malathion, dichlorvos, fenitrothion, pirimiphos-methyl and phosphine. All strains were resistant to all the test insecticides, among which malathion was more effective than DDVP (Rahman et al., 2007). A comparison was made between natural insecticides and five synthetic pyrethroids, including bioallethrin, tetramethrin, resmethrin, allethrin and bioresmethrin. When all the insecticides were used individually against T. castaneum, tetramethrin was the least effective (Lloyd, 2003). There are currently no published studies in which the synergized efficacy of tetramethrin and DDVP has been determined against stored pests. Although many studies have examined the synergistic effect between pyrethroids and organophosphates, such combinations have proved very effective in controlling red flour beetles (Scott and Snodgrass, 2000).
Leaves of different plants such as Azadirachta indica, Ricinus communis, Bougainvillea glabra, Saraca indica and Eucalyptus were evaluated as grain protectants against T. castaneum. After 45 days of storage, wheat grains treated with these leaf powders at 5% (by weight) showed 76-78% repellency against the red flour beetle (Haq et al., 2005). Potassium chloride is a weak chemical and shows very low toxicity to adult T. castaneum, but it is highly toxic to the larvae. Research was done to examine the pathogenicity of Bt and S. marcescens, using different chemicals such as boric acid, sodium citrate and potassium chloride, against termites and Microcerotermes championi. 1% boric acid with Bt was used against termites, and 1% sodium citrate and potassium chloride with S. marcescens were used against M. championi. Virulence increased by a factor of 1.5-1.8 for Bt and 1.3-1.6 for S. marcescens. Boric acid is quite toxic to termites, but sodium citrate and potassium chloride were nontoxic to M. championi (Khan, 2006).
The present research was based on the combined effect of Bacillus thuringiensis, a chemical (potassium chloride), a plant powder (Bougainvillea glabra) and an insecticide (DDVP and tetramethrin), to check their synergistic effects against Tribolium castaneum.
Rearing of T. castaneum
For bioassays, adults and third instar larvae were collected in large numbers. For rearing, about 100-200 adults of the red flour beetle were added to glass jars. The jars were kept in the Zoology Research Laboratory of Lahore College for Women University, in an incubator at 37°C with a relative humidity of 75%, for feeding and oviposition.
Collection of Bacillus thuringiensis samples
300 samples, including bird droppings, soil and grain dust, were collected from different areas of Lahore. Samples were placed in properly labeled glass jars. The jars were brought to the Zoology Research Laboratory of LCWU and placed in a refrigerator at 4°C.
Isolation of B. thuringiensis from samples
For the isolation of B. thuringiensis, Luria Broth (LB) medium was prepared by adding 10 g tryptone, 5 g yeast extract, 5 g NaCl and 0.2 M sodium acetate to 1 L of distilled water. Subsequently, 10 ml of LB medium was taken in a beaker and 0.5 g of soil sample was added. The mixture was shaken well and placed in an incubator at 30°C for 4 h. After this time, samples were filtered and heated at 80°C for 15 min. The samples were then diluted and spread on LB agar plates using a spreader. Plates were incubated overnight at 30°C. Colonies with B. thuringiensis-like characteristics and morphology were picked and streaked on LB agar medium.
Microbial screening of B. thuringiensis samples
Shaf I B. thuringiensis cells were grown on Petri plates of T3 medium, which contained 2 g tryptose, 3 g tryptone, 1.5 g yeast extract, 0.005 g of MnCl2.2H2O and 2.5 ml of 1 M potassium phosphate (pH 6.9), with 15 g of agar added last. The mouth of the conical flask was then covered with a cotton plug and aluminum foil, and the medium was autoclaved. T3 medium was poured into plates, filling them less than half. All poured plates were placed in an incubator for 24 h at 37°C. After that, Shaf I B. thuringiensis from the samples was streaked on the T3 medium, labeled and placed in an incubator for 72 h at 37°C. For bioassays, the growth of Shaf I Bt from T3 medium was collected in falcon tubes filled with autoclaved distilled water and placed in a refrigerator at 4°C.
Biochemical characterization of Shaf I B. thuringiensis
A single colony of Bt was subjected to Gram staining, endospore staining and further identification tests.
In the Gram staining method, Bt cells were placed on a sterilized slide with a drop of water, and the smear was fixed by flame. The smeared portion was flooded with crystal violet stain and, after 15 s, rinsed with water. Iodine stain was applied to the smear and, after 15 s, rinsed with water; ethyl alcohol was then applied evenly over the entire smear for decolorization and quickly rinsed off with water. Finally, the counterstain safranin was applied to the smear; after 15 s the smear was rinsed with water, air dried and observed under a camera-fitted microscope. In endospore staining, Bt cells were placed on a sterilized slide with a drop of water, and a thin smear was formed and fixed with flame. A patch of filter paper was put on the smear, the slide was placed over a beaker of boiling water, and the filter paper patch was flooded with malachite green stain for 20 min. After that, the stain was washed off with water. Finally, the counterstain safranin was added to the smear; the slide was air dried and observed under a camera-fitted microscope.
In a further biochemical test, 7% (w/v) trypticase salt was used; LB broth medium in test tubes was inoculated with the Shaf I Bt strain. Shaf I Bt cells were also streaked evenly across a potato starch plate and incubated at 30°C for 96 h. Afterward, the plate was stained with iodine solution.
Bioassay
The LC50, the value most commonly estimated in bioassays, was analyzed through probit analysis in SPSS (a computational sketch of such a probit fit is given below).
Bioassay of Bougainvillea glabra
B. glabra was collected from Bagh-e-Jinnah. Leaves were separated from the branches of the plant and air dried at room temperature for 7 to 14 days. Dried leaves were ground in an electric grinder. After a fine plant powder was obtained, different concentrations (1.5, 2.0 and 2.5 g, plus a control) were combined with 1 g of diet containing semolina and yeast extract. The B. glabra bioassay set was run in triplicate by introducing 20 adults into each vial; similarly, 2nd instar larvae were introduced into each vial, in triplicate, with 1 g of diet, to determine the mortality rate and LC50 values over 3 days.
Bioassay of potassium chloride
Potassium chloride was evaluated in bioassays at three different concentrations (1.5, 2.0 and 2.5 g, plus a control). The bioassay set was run in triplicate by introducing 20 adults into each vial with 1 g of diet consisting of semolina and yeast extract. Similarly, 2nd instar larvae were introduced into each vial, in triplicate, with 1 g of diet, to observe the mortality rate and LC50 values over 3 days.
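As a companion to the probit/SPSS analysis referenced in the Bioassay paragraph above, the following is a minimal Python sketch of how an LC50 can be estimated by probit regression. The dose-mortality counts are illustrative placeholders, not the study's data, and statsmodels is our choice of tool rather than the authors'.

```python
# Minimal probit-regression LC50 sketch (illustrative data, not the study's).
import numpy as np
import statsmodels.api as sm

doses = np.array([1.0, 1.5, 2.0])        # g of Bt per g of diet (assumed)
dead = np.array([21, 25, 30])            # dead out of 60 (3 replicates x 20)
total = np.full_like(dead, 60)

X = sm.add_constant(np.log10(doses))     # probit model on log10(dose)
y = np.column_stack([dead, total - dead])
fit = sm.GLM(y, X,
             family=sm.families.Binomial(link=sm.families.links.Probit())).fit()

b0, b1 = fit.params                      # intercept, slope
lc50 = 10 ** (-b0 / b1)                  # fitted probit = 0 at 50% mortality
print(f"Estimated LC50: {lc50:.2f} g/g diet")
```

The LC50 falls where the fitted probit equals zero, i.e., where the predicted mortality is 50%.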
Bioassay of Tetramethrin+Dichlorvos
Three different concentrations (0.05, 0.15 and 0.25%, plus a control without insecticide but with acetone) were used against T. castaneum adults and larvae. Tetramethrin+DDVP was diluted in 10 ml of acetone. From the stock solution, the percentage of each insecticide concentration was converted into µg/ml, and a very small quantity was used against adults and larvae. When added into a vial, the insecticide vaporized within a few minutes owing to its volatile nature, so that it coated all the walls of the vial. The bioassay set was run in triplicate by introducing 20 adults into each vial with 1 g of diet consisting of semolina and yeast extract; similarly, 2nd instar larvae were introduced into each vial, in triplicate, with 1 g of diet, to observe the mortality rate and LC50 values over 3 days.
Bioassay of Shaf I Bacillus thuringiensis
Shaf I Bt cultures grown on Petri plates were scraped, washed three times with distilled water and centrifuged at 3000 rpm. A Shaf I Bt pellet was obtained, and the collected growth of the pure culture was centrifuged three times. After that, Bt was mixed into 1 g of diet of semolina and yeast extract, dried and then ground with pestle and mortar. Different concentrations (1.0, 1.5 and 2.0 g, plus a control without Bt but with agar) were used. The bioassay set was run in triplicate by introducing 20 adults into each vial with 1 g of diet consisting of semolina and yeast extract; similarly, 2nd instar larvae were introduced into each vial, in triplicate, with 1 g of diet, to observe the mortality rate and LC50 values over 3 days.
Combined bioassay
In the combined bioassay, all the elements were mixed together into three different concentrations, and the bioassay was performed against adult T. castaneum and third instar larvae separately: first combination (1.5 g B. glabra powder, 1.5 g KCl, 0.05% Tetramethrin+DDVP and 1 g Shaf I Bt), second combination (2.0 g B. glabra powder, 2.0 g KCl, 0.15% Tetramethrin+DDVP and 1.5 g Shaf I Bt) and third combination (2.5 g B. glabra powder, 2.5 g KCl, 0.25% Tetramethrin+DDVP and 2.0 g Shaf I Bt). The bioassay set was run in triplicate by introducing 20 adults into each vial with 1 g of diet consisting of semolina and yeast extract. Similarly, 20 larvae were introduced into each vial, in triplicate, with 1 g of diet, to observe the percentage mortality over 3 days, separately and in combination.
Analysis of data
The results of the bioassays were analyzed using probit analysis in SPSS 17.0 software. Lethal concentration (LC50) values were determined for each day and for all three concentrations.
Collection of B. thuringiensis from samples
Of the 300 samples collected, B. thuringiensis was isolated from bird droppings, grain dust and soil from various localities. Strain Shaf I was found in the largest amount.
Morphological and biochemical characterization of Shaf I B. thuringiensis
Turbidity of growth in LB medium confirmed the status of Bt. When Bt was streaked on T3 medium, it showed characteristic morphology: the colonies were white in color, sometimes dry, opaque, slippery, mucoid and smooth (Figure 3). After Gram and endospore staining, the spores appeared green and produced central or ellipsoidal parasporal crystals; the vegetative cells stained pink in the endospore stain and retained the purple color of crystal violet in the Gram stain, confirming their status as Gram-positive rods occurring in chains and singly (Figures 1 and 2, Table 1). In the 7% sodium chloride test, turbidity was observed after 14 days of incubation at 37°C. In the starch hydrolysis test, the starch appeared blue-black and a clear zone was visible after hydrolysis.
Bioassay of adults
A bioassay, or biological standardization, is a type of scientific experiment used to measure the effect of a substance on a test organism. T. castaneum is a suitable test organism for such bioassays.
Bioassay of Shaf I B. thuringiensis
The purpose of using bioassays based on an artificial diet is to provide the worker with a rapid and standardized procedure for estimating the activity of a microbial strain. At 1 g of B. thuringiensis, 35% mortality was observed after 3 days. Similarly, at 1.5 g of B. thuringiensis, 41.6% mortality, and at 2.0 g, 50% mortality was observed (Table 2). The regression line was equal or near to 1 (Figures 7 and 8). LC50 was determined by probit analysis for each day: at 24 h, across all three concentrations, the LC50 value was 8.7 g/g of artificial diet, while at 48 h and 72 h the LC50 values were 6.2 g/g and 8.2 g/g, respectively (Table 3).
Bioassay of potassium chloride
At the 1.5 g concentration, 25% mortality was observed after 3 days; at 2.0 g, 28.3%; and at 2.5 g, 38.3% (Table 4). The regression line was equal or near to 1 (Figures 9 and 10). The LC50 value across the three concentrations was 5.5 g/g of artificial diet at 24 h, 6.0 g/g at 48 h and 10.9 g/g at 72 h (Table 5).
Bioassay of Bougainvillea glabra
At all concentrations, no mortality was observed. B. glabra prolonged the developmental period of adult T. castaneum. No statistics were applied for LC50.
Bioassay of tetramethrin and DDVP
The synergistic effect of tetramethrin and DDVP was of high magnitude against T. castaneum adults. At 0.05%, mortality was 45%; at 0.15%, 51.6%; and at 0.25%, 58.3% (Table 6). The regression line was equal or near to 1 (Figures 11 and 12). LC50 was calculated for each of the 3 days separately: at 24 h, the LC50 across the three concentrations was 0.8; similarly, at 48 h and 72 h, the LC50 was 1.6 and 1.5 (Table 7).
Combined bioassay
In the combined bioassay, all the elements were mixed together into three different concentrations, and the bioassay was performed against adult T. castaneum. Percentage mortality differed among the three concentrations and increased at the higher concentration (Table 8).
Bioassay of larvae
Larvae of T. castaneum showed characteristic responses to the different concentrations of B. thuringiensis, B. glabra, potassium chloride and tetramethrin+DDVP, separately and in combination.
Bioassay of Shaf I B. thuringiensis
At 1 g of Bt, 40% mortality was observed after 3 days. Similarly, at 1.5 g of Bt, 45% mortality, and at 2.0 g, 51.6% mortality was observed (Table 9). The regression line was equal or near to 1 (Figures 13 and 14). LC50 was determined by probit analysis for each day: at 24 h, across all three concentrations, the LC50 value was 6.2 g/g, while at 48 h and 72 h the LC50 values were 8.7 and 8.2 g/g (Table 10).
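The "regression line was equal or near to 1" statements above plausibly refer to the goodness of fit of mortality percentage against concentration. Below is a minimal Python sketch of such a regression, using the larval Bt mortality figures quoted above (Table 9); reading the fit statistic as R² is our assumption, not the authors' stated definition.

```python
# Mortality-vs-concentration regression for the larval Bt bioassay (Table 9).
from scipy.stats import linregress

conc = [1.0, 1.5, 2.0]           # g Bt per g diet
mortality = [40.0, 45.0, 51.6]   # % mortality after 3 days

res = linregress(conc, mortality)
print(f"slope = {res.slope:.2f} %/g, intercept = {res.intercept:.2f}")
print(f"R^2 = {res.rvalue ** 2:.3f}")  # close to 1 indicates a near-linear response
```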
Bioassay of potassium chloride
At 1.5 g, 45% mortality was observed after 3 days; at 2.0 g, 50%; and at 2.5 g, 55% (Table 11). The regression line was equal or near to 1 (Figures 15 and 16). The LC50 value across the three concentrations was 8.7 g/g at 24 h, 9.2 g/g at 48 h and 9.0 g/g at 72 h (Table 12).
Bioassay of Bougainvillea glabra
At all concentrations, no mortality was observed. No statistics were applied for LC50.
Bioassay of Tetramethrin and DDVP
At 0.05%, the observed mortality was 50%; at 0.15%, 56.6%; and at 0.25%, 63.3% (Table 13). The regression line was equal or near to 1 (Figures 17 and 18). LC50 was calculated for each of the 3 days separately: at 24 h, the LC50 across the three concentrations was 0.6%, while at 48 h and 72 h the values were 1.6% and 1.5%, respectively (Table 14).
Combined bioassay
In the combined bioassay, three different concentrations were formed and the bioassay was performed against third instar larvae of T. castaneum. Percentage mortality differed among the three concentrations and increased at the higher concentration (Table 15).
DISCUSSION
In this study, B. thuringiensis acted as a biological insecticide and showed low mortality at the lower concentration and high mortality at the higher concentration. Regression was equal or near to 1. The adult LC50 value across all three concentrations was about 2.1 g/g, while the larvae showed an LC50 value of 1.8 g/g. In both cases, as the concentration increased, more insects died, whereas over time fewer insects died: on the first day a high number of insects died, as Bt, which is also safe for humans, was highly effective at the beginning, but on the 2nd and 3rd days its efficacy declined as it started degrading, so fewer insects died. The larvae showed remarkable results. The bioassays of potassium chloride for adults and larvae showed comparably different results in mortality percentage and LC50. Third instar larvae were more susceptible to the chemical and thus showed a greater percentage mortality than adults. Regression was equal or near to 1. The adults showed an LC50 value of 3.3 g/g across the three concentrations, while the larvae showed an LC50 value of 1.3 g/g. In the case of adults, potassium chloride caused mortality over a longer period of time, but in the case of larvae, mortality occurred within three days. The LC50 value of the larvae was lower than that of the adults, indicating that potassium chloride was more toxic to larvae than to adults. The bioassays of B. glabra for both adults and larvae showed similar results: no mortality was observed, and the insects were repelled by the B. glabra leaves. B. glabra prolonged the developmental period of T. castaneum. The bioassay of tetramethrin and DDVP showed strong efficacy against T. castaneum adults and larvae.
The estimated adult LC50 value was 0.2%, while the larvae showed an LC50 value of 0.1%. When two insecticides work together in synergism, their capacity to kill insects becomes higher. Tetramethrin does not show a residual effect, but DDVP shows a residual effect which may last for 3 to 4 days. Both insecticides, whether used at low or high concentration, killed adults and larvae of T. castaneum within 3-4 days and were effective against stored grain pests. As the concentration increased, more insects died, but the rate of mortality became lower day after day. They are very useful for integrated pest management strategies. In the combined bioassay, all the elements were mixed together at three different concentrations; in the case of adults, percentage mortality was low for the first and second combinations but high for the third combination, while in the case of larvae, all three combinations showed significant results and a high mortality rate. This is a new study in which combining all the essential elements together proved very effective against T. castaneum. All the elements are safe for humans and animals. Their synergistic effect can control not only the target stored-product pest but also non-target organisms. The present results can suggest new strategies for international pest management programs.
The crystal proteins of Bt have insecticidal properties against different pests. One crystal protein may be effective against many different kinds of pests. Likewise, boric acid is a chemical and cypermethrin an insecticide which have no residual effects and are safe for humans and other mammals as well as for the whole environment. In godowns, the spraying of insecticide kept the grain products safe from the different stored-product pests. Different kinds of formulations were made from these biopesticides, which helped in the control not only of stored-product pests but also of other forest, agricultural and household pests.
Figure 1. Gram staining results of rod-shaped and Gram-positive Shaf I Bacillus thuringiensis.
Figure 3. Petri plates showing streaking results of Shaf I Bacillus thuringiensis colonies grown on T3 medium.
Figure 4. Regression lines of mortality percentage versus different concentrations of Shaf I B. thuringiensis against adults of T. castaneum at different time intervals.
Figure 6. Regression lines of mortality percentage versus different concentrations of potassium chloride against adults of T. castaneum at different time intervals.
Figure 7. Regression lines of total mortality percentage versus three concentrations of potassium chloride against adults of T. castaneum.
Figure 9. Regression lines of total mortality percentage versus three concentrations of Tetramethrin and DDVP against adults of T. castaneum.
Figure 11. Regression lines of mortality percentage versus different concentrations of Shaf I B. thuringiensis against third instar larvae of Tribolium castaneum at different time intervals.
Figure 14. Regression lines of mortality percentage versus different concentrations of Tetramethrin and DDVP against third instar larvae of Tribolium castaneum at different time intervals.
Figure 15. Regression lines of mortality percentage versus three concentrations of Tetramethrin and DDVP against third instar larvae of Tribolium castaneum.
Figure 17. Regression lines of mortality percentage versus different concentrations of Tetramethrin and DDVP against third instar larvae of Tribolium castaneum at different time intervals.
Table 1. Gram staining and endospore staining of Shaf I Bacillus thuringiensis.
Table 2. Bioassay results of Shaf I B. thuringiensis at different concentrations showing percent mortality.
Table 4. Bioassay results of potassium chloride at different concentrations showing percent mortality.
Table 5. Toxicity of potassium chloride against T. castaneum showing LC50 value.
Table 6. Bioassay results of Tetramethrin+DDVP at different concentrations showing percent mortality.
Table 8. Comparison of individual and combined effects of Shaf I Bacillus thuringiensis, potassium chloride, Bougainvillea glabra and Tetramethrin+DDVP on T. castaneum larvae at 24, 48 and 72 h.
Table 9. Bioassay results of Shaf I B. thuringiensis at different concentrations showing percent mortality.
Table 11. Bioassay results of potassium chloride at different concentrations showing percent mortality.
Table 13. Bioassay results of Tetramethrin+DDVP at different concentrations showing percent mortality.
Table 15. Comparison of individual and combined effects of Shaf I Bacillus thuringiensis, potassium chloride, Bougainvillea glabra and Tetramethrin+DDVP on Tribolium castaneum larvae.
2018-12-07T17:00:56.211Z
2012-12-18T00:00:00.000
{ "year": 2012, "sha1": "f9430ede5459230876aeda3c6ddbd6df9ad8f961", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/AJMR/article-full-text-pdf/9407ED621141.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "f9430ede5459230876aeda3c6ddbd6df9ad8f961", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
81155638
pes2o/s2orc
v3-fos-license
Effect of intraoperative trypan blue on lens epithelial cells – Histomorphological analysis
Introduction
Trypan blue is an acid azo dye commonly used as a stain to distinguish viable from non-viable cells. It is a vital stain used intraoperatively during cataract surgery to stain the external surface of the anterior lens capsule for better visualization.
Aim
To analyze the histomorphological effects of trypan blue on lens epithelial cells and the basement membrane on direct exposure, by staining the internal surface of the anterior lens capsule during small incision cataract surgery.
Methods
Analytical cross-sectional case-control study. Anterior capsule specimens of 14 patients undergoing small incision cataract surgery at the Department of Ophthalmology, Govt Medical College Hospital, Thrissur, were studied. Two specimens of anterior capsule taken from the same eye formed the case and control. The control specimen (sample A) was removed first, after routine external staining with trypan blue 0.06% (w/v) for 10 seconds. The stain was washed off with balanced salt solution in every case. Trypan blue was then injected under the remaining anterior capsule, and the case (test) specimen (sample B) was obtained after direct contact of trypan blue with the internal surface (lens epithelial cells) for 1 minute. Histomorphological (qualitative and quantitative) examination of both specimens was done.
Results
Qualitative data analysis was done with EPI INFO software v.7. Intactness of LECs throughout the length was statistically significant in sample A (p = 0.000027). Partial and complete detachment of lens epithelial cells, degeneration, and nuclear smudging were significantly higher in sample B. Qualitative analysis of the basement membrane showed significant edema of the basement membrane in sample B. Basement membrane splitting observed in sample B was not statistically significant. Quantitative data were analyzed using the independent t-test. There was a statistically significant decrease in cell density in sample B, with a p-value of less than 0.05.
Discussion
Our study demonstrated that direct staining of the internal surface of the anterior capsule with trypan blue affected LECs and the basement membrane. There were a reduction in cell density, irreversible degeneration of lens epithelial cells, and basement membrane edema. Hence, treating the internal surface of the capsular bag with trypan blue may reduce the incidence of posterior capsular opacification.
Introduction
Trypan blue is an acid azo dye commonly used as a stain to distinguish viable from non-viable cells. 1 The use of trypan blue in ophthalmology dates back to the 1970s, when it was used to stain the corneal endothelium preoperatively. 2 It emerged in the late 1990s as one of the known staining materials used intraoperatively for visualization of the anterior lens capsule during cataract surgery. 3 The FDA approved staining of the anterior lens capsule with trypan blue in 2004 (NDA 21-670). The routine method is staining the external surface of the anterior capsule for 10 seconds, for better visualization during capsulorrhexis and for capsule-cortex differentiation during cortical aspiration.
The lens epithelium is a single layer of cuboidal cells located on the inner surface of the anterior capsule. These cells are not present in the posterior capsule. Lens epithelial cells are responsible for the production of lens fibres. The cells show morphological variations according to their location: the centrally located cells are polygonal, those in the pre-equatorial zone are cuboidal, and at the equator they are columnar.
These cells serve as progenitors for new lens fibres by a mechanism of epithelial-mesenchymal transition. Posterior capsular opacification (PCO) usually develops from the lens epithelial cells that remain on the lens capsule after cataract surgery. Direct exposure in vivo, by staining the internal surface of the anterior lens capsule with trypan blue during small incision cataract surgery (SICS), has been found to destroy the lens epithelial cells in various studies. In this study, we analyze the histomorphological (qualitative) and quantitative effects of trypan blue on lens epithelial cells (LECs) on direct exposure in vivo, by staining the internal surface of the anterior lens capsule during SICS.
Methodology
This was an analytical cross-sectional case-control study done in the Department of Ophthalmology and Department of Pathology, Govt. Medical College, Thrissur, over a period of 3 months after obtaining ethical committee clearance.
Subjects
Patients undergoing small incision cataract surgery for congenital cataract, presenile cataract and immature senile cataract were included in this study after informed consent. Cases of lens-induced glaucoma, hypermature cataracts, complicated cataracts, traumatic cataracts and subluxated cataracts were excluded. Samples inadequate for histomorphological analysis were also excluded from the study. Sample size was calculated using the formula for an analytical comparative study. Anterior capsule specimens (case and control from the same patient) of 14 patients were studied: 4 cases were congenital cataract, 4 cases were presenile cataract and 6 cases were immature senile cataract.
Specimen collection
During SICS, after routine external staining of the anterior lens capsule with trypan blue 0.06% (w/v) for 10 seconds, continuous curvilinear capsulorrhexis (CCC) was initiated using a double-bend 26 G needle and continued halfway, and that half of the anterior capsule was removed by cutting with Vannas scissors. This was kept in a formalin bottle and labeled as sample A (control). Then, trypan blue 0.06% (w/v) was injected under the remaining half of the anterior lens capsule and the LECs were directly exposed to the dye for 1 min. Capsulorrhexis was completed to remove that half of the anterior capsule, which was kept in another formalin bottle and labeled as sample B (case/test). This comparative cross-sectional study was conducted between two samples from the same patient's lens capsule, thereby eliminating all the other variables, such as the age and sex of the patient, type of cataract, etc. All surgeries were done by the same surgeon using the same technique, further minimizing bias. Both specimens were sent for histomorphological (qualitative and quantitative) analysis.
Pathology specimen preparation and parameter observation
The specimens were received in the pathology department as two samples from each patient in separate bottles, as sample A (control) and sample B (case/test). Routine processing for histopathological examination was done; the specimens were embedded in paraffin wax, and 5 micron thick sections were stained with hematoxylin and eosin stain. Histomorphological observation was done under a Leica-DM 750 image analysis microscope. The following observations were made in both control and case samples.
Results
Anterior capsule specimens of 14 patients were studied.
The histomorphology of the lens epithelial cells, the histomorphology of the basement membrane and quantitative morphometry were analyzed and compared between sample A and sample B. Qualitative data (histomorphology of lens epithelial cells and basement membrane) analysis was done with EPI INFO software version 7. Data were analyzed using proportions, and the association of different variables was analyzed using the chi-square test (Tables 1 and 2).
Qualitative analysis of LECs showed statistically significant intactness of LECs throughout the length of the capsule in the control group, sample A. Partial and complete detachment of LECs, degeneration of LECs and nuclear smudging were significantly higher in the case group, sample B. There was no statistically significant difference between the groups regarding intermittent fall-off of cells. Qualitative analysis of the basement membrane showed statistically significant edema in sample B. The basement membrane was intact in most of the samples in both groups. Basement membrane splitting, observed in 28.5% of sample B, was not statistically significant.
Quantitative data: the number of LECs per 100 micrometer length of the capsule and the thickness of the basement membrane were analyzed using means and standard deviations. The comparison was analyzed using the independent t-test, with the significance level kept at 5% (Chart 1; a minimal sketch of these two tests is given below). There was a statistically significant difference in the number of lens epithelial cells in the most cellular area, the number of cells in the least cellular area and the average cells per 100 micrometer length of capsule; LECs were significantly fewer in sample B. The average width of the basement membrane was greater in sample B, but the difference was not statistically significant (Table 3).
Discussion
Posterior capsular opacification (PCO), popularly known as "after cataract", is a major problem following cataract surgery, especially in young patients, even with modern microsurgical techniques. It has been demonstrated that the remaining lens epithelial cells (LECs) of the anterior capsular rim and equatorial region of the capsular bag undergo hyperplasia and change to spindle-shaped myofibroblast-like cells. 4 Histopathologically, these changes in the epithelial cells are accompanied by the formation of multi-layered basement membrane material composed of proteoglycans and collagen fibrils. 5 This PCO causes visual axis obscuration, capsular contraction, IOL decentration, etc.
Studies have shown that trypan blue has some effect on the count and viability of LECs, the structure of the lens capsule, and the incidence of PCO. However, most of these studies were done after external staining of the anterior capsule. Since LECs are located on the internal surface of the anterior lens capsule, there was no direct exposure of the cells to the dye. In our study, internal staining of the anterior capsule with trypan blue 0.06% (w/v) resulted in direct exposure of LECs to the dye for a period of 1 minute in the case, sample B. The effects were compared with those in the control, sample A, taken from the same patient, which was removed first after routine external staining for 10 seconds. 'Intactness of LECs throughout the length of the anterior capsule' was significantly higher in the control group, sample A, where only routine external staining was done. There were statistically significant histomorphological changes, such as detachment of cells from the basement membrane, irreversible degenerative changes in LECs and basement membrane edema, in sample B. The decrease in cell density in sample B was also statistically significant.
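As a hedged illustration of the two comparisons described above (the chi-square test for qualitative LEC features and the independent t-test for cell density), here is a minimal Python sketch; all counts and density values are hypothetical, since the per-specimen data live in the paper's tables.

```python
# Illustrative versions of the chi-square and independent t-test comparisons.
from scipy.stats import chi2_contingency, ttest_ind

# 2x2 table (hypothetical counts): rows = sample A / sample B,
# columns = LECs intact throughout / not intact.
table = [[12, 2],
         [3, 11]]
chi2, p_chi, dof, expected = chi2_contingency(table)
print(f"chi-square p-value: {p_chi:.5f}")

# Cells per 100 micrometers of capsule (hypothetical measurements).
density_a = [14, 15, 13, 16, 14, 15]   # sample A (external staining only)
density_b = [8, 9, 7, 10, 9, 8]        # sample B (direct internal staining)
t_stat, p_t = ttest_ind(density_a, density_b)
print(f"independent t-test p-value: {p_t:.5f}")  # compared against the 5% level
```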
All these changes may be due to the effect of the dye, since all other variables affecting the study were excluded. A similar study was done by Pankaj Sharma et al. In their study, "Trypan blue injection into the capsular bag during phacoemulsification: Initial postoperative posterior capsule opacification results", they found that intraoperative injection of trypan blue 0.1% into the capsular bag reduced PCO after phacoemulsification with hydrophilic acrylic IOL implantation. 6 Here they performed internal staining of the anterior lens capsule, though histomorphology was not assessed. Our results explain their finding very well. In another study, Nanavaty MA et al. assessed the effect of anterior capsule staining with trypan blue 0.0125% on the density and viability of the lens epithelial cells and concluded that staining the anterior capsule with trypan blue affected the density and viability of LECs. 7 That study was done after routine external staining of the anterior capsule. Our study also showed a definite decrease in cell density, though viability was not directly assessed.
The effects of trypan blue staining on the mechanical characteristics of the anterior lens capsule have been investigated by Dick et al. 8 Staining led to a decrease in elasticity but an increase in the stiffness of the membrane when measured with a modified rheometer. Minu M. Mathan et al. studied the effects of the concentration of trypan blue and the length of exposure to trypan blue on the human anterior lens capsule using Raman spectroscopy. They found that trypan blue staining of the lens capsule leads to increased cross-linking and a reduction in hydrogen bonding, resulting in increased capsule stiffness and reduced elasticity. We also observed changes in the collagen of the basement membrane in the form of edema and splitting. André Luís F. Portes, MD, et al. found in their study that TEM images of subcapsular epithelial cells showed mitochondrial rupture, dilation of the cisterns of the endoplasmic reticulum, increased cytoplasmic and nuclear electron density, and abnormalities in the nuclear profile of trypan blue-stained cells. 10 That study supports the hypothesis that staining with trypan blue 0.1% can help reduce the incidence of posterior capsule opacification after cataract surgery.
In our study, under the Leica-DM 750 image analysis microscope, we found partial and complete detachment of LECs from the basement membrane, degeneration of cells (cytoplasmic vacuolation, cell membrane rupture) and nuclear smudging (loss of character and blurring). Our comparative study results showed that LECs are much more affected by adequate direct exposure to trypan blue than by routine external staining, as there were significantly increased irreversible degeneration of LECs, reduced density of LECs and basement membrane edema in sample B. This supports our hypothesis that direct staining of LECs with trypan blue can reduce the incidence of posterior capsule opacification after cataract surgery.
Conclusion
In our study, we found that partial and complete detachment of LECs, degeneration of LECs, nuclear smudging and basement membrane edema were significantly higher in the case group, where internal staining and direct exposure of LECs to trypan blue were done for 1 minute. This shows that adequate direct exposure of the internal surface of the anterior capsule to trypan blue promoted irreversible degeneration of LECs and reduced the density of LECs.
Chart 1. Comparative analysis of quantitative data.
Hence, intraoperative exposure of the internal surface of the capsular bag to trypan blue may be helpful in enhancing the loss of LECs and preventing posterior capsular opacification, as it has been demonstrated that the lens epithelial cells remaining in the capsular bag are the main cause of posterior capsular opacification. More studies are needed for further evaluation of this hypothesis.
2019-03-18T14:06:42.791Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "4be3706e57346ed73bf994e4280f1460cfedf636", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.sjopt.2018.12.006", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "c48c18decb94c96112760fa12f36a3f37e67d88c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
210181530
pes2o/s2orc
v3-fos-license
COLR Acinetobacter baumannii sRNA Signatures: Computational Comparative Identification and Biological Targets
Multidrug-Resistant (MDR) and Extensively Drug-Resistant (XDR) Acinetobacter baumannii (Ab) represent a serious cause of healthcare-associated infections worldwide. Currently, the available treatment options are very restricted, and colistin-based therapies are last-line treatments for these infections, even though colistin-resistant (COLR) Ab have so far rarely been isolated. In bacteria, small non-coding RNAs (sRNAs) have been implicated in the regulatory pathways of different biological functions; however, nothing is known about the role of sRNAs in the biological adaptation of COLR Ab. Our study investigated two Italian XDR isogenic colistin-susceptible/resistant (COLS/R) Ab strain-pairs to discover new sRNA signatures. Comparative sRNA transcriptome (sRNAome) analyses were carried out by Illumina RNA-seq using both a Tru-Seq and a Short-Insert library, whilst assembly against the Ab ATCC 17978 and ACICU Reference Genomes, mapping, annotation and statistically significant differential expression (q-value ≤ 0.01) of the raw reads were performed with the Rockhopper tool. A computational filtering, sorting only similarly statistically significant differentially expressed (DE) sRNAs mapping on the same gene in both COLR Ab isolates, was conducted. The COLR vs. COLS sRNAome, analyzed by integrating the DE sRNAs obtained from the two different libraries, revealed some statistically significant DE sRNAs in COLR Ab. In detail, we found: (i) two different under-expressed cis-acting sRNAs (AbsRNA1 and AbsRNA2) mapping in antisense orientation to the 16S rRNA gene A1S_r01; (ii) one under-expressed cis-acting sRNA (AbsRNA3) targeting the A1S_2505 gene (hypothetical protein); (iii) one under-expressed microRNA-size small RNA fragment (AbsRNA4) and its pre-microAbsRNA4 targeting the A1S_0501 gene (hypothetical protein); and (iv) an over-expressed microRNA-size small RNA fragment (AbsRNA5) and its pre-microAbsRNA5 targeting the A1S_3097 gene (signal peptide). Custom TaqMan® probe-based real-time qPCRs validated the expression patterns of the selected sRNA candidates shown by RNA-seq. Furthermore, analysis of the ΔA1S_r01 and ΔA1S_2505 sRNA-target mutants, as well as of the A1S_3097-overexpressing mutant, revealed no effects on colistin resistance. Our study, for the first time, identified the sRNAome signatures of clinical COLR Ab, with a computational prediction of their targets related to protein synthesis, host-microbe interaction and other different biological functions, including biofilm production, cell-cycle control, virulence and antibiotic resistance.
INTRODUCTION
The Multidrug-Resistant (MDR) Gram-negative pathogens recently included in the WHO black list (Tacconelli et al., 2018), i.e., Acinetobacter, Pseudomonas and various Enterobacterales (including Klebsiella, E. coli, Serratia and Proteus), represent a serious health problem worldwide. In particular, focusing on Acinetobacter baumannii infections, the therapeutic options for their treatment are very limited and mainly draw on colistin-based therapies (Hancock and Chapple, 1999; Zavascki et al., 2007; Vila and Pachón, 2012). Therefore, the increasing use of colistin (COL) and the spread of colistin-resistant (COLR) strains can consequently determine an increase in the onset of polymyxin resistance (Ko et al., 2007; Gales et al., 2011). Small non-coding RNAs (sRNAs) [∼30-300 nucleotides (nt) in length] have been recognized as a major class of regulatory molecules in bacteria (Barquist and Vogel, 2015).
They exert a regulatory function on gene expression, by base pairing with their related target mRNAs, modulating transcription, translation, mRNA stability, DNA maintenance and silencing. Functionally, sRNAs are involved in the regulation of a wide range of physiological responses, reacting to environmental signals such as pH or temperature shifts (Wassarman, 2002). They can help modulate changes in cellular metabolism to optimize the use of available nutrients and improve the probability of survival, as well as contributing to virulence (Chabelskaya et al., 2010; Gottesman and Storz, 2011; Sayed et al., 2012; Álvarez-Fraga et al., 2017; Kröger et al., 2018). sRNAs act in trans or in cis depending on the position of the transcription start of the sRNA with respect to the gene it regulates. Trans-encoded sRNAs are transcribed at a genetic locus separate from the gene that they regulate, and they often work via imperfect base pairing with their target mRNAs. Conversely, cis-acting sRNAs are transcribed from the same genetic locus that they regulate, though in an antisense orientation to the target gene. Since they are transcribed proximally to their targets, cis-antisense RNAs share a perfect match with their targets, allowing duplex formation and stringent regulation (Thomason and Storz, 2010; Chang et al., 2015; Georg and Hess, 2018). A third class of small RNAs, the microRNA-size small RNA fragments (∼15-26 nt), has been identified in a few different species of bacteria by next generation sequencing (NGS). These molecules could originate from the cleavage of a longer precursor (pre-sRNA), in analogy with the maturation of eukaryotic microRNAs (Bloch et al., 2017). A few mechanisms have been proposed to underlie colistin resistance in Ab (Li et al., 2006; Adams et al., 2009; Falagas et al., 2010; Moffatt et al., 2010; Arroyo et al., 2011; Cai et al., 2012; Hood et al., 2013; Parra-Millán et al., 2018; Cafiso et al., 2019); however, no investigations have been conducted so far on the role that sRNAs exert on the biological adaptations accompanying colistin resistance acquisition. Few studies have analyzed the sRNA contribution to A. baumannii biology and antimicrobial resistance. Weiss et al. (2016) offer a detailed view of the sRNA content of Ab and provide new insights into the evolution and role of these regulatory molecules. In detail, using RNA-seq, 78 Ab sRNAs were identified in the AB5075 background, grouped into six classes of similar sRNAs, with one particularly abundant and homologous to the regulatory C4 antisense RNAs found in bacteriophages. Sharma et al. (2014) found three new sRNAs, namely AbsR11, AbsR25 and AbsR28, hypothesizing an sRNA involvement in the regulation of antibiotic resistance in bacteria, specifically in cryptic A. baumannii. Our study aimed to gain new insight into the sRNA signatures and biological targets of clinical COLR Ab by high-throughput RNA-seq. Our data define the distinctive signatures of COLR Ab sRNAs, revealing computationally predicted targets involved in the protein synthesis machinery, host-microbe interactions, and pathways involved in biofilm production, cell cycle control, virulence and antibiotic resistance.
RNA-Seq
To optimize the data, RNA-seq was carried out using the Illumina MiSeq standard pipeline on two biological replicates consisting of two different libraries for each strain, i.e., a Single-end library with 50 bp reads (Short-Insert library) and a Paired-end read library with 150 bp reads (Tru-Seq library).
For RNA extraction, a single colony of each Ab strain was grown in 10 ml of cation-adjusted Mueller-Hinton broth (Ca-MHB) (Becton Dickinson) and incubated at 37°C overnight. The overnight cultures were then diluted 1:50 in 50 ml of Ca-MHB in a sterile 250 ml flask and incubated with shaking at 250 rpm at 37°C. Bacterial pellets were harvested at mid-log growth phase (OD600 0.6, ∼1.5 × 10^7 CFU/ml, 18 h) according to the strain growth curves (Supplementary Figure S1). RNA extraction was performed using the NucleoSpin RNA kit (Macherey-Nagel, Düren, Germany) following the manufacturer's protocol, with minor modifications according to previously published protocols (Cafiso et al., 2019). Total RNA quality was checked with the 2200 TapeStation RNA ScreenTape device (Agilent, Santa Clara, CA, United States), and RNA concentrations were determined using an ND-1000 spectrophotometer (NanoDrop, Wilmington, DE, United States). RNA Integrity Number (RIN) values, ranging from 1 to 10 with 10 being the highest quality, were determined by the Agilent TapeStation 2200 system. Only RNAs with preserved 16S and 23S peaks and with RIN values >8 were used for library construction. RIN values >8 indicated intact, high-quality RNA samples usable for downstream applications, as previously published (Fleige and Pfaffl, 2006).
Library Preparation and Sequencing
The Tru-Seq library (TS) was a Paired-end library with reads of 150 bp and an average insert size of 350/400 bp. After sequencing, raw reads were processed using FastQC v0.11.2 to check data quality, and reads were then trimmed with Trimmomatic v.0.33.2 to remove the adapters from Paired-end reads. A minimum base quality of 15 over a 4-base sliding window was required. Only trimmed reads with a length above 36 nucleotides were included in the downstream analysis (Cafiso et al., 2019).
The Short-Insert library (SI) was a Single-end stranded library with reads of 50 bp. After sequencing, raw reads were processed using FastQC v0.11.2 to evaluate data quality. Reads were then trimmed using Trimmomatic v.0.33.2 to remove sequencing adapters from Single-end reads, requiring a minimum base quality of 15 (Phred scale) and a minimum read length of 15 nucleotides. Only trimmed reads were included in the downstream analysis (Cafiso et al., 2019).
To obtain the data, analyses were carried out with default parameters and verbose output. Rockhopper normalizes read counts for each sample using the upper-quartile gene expression level. Starting from the p-values calculated according to the Anders and Huber approach, differentially expressed genes (DEGs) were selected as statistically significant by computing q-values ≤ 0.01 based on the Benjamini-Hochberg correction, with a false discovery rate of <1%. In addition, Rockhopper is a tool that uses biological replicates when available, and surrogates when biological replicates for two different conditions are unavailable, considering the two conditions under investigation as surrogate replicates for each other (McClure et al., 2013).
Comparative sRNA Prediction
A double computational filtering was carried out on the library analysis outputs: first for the sRNAs differentially expressed in the COLR strains versus their COLS counterparts, and then to identify only those mapping on the same gene in both COLR Ab isolates with a statistically significant q-value ≤ 0.01 (a sketch of this filtering is given below).
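The double cross-filtering can be pictured as a table join on the target gene. The following is a minimal pandas sketch of that logic under stated assumptions: the file names and column labels (qValue, targetGene, log2FC) are hypothetical stand-ins for the actual Rockhopper output layout.

```python
# Sketch of the double computational filtering of DE sRNAs (assumed columns).
import pandas as pd

pair1 = pd.read_csv("pair1_rockhopper_sRNAs.tsv", sep="\t")  # hypothetical files
pair2 = pd.read_csv("pair2_rockhopper_sRNAs.tsv", sep="\t")

Q_CUTOFF = 0.01  # q-value threshold used in the study

sig1 = pair1[pair1["qValue"] <= Q_CUTOFF]    # filter 1: significant DE sRNAs
sig2 = pair2[pair2["qValue"] <= Q_CUTOFF]

# Filter 2: keep sRNAs whose target gene is significant in BOTH strain-pairs
# and whose expression changes in the same direction (both up or both down).
shared = sig1.merge(sig2, on="targetGene", suffixes=("_p1", "_p2"))
concordant = shared[(shared["log2FC_p1"] > 0) == (shared["log2FC_p2"] > 0)]

print(concordant[["targetGene", "log2FC_p1", "log2FC_p2"]])
```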
Determination of Small RNA Functional Categories
Functional categories of small RNA target genes were investigated with different bioinformatic tools, including BLAST, the PANTHER (Protein ANalysis THrough Evolutionary Relationships) Classification System, the Gene Ontology (GO) Consortium, ExPASy and KEGG.
RNA-Seq Data Accession Number
RNA-seq data of the two different libraries have been deposited in the NCBI GEO database under study accession no. GSE109951.
I-Tasser ab initio Structure Modeling
For the I-Tasser (Iterative Threading ASSEmbly Refinement) analysis (Zhang, 2009; Roy et al., 2010, 2012; Yang and Zhang, 2015), the first N-terminal 120 AA of the Ab ATCC 17978 AbsRNA2 target (A1S_2505) were selected as a target to obtain more significant predictions, as the C-terminal part of the A1S_2505 protein was excluded because of its low coverage with threading templates identified from the PDB library: MTYQYHDESIVTELPEDTVFVFGSNMAGQHGSGAARVASQHFGAVEGVGRGWAGQSFAIPTLNEHIQQMPLSQIEHYVEDFKVYAKNHPKMKYFVTALGCGIAGYKVSEIAPLFKGIHHN. For the analysis of the microAbsRNA4 target (A1S_0501), the whole FASTA sequence of the protein was used as a template.
Validation of sRNA-Seq Expression
AbsRNA3 and microAbsRNA5 were selected for validation since they are representative of under-expression and over-expression, respectively. In particular, their RNA-seq expression levels were validated by Custom TaqMan® Small RNA Assays as follows: a single colony of each strain was grown in 10 ml of Ca-MHB (Becton Dickinson) and incubated at 37°C overnight. The overnight cultures were diluted 1:50 in 50 ml of Ca-MHB in a sterile 250 ml flask and incubated with shaking at 250 rpm under normal atmospheric conditions at 37°C. Bacterial pellets were harvested in mid-log growth phase, lysed by adding lysozyme (10 mg/ml) and incubated for 1 h at 37°C. Small RNA extraction was performed using the mirVana™ miRNA Isolation Kit (Ambion, Austin, TX, United States) according to the manufacturer's protocol, following the enrichment procedure for small RNAs. Extracted sRNAs were quantified using an Eppendorf BioPhotometer D30 to assess their quality and to dilute them properly to the amount suggested by the Custom TaqMan® Small RNA Assays protocol (Applied Biosystems™); then a stem-loop qRT-PCR, one of the most commonly used real-time PCR approaches for quantifying sRNAs, was performed. The quantification assay was divided into two steps: (i) RNA was reverse-transcribed into cDNA using a stem-loop specific primer, and (ii) the RT product was quantified by probe-based real-time qPCR. The stem-loop specific primer used for RT and the specific TaqMan® probes and primers used for real-time qPCR were provided by the Custom TaqMan® Small RNA Assay Design Tool on the Applied Biosystems™ website. All real-time qPCRs were performed in triplicate, using the Agilent AriaMx Real-Time PCR System, with three different biological replicates, using one of the thermal profiles suggested by the Custom TaqMan® Small RNA Assays protocol, which provided a first enzymatic activation step of 95°C for 10 min, followed by 40 cycles of 95°C for 15 s and 60°C for 60 s (Salone and Rederstorff, 2015). The expression levels of AbsRNA3 and microAbsRNA5 are shown as the increment/decrement fold-change (FC) in the COLR (1-R, 2-R) vs. COLS strains (1-S, 2-S) in RNA-seq and real-time qPCR.
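A common way to express stem-loop qPCR results as a fold-change is the 2^-ΔΔCt method; the sketch below shows that arithmetic for one sRNA under stated assumptions. The Ct values and the normalization to a reference RNA are illustrative, as the text does not detail the normalizer used.

```python
# Illustrative 2^-(ddCt) fold-change for one sRNA (COLR vs. COLS).
import numpy as np

def fold_change(ct_target_r, ct_ref_r, ct_target_s, ct_ref_s):
    """Fold-change of a target sRNA in COLR vs. COLS, normalized to a reference RNA."""
    ddct = (ct_target_r - ct_ref_r) - (ct_target_s - ct_ref_s)
    return 2.0 ** -ddct

# Triplicate Ct means (hypothetical values).
fc = fold_change(np.mean([27.1, 27.3, 27.0]), np.mean([18.2, 18.1, 18.3]),
                 np.mean([24.0, 24.2, 24.1]), np.mean([18.0, 18.2, 18.1]))
print(f"AbsRNA3 fold-change (COLR vs. COLS): {fc:.2f}")  # < 1 means under-expressed
```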
Statistics
AbsRNA3 and microAbsRNA5 expression levels found by Custom TaqMan® probe-based real-time qPCRs were expressed as means ± standard deviations and analyzed by one-way analysis of variance (ANOVA) using the online Free Statistics Calculators by Daniel Soper (Soper, 2019), considering a p-value ≤ 0.01 as statistically significant.
Ab ATCC 17978 Mutant Construction
Owing to the difficulty of manipulating the clinical A. baumannii strain-pairs, related to the lack of antibiotic markers for mutant selection, the A. baumannii ATCC 17978 strain was used to generate A1S_r01 and A1S_2505 mutants, as well as to overexpress the A1S_3097 gene. Gene inactivation was carried out as previously described (Aranda et al., 2010). Briefly, an internal fragment of the target gene was PCR-amplified from the A. baumannii ATCC 17978 genome using the appropriate primers (listed in Table 1). The internal fragment was cloned into the pCR-BluntII-TOPO plasmid (Invitrogen), introduced by electroporation into Escherichia coli DH5α (Clontech) and selected on kanamycin-containing LB plates. Purified plasmids were then introduced by electroporation into the A. baumannii ATCC 17978 strain and selected on kanamycin-containing plates. Recombinant clones were confirmed by sequencing (Macrogen) of the PCR products obtained using the appropriate primers (Table 1). For overexpression, the A1S_3097 gene was cloned, using the indicated primers (Table 1), into the XbaI-NcoI sites of the pET-RA vector (Aranda et al., 2010). The recombinant plasmid was introduced into E. coli DH5α and, once correct construction was verified by both PCR and sequencing (Macrogen), into ATCC 17978. Finally, A. baumannii transformants overexpressing the A1S_3097 gene were selected on rifampicin- and kanamycin-containing plates and confirmed by PCR with the pETRAFW and pETRARV primers (Table 1).
Comparative Transcriptome Analysis
To define the characterizing sRNA traits of the two COLR vs. COLS clinical Ab strains, a comparative analysis of the DE sRNAs was conducted by a computational double cross-filtering. As shown in Tables 2 and 3 and Supplementary Table S1, the statistically significant comparative filtering analysis of the sRNAome, sorting for DE sRNAs of the COLR vs. COLS Ab parental strains, returned two different under-expressed cis-acting sRNAs (AbsRNA1 and AbsRNA2) mapping in antisense orientation to the 16S rRNA gene A1S_r01, one under-expressed cis-acting sRNA (AbsRNA3) targeting the A1S_2505/ACICU_02783 gene (hypothetical protein), one under-expressed microRNA-size small RNA fragment (AbsRNA4) and its pre-microAbsRNA4 targeting the A1S_0501 gene (hypothetical protein), as well as an over-expressed microRNA-size small RNA fragment (AbsRNA5) and its pre-microAbsRNA5 targeting the A1S_3097 gene, coding a signal peptide involved in cytokinin biosynthesis. In detail, the two different cis-acting sRNAs, mapping on the Ab ATCC 17978 Reference Genome, targeted two different positions (Table 2) of the A1S_r01 gene, with a size of 35 bp in strain-pair 1 (AbsRNA1) and 39 bp in strain-pair 2 (AbsRNA2); no mapped regions were found on the Ab ACICU Reference Genome. The 70-75 nt cis-acting AbsRNA3 mapped at an analogous position on both the Ab ATCC 17978 and Ab ACICU Reference Genomes, targeting the A1S_2505/ACICU_02783 gene in both Ab strain-pairs.
The 107 nt pre-microAbsRNA4 covering the A1S_0501 gene, mapping on the Ab ATCC 17978 Reference Genome, and its smaller fragment of 21 nt (microAbsRNA4) were found in Ab strain-pair 2 and in Ab strain-pair 1, respectively. Similarly, a 228 nt pre-microAbsRNA5 targeting the A1S_3097 gene in Ab strain-pair 2 and its inner fragment of 20 nt in Ab strain-pair 1 were found. Furthermore, both Ab strain-pairs presented the same aforementioned sRNAs with similar expression profiles (over- or under-expression), though the q-value did not, in some cases, allow the same fragments to be returned in both strain-pairs, considering the two different genomic annotations. Moreover, the only sRNA with a statistically significant expression on the Ab ACICU Reference Genome was ACICU_02783 (AbsRNA3) in both strain-pairs (Supplementary Table S1). The sRNA nucleotide positions (transcription start and stop) reported by the Rockhopper tool are shown in Table 2. In addition, none of these sRNAs targeted the 5′ or 3′ untranslated regions (UTRs) of their target genes.

I-Tasser

To predict the putative role of AbsRNA3, an A1S_2505/ACICU_02783 conserved domain (CD) BLAST search and an I-Tasser ab initio protein structure prediction were computationally investigated. The three-dimensional structure of a protein can be very informative and useful for understanding the functional characteristics of proteins with unknown functions, because the structure provides the precise molecular details that often facilitate experimental characterization of an expected function. In cases in which there is no expected function, the structure of a protein can be used to facilitate functional predictions by using the structure as a search template against better-characterized proteins that share regions of structural similarity (Kemege et al., 2011). The CD-BLAST search provided a match with the PHA00684 superfamily (cl10259) domain, related to a protein of unknown function. On the contrary, analyzing the concordances of the highest significant prediction of the I-Tasser TM-align structural alignment and the COACH predicted biological function, we resolved the structure and biological function as similar to the orphan macrodomain protein (human C6orf130) with O-acyl-ADP-ribose deacylase activity. In particular, the closest structural similarity of the targeted A1S_2505 was the PDB hit 2lgrA (TM-score 0.925), matching the human protein C6orf130, previously published as an orphan macrodomain protein with O-acyl-ADP-ribose deacylase activity, which catalyzes the deacylation of O-acetyl-ADP-ribose, O-propionyl-ADP-ribose, and O-butyryl-ADP-ribose to produce ADP-ribose (ADP-r) with acetate, propionate, and butyrate, respectively. Due to this structural similarity, we can speculate that A1S_2505/ACICU_02783 could have a function similar to that of an O-acyl-ADP-ribose deacylase. This structural prediction was also supported by the COACH predicted biological function, 2l8rA, defining the human protein C6orf130 in complex with ADP-ribose (C-score 0.59), matching the O-acetyl-ADP-ribose deacetylase receptor binding the ADP-ribose at the AA residues G19, D20, L21, F22, H32, C33, I34, S35, R39, A42, I44, A45, L47, A87, P118, R119, I120, G121, C122, G123, L124, D125, Y150, and L152, which represent the binding sites.
This ligand binding site (BL0101984) showed the GO molecular functions of purine nucleoside binding (GO:0001883), hydrolase activity (GO:0016787) and deacetylase activity (GO:0019213), as well as the GO biological process of the purine nucleoside metabolic process (GO:0042278). Regarding the computational prediction of the putative role of microAbsRNA4, its target, the A1S_0501 hypothetical protein, was annotated as an integral membrane component (GO:0016021), whilst the CD BLAST search provided a CD of the cytoskeletal protein RodZ containing Xre-like HTH and DUF4115 domains, related to cell-cycle control, cell division, and chromosome partitioning (cl34261 superfamily). This result was also supported by the I-Tasser predictions. By LOMETS, the A1S_0501 protein showed homology, with the highest Norm Z-score (1.71) and 0.33 coverage, with the 2wus hit, referred to as the bacterial structural protein actin MreB, which can be complexed with the cell shape protein RodZ. As regards the A1S_0501 GO terms and the consensus GO prediction obtained from I-Tasser, the A1S_0501 protein showed the molecular functions of DNA polymerase activity (GO:0034061) (GO-score 0.38) and DNA binding (GO:0003677) (GO-score 0.34) and the biological process of nucleic acid metabolism (GO:0090304) (GO-score 0.38).

Validation of sRNA Expression Profile

Custom TaqMan® probe-based real-time qPCRs, dedicated to the analysis of bacterial sRNAs, validated and confirmed the RNA-seq expression profiles of the two selected sRNA candidates, AbsRNA3 and microAbsRNA5, in both Ab strain-pairs, as shown in Figure 1. In detail, AbsRNA3 showed statistically significant under-expression (p-value ≤ 0.01), whilst microAbsRNA5 showed statistically significant over-expression (p-value ≤ 0.01) in both COL-R Ab strains compared with their COL-S parents.

sRNA Target Mutants

No COL MIC changes with respect to the wild-type Ab ATCC 17978 (COL MIC 1 mg/L) were observed in the A1S_r01 and A1S_2505 Ab ATCC 17978 mutants or in the WT + A1S_3097 overexpression strain (Table 4).

DISCUSSION

The comparative sRNAome analysis, integrated with bioinformatics, computational double cross-filtering and experimental validations, of COL-R versus COL-S Ab strain-pairs revealed distinctive small RNA signatures in COL-R Ab. Small non-coding RNAs have been identified so far as crucial regulatory elements in bacteria, showing high structural diversity and varied molecular action mechanisms. The most intensively studied prokaryotic sRNA regulators are cis-acting sRNAs and trans-encoded sRNAs; however, the more recent discovery of microRNAs in prokaryotes represents a challenging field of investigation regarding bacterial regulatory mechanisms. Our clinical COL-R Ab were distinguished by 3 cis-acting sRNAs and 2 microRNA-size small RNA fragments involved in different biological networks. These distinctive features span different areas of bacterial biology, involving the protein synthesis apparatus, host-microbe interactions, biofilm production, cell-cycle control, virulence and antibiotic resistance, which emerge as sRNA targets. Notably, they do not appear to be related to the colistin resistance mechanism, as demonstrated by our preliminary data on the A1S_r01, A1S_2505 and highly expressed A1S_3097 Ab ATCC 17978 mutants, showing no COL MIC variations compared to the WT Ab ATCC 17978; this reflects, however, the wide biological adaptations that the coexistence of colistin resistance and the XDR profile implies.
Regarding the novel AbsRNA1, AbsRNA2 and AbsRNA3, we can speculatively assume that, via a cis-antisense regulation mechanism, they could post-transcriptionally regulate the translation of their targets. On the contrary, no regulation appears at the transcriptional level, as demonstrated by the lack of statistically significant differential expression of these AbsRNA targets according to previously published transcriptomic data (Cafiso et al., 2019). Moreover, in COL-R strains, AbsRNA1 and AbsRNA2 under-expression could indicate that these strains are characterized by sRNAs modulating the protein synthesis machinery and the amount of active ribosomes, in agreement with other previous findings (Fernández-Reyes et al., 2009). The occurrence of the under-expressed AbsRNA3, targeting a putative O-acetyl-ADP-ribose deacetylase, provided evidence that COL-R Ab are distinguished by an sRNA involved in RNase III inhibition, previously associated with different biological functions, including biofilm production, virulence, and antibiotic resistance in Gram-negative bacteria (Chen et al., 2011; Kim et al., 2013; Song et al., 2014). In fact, O-acetyl-ADP-ribose is a substrate for several related macrodomain proteins, such as human MacroD1, human MacroD2 and Escherichia coli YmdB. In E. coli, YmdB is an RNase III inhibitor that modulates many different functions, including biofilm formation (Kim et al., 2013) and E. coli adaptive resistance to aminoglycosides via an antibiotic stress-induced sequential modulation of the endoribonucleolytic activity of RNase III and RNase G (Song et al., 2014). Regarding the microRNA-size small RNA fragments (microAbsRNA4 and its pre-microAbsRNA4, as well as microAbsRNA5 and its pre-microAbsRNA5), our data suggest that these sRNAs exist in a premature form (pre-microAbsRNA) that acts as a precursor of the mature form, the microAbsRNA, likely obtained by cleavage of the premature form. Moreover, the under-expressed microAbsRNA4 and its pre-microAbsRNA4, targeting a gene coding a structural membrane protein with a CD similar to the cytoskeletal protein RodZ, related to cell-cycle control, cell division and chromosome partitioning, could speculatively regulate these functions. In fact, the GO-term prediction highlighted two possible molecular functions, a DNA polymerase activity and a DNA-binding function. Furthermore, the microAbsRNA4 target (A1S_0501) was previously listed as a transcript that decreased significantly upon exposure to NaCl in A. baumannii, but no relationship with the mechanisms of antimicrobial tolerance in response to monovalent cations was previously found (Hood et al., 2010). For the over-expressed microAbsRNA5 and its pre-microAbsRNA5, targeting a signal peptide involved in the cytokinin biosynthetic process, we have to keep in mind that A. baumannii is a versatile pathogen that can adhere to and invade numerous cell types displaying varying degrees of susceptibility to invasion, stimulating the proinflammatory immune response (Choi et al., 2008). The stimuli and signaling pathways implicated in cell death are not yet established; however, they involve imbalanced calcium homeostasis, pro-inflammatory cytokines, and oxidative stress, all traits related to strain virulence (Smani et al., 2011; Mortensen and Skaar, 2012). MicroAbsRNA5 may thus regulate the host-microbe A. baumannii interactions that shape the pathogenesis of Ab infection mediated by the host immune response.
In this study, we experimentally and computationally discovered five statistically significant DE sRNAs characterizing COL-R Ab strains, speculatively implicated, as cis-antisense sRNAs, in the regulation of the protein synthesis machinery via the under-expressed AbsRNA1 and AbsRNA2 and in different biological functions, including biofilm production, virulence and aminoglycoside resistance, via the under-expressed AbsRNA3. Likewise, we found two microAbsRNAs that may be involved in cell-cycle control via the under-expressed microAbsRNA4 and in the host-microbe interaction via the over-expressed microAbsRNA5. The onset of colistin resistance in A. baumannii entails dissimilar biological adaptations, not exclusively related to colistin resistance, supporting the extremely complex and dynamic nature of this life-threatening microorganism and the urgent need to elucidate the role of small RNAs, of which only the tip of the iceberg is known. This work offers a model for the identification of sRNA signatures and the prediction of their targets in A. baumannii. Although we do not yet have clear information on their functions, our bioinformatic analysis may provide indications regarding the cellular roles of these new sRNAs.

DATA AVAILABILITY STATEMENT

The datasets generated for this study can be found in the NCBI GEO Database: GSE109951.

AUTHOR CONTRIBUTIONS

VC and SSte conceived and designed the study. VC, SStr, FL, VD, and AZ performed the genomics, transcriptomics, real-time qPCR, and bioinformatics. GP contributed to the bioinformatics analysis. JA carried out the mutant construction. All authors analyzed the data and contributed to and approved the manuscript.

FUNDING

This study was supported by research grant PRIN 2017SFBFER from MIUR, Italy. This study was also supported by grant BIO2016-77011-R from the Ministerio de Economía y Competitividad. JA is a Serra Húnter Fellow, Generalitat de Catalunya, Barcelona, Spain.
2020-01-17T14:12:36.937Z
2020-01-17T00:00:00.000
{ "year": 2020, "sha1": "b70c4b71cba8a90355cb2052c363977a34cd162e", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2019.03075/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b70c4b71cba8a90355cb2052c363977a34cd162e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
9592428
pes2o/s2orc
v3-fos-license
Generating unique IDs from patient identification data using security models

Background: The use of electronic health records (EHRs) has continued to increase within healthcare systems in developed and developing nations. EHRs allow for increased patient safety, grant patients easier access to their medical records, and offer a wealth of data to researchers. However, various bioethical, financial, logistical, and information security considerations must be addressed while transitioning to an EHR system. The need to encrypt private patient information for data sharing is one of the foremost challenges faced by health information technology. Method: We describe the usage of the message digest-5 (MD5) and secure hashing algorithm (SHA) as methods for encrypting electronic medical data. In particular, we present an application of the MD5 and SHA-1 algorithms in encrypting a composite message from private patient information. Results: The results show that the composite message can be used to create a unique one-way encrypted ID per patient record that can be used for data sharing. Conclusion: The described software tool can be used to share patient EMRs between practitioners without revealing patients' identifiable data.

INTRODUCTION

The use of electronic health records (EHRs) has expanded in response to the burgeoning financial, administrative, and technological demands associated with modern health care. The first generation of EHRs was restricted to simple electronic medical records (EMRs), digital copies of paper charts limited to a single physician or hospital, which were stored in centralized in-house databases. [1] Over time, these simple EMRs progressed into EHRs that combined multiple EMRs with patient information such as allergies, prescriptions, contact information, and laboratory information. [1][2][3] The World Health Organization has defined the ideal EHR as one's entire health care record, continually updated over the course of one's life by all healthcare providers in all contexts. [1] Multiple benefits of EHR adoption have been identified, including increased administrative efficiency, improved patient safety, decreased costs, easier data collection for research, more complete documentation, and an increased ability of patients to access their healthcare information. [2,4,5] Multiple instances of increases in administrative efficiency have been noted with the use of EHRs. For example, an EHR system with barcode readers was observed to take 97% less time to locate patient records when compared to using paper charts. [5] In addition, an EHR in Uganda was associated with a 91% reduction in costs. [5] EHRs increase patient safety by reducing duplicative laboratory tests; documenting allergic reactions and other elements of patient history; enhancing communication between care providers; providing integrated, point-of-care clinical decision-making tools; reducing inappropriate antibiotic use; and alerting the user to possible drug-drug interactions. [2][3][4][6][7][8] A recent cost-benefit analysis in Europe suggests that EHR usage allows researchers to identify and enroll patients in clinical trials faster, better determine research protocol feasibility, and provide data that can be analyzed to see if patient safety outcomes were met. [4] A small study of breast cancer patients showed that patients' anxiety decreased when they had access to their own healthcare information.
[9] Despite the aforementioned benefits of EHRs, three key barriers have limited their adoption. [10] First, the large upfront cost has deterred both individual users and countries unable to afford it. Ninety-one percent of health centers without EHRs cited lack of capital as the most important barrier to adoption. [11] Upfront adoption costs were noted to range from $16,000 to $36,000 per physician. [12] In another study, 86.7% of Canadian hospital managers also cited financial resources as the main barrier to providing patients access to their own EHR. [13] The second main barrier identified was privacy and security concerns. [1,2,10,[13][14][15][16] In a survey of 309 physicians who were nonusers of EHRs, 55.3% stated that privacy or security concerns were a barrier to EHR adoption. [14] Patients also reported the privacy of their health information as a primary concern. [15] The third major barrier to the widespread adoption of EHRs is the lack of trained employees to create and maintain the system. [1,10,13] The aforementioned obstacles have largely constrained the adoption of EHRs to physicians, hospitals, and countries wealthy enough to afford the initial investment, employee training, and software to address security and privacy concerns. Therefore, EHRs have the potential to serve as a source of health disparity, with their usage reserved for those able to pay for them, depriving those unable to afford the initial investment of the long-term savings and patient safety benefits. [6,7,11,16,17] Those not wealthy enough to pay will get second-tier access to their healthcare information and may subsequently have diminished safety. This represents a potential violation of the basic bioethical tenets of autonomy and beneficence. [18] EHRs that do not align with these fundamental bioethical principles are unlikely to be successful, as evidenced by the failure of an attempt to create a nationwide EHR in the UK. [1,19] As previously mentioned, security and privacy concerns are among the fundamental barriers to the adoption of EHRs. Various initiatives to address security concerns pertaining to EHRs have been undertaken. The ISO/TS 18308 standard defined the secure storage and communication of health information as a fundamental component of an EHR. [20] The International Medical Informatics Association was established to address these security and privacy issues and contributed to creating guidelines and educational training programs addressing the concerns of healthcare providers, managers, and biomedical and health informatics specialists regarding the confidentiality, privacy, and security of patient data. [21] The Advanced Informatics in Medicine/Secure Environment for Information Systems in MEDicine project has taken into account the traditional and proven principles of healthcare data processing, the various regulations within the European Union, the enormous and subtle risks of healthcare information technology systems, the cost of changing existing technology, and the mandatory need for encryption software to keep patient information secure from privacy violations during data sharing. [22] Until the universal adoption of high-level EHRs is a reality, there exists a need to handle pathology and laboratory data files in such a way that privacy is not breached. Data files with test results, patient names, sex, and birthdate are commonly generated but may lack a unique identifier that can be used to anonymize the data.
For example, the Mosoriot Medical Record System, an EMR system developed for a primary care center in rural Kenya, required the creation of unique patient identifiers, as Kenya lacks the equivalent of a social insurance number. [23] In this paper, we describe an open-source tool to encrypt private patient information using MD5 and the secure hashing algorithm (SHA)-1, implemented in the R statistical software package (R digest package, Version 0.6.10, https://CRAN.R-project.org/package=digest). [24] Cryptography is the process of storing and sending information in a secure manner that limits access to intended recipients. [25] The basic goals of cryptography are as follows: [25]

• Confidentiality/privacy: Ensuring that only the intended receiver is able to read the message
• Data integrity: Ensuring that the message content received is not altered during the sharing process
• Authentication: Identifying the intended recipients.

A message digest (MD) is a security model that generates a unique code for the purpose of providing a message authentication code. [26] MD5 and SHA [26] are one-way hashing functions (security models), which are easy to compute but hard to reverse.

MESSAGE DIGEST 5 ALGORITHM

An MD is a cryptographic hash function encompassing a string of digits created by a one-way hashing function to protect the integrity of exchanged data. The original MD algorithm (MD1) was shortly followed by a modified version (MD2). [27] However, MD2 was soon found to be quite weak and was followed by MD3, which was never released. MD3 was further developed, and MD4 [27] was released; although MD4 proved unsatisfactory, it provided the theoretical foundations for MD5 and SHA-0. [27] MD5 produces a 128-bit MD from input messages of variable length. MD5 operates iteratively on all message subblocks, as explained in the following:

Step 1: Preprocessing (Padding, Block Preparation, and Initialization)
A processed message is padded such that its length (in bits) is congruent to 448 mod 512. Messages are padded with a first bit set to "1" and all subsequent bits set to zero; the remaining 64 bits of the final 512-bit block are reserved for the message length, which is attached in the next step. MD5 operates on two inputs: the input message block and the output hash from the previous step. In the first step, the initial hash values are constants provided by the algorithm. The initial values for MD5 are provided as four 32-bit words. A four-word buffer is used to store those values, which are then replaced by the output hash values after each step.

Step 2: Length Attaching
A 64-bit delineation of the length of the message before the padding is attached to the result of the previous step. The resulting message has a length that is exactly a multiple of 512 bits.

Step 3: Initialize Message Digest Buffer
A four-word buffer (A, B, C, D) is used to compute the MD. Here, each of A, B, C, and D is a 32-bit register.

Step 4: Process Message in 16-Word Blocks
Four auxiliary functions are defined, each of which processes three 32-bit words and produces one 32-bit word as output.

Step 5: Output
The MD output is the processed words A, B, C, D, beginning with the low-order byte of A and ending with the high-order byte of D.

SECURE HASHING ALGORITHM

The SHA algorithm is a cryptographic hash function used in digital certificates and for data integrity. [26] The MD output is calculated using the final padded message as "n" 512-bit blocks. The algorithm makes use of two 160-bit registers, each consisting of five 32-bit sub-registers. The basic SHA-1 algorithm follows the same general padding and block-iteration structure as MD5.
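The published tool implements this composite-message hashing with R's digest package; the sketch below is a Python transliteration using the standard library's hashlib, whose md5 and sha1 implementations follow the algorithms described above. The field separator and case normalization are illustrative choices, not details taken from the paper, and the record shown is fabricated for demonstration.

```python
# Python sketch of the composite-ID idea (the published tool uses R's digest
# package); hashlib provides standard MD5 and SHA-1 one-way hashes.
import hashlib

def unique_id(dob: str, gender: str, last_name: str, algo: str = "sha1") -> str:
    """Hash a DOB + gender + last-name composite into a fixed-length hex ID."""
    # The "|" separator and upper-casing are illustrative normalization choices.
    composite = f"{dob}|{gender}|{last_name}".upper()
    return hashlib.new(algo, composite.encode("utf-8")).hexdigest()

# Fabricated record, not real patient data: MD5 gives 32 hex chars, SHA-1 gives 40
print(unique_id("1970-01-31", "F", "Doe", algo="md5"))
print(unique_id("1970-01-31", "F", "Doe", algo="sha1"))
```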
IMPLEMENTATION AND VALIDATION

The validation data used were supplied by the Department of Pathology and Laboratory Medicine, University of Calgary and Calgary Laboratory Services, Calgary, AB, Canada. The validation data comprise 1,205,973 patient records, each of which has patient identification information, i.e., first name, middle name, last name, gender, and date of birth (DOB), in addition to clinical laboratory test results. We bind the patient's DOB, gender, and last name to form a composite identification field per record that is encrypted using the MD5 and SHA-1 algorithms. We compared the uniqueness of the composite ID to the corresponding encrypted ID, and the results show that the encrypted composite message can be used as a new patient ID to share patient EMRs among practitioners. However, faulty data entry may cause inconsistency in the encrypted IDs due to last name changes from single to married names, gender changes, twin patients, and other data entry errors that may generate different composite IDs for the same patient.

Availability

The encryption tool is freely available from the authors. The software can be accessed online through the following link: https://github.com/ClinicalLaboratory/Generating-Unique-IDs-from-Pateint-identification-Data-Using-Security-Models

Using the Software

To use the encryption tool, R and RStudio must be installed on the machine that has the patient record file. Place the provided R code file (UIDGen.R) in the folder where the data file is located. Open RStudio and then open the downloaded R code file. Change the path to your file as outlined in the code, press Ctrl + A to select all the code, and finally, press Ctrl + Enter to run the code. The execution time may vary depending on your file size, the encryption algorithm selected, and the processing platform. Both algorithms, i.e., MD5 and SHA-1, are called in the R code file, and their outputs are appended to the original composite message; the user is free to choose the encrypted message that is most suitable for patient record sharing.

Validation Results

These encryption algorithms were applied to the validation dataset of 1,205,973 patient records. As expected, both algorithms resulted in no duplicated identifiers for different patients. Furthermore, in all instances when the same patient had multiple records, both algorithms always generated a single unique identifier for that patient.

DISCUSSION AND CONCLUSION

Designing a secure EHR sharing environment has attracted a lot of attention within the healthcare industry and the academic community. However, this extensively mandates the need for security models to assure the privacy of patient identification information. A hash function receives a variable-length message and produces a fixed-length digested message as its output. It is estimated that the effort of coming up with two messages having the same MD is on the order of 2^64 computations and that the difficulty of coming up with any message having a given MD is on the order of 2^128 operations. [26,27] The SHA-1 algorithm is used in the Digital Signature Algorithm for digital signatures. The SHA-1 algorithm belongs to a set of cryptographic hash functions similar to the MD family. However, the main difference between SHA-1 and the MD family is the more frequent use of input bits during the hash function in SHA-1 than in MD4 or MD5. This fact results in SHA-1 being more secure than MD4 or MD5, but at the expense of slower execution. [26]
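A compact sketch of the validation logic just described: repeated records for one patient must map to one identifier, and distinct composites must never collide. It reuses the hypothetical unique_id helper from the previous sketch, and the three records are fabricated stand-ins for the 1,205,973-record dataset.

```python
# Sketch of the uniqueness validation: one stable ID per patient across repeat
# records, and no ID shared by distinct composites. Records are fabricated.
records = [
    ("1970-01-31", "F", "Doe"),
    ("1970-01-31", "F", "Doe"),     # same patient, a second laboratory record
    ("1982-06-02", "M", "Smith"),
]

ids = {}
for rec in records:
    uid = unique_id(*rec)                       # helper from the sketch above
    previous = ids.setdefault(rec, uid)
    assert previous == uid, "same composite produced different IDs"

# Collision check: every distinct composite must yield a distinct hash
assert len(set(ids.values())) == len(ids), "hash collision between patients"
print(f"{len(ids)} unique identifiers for {len(ids)} distinct composites")
```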
A major barrier to the adoption of EHRs in developing countries has been the perception that they are not secure. [1,23] However, when adequate policies and technologies are implemented, EHRs have several security advantages over paper records. [28] A trail of who has accessed the record can easily be created, and partial access can be controlled on a need-to-know basis. [28] This is often extremely important in developing countries, as patients may face severe financial, social, and psychological ramifications if their private health information is disclosed. One such example is the significant stigma patients face if their HIV status is revealed to their community. [29] This software can be used as a cost-effective method of generating encrypted patient identifiers from data sets in limited-resource settings. EMRs in resource-limited settings may use spreadsheet- or Access-based datasets, and our software tool could be used to easily generate anonymized patient identifiers in these settings. [23,29,30] Similarly, other applications of this software could be to anonymize data sets assembled from manual chart reviews or historical data sets. The open-source software presented in this paper can be used to solve identity (private patient information) encryption concerns in many different settings, among them the following:

• EMRs in resource-limited settings that are stored or created as spreadsheets. [31][32][33]
• Clinical data sets assembled from manual chart reviews. [34,35]
• Historical data sets created before modern EMRs. [33]

In these settings, the presented software tool can be both time and cost effective for encrypting the extracted data and/or EMRs. Moreover, the tool does not require trained personnel to use, which is not the case for many modern EMR systems.

Message Digest 5 and Secure Hashing Algorithm-1 Limitations

It is observed that as the input size increases, the program performs the SHA-1 algorithm more slowly than the MD5 algorithm. The SHA-1 algorithm is claimed to be secure because it is practically infeasible to compute the message corresponding to a given MD. [26] Furthermore, it is extremely improbable to detect two messages hashing to the same value. [26] Both algorithms performed as expected, with SHA-1 being slightly slower but believed to be more secure than MD5. Therefore, we suggest that, when computing capability or time is not a concern, SHA-1 may be better to use than MD5, as it may be more secure due to the greater number of bits used in the encrypted output message (ID). The encryption algorithms will produce the same identifier for different patients due to data entry errors or in some situations where the personal data are the same for different patients. One suggested solution for these situations is to sort the EHR records by name, gender, and DOB before encryption. This groups identical data records next to each other, so the encryption algorithm can check the similarity of the generated identifiers and create a count field for each replica of the type unsigned long integer, which is composed of four bytes. The unsigned long integer can accommodate a count from 0 to 4,294,967,295, which is large enough to accommodate any possible duplicates in the EHR data. This assures a one-to-one mapping between the patient's personal data and the generated identifier, even in the case of multiple patients with the same personal data.
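A sketch of this suggested disambiguation step, under the assumptions above: records are sorted so identical composites are adjacent, a replica counter (within the 0 to 4,294,967,295 range of a four-byte unsigned integer) is appended to the composite, and the result is hashed. All names, separators, and values here are hypothetical.

```python
# Sketch of the duplicate-disambiguation scheme: sort, count replicas of each
# composite, and append the counter before hashing, guaranteeing one-to-one IDs
# even for distinct patients with identical personal data. Values are fabricated.
import hashlib
from collections import Counter

def disambiguated_ids(records):
    seen = Counter()
    out = []
    for rec in sorted(records):                 # sorting groups identical composites
        composite = "|".join(rec).upper()
        n = seen[composite]                     # replica index; fits in a uint32
        seen[composite] += 1
        uid = hashlib.sha1(f"{composite}|{n}".encode("utf-8")).hexdigest()
        out.append((rec, n, uid))
    return out

# Two different patients who happen to share the same personal data
demo = [("1970-01-31", "F", "DOE"), ("1970-01-31", "F", "DOE")]
for rec, n, uid in disambiguated_ids(demo):
    print(rec, f"replica {n}:", uid[:12], "...")
```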
However, this solution is computationally expensive and may require distributed processing to handle massive data volumes, owing to the repeated sort and count updates. It is worth noting that after the identifier is generated by the encryption algorithms, different physicians can have
2018-04-03T00:11:01.792Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "5ff32f34123dee6df197d978ae4993e62ce906d7", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/2153-3539.197203", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "5ff32f34123dee6df197d978ae4993e62ce906d7", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
249454874
pes2o/s2orc
v3-fos-license
DDX3 acts as a tumor suppressor in colorectal cancer as loss of DDX3 in advanced cancer promotes tumor progression by activating the MAPK pathway

Objective: The treatment and prognosis of patients with advanced colorectal cancer (CRC) remain a difficult problem. Herein, we investigated the role of DEAD (Asp-Glu-Ala-Asp) box helicase 3 (DDX3) in CRC and proposed potential therapeutic targets for advanced CRC. Methods: The expression of DDX3 in CRC and its effect on prognosis were explored with databases and CRC tissue microarrays. Stable DDX3 knockdown and overexpression cell lines were established with lentiviral vectors. The effects of DDX3 on CRC were investigated by functional experiments in vitro and in vivo. The molecular mechanism of DDX3 in CRC was explored by western blotting. Molecular-specific inhibitors were further used to explore potential therapeutic targets for advanced CRC. Results: The expression of DDX3 was decreased in advanced CRC, and patients with low DDX3 expression had a poor prognosis. In vitro and in vivo experiments showed that low DDX3 expression promoted the proliferation, migration and invasion of CRC. DDX3 loss regulated E-cadherin and β-catenin signaling through the mitogen-activated protein kinase (MAPK) pathway, as shown by western blotting. In addition, the MEK inhibitor PD98059 significantly reduced the increased cell proliferation, migration and invasion caused by knockdown of DDX3. Conclusions: DDX3 acts as a tumor suppressor gene in CRC. DDX3 loss in advanced cancer promotes cancer progression by regulating E-cadherin and β-catenin signaling through the MAPK pathway, and targeting the MAPK pathway may be a therapeutic approach for advanced CRC.

Introduction

Colorectal cancer (CRC) is a malignant tumor of the colorectal mucosal epithelium caused by the accumulation of genetic and environmental factors. According to the International Agency for Research on Cancer (IARC), the worldwide incidence of CRC ranked third among all cancers in 2020, and the mortality rate ranked second, accounting for 10% of the tumors diagnosed annually worldwide; CRC is a worldwide public health problem [1][2][3]. Especially in the advanced stage of the disease, most patients have a poor prognosis due to their high risk of recurrence and metastasis [4][5][6]. However, the underlying mechanisms of CRC progression remain unclear, and the treatment and prognosis of patients with advanced CRC remain a difficult problem. Therefore, it is urgent to elucidate the molecular mechanisms that influence CRC progression and to identify more effective intervention targets to improve patient outcomes. The DEAD (Asp-Glu-Ala-Asp) box protein family is an ATP-dependent RNA helicase superfamily that is highly conserved in evolution and widely distributed in eukaryotes [7,8]. As a member of the human DEAD-box protein family, DEAD box helicase 3 (DDX3) not only unwinds dsRNA but is also involved in almost all RNA-related activities, including mRNA splicing, RNA editing, RNA export, and transcriptional and translational regulation [9][10][11][12][13]. Due to the important role of DDX3 in RNA metabolism, its dysfunction may lead to a variety of diseases [14][15][16][17]. A large body of evidence indicates that abnormal expression or dysfunction of DDX3 is closely related to tumorigenesis. A previous study on head and neck squamous cell carcinoma found that high DDX3 expression is associated with lymph node metastasis and poor prognosis [18].
Chen et al. found that DDX3 promotes cell migration and invasion through the DDX3-Rac1-β-catenin axis in some cancer cell lines [19]. Vellky et al. showed that high expression of cytoplasmic DDX3 is associated with the proliferation and metastasis of metastatic prostate cancer [20]. In contrast, male breast cancer patients with high cytoplasmic DDX3 expression have a higher 10-year survival rate [21]. In non-small-cell lung cancer, DDX3 loss caused by p53 inactivation contributes to malignancy and poor prognosis through the MDM2/Slug/E-cadherin pathway [22]. DDX3 acts as an oncogene or a tumor suppressor gene in different tumor types and is closely related to invasive characteristics. However, the role and mechanism of DDX3 in CRC remain unclear. Herein, we explored the role and mechanism of DDX3 in CRC progression and potential targets for the treatment of advanced CRC. Our study demonstrated that DDX3 acts as a tumor suppressor in CRC. The loss of DDX3 in advanced cancer promotes CRC progression by activating the mitogen-activated protein kinase (MAPK) pathway, and targeting the MAPK pathway may be a therapeutic approach for advanced CRC.

Database analysis

The molecular structure of the active catalytic center of the ATP-dependent RNA helicase DDX3 was retrieved from the RCSB Protein Data Bank (PDB). The mRNA expression of DDX3 in CRC tissues was analyzed with the Oncomine database from the Gene Expression Omnibus (GEO) dataset. The protein expression of DDX3 in different clinical stages of CRC was analyzed through the UALCAN database, and these data were obtained from the Clinical Proteomic Tumor Analysis Consortium (CPTAC). We used the R2: Genomics Analysis and Visualization Platform and The Human Protein Atlas to analyze the relationship between DDX3 expression and the prognosis of CRC patients at the mRNA and protein levels, respectively.

Tissue microarray (TMA)

The human CRC TMA (HColA180Su15; Outdo Biotech, Shanghai, China) contained 101 tumor tissues from 101 CRC patients who underwent surgery from July 2006 to May 2007. The follow-up time was up to July 2015, and the follow-up interval was 9 years. Clinical staging was based on the seventh edition of the American Joint Committee on Cancer (AJCC) TNM staging system, including stages I, II, III and IV.

Lentiviral transfection

The LV-DDX3X-RNAi lentiviral vector (GeneChem Co., Ltd., Shanghai, China) was transfected into SW480 and HCT116 cells to construct the DDX3 knockdown cell lines. The HBLV-h-DDX3X-3xflag-ZsGreen-PURO lentiviral vector (Hanbio Biotechnology Co., Ltd., Shanghai, China) was transfected into DLD-1 cells to construct the DDX3 overexpression cell line. All lentivirus-transfected cells were checked for transfection efficiency by observing green fluorescent protein (GFP) under an inverted fluorescence microscope (IX73; Olympus, Tokyo, Japan). The cells were screened with 2 ng/mL puromycin for 3 weeks to construct stable expression cell lines.

Cell proliferation assay

Cell Counting Kit-8 (CCK-8, Beyotime, Shanghai, China) was used to detect cell proliferation ability. Cells were seeded in 96-well plates (5,000 cells/well) and cultured for 24, 48, 72 and 96 h. For the dosing group, an additional 6.25 μmol/L PD98059 was added to each well. The medium in the wells was then discarded, and reconstituted fresh medium containing 10% CCK-8 was added to the 96-well plate at 100 μL per well. The 96-well plate was placed in a 37 °C constant-temperature incubator (Thermo Scientific, Waltham, MA, USA) for 1 h.
Finally, the absorbance value of each well at 450 nm was measured with a microplate reader (Thermo Scientific, Waltham, MA, USA).

Colony formation assay

Cells were seeded in a 6-well plate (400 cells/well) and cultured for approximately 2 weeks. For the dosing group, an additional 6.25 μmol/L PD98059 was added to each well. After colonies of more than 50 cells were observed under the microscope (Olympus, Tokyo, Japan), the colonies were fixed with methanol and stained with 0.1% crystal violet solution. Finally, ImageJ software was used to count the number of colonies.

Migration and invasion assays

Transwell chambers (Kennebunk, ME, USA) with 8.0 μm polycarbonate membranes were used to measure the migratory and invasive abilities of the cells. All Transwell chambers were placed in a 24-well plate. For the migration assays, 200 μL of serum-free DMEM containing approximately 1.0×10^5 cells was added to the upper chamber, and 700 μL of DMEM containing 20% FBS was added to the lower chamber. The cells were cultured for approximately 48 h. For the invasion assays, Matrigel (BD Biosciences, Franklin Lake, NJ, USA) was diluted at a ratio of 1:8 in serum-free DMEM. The bottom surface of the upper chamber was evenly covered with 60 μL of the prepared Matrigel and placed in an incubator for 4-5 h to solidify. Then, 200 μL of serum-free DMEM containing approximately 1.5×10^5 cells was added to the upper chamber, and 700 μL of DMEM containing 20% FBS was added to the lower chamber. The cells were cultured for approximately 72 h. For the dosing group, the cells were pretreated with 6.25 μmol/L PD98059 for 24 h, and 6.25 μmol/L PD98059 was added to each upper chamber during the culture. The cells that migrated or invaded to the lower chamber were fixed with methanol and stained with 0.1% crystal violet solution. Finally, five high-magnification fields of view were randomly selected, and the number of cells was counted with ImageJ software.

Cell adhesion assay

Matrigel (BD Biosciences, Franklin Lake, NJ, USA) was diluted at a ratio of 1:30 in serum-free DMEM. The bottom surface of a 96-well plate was evenly covered with 30 μL of the prepared Matrigel and placed in an incubator for 2 h to solidify. Subsequently, cells were seeded in the 96-well plate, with 100 μL of medium containing approximately 1.0×10^4 cells added to each well, and cultured for 40 min. The cells were then fixed with methanol and stained with 0.1% crystal violet solution. Finally, ImageJ software was used to count the number of cells.

Wound-healing assay

A marker pen was used to draw several horizontal lines on the back of a 6-well plate for positioning. Cells were seeded in the 6-well plate and cultured until the bottom of each well was completely covered. Several vertical wounds were made at the bottom of each well with a 1 mm sterile tip. Wounds were imaged at 0, 24 and 48 h and analyzed with ImageJ software.
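The wound images were quantified in ImageJ; the paper reports wound-healing rates without spelling out the formula, so the sketch below uses a common convention (fractional closure of the initial wound area) with illustrative area values, an assumption rather than the authors' stated definition.

```python
# Wound-healing rate from ImageJ wound-area measurements. The closure formula
# (initial minus residual area, over initial area) is a common convention that
# the paper does not spell out; the areas below are illustrative placeholders.
def healing_rate(area_0h: float, area_t: float) -> float:
    """Fraction of the initial wound area closed at time t."""
    return (area_0h - area_t) / area_0h

areas = {"0h": 1.00, "24h": 0.62, "48h": 0.31}   # hypothetical relative areas
for t in ("24h", "48h"):
    print(f"{t}: {healing_rate(areas['0h'], areas[t]):.0%} closed")
```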
Mouse subcutaneous and intraperitoneal xenograft models

Eighteen athymic nude mice (BALB/c, male, 4 weeks old) were purchased from and bred at the Laboratory Animal Center of the Medical Department of Xi'an Jiaotong University (Xi'an, China). Among them, 8 nude mice were divided into 2 groups to establish subcutaneous xenograft models. Each nude mouse was injected subcutaneously with 1.0×10^6 cells near the axilla. One month later, the nude mice were sacrificed by cervical dislocation, and the subcutaneous tumors were removed for follow-up studies. The following formula was used to calculate subcutaneous tumor size: tumor volume = length × width² × 0.5 [23]. Another 10 nude mice were divided into two groups to establish intraperitoneal xenograft models. Each nude mouse was intraperitoneally injected with 1.0×10^6 cells. Two months later, the nude mice were sacrificed by cervical dislocation for exploratory laparotomy, and the livers were removed for follow-up studies. All experiments were approved by the Ethics Committee of the Medical Department of Xi'an Jiaotong University and performed in accordance with the NIH's Guide for the Use of Laboratory Animals.

Western blot analysis

Proteins were extracted from cells using cell lysis buffer (P0013; Beyotime, Shanghai, China) and a protease inhibitor cocktail (B14001; Bimake, Houston, TX, USA). Gels were prepared using the SDS-PAGE Gel Kit (P0012A; Beyotime, Shanghai, China), and proteins were separated by SDS-PAGE. The separated proteins were transferred to PVDF membranes (1060023; GE Amersham, Chicago, IL, USA) with a semidry transfer unit (TE70X; Hoefer, San Francisco, CA, USA). The PVDF membrane was blocked with 10% milk at room temperature for 2-3 h and then incubated with primary antibody for 12 h at 4 °C. The following primary antibodies were used. The excess primary antibody was washed off the PVDF membrane with Tris-buffered saline containing 0.1% Tween 20 (TBST). The PVDF membranes were then incubated with a goat polyclonal secondary antibody to mouse IgG (1:6000; EK010; Zhuangzhi, Xi'an, China) or rabbit IgG (1:6000; EK020; Zhuangzhi) for 1.5 h. Finally, the protein bands were visualized using a chemiluminescence imaging system (GeneGnome XRQ; Syngene, Cambridge, England), and quantitative analysis was performed using FusionCapt Advance software.

Molecular targeted drugs

PD98059 (Selleck, Houston, Texas, USA), a ligand of the aryl hydrocarbon receptor (AHR) that functions as an antagonist, was used as a specific inhibitor of MEK [25]. RK33 (Selleck, Houston, Texas, USA), which inhibits DDX3 helicase activity [26], was used as a specific inhibitor of DDX3.

Statistical analysis

Statistical analysis was performed using SPSS Statistics 23 and GraphPad Prism 8.0.2 software. Pearson's chi-squared test was used to determine whether the level of DDX3 expression differed across the categories of clinicopathological indicators. Univariate Cox regression analysis investigated the relationship between all clinicopathological indicators and survival. Multivariate Cox regression analysis was used to screen independent risk factors affecting prognosis. Survival analysis was performed using the Kaplan-Meier method with the log-rank test. Differences between two independent samples were compared using Student's t test and a paired t test. One-way ANOVA was used to compare the DDX3 protein expression levels and the numbers of clones in the six CRC cell lines. Two-way ANOVA was used to analyze the differences in cell viability among different groups at different time points. The Pearson correlation test was used to analyze the correlation between two variables. Data are presented as the mean ± SD. All tests were two-tailed, and a P value <0.05 was considered statistically significant.
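To make the volume formula and the two-sample comparison concrete, the sketch below computes caliper-based tumor volumes (volume = length × width² × 0.5, as above) and applies a two-sided Student's t test via scipy; all measurements are hypothetical placeholders, not the study's data.

```python
# Tumor volume from caliper measurements (volume = length x width^2 x 0.5, as
# in the formula above) and a two-sided Student's t test between groups.
# All length/width values are hypothetical placeholders, not study data.
from scipy.stats import ttest_ind

def tumor_volume(length_mm: float, width_mm: float) -> float:
    return 0.5 * length_mm * width_mm ** 2          # mm^3

nc = [tumor_volume(l, w) for l, w in [(8.0, 6.0), (9.0, 6.5), (8.5, 6.2), (8.2, 6.1)]]
kd = [tumor_volume(l, w) for l, w in [(12.0, 9.0), (13.5, 9.5), (12.8, 9.2), (13.0, 9.1)]]

t_stat, p = ttest_ind(kd, nc)
print(f"mean NC = {sum(nc)/len(nc):.0f} mm^3, mean KD = {sum(kd)/len(kd):.0f} mm^3")
print(f"t = {t_stat:.2f}, p = {p:.4f}")             # two-tailed, alpha = 0.05
```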
DDX3 is downregulated in advanced CRC tissues, and low DDX3 expression in CRC is associated with metastasis and poor prognosis

Information from the GEO dataset revealed that DDX3 mRNA expression in CRC was significantly lower than that in normal tissues (Figure 1A). We analyzed the protein expression of DDX3 in the four clinical stages of CRC using the CPTAC database. In stage IV, DDX3 protein expression was significantly lower than that in stages II-III (Figure 1B). The effect of DDX3 protein expression on patient survival was analyzed with The Human Protein Atlas, and the results showed that the survival rate of patients with low DDX3 protein expression was significantly reduced (Figure 1C). In addition, we analyzed the effects of DDX3 mRNA expression on the overall survival (OS), event-free survival (EFS) and relapse-free survival (RFS) of patients through the R2 database. Patients with low DDX3 mRNA expression had significantly lower OS, EFS and RFS (Figure 1D). We further detected DDX3 protein expression in the CRC TMA by IHC staining and analyzed the protein expression results in combination with the clinical data in the TMA. These clinical data included sex, age, tumor location, gross type, tumor size, histological type, pathology grade, invasion depth, node metastasis, distant metastasis and AJCC stage. Low DDX3 protein expression was closely related to old age, advanced tumor stage and distant metastasis (Figure 1E and Table 1). Moreover, the protein expression of DDX3 in stage IV was significantly lower than that in stages I-III (Figure 1F), a result nearly consistent with that obtained from the CPTAC database (Figure 1B). Moreover, we performed univariate and multivariate Cox regression analyses using the survival information provided by the TMA. Univariate Cox regression analysis showed that patients with old age, right-sided CRC, mucinous adenocarcinoma, lymph node metastasis, distant metastasis, advanced tumor stage and low DDX3 expression had a poor prognosis (Table 2). Multivariate Cox regression analysis of these potential prognostic indicators showed that lymph node metastasis, distant metastasis (or AJCC staging) and low DDX3 expression were independent prognostic factors for CRC (Table 3). Finally, the Kaplan-Meier method was used for survival analysis by DDX3 expression level, and the results showed that patients with low DDX3 expression had significantly lower overall survival (Figure 1G and Table 4). These results suggested that DDX3 may act as a tumor suppressor, inhibiting CRC progression.
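A minimal sketch of the Kaplan-Meier and log-rank comparison used above, written with the third-party lifelines package rather than the SPSS/GraphPad workflow the authors describe; the follow-up times and event indicators are fabricated stand-ins for the TMA survival data.

```python
# Kaplan-Meier survival by DDX3 level with a log-rank test, sketched with the
# lifelines package (the study used SPSS/GraphPad). Times (months) and event
# flags (1 = death observed) below are fabricated, not TMA data.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

t_low,  e_low  = [12, 20, 28, 35, 50, 64],  [1, 1, 1, 1, 0, 1]   # DDX3-low group
t_high, e_high = [40, 55, 70, 88, 96, 108], [1, 0, 1, 1, 0, 0]   # DDX3-high group

for label, t, e in (("DDX3 low", t_low, e_low), ("DDX3 high", t_high, e_high)):
    kmf = KaplanMeierFitter().fit(t, event_observed=e, label=label)
    print(label, "median survival:", kmf.median_survival_time_, "months")

res = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print(f"log-rank p = {res.p_value:.4f}")
```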
Low DDX3 expression promotes CRC cell proliferation in vitro

We explored the role of DDX3 in CRC progression through a series of in vitro experiments. First, the protein expression of DDX3 in six CRC cell lines (HT29, HCT116, SW480, SW620, Caco-2 and DLD-1) was determined by western blot analysis. One-way ANOVA showed that the expression of DDX3 was significantly different among these six cell lines, with relatively high expression in HT29, HCT116, SW480 and SW620 cells and relatively low expression in Caco-2 and DLD-1 cells (Figure 2A). We measured the proliferation of these CRC cell lines by colony formation and CCK-8 assays and explored the correlation between cell proliferation and the baseline level of DDX3 expression. The numbers of clones formed by these CRC cell lines were significantly different (Figure 2B), and as DDX3 protein expression increased, the number of clones gradually decreased, with a statistically significant Pearson correlation coefficient of -0.812 (Figure 2B, lower right panel). The growth curves of these CRC cell lines were also significantly different (Figure 2C, upper panel), and as DDX3 protein expression increased, the 450 nm absorbance of the cells at the 96th hour gradually decreased, with a statistically significant Pearson correlation coefficient of -0.985 (Figure 2C, lower panel). These results showed that cell proliferation was negatively correlated with the DDX3 expression level in CRC cells. According to the expression levels of DDX3 in the above cell lines and the characteristics of the different CRC cell lines, we selected SW480 and HCT116 cells to construct stable DDX3 knockdown (DDX3-KD) cells, and we selected DLD-1 cells to construct stable DDX3 overexpression (DDX3-OE) cells. The expression level of DDX3 in these cells was verified by western blot analysis (Figure 2D). The effect of DDX3 on cell proliferation was evaluated by CCK-8 and colony formation assays. In the CCK-8 assay, DDX3-KD SW480 and HCT116 cells had significantly increased proliferation ability compared to DDX3-NC cells (Figure 2E, left and middle panels), while the proliferation ability of DDX3-OE DLD-1 cells was significantly reduced compared to DDX3-EV cells (Figure 2E, right panel). In the colony formation assay, knockdown of DDX3 significantly increased the clonogenicity of SW480 and HCT116 cells (Figure 2F), while overexpression of DDX3 significantly decreased the clonogenicity of DLD-1 cells (Figure 2F). These findings showed that low DDX3 expression promotes CRC cell proliferation.

Low DDX3 expression promotes CRC cell migration and invasion in vitro

We explored the effects of DDX3 on cell adhesion, migration and invasion by adhesion, Transwell migration and Transwell invasion assays. In SW480 and HCT116 cells, the adhesion ability of the DDX3-KD group was significantly reduced compared to the DDX3-NC group (Figure 3A, left and middle panels), while the migration and invasion abilities were significantly improved (Figure 3A, left and middle panels). In DLD-1 cells, the DDX3-OE group had significantly improved adhesion (Figure 3A, right panel) but significantly decreased migration and invasion abilities (Figure 3A, right panel) compared to the DDX3-EV group. The migration-promoting ability of low DDX3 expression was further confirmed by wound-healing assays, in which wounds were photographed at 0, 24 and 48 h (Figure 3B, upper panel). In SW480 and HCT116 cells, the wound-healing rate of the DDX3-KD group was significantly higher than that of the DDX3-NC group (Figure 3B, left and middle panels), while in DLD-1 cells, the wound-healing rate in the DDX3-OE group was significantly lower than that in the DDX3-EV group (Figure 3B, right panel). The above in vitro results indicated that low DDX3 expression reduces adhesion but promotes the migration and invasion of CRC cells.
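The inverse relationships above (r = -0.812 for clone number and r = -0.985 for 96-h absorbance versus baseline DDX3) can be reproduced in form with scipy's Pearson test; the six paired values below are hypothetical stand-ins for the densitometry and colony counts.

```python
# Pearson correlation between baseline DDX3 level and clonogenicity, mirroring
# in form the r = -0.812 reported above; the six paired values are hypothetical
# stand-ins for the western blot densitometry and colony counts.
from scipy.stats import pearsonr

ddx3_level   = [0.95, 1.00, 0.80, 0.85, 0.20, 0.15]   # relative DDX3 expression
colony_count = [95, 90, 120, 110, 260, 300]           # colonies per well

r, p = pearsonr(ddx3_level, colony_count)
print(f"r = {r:.3f}, p = {p:.4f}")   # a negative r indicates an inverse correlation
```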
Low DDX3 expression promotes tumor growth and metastasis in vivo

The in vitro experiments initially revealed the tumor suppressor function of DDX3. We next verified the function of DDX3 in vivo by establishing nude mouse xenograft models. By subcutaneously injecting the same number of DDX3-NC and DDX3-KD SW480 cells into nude mice, subcutaneous xenograft models were successfully established (Figure 4A, left panel). The isolated xenograft tumors are shown in the right panel of Figure 4A. The tumor weights and volumes of the subcutaneous xenograft tumors in the DDX3-KD group were significantly greater than those in the DDX3-NC group (Figure 4B and 4C). In addition, the subcutaneous xenograft tumor tissues were stained with hematoxylin and eosin (H&E), and representative staining images are shown in the left panel of Figure 4D. H&E staining showed that the number of cells per unit area in the DDX3-KD group was greater than that in the DDX3-NC group (Figure 4D, right panel). The subcutaneous xenograft tumor tissues were further evaluated by IHC staining. Compared to the DDX3-NC group, the protein expression of DDX3 was significantly decreased in the DDX3-KD group, while the expression of the proliferation marker Ki67 was significantly increased (Figure 4E). These results indicated that the tumor proliferation ability of the DDX3-KD group was significantly enhanced in vivo. To explore the effect of DDX3 on tumor metastasis, we established intraperitoneal xenograft models by intraperitoneally injecting the same number of DDX3-NC and DDX3-KD SW480 cells into nude mice (Figure 4F, left panel). Severe ascites developed in the abdominal cavities of the nude mice injected with DDX3-KD SW480 cells after 2 months of feeding (Figure 4G, lower panel, and 4H). Exploratory laparotomy showed that, compared to the DDX3-NC group, nude mice in the DDX3-KD group had significantly more intraperitoneal metastases and bloody ascites (Figure 4G, upper panel). The livers of the nude mice were removed from the abdominal cavity and are shown in the right panel of Figure 4F. Macroscopically, 80% (4/5) of the nude mice in the DDX3-KD group developed massive liver metastases, while 0% (0/5) of the nude mice in the DDX3-NC group developed obvious liver metastases (Figure 4F, right panel). H&E staining of the liver tissues showed that the DDX3-KD group had diffusely distributed atypical cell clusters of different sizes, whereas the DDX3-NC group had only a few of these atypical cell clusters (Figure 4I). These animal experiments, based on subcutaneous and intraperitoneal xenograft models in nude mice, further confirmed that low expression of DDX3 promotes the growth and metastasis of CRC.

Knockdown or functional inhibition of DDX3 activates the MAPK pathway and β-catenin signaling in CRC cells

The above experiments suggested that the loss of DDX3 promoted the growth and metastasis of CRC. We next explored the molecular mechanism of DDX3 in CRC by western blot analysis. The expression levels of p-Erk1/2 and p-β-catenin were significantly increased in DDX3-KD SW480 and HCT116 cells but significantly decreased in DDX3-OE DLD-1 cells (Figure 5A). In addition, IHC staining of the subcutaneous xenograft tumor tissues from nude mice showed that the expression of p-Erk1/2 in the DDX3-KD group was significantly higher than that in the DDX3-NC group (Figure 4J, left panel). These results suggested that DDX3 may be involved in the regulation of the MAPK pathway and β-catenin signaling. Moreover, in SW480 and HCT116 cells, downregulation of DDX3 decreased E-cadherin expression, while in DLD-1 cells, upregulation of DDX3 increased E-cadherin expression (Figure 5A). The IHC staining intensity of the E-cadherin protein in the subcutaneous xenograft tumor tissues of the DDX3-KD group was also significantly weaker than that in the DDX3-NC group (Figure 4J, right panel). In addition, Snail and Slug protein expression was significantly increased in DDX3-KD SW480 cells, while Slug protein expression was significantly decreased in DDX3-OE DLD-1 cells (Figure 5A).
Snail and Slug, as upstream negative regulators of E-cadherin, participate together with E-cadherin in the process of epithelial-mesenchymal transition (EMT) and are key genes in tumor invasion and metastasis [27]. These results indicated that low DDX3 expression promotes the invasion and metastasis of CRC cells by regulating the Snail/Slug/E-cadherin pathway. RK33 is a small-molecule inhibitor of DDX3 that interferes with the helicase activity of DDX3 by docking with the ATP-binding site of DDX3 [26]. The crystal structure of the active catalytic core of DDX3 was obtained from the PDB database and is shown in Figure 5C. The minimum concentration of RK33 that inhibits DDX3 helicase activity in lung cancer cells is 50 nmol/L [26]. To estimate the working concentration of RK33 in CRC cells and explore the effect of DDX3 loss of function on the MAPK pathway, we treated SW480 cells with RK33 in a serial concentration gradient for 12 h and extracted total protein for western blot analysis. With increasing RK33 concentration, DDX3 protein expression did not change significantly, while p-Erk1/2 expression gradually increased, with a statistically significant Pearson correlation coefficient of 0.793 (Figure 5B). SW480 and HCT116 cells were then treated with 25 μmol/L RK33 for 12 h, and the related protein expression was analyzed by western blotting. In SW480 and HCT116 cells treated with RK33, the protein expression of K-Ras did not change significantly, while the expression levels of p-Raf1, p-MEK1/2, p-Erk1/2 and p-β-catenin were significantly upregulated (Figure 5D). These results suggested that both knockdown and functional inhibition of DDX3 activate the MAPK pathway and β-catenin signaling, which may be related to DDX3 helicase dysfunction.

The PD98059 MEK inhibitor partially inhibits the CRC cell proliferation, migration and invasion caused by downregulation of DDX3

The MEK inhibitor PD98059 was used to investigate whether DDX3 affects CRC progression through the MAPK pathway. To estimate the working concentration of PD98059 in CRC cells, we treated SW480 cells with PD98059 in a serial concentration gradient for 24 h and extracted total protein for western blot analysis. In the range of 0.00 to 6.25 μmol/L, the expression of p-Erk1/2 gradually decreased with increasing PD98059 concentration, with a statistically significant Pearson correlation coefficient of -0.997 (Figure 6A), while PD98059 had little effect on the expression of DDX3 (Figure 6A). Therefore, we used 6.25 μmol/L as the working concentration of PD98059 in subsequent experiments. We treated DDX3-KD SW480 and HCT116 cells with PD98059 for 24 h to explore whether inhibition of the MAPK pathway reduces the tumorigenicity caused by DDX3 downregulation. In the CCK-8 assay, the proliferation ability of the DDX3-KD SW480 and HCT116 cells treated with PD98059 was significantly reduced compared to the DDX3-KD group without PD98059 (Figure 6B). Consistently, PD98059 significantly attenuated the enhanced clonogenicity induced by DDX3 knockdown in the colony formation assays of SW480 and HCT116 cells (Figure 6C). In the Transwell assays, the migration and invasion abilities of the DDX3-KD SW480 and HCT116 cells treated with PD98059 were significantly attenuated compared to the DDX3-KD group without PD98059 (Figure 6D).
These experiments suggested that the cell proliferation, migration and invasion induced by DDX3 knockdown are partially inhibited by the PD98059 MEK inhibitor and that targeting the MAPK pathway may be a treatment approach for advanced CRC.

DDX3 regulates the expression of E-cadherin and β-catenin through the MAPK pathway in CRC cells

In addition to activating the MAPK pathway, we found that DDX3 loss upregulated the expression of β-catenin, Slug and Snail but downregulated the expression of E-cadherin. However, it was unclear whether these proteins are regulated by the MAPK pathway. Therefore, we treated DDX3-KD SW480 and HCT116 cells with 6.25 μmol/L PD98059 and performed western blot analysis. Compared with untreated DDX3-KD SW480 cells, E-cadherin expression was significantly increased, and p-Erk1/2 and p-β-catenin levels were significantly decreased, in the cells treated with PD98059 (Figure 7A, upper panel). Compared with untreated DDX3-KD HCT116 cells, the levels of p-MEK1/2, p-Erk1/2 and p-β-catenin were significantly decreased in the cells treated with PD98059 (Figure 7A, lower panel). These results suggested that the protein expression of E-cadherin and β-catenin is regulated by the MAPK pathway; thus, DDX3 regulates E-cadherin and β-catenin signals through the MAPK pathway. Based on the above results, we concluded that DDX3 loss activates Snail/Slug/E-cadherin and β-catenin signals through the MAPK pathway, which promotes EMT and thus leads to CRC progression (Figure 7B).

Discussion

As an RNA helicase, DDX3 is involved in gene regulation and almost all RNA metabolic processes [28]. Because of the importance and variety of its functions, the role of DDX3 in tumorigenesis and progression is complex. Previous studies have reported that DDX3 can act as either an oncogene or a tumor suppressor in different cancer types [29,30], and in the few published CRC studies, the conclusions about DDX3 function are inconsistent [31,32]. The present study was the first to confirm the tumor-suppressive effect of DDX3 in CRC by multiple approaches, including public databases, a CRC tissue microarray (TMA), and in vitro and in vivo experiments. DDX3 was expressed at low levels in CRC, and DDX3 expression was further decreased in advanced CRC. Low DDX3 expression was closely related to distant metastasis and may serve as an independent risk factor for poor prognosis in CRC patients. In addition, the in vitro and in vivo experiments showed that low DDX3 expression promoted CRC progression by regulating E-cadherin and β-catenin signals through activation of the MAPK pathway, while the PD98059 MEK inhibitor partially inhibited the proliferation and invasion of CRC cells, suggesting that targeting the MAPK pathway may be a therapeutic approach for advanced CRC.

Tumor occurrence and development are often accompanied by hypoxia in the microenvironment, which leads to cellular stress responses. It has been reported that cells under stress produce stress granules, which promote cell survival, and NLRP3 inflammasomes, which promote apoptosis; the two complexes compete for DDX3 to activate their respective functions, thereby regulating the survival or death of cells under stress [33,34]. DDX3 is thus a key factor regulating cell fate under stress. A previous experiment in nude mice confirmed that DDX3 loss in the myeloid compartment results in low plasma levels of IL-1β, reflecting reduced NLRP3 inflammasome production and thereby contributing to cell survival under stress [33].
We hypothesize that low DDX3 expression in CRC may promote tumor cell survival under hypoxia by reducing inflammasome production, but this hypothesis needs to be verified by further experiments in CRC. Many studies have confirmed that Ras gene mutation is one of the initiating events of CRC [35]. Activation of the MAPK pathway caused by K-Ras mutation reduces APC expression through β-catenin/TCF signaling, resulting in the development of CRC [36]. K-Ras mutations are present in most human CRC cell lines, including the SW480, HCT116 and DLD-1 cells used in this study; our experiments were therefore based on K-Ras-mutant CRC cells. The results indicated that DDX3 loss in K-Ras-mutant CRC further activates the MAPK pathway and that targeting this pathway partially inhibits the proliferation and metastasis of tumor cells caused by DDX3 loss. Zhou et al. suggested that targeting the MAPK pathway may be used as a treatment for CRC with K-Ras mutation [36]. Therefore, targeting the MAPK pathway may be of great significance for the clinical treatment of CRC with DDX3 loss and K-Ras mutation.

We showed by western blot analysis that, in CRC, DDX3 loss inhibits E-cadherin expression and activates β-catenin signaling by activating the MAPK pathway (Figure 7B). In fact, there may be a protein interaction network between E-cadherin and β-catenin. Adhesive junctions between cells require high levels of the E-cadherin-β-catenin complex. If E-cadherin expression is reduced, free β-catenin localizes abundantly to the nucleus and activates the TCF/LEF transcription factor, thereby further decreasing E-cadherin expression by increasing Slug transcription. Moreover, the downregulation of E-cadherin and the upregulation of β-catenin are involved in the EMT process [37]. Furthermore, Stockinger et al. showed that low expression of E-cadherin promotes cell growth and proliferation by activating β-catenin/TCF signaling in an adhesion-independent manner [38], and Gottardi et al. confirmed this finding in the human SW480 CRC cell line [39]. These conclusions are highly consistent with our experimental results and indirectly demonstrate that low DDX3 expression promotes the proliferation and metastasis of CRC cells by inhibiting E-cadherin and activating β-catenin signaling through the MAPK pathway. In conclusion, our findings confirmed the tumor suppressor role of DDX3 in CRC. Low expression of DDX3 in CRC suggests poor prognosis, and targeting the MAPK pathway may be a therapeutic option for advanced CRC. Our conclusions may have important clinical significance for the prognosis and treatment of CRC, especially advanced CRC.
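A note on the liver-metastasis comparison in the animal experiments (4/5 DDX3-KD versus 0/5 DDX3-NC mice): the counts are too small for a chi-square test, and Fisher's exact test is the standard choice for such a 2x2 table. The source does not state which test, if any, was applied to these proportions, so the sketch below is illustrative rather than a reproduction of the authors' analysis:

```python
# Fisher's exact test on the reported liver-metastasis counts
# (DDX3-KD: 4 of 5 mice; DDX3-NC: 0 of 5 mice). The choice of test
# is our assumption; the source does not name the test it used.
from scipy.stats import fisher_exact

#               metastasis, no metastasis
table = [[4, 1],   # DDX3-KD group
         [0, 5]]   # DDX3-NC group

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio}, p = {p_value:.3f}")
# Two-sided p ~ 0.048 for this table, just below the 0.05 threshold.
# The odds ratio is infinite because the NC group had zero events.
```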
Aquaporin7 plays a crucial role in tolerance to hyperosmotic stress and in the survival of oocytes during cryopreservation

Hyperosmotic stress may induce apoptosis in different cell types. However, oocytes show tolerance to osmotic stress during cryopreservation by vitrification, an assisted reproductive technique, and the underlying mechanism is still not understood. Here, we demonstrate that hyperosmosis produced by high concentrations of cryoprotectants, including DMSO, ethylene glycol and sucrose, significantly upregulated the protein levels of aquaporin (AQP) 7, but not AQP3 or AQP9, in mouse oocytes. Knockdown of AQP7 expression by siRNA injection significantly reduced the survival of oocytes after vitrification. In oocytes, AQP7 was shown to bind F-actin, a protein involved in almost all biological events. Moreover, we found that hyperosmosis could upregulate the phosphorylation levels of CPE-binding protein (CPEB) and Aurora A, and inhibition of the PI3K and PKC pathways blocked the hyperosmosis-induced upregulation of AQP7 and the phosphorylation of CPEB and Aurora A in oocytes. In conclusion, hyperosmosis could upregulate the expression of AQP7 via Aurora A/CPEB phosphorylation mediated by the PI3K and PKC pathways, and upregulation of AQP7 plays an important role in improving the tolerance to hyperosmotic stress and the survival of oocytes during cryopreservation by vitrification.

Results

Hyperosmosis induces upregulation of AQP7, but not AQP3 or AQP9, protein levels in mouse oocytes. Our previous studies have shown that human and mouse oocytes express AQP3 and AQP7, two members of the aquaglyceroporin family [9,12]. In this study, we examined AQP9, another member of the aquaglyceroporin family, and found that mouse oocytes also express AQP9 (Supplementary Fig. S1). To examine whether hyperosmotic stress alters aquaglyceroporin expression in oocytes, we treated mouse oocytes with high concentrations of two penetrating cryoprotectants, EG and DMSO, and of a non-penetrating cryoprotectant, sucrose. All three hyperosmotic cryoprotectant solutions upregulated AQP7 protein expression in oocytes (Fig. 1A,B), but none of them upregulated AQP3 or AQP9 protein expression (Fig. 1C-F). On the other hand, F-actin expression in oocytes was unchanged after treatment with the EG, DMSO or sucrose solutions (Fig. 1G,H). These results suggest that hyperosmotic stress may selectively upregulate AQP7 expression in mouse oocytes and that AQP7 may play a major role in water and cryoprotectant transport during oocyte cryopreservation. To examine the functions of AQP7 in oocytes during cryopreservation, we knocked down AQP7 in mouse oocytes by injecting them with siRNA targeting AQP7 at the GV stage. The AQP7 mRNA and protein levels in oocytes injected with AQP7 siRNA were significantly lower than in oocytes injected with scrambled siRNA (Supplementary Fig. S2). We then performed cryopreservation by vitrification with EG as the cryoprotectant. After thawing, the oocytes injected with AQP7 siRNA were dark in colour and had shrunk (Fig. 1I). The survival rate of the oocytes injected with AQP7 siRNA was 0%, significantly lower than that of the oocytes injected with scrambled siRNA (64%) (Fig. 1J, P < 0.05, chi-square test).
On the other hand, oocytes treated with AQP3 siRNA had a post-thaw survival rate of 44%, lower than that of the oocytes injected with scrambled siRNA but higher than that of the AQP7-knockdown oocytes (Fig. 1I). These results indicate that AQP7 is a main water channel involved in the tolerance of oocytes to hyperosmotic stress during cryopreservation.

Hyperosmosis induced redistribution of AQP7 in the cell membrane. It has been shown that osmotic pressure can stimulate aquaporin gene expression in rat astrocytes [20]. We treated mouse oocytes with different concentrations of the non-penetrating cryoprotectant sucrose, producing different osmotic pressures, and found that gradual increases in osmotic pressure induced gradual upregulation of AQP7 levels in the cell membrane (Fig. 2A,B). As a cytoskeletal protein, F-actin has been reported to be involved in the trafficking of many intracellular proteins [21]. To examine whether F-actin might facilitate the translocation of AQP7 from the cytoplasm to the cell membrane, we examined the interaction between AQP7 and F-actin. Immunofluorescence analysis showed co-localization of AQP7 with F-actin in oocytes (Fig. 2C, Supplementary Fig. S3). This co-localization was further confirmed by a co-immunoprecipitation experiment in 293FT cells expressing GFP-hAQP7: immunoblot analysis showed that F-actin was present in the GFP-hAQP7 immunoprecipitate and, in the converse experiment, that GFP-hAQP7 was present in the F-actin immunoprecipitate (Fig. 2D). This result suggests that AQP7 proteins may be transported by F-actin from the cytoplasm to the cell membrane, where AQP7 facilitates water and cryoprotectant transport and improves osmotic balance.

[Figure 2 caption, in part: (C) Immunofluorescence analysis of AQP7 and F-actin co-localization in mouse oocytes (red: AQP7; green: F-actin). Scale bar (A-C), 20 μm. (D) Co-immunoprecipitation of AQP7 and F-actin; IP, immunoprecipitation. (E) GFP intensity in 293FT cells transfected with the GFP-hAQP7 fusion protein expression plasmid or with the GFP vector alone in the presence of 8% EG, 9.5% DMSO or 0.5 M sucrose. Scale bar, 10 μm. (F) Summary data of the immunofluorescence analysis (20 cells per condition from three independent experiments). Data are presented as the mean ± SE; **P < 0.01 compared with the corresponding control.]

To confirm that hyperosmotic cryoprotectant solutions had a direct effect on AQP7 expression in cells, we constructed a GFP-hAQP7 fusion protein expression plasmid and transfected it into 293FT cells. Treatment of GFP-hAQP7 plasmid-transfected 293FT cells with EG, DMSO or sucrose induced a significant increase in GFP fluorescence intensity (Fig. 2E,F). We also noticed that the fluorescence intensity in the cell membrane increased (Fig. 2E), suggesting that hyperosmotic stress could induce the translocation of more AQP7 protein to the cell membrane, where it facilitates water and cryoprotectant transport. In contrast, treatment of 293FT cells transfected with a plasmid expressing GFP alone with DMSO, EG or sucrose did not alter GFP fluorescence intensity (Fig. 2E,F).

Hyperosmosis induced CPEB phosphorylation in oocytes. CPEB is a translational regulatory, sequence-specific RNA-binding protein that controls oocyte development. It binds mRNA to prevent translation; when it is phosphorylated, the mRNAs are released and translated.
After treating mouse oocytes with a hyperosmotic solution containing EG, DMSO or sucrose, we did not detect a significant change in total CPEB protein levels in any of the groups (Fig. 3A,B). However, the immunofluorescence intensities of phosphorylated CPEB were significantly increased in all three groups (Fig. 3C,D), and western blotting showed the same result (Fig. 3E,F).

Hyperosmosis induced Aurora A phosphorylation in oocytes. Oocyte maturation requires Aurora A-catalysed phosphorylation of CPEB at serine 174 and CPE-dependent cytoplasmic polyadenylation [22]. Aurora A undergoes several cell-cycle-controlled phosphorylation events, and phosphorylation at T288 activates the kinase [23]. When mouse oocytes were treated with hyperosmotic EG, DMSO or sucrose solutions, the levels of phosphorylated Aurora A (pAurora A) protein were significantly increased in all three groups (Fig. 4C,D), although there was no significant change in total Aurora A protein levels (Fig. 4A,B). Western blotting confirmed that pAurora A increased significantly in all three groups (Fig. 4E,F).

The PI3K and PKC pathways mediate the upregulation of Aurora A/CPEB phosphorylation induced by hyperosmosis. To identify the upstream signalling pathway that activates Aurora A/CPEB, selective inhibitors were used. We examined hyperosmotic stress-induced CPEB and Aurora A phosphorylation in mouse oocytes pretreated with a PI3K inhibitor (LY294002), a PKC inhibitor (staurosporine), a MEK inhibitor (U0126) or a JNK inhibitor (SP600125). The upregulation of CPEB and Aurora A phosphorylation by the hyperosmotic EG solution was blocked by the PI3K and PKC inhibitors but not by the MEK and JNK inhibitors (Fig. 5). These results suggest that the PI3K and PKC pathways may mediate the upregulation of Aurora A/CPEB phosphorylation induced by hyperosmosis.

Inhibition of PI3K and PKC blocked the upregulation of AQP7 expression induced by hyperosmosis. To clarify the signalling pathway mediating the upregulation of AQP7 by hyperosmotic stress, we examined hyperosmotic stress-induced AQP7 expression in mouse oocytes pretreated with LY294002, staurosporine, U0126 or SP600125. Immunofluorescence analysis showed that both LY294002 and staurosporine blocked the hyperosmotic EG solution-induced upregulation of AQP7 expression, whereas the other inhibitors had no effect (Fig. 6A,B). In addition, these inhibitors had no effect on AQP7 expression in the absence of EG treatment (Supplementary Fig. S4A). We also treated oocytes with another PKC inhibitor, GF109203X, and found that the hyperosmosis-induced upregulation of AQP7 expression was likewise inhibited (Supplementary Fig. S4B). These results indicate that the hyperosmosis-induced expression of AQP7 may be mediated by the PI3K and PKC signalling pathways. To confirm this, we repeated the same experiments in 293FT cells transfected with the GFP-hAQP7 fusion protein expression plasmid. Again, LY294002 and staurosporine, but not U0126 and SP600125, attenuated the fluorescence intensity of GFP-hAQP7 induced by the hyperosmotic EG solution (Fig. 6C,D), and western blotting showed the same results (Fig. 6E,F).
Discussion

The permeability of the plasma membrane to water and cryoprotectants is crucial for cell survival during cryopreservation. Aquaporins in the cell membrane, especially members of the aquaglyceroporin subfamily (AQP3, AQP7 and AQP9), play an important role in facilitating the movement of water and cryoprotectants because aquaglyceroporins are permeable not only to water but also to small neutral solutes [4,8,13]. Our previous studies demonstrated that AQP7 is expressed in human and mouse oocytes [9,12]. In the present study, we found that hyperosmosis could upregulate AQP7 expression in mouse oocytes, whereas no effect of hyperosmosis on AQP3 and AQP9 expression was detected. These results suggest that hyperosmosis may selectively upregulate aquaporin expression in oocytes. We also noticed that high osmotic pressure altered the distribution of AQP7, which was translocated from the cytoplasm to the cell membrane. When AQP7 gene expression was knocked down, the survival rate of the oocytes was significantly reduced, indicating that AQP7 is a main aquaporin subtype promoting oocyte tolerance to hyperosmotic stress during cryopreservation.

Osmolarity plays an important role in cellular homeostasis. A given osmotic gradient across the cell membrane induces osmotic flow, which can occur through the lipid bilayer as well as through proteins. Among these proteins, aquaporins are the main water transporters and play a crucial role in modulating the osmotic permeability of cell membranes [24]. Upregulating the water transport capacity of the cell membrane via aquaporins increases the osmotic flux generated by a given osmotic gradient. The expulsion of intracellular water and the reduction of intracellular ice formation are facilitated by exposure to penetrating cryoprotectants (e.g., propanediol, DMSO and EG), which cross the cell membrane and displace water along an osmotic gradient, and/or to non-penetrating cryoprotectants (e.g., sucrose), which provide a continuous osmotic gradient. Penetrating cryoprotectants also aid in balancing other intracellular solutes, which are lethal at high concentrations [3]. One of the critical steps in cryopreserving oocytes is the loading of penetrating cryoprotectants, which may result in severe osmotic perturbations and cryoprotectant toxicity, depending on the specifics of the experimental protocol [25]. Although several oocyte cryopreservation studies have assessed the efficiency, reliability and biosafety of cryoprotectant loading methods, they have mainly focused on physical or chemical approaches to improving cryoprotectant loading and removal. In the present study and in previous studies, we found that hyperosmotic stress and cryoprotectants induced the upregulation of AQP7 expression in the cell membrane, where aquaporins facilitate water and cryoprotectant transport. This may be one of the mechanisms underlying the tolerance of oocytes to hyperosmotic stress during cryopreservation.

Translocation of aquaporins to the cell membrane plays crucial roles in oocyte maturation [26]. Rapidly accumulating evidence indicates that actin and actin-based cytoskeletal complexes are involved in protein trafficking [27]. In the kidney, AQP2 is transported to the cell membrane along actin-based cytoskeletal tracks after the new protein is synthesized and processed [28].
The present study showed that AQP7 might bind to F-actin, suggesting that the translocation of AQP7 from the cytoplasm to the cell membrane may also depend on actin filaments, allowing faster and more complete water and cryoprotectant exchange and maintenance of the osmotic pressure balance in oocytes.

Gene expression is regulated at several steps, including transcription, post-transcriptional modification, translation and post-translational modification. Among these, post-transcriptional and post-translational regulation give cells the ability to respond rapidly and sensitively to internal or environmental changes. When an oocyte enters the first meiotic division, transcription stops until the first cleavage after fertilization [17,18]. However, protein synthesis continues, and oocytes rely on post-translational regulation, such as phosphorylation, dephosphorylation, ubiquitination and sumoylation, and on post-transcriptional regulation of pre-existing transcripts, to precisely regulate the maturation process [29]. Many transcripts are stored in cells by combining with CPEB and other proteins to form RNA-protein complexes. CPEB is the critical protein that controls mRNA translation in oocytes [30]. When CPEB is phosphorylated, it is activated; phosphorylated CPEB promotes depolymerization of the RNA-protein complexes, and the mRNA can then be translated [31]. In our study, we found that the hyperosmotic cryoprotectant solutions upregulated the phosphorylation levels of CPEB in oocytes, suggesting that hyperosmosis might liberate stored mRNAs, including AQP7 mRNA, and promote their translation. However, the hyperosmotic cryoprotectant solutions did not upregulate AQP3 and AQP9 expression; the underlying mechanism is unclear, and further studies are needed to clarify it.

CPEB and Aurora A play important roles in the regulation of gene expression in oocytes. After meiosis resumes, gene expression in the oocyte is regulated by CPEB [19]. Aurora A is a member of a family of mitotic serine/threonine kinases and is activated by phosphorylation at one or more sites, such as T288 [23]. Phosphorylated Aurora A activates CPEB via phosphorylation of the serine 174 residue [22]. It has been shown that osmotic pressure can stimulate pressure sensors on the cell membrane to activate intracellular kinases [31] and cause a series of physiological changes, including changes in gene expression [32]. The activated kinases may induce the phosphorylation of Aurora A. Our study demonstrates that hyperosmosis significantly increased the phosphorylation levels of both CPEB and Aurora A. Moreover, we found that the upregulated AQP7 expression and the increased phosphorylation of CPEB and Aurora A were blocked by PI3K and PKC inhibitors. It has been demonstrated that the hyperosmotic pressure produced by high NaCl induces phosphorylation of p85 on Y508, which is involved in the activation of PI3K [33], and that hyperosmotic stress also increases phosphorylation of PKCδ [34]. Our results suggest that the hyperosmosis-induced upregulation of AQP7 expression may be due to activation of the Aurora A/CPEB pathway, possibly mediated by PI3K and PKC, although further studies are needed for clarification.
Because tolerance to hyperosmotic stress is critical for cells during cryopreservation, it is important to uncover its underlying mechanism in oocytes and other cells in order to improve cryopreservation protocols. Our study provides the previously undocumented insight that AQP7 in oocytes may mediate tolerance to hyperosmotic stress during cryopreservation. Altogether, we observed that hyperosmotic stress activated the Aurora A/CPEB pathway, mediated by PI3K and PKC, to upregulate AQP7 expression, and F-actin may play an important role in AQP7 trafficking from the cytoplasm to the cell membrane. A schematic diagram depicting the potential interactions between these cellular processes is shown in Fig. 7. Most significantly, these observations suggest that AQP7 may be a promising candidate for predicting the quality of oocytes after cryopreservation in clinical practice.

Materials and Methods

Collection of mouse oocytes. All animal experiments were performed according to the guidelines for animal use approved by the Institutional Animal Care and Use Committee of the School of Medicine, Zhejiang University. C57BL/6J female mice, an inbred strain, were purchased from the Zhejiang University Animal Centre at 6 weeks of age and were housed, fed and maintained under identical conditions, with a regulated light-dark cycle (14 h light/10 h dark, starting at 6:00 A.M. each day). Superovulation was induced in 8-week-old adult mice by intraperitoneal injection of 10 IU pregnant mare serum gonadotropin (PMSG; Sigma-Aldrich, Saint Louis, USA), followed by 5 IU hCG 48 h after the PMSG administration. Unfertilized oocytes were collected from the ampullary portions of the oviducts 14 h after the hCG injection and were freed from cumulus cells by suspending them in human tubal fluid (HTF) containing 80 units/ml hyaluronidase (Sigma-Aldrich), followed by washing with fresh HTF. Only MII oocytes with a normal appearance and a visible first polar body were used in this study. Twenty oocytes were obtained from each animal, and 350 animals were used for this experiment.

Solutions and treatments. To evaluate the effects of cryoprotectants on the biological properties of oocytes, mouse oocytes were treated with HTF containing 8% EG, 9.5% DMSO or 0.5 M sucrose for 20 min (rough osmolarity estimates for these solutions are sketched at the end of the Methods). To evaluate the osmotic stress-dependent response, oocytes were treated with HTF containing 0.25 M, 0.5 M, 0.75 M or 1 M sucrose for 20 min. Oocytes treated with HTF alone were used as a control. After treatment, the oocytes were collected and immediately fixed at room temperature for 20 min in PBS containing 4% PFA.

[Figure 7 caption: Schematic diagram depicting the potential interactions between hyperosmotic stress and intracellular processes, including upregulation of AQP7. During cryopreservation, the hyperosmotic stress produced by the cryoprotectant solution activates phosphorylation of Aurora A (pAurora A) through the PI3K and PKC signalling pathways. pAurora A phosphorylates CPEB, a regulator of intracellular protein translation, which might result in upregulation of AQP7 protein expression. The increased AQP7 binds to F-actin and might be translocated from the cytoplasm to the cell membrane, where it facilitates water and cryoprotectant transport.]

The GFP-hAQP7 plasmid and the pEGFP-C1 control plasmid were transfected into 293FT cells as described previously [12].
After 48 hours of transfection, the cells were treated with culture medium containing 8% EG, 9.5% DMSO or 0.5 M sucrose for 30 min. The cells were washed once with sterile PBS and fixed with 4% PFA for 20 min at room temperature, followed by three washes with PBS.

Immunofluorescence and image processing. The fixed oocytes were blocked in 1× PBS containing 5% BSA and 1% saponin (Sigma-Aldrich) for 30 min, followed by incubation with primary antibody at a 1:200 dilution at 4 °C overnight. After washing three times, the oocytes were incubated with Alexa Fluor 488/594 goat anti-rabbit IgG (Invitrogen, Carlsbad, USA) at a 1:400 dilution for 30 min. Oocytes were imaged with an Olympus FV1000 laser-scanning confocal microscope (Olympus, Tokyo, Japan) under an Olympus UPlanSApo 20×/0.75 objective lens. The fluorescence intensity of each image was analysed using ImageJ software (U.S. National Institutes of Health). The image signal was obtained by subtracting the mean background intensity from the mean pixel intensity of the cell and multiplying this value by the area of the cell; the relative intensity was the treatment-group signal divided by the control signal (a minimal sketch of this calculation is given at the end of the Methods). Images of fixed 293FT cells were taken with an Olympus FV1000 laser-scanning confocal microscope under a 60×/1.4 oil objective lens. The images presented are representative of three experiments. The fluorescence intensities of 20 different cells from one representative experiment were quantified using ImageJ software.

Microinjection of siRNA in mouse oocytes. For knockdown of AQP7 expression, oocytes at the germinal vesicle stage (immature oocytes) were obtained by puncturing follicles on the ovaries of female mice 46-50 h after PMSG injection, without hCG injection. Mouse oocytes with a normal appearance were kept at 37 °C in HEPES-buffered HTF containing 10% serum substitute supplement (pH 7.4) covered with mineral oil (Sigma-Aldrich). These oocytes were injected with scrambled RNA (Ambion, Austin, USA), mouse AQP3 siRNA (S62527; sense: 5′-GGAUUGUUUUUGGGCUGUATT-3′; antisense: 5′-UACAGCCCAAAACAAUCCCA-3′; Ambion) or mouse AQP7 siRNA (sense: 5′-GCAGCUACCACCUACUUAATT-3′; antisense: 5′-UUAAGUAGGUGGUAGCUGCAG-3′; Ambion) and then cultured until they matured to the metaphase II stage. In the microinjection experiments, each oocyte was held with a holding pipette connected to a micromanipulator on an inverted microscope and injected with 10 pl of scrambled RNA solution (1 pg/pl, as a control) or AQP3 or AQP7 siRNA solution (1 pg/pl) through an injection needle connected to another micromanipulator. Injected oocytes with a normal appearance were cultured in HTF containing 10% serum substitute supplement, 0.1 IU/ml FSH, 0.5 IU/ml hCG and 0.05 mg/ml penicillin under 5% CO2 at 37 °C. AQP7 mRNA and protein expression levels were examined 16-18 hours after the injection by qPCR and immunofluorescence, respectively.

qPCR. Total RNA was extracted from 30-50 mouse oocytes using the RNeasy Plus Micro Kit according to the manufacturer's instructions (Qiagen, Hilden, Germany). cDNA was prepared by reverse transcription using the RT reagent Kit (Takara, Dalian, China). RT-nested PCR was repeated at least three times.

Oocyte vitrification and survival rate analysis. Oocytes injected with scrambled RNA, AQP7 siRNA or AQP3 siRNA were vitrified using a two-step media protocol and thawed as described previously [12,36].
Briefly, oocytes were placed in cryoprotectant solution I (8% (v/v) EG in HTF medium) for 5 min, followed by placement in cryoprotectant solution II (15% (v/v) EG, 10 mg/ml Ficoll 70 (Pharmacia Biotech, Uppsala, Sweden) and 0.5 M sucrose in HTF medium) for less than 30 s at room temperature. Oocytes were sealed in a cryovial and stored in liquid nitrogen. After thawing, the oocytes were washed twice in HTF medium containing 0.5 M sucrose before being cultured again. The survival rate of the thawed oocytes was assessed by examining the appearance of their cytoplasm and plasma membranes under a stereomicroscope (Nikon, Tokyo, Japan) after culturing at 37 °C and 5% CO2 for 2 hours. Oocytes showing a clear outline of the plasma membrane and a normal size and colour were considered to have survived.

Signalling pathway inhibitor treatment and analysis. The oocytes were randomly divided into five groups and pretreated for 10 min with the PKC inhibitor staurosporine (15 nM), the PI3K inhibitor LY294002 (25 μM), the MEK inhibitor U0126 (10 μM), the JNK inhibitor SP600125 (25 μM), or vehicle (control). After pretreatment, the oocytes were washed four times with HTF, treated with 8% EG in HTF for 20 min, and then fixed immediately. The protein levels of CPEB, pCPEB, Aurora A, pAurora A and AQP7 were analysed by immunofluorescence as described above. In parallel, 293FT cells transfected with GFP-hAQP7 or GFP alone were treated with the same protocol; the cells were collected and fixed for imaging or lysed in RIPA buffer for western blotting. Expression levels of GFP-hAQP7 and GFP were analysed by fluorescence image processing as described above and by western blotting.

Western blotting. Treated oocytes and transfected 293FT cells were lysed in 1× RIPA buffer containing protease inhibitors (1 μg/ml PMSF). Samples were separated on a 10% SDS gel and transferred to a nitrocellulose membrane. After incubation for 1 h in blocking buffer, the membrane was exposed to primary antibody (1:1000) at 4 °C overnight, followed by incubation with secondary antibodies (DyLight 680 or 800; KPL, Gaithersburg, USA) for 2 hours. The membranes were scanned with an Odyssey imager (LI-COR Biosciences, Lincoln, USA), and the bands were analysed with Quantity One software (Bio-Rad Laboratories, Hercules, USA). The primary antibodies used in this study included rabbit polyclonal anti-phospho-CPEB (T171) antibody (Epitomics), rabbit polyclonal anti-phospho-Aurora A (T288) antibody (Abcam), rabbit polyclonal anti-AQP7 antibody (Santa Cruz Biotechnology) and mouse monoclonal anti-β-actin antibody (Santa Cruz Biotechnology).

Statistical analysis. All data were normally distributed and are presented as the mean ± SEM. An independent-samples t test was used to evaluate statistical significance between two groups. One-way analysis of variance (ANOVA) with Tukey's post-hoc test was used to evaluate differences among more than two groups. A chi-square test was used to compare oocyte survival rates between two groups. SPSS 16.0 for Windows was used for the statistical analysis, and P-values less than 0.05 were considered statistically significant.
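For readers who want to reproduce this workflow outside SPSS, the same three tests are available in SciPy. The sketch below is illustrative only: the per-oocyte values and the group sizes in the chi-square example are hypothetical placeholders (the paper reports survival percentages, not counts):

```python
# Sketch of the statistical workflow described above, using SciPy.
# All numeric values are hypothetical; the paper reports only
# summary statistics, not raw per-oocyte measurements.
from scipy import stats

# Two groups -> independent-samples t test
control   = [1.00, 0.95, 1.05, 0.98, 1.02]   # hypothetical intensities
treatment = [1.60, 1.45, 1.72, 1.55, 1.68]
t, p = stats.ttest_ind(control, treatment)
print(f"t test: t = {t:.2f}, p = {p:.4f}")

# More than two groups -> one-way ANOVA, then Tukey's post-hoc test
g1, g2, g3 = [1.0, 1.1, 0.9], [1.5, 1.6, 1.4], [2.0, 2.1, 1.9]
f, p = stats.f_oneway(g1, g2, g3)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
print(stats.tukey_hsd(g1, g2, g3))   # requires SciPy >= 1.8

# Survival rates -> chi-square test, e.g. 0% vs 64% survival,
# assuming 25 oocytes per group (group sizes are our assumption)
table = [[0, 25],    # AQP7 siRNA: survived, died
         [16, 9]]    # scrambled siRNA: survived, died
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square: chi2 = {chi2:.2f}, p = {p:.4f}")
```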
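The image quantification described under "Immunofluorescence and image processing" reduces to signal = (mean cell intensity − mean background) × cell area, with relative intensity = treatment signal / control signal. A minimal NumPy sketch of that calculation, with mask handling and variable names of our own devising:

```python
# Background-corrected integrated fluorescence signal, as described
# in the Methods: (mean cell pixel intensity - mean background) x area.
import numpy as np

def cell_signal(image: np.ndarray, cell_mask: np.ndarray,
                bg_mask: np.ndarray) -> float:
    """Integrated signal of one cell in a single-channel image.

    image:     2D array of pixel intensities
    cell_mask: boolean mask selecting the cell's pixels
    bg_mask:   boolean mask selecting a background region
    """
    mean_cell = image[cell_mask].mean()
    mean_bg = image[bg_mask].mean()
    area = int(cell_mask.sum())          # cell area in pixels
    return (mean_cell - mean_bg) * area

def relative_intensity(treated_signals, control_signals) -> float:
    # Relative intensity = mean treatment signal / mean control signal
    return float(np.mean(treated_signals) / np.mean(control_signals))
```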
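Finally, the tonicity of the cryoprotectant solutions under "Solutions and treatments" can be estimated from the stated concentrations. Converting % (v/v) to molarity requires the solute's density and molar mass; the values below are standard reference figures, and equating osmolarity with molarity assumes ideal behaviour of these non-dissociating solutes:

```python
# Rough molarity/osmolarity estimates for the cryoprotectant solutions
# (8% v/v EG, 9.5% v/v DMSO, 0.5 M sucrose). Densities and molar
# masses are textbook reference values; ideal-solution behaviour of
# these non-electrolytes is assumed, so osmolarity ~= molarity.
def molarity_from_vv(percent_vv: float, density_g_per_ml: float,
                     molar_mass_g_per_mol: float) -> float:
    grams_per_litre = percent_vv / 100 * density_g_per_ml * 1000
    return grams_per_litre / molar_mass_g_per_mol

eg = molarity_from_vv(8.0, 1.113, 62.07)     # ethylene glycol
dmso = molarity_from_vv(9.5, 1.100, 78.13)   # DMSO
print(f"8% EG     ~ {eg:.2f} M  (~{eg*1000:.0f} mOsm added)")
print(f"9.5% DMSO ~ {dmso:.2f} M (~{dmso*1000:.0f} mOsm added)")
print("0.5 M sucrose ~ 500 mOsm added")
# All three are strongly hyperosmotic relative to ~290 mOsm culture
# medium, consistent with the hyperosmotic stress the paper describes.
```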