The Relationship between Serum Bilirubin and Elevated Fibrotic Indices among HBV Carriers: A Cross-Sectional Study of a Chinese Population

This study probed the association between bilirubin and hepatitis B virus (HBV) infection and progression. A cross-sectional analysis of 28,500 middle-aged and elderly Chinese participants was performed to analyze differences in bilirubin between hepatitis B surface antigen (HBsAg)-positive and HBsAg-negative subjects, and the correlation between bilirubin and the severity of hepatic fibrosis estimated by non-invasive indices. Bilirubin was significantly higher in the HBsAg (+) group than in the HBsAg (−) group. Higher bilirubin levels were consistently associated with elevated liver fibrosis indices among HBsAg carriers. Compared with quartile 1 of total bilirubin (TBil), the multivariable-adjusted ORs (95% CIs) for elevated fibrosis indices in quartile 4 were 2.24 (95% CIs, 1.57–3.21) as estimated by the fibrosis 4 score (FIB-4) and 2.22 (95% CIs, 1.60–3.08) as estimated by the aspartate transaminase to platelet ratio index (APRI). In addition, direct bilirubin (DBil) had a stronger association with elevated liver fibrosis indices than indirect bilirubin (IBil). Furthermore, the relationship between DBil and elevated fibrosis indices was more robust among participants who were female, overweight or had central fat distribution. These findings suggest that bilirubin levels, especially DBil, are independently associated with an increased risk of elevated fibrosis indices.

Introduction
Although a hepatitis B vaccine is available, more than 350 million people are chronically infected with hepatitis B virus (HBV) [1], and about 30% of the world's population shows serological evidence of current or past infection [2]. HBV infection is a major threat to public health, especially in China [3]. It has been estimated that more than 80% of liver cancer worldwide is attributable to hepatitis B or C virus infections [4]. Patients with HBV infection have a high risk of progressive liver fibrosis, which can lead to cirrhosis and hepatocellular carcinoma (HCC). In addition, the inflammatory milieu caused by chronic viral infections might influence hepatic glucose sensitivity and increase insulin resistance [5], consistent with the finding that diabetes and prediabetes are prevalent among HBV-infected patients [6]. HBV infection is the tenth leading cause of death worldwide, with about 786,000 related deaths every year [7]. HBV infection therefore causes high mortality and creates a substantial social burden. Bilirubin, a primary end product of heme catabolism, possesses cytoprotective properties because of the antioxidant nature of the bile pigment. In 1995, Breimer first suggested that bilirubin might be implicated in protection against specific kinds of diseases resulting from oxidative damage [8]. Subsequent reports observed that bilirubin appeared to have an innate capacity to resist oxidative damage [9][10][11]. Meanwhile, all forms of serum bilirubin, including total bilirubin (TBil), direct bilirubin (DBil) and indirect bilirubin (IBil), display protective properties in cardiovascular diseases [12]. Several studies clarified that the robust anti-oxidative properties of bilirubin could largely explain its protective effects [13][14][15], and the finding that subjects with higher serum bilirubin had elevated total antioxidant status also confirmed its anti-oxidative property [16].
On the other hand, bilirubin has previously been proven to be a marker of liver injury and is incorporated in several prognostic scoring models, such as the Child-Pugh (CP) score and the model for end-stage liver disease (MELD) [17]. In recent years, relevant studies have focused on the effect of bilirubin on several hepatic disorders. A recent study suggested that DBil independently reduced non-alcoholic fatty liver disease (NAFLD) risk [18]. Patients with liver biopsy-proven non-alcoholic steatohepatitis (NASH) had significantly lower bilirubin levels than those without NASH, and there was also an inverse association between bilirubin levels and histological features including fibrosis [19,20]. Serum IBil levels were negatively correlated with the progression of liver fibrosis in chronic hepatitis C (CHC) patients [21]. However, the concentration of serum bilirubin increased along with the severity of fibrosis among CHC patients [22]. High levels of bilirubin, or combined prognostic indices including bilirubin, were able to predict short-term mortality in patients with acute-on-chronic liver failure [23,24]. Meanwhile, abnormal bilirubin values were even more strongly associated with poor clinical outcome at baseline and up to five years of follow-up in patients with primary biliary cirrhosis [25]. Related studies have thus illustrated associations between bilirubin and liver disease. However, studies performed on participants with HBV infection are lacking. Moreover, most of these studies did not investigate the associations between all subtypes of bilirubin and liver disease, and a study assessing the relationship between bilirubin and HBV-related fibrosis in a large sample is needed. Considering the apparently bewildering complexity of bilirubin's function in different milieus and the high prevalence of HBV infection, the underlying association between bilirubin and liver fibrosis in HBV infection warrants investigation. The task of this study is to untangle the intrinsic relationship between bilirubin and different indices reflecting liver function among health check-up participants with HBV infection.

Characteristics of Participants
A total of 28,500 participants (26,549 HBsAg-negative, 1951 HBsAg-positive) were included. Demographics and laboratory data of the subjects are listed in Table 1. Firstly, HBsAg-positive individuals were younger than control subjects. HBsAg-seropositive subjects had a higher prevalence of current smoking and drinking, which might be explained by the higher proportion of males among HBV-infected individuals compared to HBsAg-negative subjects. HBsAg-seropositive individuals also had a lower prevalence of traditional cardiovascular risk factors such as hypertension, diabetes, coronary heart disease (CHD) and fatty liver. Next, the mean levels of platelet count, total cholesterol (TC), triglycerides (TG) and low-density lipoprotein cholesterol (LDL-C) were significantly lower in the HBsAg (+) group. As expected, HBsAg-positive subjects had higher levels of liver injury markers (aspartate transaminase (AST) and alanine transaminase (ALT)) than HBsAg-negative subjects. Notably, mean levels of TBil, IBil and DBil were significantly higher in the HBsAg (+) group than in the control group.
Associations between Serum Bilirubin and Demographic and Biochemical Parameters and Non-Invasive Liver Fibrosis Indices among HBsAg (+) Participants
Among HBV carriers, the associations between serum bilirubin and demographic and biochemical parameters and non-invasive liver fibrosis indices are presented in Table 2. TBil showed significant associations with age, waist-to-hip ratio (WHR), AST, hemoglobin, platelet count, TG, TC, LDL-C, the aspartate transaminase to platelet ratio index (APRI) and the Fibrosis 4 score (FIB-4). Surprisingly, DBil exhibited significant associations with more parameters than IBil, possibly reflecting differences between the two forms of bilirubin. For both DBil and IBil, we observed positive associations with age, WHR, hemoglobin, APRI and FIB-4, and inverse associations with platelet count and TC. The positive correlation between DBil and the liver injury marker AST was statistically significant, whereas no significant associations were found between IBil and the markers of liver injury (AST and ALT). Table 3 shows sequentially higher odds of elevated liver fibrosis indices with ascending quartiles of TBil in multivariate models. After adjusting for age, sex, body mass index (BMI), WHR, smoking, drinking, education, marriage status and physical activity, increased TBil remained significantly linked to the risks of elevated APRI and FIB-4. The positive relationship was attenuated by additional adjustment for medical history but remained statistically significant. The corresponding odds ratios (ORs) (95% confidence intervals (CIs)) for risk of elevated APRI comparing the upper three TBil quartiles with the lowest TBil quartile began at 1.26.

Associations between Different Forms of Bilirubin (IBil and DBil) and Fibrosis Scores among HBsAg (+) Participants
We further analyzed the associations between the different forms of bilirubin (IBil and DBil) and fibrosis scores among HBsAg (+) participants. Positive associations were found between IBil and elevated APRI or FIB-4 (Table S1). A similar tendency was exhibited between DBil and the two fibrosis indices (Table S2). The corresponding OR (95% CIs) for elevated APRI comparing the highest DBil quartile with the lowest DBil quartile was 2.64 (1.89, 3.70) after full adjustment. For elevated FIB-4, the corresponding OR (95% CIs) comparing the highest DBil quartile with the lowest DBil quartile was 3.07 (2.10, 4.50). The trend between bilirubin and elevated fibrotic indices was statistically significant for IBil (P FIB-4 < 0.001 and P APRI < 0.001) and DBil (P FIB-4 < 0.001 and P APRI < 0.001).

Comparisons of TBil, IBil and DBil in HBsAg (+) Participants
The areas under the receiver operating characteristic curve (AUROC) for predicting elevated APRI and FIB-4 for each form of bilirubin are presented in Table 4. DBil had higher AUROCs than TBil and IBil.

Serum DBil Levels in Relation to Fibrotic Features in Subgroups among HBsAg (+) Participants
To better understand the effect of DBil levels on liver fibrotic progression across sex, BMI and WHR, we examined the relationship between DBil and fibrosis indices in subgroups. Figure 1 shows the risks of elevated fibrosis indices per standard deviation (SD) increase in DBil in subgroups defined by sex, overweight status, and central versus peripheral fat distribution. Positive associations between DBil and elevated FIB-4 or APRI were consistent across subgroups after full adjustment.
Female subjects were inclined to have a higher risk of elevated fibrosis indices. In addition, after full adjustment, subjects who were overweight or had central fat distribution were inclined to have a higher risk of an elevated fibrosis index as reflected by APRI than those without these metabolic disorder features (Figure 1: ORs (95% CIs) for elevated fibrosis indices per 1 SD increase of DBil by subgroup; central fat distribution defined by WHR ≥ 0.92 for males and ≥ 0.81 for females; models adjusted for age (continuous), sex (male, female), BMI (continuous), WHR (continuous), smoking (never, quit, current), drinking (never, quit, current), education (≤6/7–9/10–12/≥13), marriage status (yes/no), physical activity (yes/no) and medical history (yes/no for hypertension, CHD, diabetes, fatty liver). APRI, aspartate transaminase to platelet ratio index; FIB-4, Fibrosis 4 score; SD, standard deviation; WHR, waist-to-hip ratio; ORs, odds ratios; CIs, confidence intervals).

Discussion
In this study, we observed significant differences in serum bilirubin between individuals with and without HBV infection. Moreover, higher bilirubin levels might indicate more advanced liver fibrosis in a large group of retired workers with serological evidence of HBV infection. Such associations remained statistically significant after adjustment for multiple parameters, especially in female, overweight or centrally obese individuals. Our observations help explain the relationship between bilirubin and validated non-invasive fibrosis indices among HBV carriers. For a start, a lower prevalence of traditional cardiovascular risk factors such as hypertension, diabetes, CHD and fatty liver was observed in this report, which differs from the finding that diabetes and prediabetes were prevalent among HBV-infected patients [6]. One possible reason is the distinct characteristics of the two study populations. Following the observation that bilirubin levels among HBV carriers were higher than in other participants, we sought to investigate the relationship between bilirubin and liver fibrosis among individuals with HBV infection. Unlike the findings that higher bilirubin concentrations were associated with reduced risk of cardiovascular disease, respiratory illness and mortality in epidemiological studies [13,15], no meaningful protective effect of bilirubin was seen in our study. A similar tendency was observed in patients with hepatitis C virus (HCV)-associated fibrosis [22], although the bilirubin concentrations in that study were far beyond the "normal range". Conversely, the principal results of our study differ from another HCV-related fibrosis study [21] in which an inverse relationship between bilirubin and liver fibrosis was found. Possible reasons for these inconsistent results include differences in sample size and study design. The results presented indicate that bilirubin might act as an independent risk factor for significant liver fibrosis. Is this biologically plausible? Liver fibrosis is the excessive accumulation of extracellular matrix proteins, including collagen. Most types of chronic liver disease progress without clinical symptoms to liver fibrosis, which can result in cirrhosis, liver failure and other severe complications. In advanced cirrhosis, glucuronyl conjugation of bilirubin and biliary excretion of DBil are markedly impaired and jaundice appears [26]. Therefore, the serum bilirubin concentration may be a good prognostic marker for patients with decompensated liver cirrhosis. Hepatic fibrosis, as the onset of liver cirrhosis, might disturb bilirubin's normal production and excretion in the liver. Although it is difficult to determine the exact mechanisms behind the relationship between bilirubin and fibrotic progression because of the complexity of the disease, several hints can be identified. Firstly, related studies have proposed that bilirubin is able to induce cytotoxic effects [27][28][29][30], unbalance redox homeostasis [31], and ultimately compromise mitochondrial integrity and induce apoptosis [32]. Second, activated retinoid-storing hepatic stellate cells might contribute to the elevation of DBil levels in blood [33], and medicines associated with reversal and prevention of cirrhosis can also reduce serum bilirubin levels [34]. Lastly, slightly elevated bilirubin can induce an endoplasmic reticulum stress response, resulting in decreased proliferative and metabolic activity of hepatocytes [35].
Three important findings emerged from evaluating serum bilirubin in individuals with HBV infection. First, our results showed positive associations between all forms of bilirubin and the severity of liver disease. DBil correlated more strongly with the liver fibrosis indices, and DBil had higher AUROCs than TBil and IBil. Notably, there were slight differences between DBil and IBil: the more potent antioxidant capacity of IBil compared with DBil [36] might explain the weaker association between IBil and the risk of elevated fibrosis indices. Second, our results suggest that females should pay more attention to an increase in DBil level. A related study suggested a protective effect of estrogens on fibrogenesis via inhibition of stellate cell proliferation [37]. The individuals in the current study were characterized by older age, with substantially decreased estrogen levels. These findings suggest that elderly females should pay particular attention to increases in DBil. In addition, overweight subjects and those with central fat distribution should also be alert to an increase in DBil level. The intricate interplay between HBV infection and metabolic factors might underlie the positive relationship between bilirubin and fibrosis [38]. The data in the present study support this idea, in that participants with higher DBil levels had higher risks of elevated fibrosis scores among subjects with central fat distribution or overweight compared to those without such metabolic disorders. Lifestyle advice should be offered to all HBsAg carriers given its easy implementation, low cost and minimal risk of side effects. The data displayed here can be interpreted only in the context of the study design. Firstly, owing to the limitations of a cross-sectional design, the potentially important function of bilirubin needs further investigation. Second, the severity of liver fibrosis was assessed only by non-invasive indices. Although liver pathology is the gold standard, patient discomfort and expense must also be considered. Non-invasive methods have overcome the limitations of liver biopsy and have also been used as prognostic indices for subjects with hepatitis B-associated HCC [39]. Furthermore, the accuracies of FIB-4 and APRI were 78% and 76% [40], suggesting they are suitable for regular monitoring of disease progression [41,42]. Thus, using APRI and FIB-4 to assess liver fibrosis was acceptable in the circumstances of this study. Finally, we could not acquire information on virus titers, which are potentially linked to the pathophysiology of CHB. The proportion of HBeAg (+) participants was 1.8%, relatively small in this study. In summary, our study demonstrates a robust association between bilirubin levels and higher surrogate indices of liver fibrosis among participants with HBV infection, suggesting that bilirubin levels, especially DBil, are independently associated with an increased risk of elevated fibrosis indices.

Study Population
We conducted a cross-sectional study using data from the Dongfeng-Tongji cohort study of retired workers, as described previously [43]. In 2013, a total of 38,295 individuals underwent physical examination and laboratory tests and completed semi-structured questionnaires covering socio-demographic and other relevant information during face-to-face interviews. Among these subjects, 28,500 underwent laboratory testing for HBV infection.
HBV infection was defined as the presence of hepatitis B surface antigen (HBsAg) in peripheral blood. Participants were divided into two study groups: (a) participants with HBsAg; (b) controls without HBsAg.

Ethics Statement
Participants were enrolled after providing written informed consent to the study protocol, which was approved by the Medical Ethics Committee of the School of Public Health, Tongji Medical College, Huazhong University of Science and Technology and Dongfeng General Hospital, DMC (approval No. 03, 1 August 2008).

Measurements
Participants underwent a physical examination after an overnight fast at Dongfeng Central Hospital, conducted by trained physicians, nurses and technicians. Body mass index (BMI) was calculated as weight in kilograms divided by the square of height in meters. Fat distribution was estimated using the waist-to-hip ratio (WHR): subjects were classified as having central fat distribution (WHR ≥ 0.81 for women and ≥ 0.92 for men) or peripheral fat distribution (WHR < 0.81 for women and < 0.92 for men), as described in a related study [44]. After an overnight fast, blood specimens were collected to test blood lipids, fasting glucose, hepatic function and renal function at the hospital's laboratory using an ARCHITECT Ci8200 automatic analyzer (Abbott, Chicago, IL, USA) with the corresponding reagent kits. The laboratory also provided a complete blood count and routine urine test. Commercially available enzyme immunoassays were used to determine serum HBsAg, hepatitis B e antigen (HBeAg), and antibodies to hepatitis B surface antigen, hepatitis B e antigen and hepatitis B core antigen at the same laboratory using a fully automatic immunoanalyzer, Uranus AE 120 (AIKANG, Shenzhen, China). In addition, abdominal B-type ultrasound was performed by experienced radiologists using an Aplio XG (TOSHIBA, Tokyo, Japan). A history of regular smoking was defined as having smoked at least one cigarette per day for more than six months. Smokers who met this definition were divided into current smokers and former smokers according to whether they had quit smoking at the time of the interview. Participants who had never smoked were defined as non-smokers. Likewise, drinking status was classified into three groups: never drinking, quit drinking and current drinking. CHD, hypertension and diabetes were self-reported chronic diseases, diagnosed in accordance with well-accepted international standards [45]. The presence of fatty liver was based on abdominal B-type ultrasound findings.

Indices of Liver Fibrosis
We calculated two validated non-invasive indices of liver fibrosis, the Fibrosis 4 score (FIB-4) and the aspartate transaminase to platelet ratio index (APRI), according to the published formulas, as previously described [46][47][48]:

FIB-4 = (age [years] × AST [U/L]) / (platelet count [10⁹/L] × √ALT [U/L])
APRI = ((AST [U/L] / ULN) / platelet count [10⁹/L]) × 100

(where ULN = the upper limit of normal for that laboratory; the upper limit of normal for both AST and ALT was 40 U/L). Elevated FIB-4 and APRI were defined as FIB-4 ≥ 1.45 [40] and APRI ≥ 0.5 [40,49] in both males and females.
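As a concrete illustration of the two indices defined above, the following is a minimal sketch in Python (our own illustration, not part of the study's SAS workflow; function and variable names are hypothetical) that computes FIB-4 and APRI from routine laboratory values and applies the elevation cutoffs used in this paper.

```python
import math

AST_ULN = 40.0  # this study's upper limit of normal for AST (U/L)

def fib4(age_years: float, ast: float, alt: float, platelets: float) -> float:
    """FIB-4 = (age * AST) / (platelet count [10^9/L] * sqrt(ALT))."""
    return (age_years * ast) / (platelets * math.sqrt(alt))

def apri(ast: float, platelets: float, uln: float = AST_ULN) -> float:
    """APRI = ((AST / ULN) / platelet count [10^9/L]) * 100."""
    return (ast / uln) / platelets * 100.0

def elevated(age_years: float, ast: float, alt: float, platelets: float) -> dict:
    """Apply the elevation cutoffs used in this study: FIB-4 >= 1.45, APRI >= 0.5."""
    f = fib4(age_years, ast, alt, platelets)
    a = apri(ast, platelets)
    return {"FIB-4": f, "APRI": a,
            "elevated_FIB4": f >= 1.45, "elevated_APRI": a >= 0.5}

# Hypothetical example: a 60-year-old with AST 45 U/L, ALT 40 U/L,
# platelet count 180 x 10^9/L (values are illustrative, not study data).
print(elevated(60, 45, 40, 180))
```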
Statistical Analyses
All statistical analyses displayed in the tables were performed using SAS 9.4 software (SAS Institute Inc., Cary, NC, USA). A two-sided p value < 0.05 was considered statistically significant. Continuous variables were presented as mean (SD) and compared using independent t tests. Categorical variables were presented as percentages, and a chi-square test was used to compare the distribution of categorical variables among groups. Spearman rank correlation was used to test relationships between continuous variables. To identify whether bilirubin levels were associated with the likelihood of elevated fibrosis-related indices, multivariate logistic regression was conducted to adjust for important covariates. The Cochran-Armitage trend test was used to investigate trends in the binomial proportions of disease progression.

Acknowledgments: The authors would like to thank all study subjects for participating in the DFTJ-cohort study, as well as all volunteers who collected the samples and data. This work is supported by grants from the National Natural Science Foundation of China (Nos. 81472979 and 81402673).

Author Contributions: All authors contributed significantly to this work. Xiaoping Miao, Ping Yao and Min Du conceived and designed the study strategy; Min Du, Yanyan Xu, Peiyi Liu, Lin Xiao, Shanshan Zhang, Sheng Wei and Ping Yao recruited the participants and collected their information and blood samples; Min Du, Shanshan Zhang, Lin Xiao, Yanyan Xu, Peiyi Liu, Yuhan Tang and Mingyou Xing contributed to data collection and statistical analyses; Min Du contributed to the writing of the manuscript and preparation of the tables and figures; Yuhan Tang, Ping Yao and Min Du contributed to the critical revision of the article. All authors reviewed the manuscript.

Conflicts of Interest: The authors declare that they have no conflict of interest.
Bounds on the Inverse Signed Total Domination Numbers in Graphs

Let G = (V, E) be a simple graph. A function f : V → {−1, 1} is called an inverse signed total dominating function if the sum of its function values over any open neighborhood is at most zero. The inverse signed total domination number of G, denoted by γ⁰st(G), equals the maximum weight of an inverse signed total dominating function of G. In this paper, we establish upper bounds on the inverse signed total domination number of graphs in terms of their order, size, and maximum and minimum degrees.

INTRODUCTION
Throughout this paper, G is a simple graph without isolated vertices, with vertex set V(G) and edge set E(G) (briefly V and E). For every vertex v ∈ V, the open neighborhood N(v) is the set {u ∈ V | uv ∈ E}, and the open neighborhood of a set S ⊆ V is the set N(S) = ∪_{v∈S} N(v). The minimum and maximum degrees of G are denoted by δ(G) = δ and ∆(G) = ∆, respectively. If X ⊆ V(G), then G[X] is the subgraph of G induced by X. For disjoint subsets X and Y of vertices of a graph G, we let E(X, Y) denote the set of edges between X and Y. For a tree T, a leaf of T is a vertex of degree 1 and a support vertex is a vertex adjacent to a leaf. The set of leaves and the set of support vertices in T are denoted by L(T) and S(T), respectively. Consult [3] for terminology and notation not defined here.

For a real-valued function f : V → ℝ and a subset S ⊆ V, we write f(S) = Σ_{v∈S} f(v), and the weight of f is w(f) = f(V). Let f : V → {−1, 1} be a function which assigns to each vertex of G an element of the set {−1, 1}. Zelinka [4] defined f to be a signed total dominating function (STDF) if f(N(v)) ≥ 1 for every vertex v ∈ V. A function f : V → {−1, 1} is said to be an inverse signed total dominating function (ISTDF) if f(N(v)) ≤ 0 for every vertex v ∈ V. The inverse signed total domination number of G, denoted by γ⁰st(G), is the maximum weight of an ISTDF of G. An ISTDF of weight γ⁰st(G) is called a γ⁰st(G)-function. Huang et al. [2] introduced the concept of the inverse signed total domination number and obtained exact values of this parameter for paths, cycles, complete graphs, stars and wheels. In this paper, we establish upper bounds on the inverse signed total domination number of graphs in terms of their order, size, and maximum and minimum degrees.

Throughout this paper, if f is an STDF (respectively, ISTDF) of G, then we let P and M denote the sets of vertices of G assigned +1 and −1 under f, respectively, and we let |P| = p and |M| = m. Thus, w(f) = p − m. For any γst(G)-function f of G, we can define an ISTDF on G by assigning +1 to every vertex in M and −1 to every vertex in P, which implies that γ⁰st(G) ≥ −γst(G). We make use of the following results in this paper.

Theorem 1.1 ([1]). If G is a graph of order n with minimum degree δ ≥ 2 and maximum degree ∆, then …, and this bound is sharp.

In this section, we present bounds on the inverse signed total domination numbers of graphs in terms of their order, size, and maximum and minimum degrees.

Lemma 2.1. If G is a graph with minimum degree δ and maximum degree ∆ and f is a …, and the proof is complete.

Theorem 2.2. If G is a graph of order n with minimum degree δ ≥ 1 and maximum degree ∆, then … Using this inequality and …, the desired bound is easy to verify.

We show next that the bound given in Theorem 2.2 is sharp. For this purpose, we shall need the following two observations proved by Henning [1].
where δ = δ(G) and ∆ = ∆(G). In particular, every vertex in P has degree δ and every vertex in M has degree ∆. Let f : V(G) → {−1, +1} be the function that assigns +1 to all vertices in P and −1 to all vertices in M. By construction, f is an ISTDF. Hence, …

Next we give a sharp upper bound on the inverse signed total domination number of a graph in terms of its order. Each vertex in P is adjacent to at least one vertex in M. Thus, by the pigeonhole principle, at least one vertex v of M is adjacent to at least |P|/|M| vertices of P. It follows, therefore, that …

Assume that G is obtained from a complete graph K_t with vertex set {v₁, v₂, . . ., v_t} by adding the set of vertices ∪_{i=1}^{t} {x_{i1}, . . ., x_{i,t−1}} and the edges v_i x_{ij} for each 1 ≤ i ≤ t and 1 ≤ j ≤ t − 1. Then G is a graph of order t². Define f : …

Now we give an upper bound on the inverse signed total domination number of a graph in terms of its order, size and minimum degree.

Theorem 2.7. Let G be a graph of order n, size m and minimum degree δ ≥ 1. Then … This leads to the second bound, and the proof is complete.

Theorem 2.8. For any tree T of order n ≥ 2, γ⁰st(T) ≤ (n − 4)/3, with equality if and only if n ≡ 1 (mod 3) and each vertex v ∈ V(T) \ L(T) has even degree and is adjacent to leaves.

The signed total domination number of G, denoted by γst(G), is the minimum weight of an STDF on G. A signed total dominating function of weight γst(G) is called a γst(G)-function.

2. BOUNDS ON THE INVERSE SIGNED TOTAL DOMINATION NUMBERS

Observation 2.3. If k and n are integers with k < n and n is even, then we can construct a k-regular graph on n vertices.

Observation 2.4. Let k, m and p be integers satisfying 1 ≤ k ≤ mp, m | k and p | k. Then there exists a bipartite graph of size k with partite sets P and M such that |P| = p and |M| = m, and each vertex in P has degree k/p while each vertex in M has degree k/m.

Theorem 2.5. Let δ and ∆ be integers with 2 ≤ δ ≤ ∆. Then there exists a graph G such that …

Let G be a graph of order n with γ⁰st(G) = n − 2√n and let f be a γ⁰st(G)-function. Then |M| = √n and |P| = n − √n, and therefore |P| = |M|² − |M|. Each vertex in P is adjacent to at least one vertex in M. Thus,

|E(P, M)| ≥ |P| = |M|² − |M|.  (2.1)

On the other hand, since f(N(v)) ≤ 0 for each v ∈ V, we have |N(v) ∩ M| ≥ |N(v) ∩ P|, and so each vertex in M is adjacent to at most |M| − 1 vertices in P, which implies that

|E(P, M)| ≤ |M|(|M| − 1).  (2.2)

By (2.1) and (2.2), we have |E(P, M)| = |P| = |M|² − |M|. Thus, each vertex in M is adjacent to exactly |M| − 1 vertices of P and each vertex of P is adjacent to exactly one vertex of M. Also, G[M] is a complete graph and P is an independent set, as desired.
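The definitions above lend themselves to a direct computational check on small graphs. Below is a minimal brute-force sketch in Python (our own illustration, not from the paper) that enumerates all {−1, 1} assignments, keeps those whose open-neighborhood sums are at most zero, and returns the maximum weight, i.e., γ⁰st(G).

```python
from itertools import product

def inverse_signed_total_domination(adj):
    """adj: dict mapping each vertex to the set of its neighbors
    (a simple graph without isolated vertices).
    Returns gamma^0_st(G), the maximum weight of an ISTDF, or None
    if no assignment satisfies the condition."""
    vertices = sorted(adj)
    best = None
    for values in product((-1, 1), repeat=len(vertices)):
        f = dict(zip(vertices, values))
        # ISTDF condition: f(N(v)) <= 0 for every vertex v
        if all(sum(f[u] for u in adj[v]) <= 0 for v in vertices):
            weight = sum(values)
            best = weight if best is None else max(best, weight)
    return best

# Example: the 4-cycle C4. Huang et al. [2] give exact values for cycles,
# so small cases like this can be checked against their results.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(inverse_signed_total_domination(c4))  # 0 for C4
```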
Effects of a Rotating Cone on the Mixed Convection in a Double Lid-Driven 3D Porous Trapezoidal Nanofluid Filled Cavity under the Impact of Magnetic Field

The effects of a rotating cone on the 3D mixed convection of CNT-water nanofluid in a double lid-driven porous trapezoidal cavity are numerically studied, considering magnetic field effects. The numerical simulations are performed using the finite element method. The impacts of Richardson number (between 0.05 and 50), angular rotational velocity of the cone (between −300 and 300), Hartmann number (between 0 and 50), Darcy number (between 10−4 and 5×10−2), aspect ratio of the cone (between 0.25 and 2.5), horizontal location of the cone (between 0.35 H and 0.65 H) and solid particle volume fraction (between 0 and 0.004) on the convective heat transfer performance were studied. It was observed that the average Nusselt number rises with higher Richardson numbers for a stationary cone, while the effect is reversed when the cone rotates in the clockwise direction at the highest speed. A large discrepancy between the average Nusselt numbers of the 2D cylinder and 3D cylinder configurations is obtained at the highest rotational speed, reaching 28.5%. Even though there are only very slight variations between the average Nu values for the 3D cylinder and 3D cone cases, there are significant differences in the local variation of the Nusselt number. High enhancements in the average Nusselt number are achieved with CNT particles even though the magnetic field reduces convection, amounting to 84.3% at the highest magnetic field strength. Increasing the permeability resulted in higher local and average heat transfer rates for the 3D porous cavity. The aspect ratio of the cone was found to be an excellent tool for heat transfer enhancement, with enhancements in the average Nusselt number of up to 95%, whereas the horizontal location of the cone had only a slight effect on the Nusselt number.

Introduction
Mixed convection in cavities due to moving surfaces arises in many important heat transfer applications, including electronic cooling, convective drying, solar power and various chemical engineering processes [1,2]. The interaction of the forced flow with other effects such as natural convection becomes even more complex when magnetic field effects and complicated geometries are included. Many investigations have simplified the thermal engineering problem to two dimensional cavities with simple shapes. However, in real engineering problems, considering three dimensional geometry is more realistic, and two dimensional modeling may not adequately represent the three dimensional fluid flow features. In the current work, mixed convection in a double lid-driven three dimensional trapezoidal cavity, which accounts for geometrical non-uniformity, is considered. In convective heat transfer applications in cavities, many active and passive heat transfer enhancement techniques have been offered. Among these methods, stationary or rotating circular cylinders have recently been used in many applications [3][4][5][6][7][8]. Rotating cylinders immersed in fluid in cavities have many applications, such as in rotating tube heat exchangers, drilling of oil wells, rotating shafts and many others. Rotational speed, size, location and thermal conductivity are among the most important parameters that can be considered for heat transfer enhancement.
These effects were found to be effective for thermal performance with an inner rotating cylinder in a differentially heated cavity, as studied in ref. [9]. In the numerical work of Hussain and Hussein [10], mixed convection in a two dimensional cavity with an inner rotating cylinder was studied using the finite volume method, and it was observed that the location of the cylinder has significant impacts on the convection enhancement. Ghaddar and Thiele [11] used a spectral element method to analyze the effects of a rotating cylinder in an isothermal cavity for various rotational speeds of the cylinder; heat transfer was found to be enhanced by rotation of the cylinder at low Rayleigh number. Fu et al. [12] examined the convection in a cavity with rotating cylinder effects using a penalty finite-element method. It was found that the direction of rotation contributes significantly to the heat transfer enhancement. In the literature, only a few studies have considered the impact of rotating cylinders within cavities on convection for three dimensional configurations [13]. In the work of Kareem and Gao [14], three dimensional mixed convection in a differentially heated cavity containing an inner adiabatic rotating cylinder was simulated for non-dimensional rotational speeds between −5 and 5 in the turbulent flow regime. A heat transfer increment was observed with rotation, but its impact on the different surfaces was found to differ. Selimefendigil and Oztop [15] examined the impacts of two rotating inner circular cylinders in a three dimensional cavity for the steady, laminar flow regime. Both enhancement and deterioration of the average heat transfer rate were observed, depending upon the rotational direction of the cylinders. Magnetic field effects have recently been used for convective heat transfer control [16][17][18][19]. The effects of magnetic fields are encountered in geothermal energy extraction, float glass production, coolers of nuclear reactors and many other settings. For convection in cavities, the magnetic field was found to reduce the heat transfer rate [20][21][22]. However, recent studies showed that magnetic field effects can be beneficial for enhancing convection in configurations that produce multiple re-circulations, as in vented cavities [23], or in separated flows, as encountered in sudden area expansion geometries [24,25]. Magnetic field effects have recently been used with nanofluids [26][27][28][29]. The technology of nanofluids has been successfully implemented in various applications of thermal science, such as solar power, thermal energy storage, refrigeration, thermal management and convective heat transfer control [30][31][32][33][34][35][36][37]. Convective heat transfer in 3D cavities with nanofluid considering magnetic field effects has been studied by several researchers [38,39]. Sheikholeslami et al. [40] numerically studied the effects of Lorentz forces in a 3D cavity with nanofluids, and the numerical results showed that the heat transfer rate is reduced with higher magnetic field strength. In another study, Al-Rashed et al. [41] used the finite volume method to analyze 3D natural convection in a cubic enclosure with carbon nanotube (CNT)-water nanofluid and a magnetic field. The average Nu number was found to be reduced by 50% when the magnetic field strength was increased from a Hartmann number of 50 to a Hartmann number of 100.
In a recent work, Ghasemi and Siavashi [42] examined magneto-hydrodynamic Cu-water nanofluid in a three-dimensional cavity with moving surfaces by using the MRT-lattice Boltzmann method. A negative impact of Hartmann number on the heat transfer rate was observed, but it was noted that the magnetic field can be aligned such that this negative impact is reduced. Magnetic field effects with nanofluids are also used in many porous media applications [43][44][45][46][47]. Convection in porous media has important application areas such as solar collectors, solidification, thermal insulation and many others. In the present study, a rotating cone is used in the three dimensional mixed convection of a double lid-driven trapezoidal porous enclosure, considering magnetic field effects with CNT-water nanofluid. CNT particles were found to be very promising for heat transfer enhancement in comparison with other nanoparticles [48,49]. A rotating cone, which can be considered a generalization of a rotating cylinder, is used in the 3D cavity, which adds novelty to the current configuration. The aspect ratio of the eccentric cone can be adjusted along with the parameters encountered for a rotating cylinder, such as rotational speed, size and location. There are a few studies of mixed convection in 3D cavities with rotating cylinders; however, this is the first time a rotating cone has been used in a 3D double lid-driven porous cavity. Owing to the diversity of applications of mixed convection in lid-driven cavities across thermal engineering, the combined use of a magnetic field, highly conductive nanoparticles and a rotating cone provides promising methods for convective heat transfer control in many heat transfer engineering problems.

Geometric Model and Governing Equations
A schematic view of the 3D representation and the 2D view with boundary conditions are shown in Figure 1. A trapezoidal 3D cavity with side surface inclination of 10° and size H is considered. A rotating cone is located in the middle of the cavity with rotational speed ω. r1 and r2 are the radii of the circular base surfaces, and AR denotes the aspect ratio, AR = r1/r2. The upper and lower horizontal surfaces of the cavity move with constant speed u0 in the positive and negative x directions, respectively. The cavity side surfaces are at fixed temperatures Th and Tc with Th > Tc. The other surfaces of the 3D cavity and the surfaces of the rotating cone are adiabatic. SWCNT-water nanofluid is used as the heat transfer fluid, considering the impacts of the magnetic field; its properties are shown in Table 1 (thermophysical properties of the base fluid and nanoparticles: water, SWCNT, MWCNT [41]). The base fluid Prandtl number is 6.9. The fluid is incompressible and Newtonian. Laminar, steady, three dimensional flow assumptions are used. Viscous dissipation and radiation effects are not taken into account. The Brinkman-extended Darcy porous model is used. The conservation equations are written in compact notation as: … The inclusion of CNT particles affects the electrical conductivity along with the other thermophysical properties. The magnetic Reynolds number is much smaller than one, and induced magnetic field effects are not taken into account. The magnetic field is assumed to be uniform throughout the computational domain. A transverse magnetic field, uniform and parallel to the z-axis, is used.
Joule heating effects are neglected, along with the electric field and induced magnetic effects. The last term in the momentum equation given above … The dimensional boundary conditions are given as: … The thermal performance of the system is evaluated using Nusselt numbers; the local and average Nusselt numbers for the hot surface are calculated as follows: … The solution of the equations is obtained using the Galerkin weighted residual finite element method. In the formulation, the flow variables are approximated using interpolation functions: Ψu,v,w, Ψp and ΨT are the shape functions for the field variables, and U, V, P and T denote the values of the respective variables at the nodes of the element. The residuals are set to zero as: … where Fk is the weight function. The Newton-Raphson method was used for the solution of the nonlinear residual equations.

CNT-Water Nanofluid Property Equations
CNT-water nanofluid effective thermophysical relations are given as in [50]. A correlation which considers the space distribution of the CNTs in the nanofluid is used for the thermal conductivity; it is defined as in [51] and has been shown to produce accurate results against experimental data [51]. The Brinkman model is chosen for the dynamic viscosity of the nanofluid [52]. This model does not take into account temperature dependence or particle size effects. It has been used in various studies of convective heat transfer applications, with the heat transfer fluid considered Newtonian up to a specified solid particle volume fraction [53,54]. In the experimental work of Halelfadl et al. [55], the viscosity of CNT-water nanofluid was examined and temperature and solid particle volume fraction effects were analyzed. It was observed that the fluid behaves in a non-Newtonian (shear-thinning) manner at higher nanoparticle volume fractions. In the analytical work of Benos et al. [56], the impacts of CNT aggregation were included in the viscosity and thermal conductivity of CNT-water nanofluid. In a recent work, a molecular dynamics simulation method was used to obtain the viscosity of a model water-based nanofluid with single-walled CNTs [57]. A correlation for volume fractions between 0.25% and 0.65% is offered, and comparisons are made between various available models for the viscosity of nanofluids. For the electrical conductivity of the CNT-water nanofluid, Maxwell's model was used [51].

Mesh Examination and Code Verification
The mesh is composed of tetrahedral elements, and mesh independence was assured by testing various numbers of elements. Table 2 shows the average Nusselt number versus the number of elements for three values of the Richardson number. Grid G6, with 69,769 elements, is used for the subsequent computations. Validation of the present work is performed against several available studies in the literature. In the first study, mixed convection in a double lid-driven cubic enclosure was examined for various Richardson and Reynolds numbers [58]. The average Nusselt numbers of the configurations with (Re = 100, Ri = 1) and (Re = 100, Ri = 10) are 1.70 and 1.20 in ref. [58], whereas these values are calculated as 1.63 and 1.18 with the present solver. Another validation is conducted using the experimental results of Heyhat et al. [59]. In that work, forced convection of Al2O3-water nanofluid in a horizontal tube under laminar flow conditions was examined for nanofluid solid volume fractions up to 2%.
The following property relations for the thermal conductivity and dynamic viscosity are considered: … Temperature dependence (between 20 °C and 60 °C) of the properties is considered. The ratio of the Nusselt numbers for the nanofluid and water cases is shown in Figure 2 for different values of the Reynolds number. The highest deviation between the experimental data and the present solver, 9.40%, is obtained at a Reynolds number of 800. A final verification of the code is made using results for the two dimensional double lid-driven cavity problem analyzed in ref. [60]. Figure 3 shows the average Nusselt number comparisons for various Richardson numbers. The results of the validation studies show that the current solver can predict the nanofluid behavior and convective heat transfer of lid-driven cavity problems in 2D and 3D configurations.

Results and Discussion
In the present study, the impacts of a rotating cone on the mixed convection of CNT-water nanofluid in a 3D trapezoidal porous cavity are examined, considering magnetic field effects. The Richardson number is the ratio of the free convection effects to the forced convection due to the moving surfaces. The Rayleigh number is fixed at 10⁵, so a lower Richardson number corresponds to a higher velocity of the upper and lower moving surfaces. As the value of Ri rises, the natural convection effects increase while the fluid motion penetrating from the moving surfaces is reduced. The impact of the rotational speed of the cone on the 3D and 2D mid-plane flow and thermal patterns is shown in Figure 4 (Ri = 5, Ha = 10, Da = 10⁻³, AR = 0.25, x0 = 0.5 H, φ = 0.04). Compared to the stationary case, multiple recirculation regions are established for ω = −300, while the diagonally elongated vortex becomes flattened for ω = 300. The thermal gradients become steeper, especially over the upper part of the hot surface, with rotation of the cone. The average Nusselt number variations with respect to Ri for three values of the angular rotational speed of the cone are shown in Figure 5a-c. The average heat transfer behavior across Richardson numbers is significantly affected by the angular rotational velocity of the cone. For the stationary cone (ω = 0), the average Nusselt number generally increases with higher values of Ri. However, at the highest speed of ω = −300, the impact is reversed. This could be attributed to the reduced convective contribution of the rotating cone relative to the higher velocities of the upper and lower surfaces, which resulted in heat transfer deterioration. The variation of the average Nu value of the hot surface with ω for the different objects (2D cylinder, 3D cylinder and 3D cone) is shown in Figure 5d. The average Nu value is higher for the 2D cylinder case, and the discrepancies between the 2D and 3D configurations rise with higher ω. Values only 7% higher are achieved for the 2D cylinder case compared to the 3D cylinder configuration, while this difference becomes 14% and 28.5% at rotational speeds of ω = −250 and ω = 250, respectively. There are only negligible variations in the average Nusselt number between the 3D cylinder and 3D cone, the highest being 2% at ω = −250. In the current work, the competition between the magnetic field and the hydrodynamic forces can be quantified using additional interaction parameters. One such parameter, the interaction index (NL), can be defined as: … It gives the ratio of the Lorentz forces to the inertia forces due to the moving wall.
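Since the property equations referenced above did not survive extraction, the following is a minimal Python sketch of the standard forms of the two models the text names, the Brinkman viscosity model and Maxwell's electrical conductivity model. The numerical values are illustrative placeholders, not the paper's data, and the exact correlations used in the paper may differ.

```python
def brinkman_viscosity(mu_f: float, phi: float) -> float:
    """Brinkman model: effective dynamic viscosity of a dilute suspension,
    mu_nf = mu_f / (1 - phi)^2.5 (no temperature or particle-size dependence)."""
    return mu_f / (1.0 - phi) ** 2.5

def maxwell_conductivity(sigma_f: float, sigma_p: float, phi: float) -> float:
    """Maxwell model for effective electrical conductivity:
    sigma_nf/sigma_f = 1 + 3(r - 1)phi / ((r + 2) - (r - 1)phi),
    where r = sigma_p / sigma_f."""
    r = sigma_p / sigma_f
    return sigma_f * (1.0 + 3.0 * (r - 1.0) * phi / ((r + 2.0) - (r - 1.0) * phi))

# Illustrative values only (not the paper's property data):
mu_water = 1.0e-3   # Pa.s, dynamic viscosity of water near room temperature
phi = 0.004         # solid particle volume fraction from the studied range
print(brinkman_viscosity(mu_water, phi))
```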
In the present work, the rotation of the cone also affects the convective flow features. Another interaction parameter, defining the ratio of the Lorentz forces to the rotational effects of the cone, can be introduced: … where the rotational Reynolds number is defined accordingly. In the current work, nanofluids are used together with magnetic field effects. In the experimental work of Kaneda et al. [61], where natural convection of a liquid metal under a uniform magnetic field was examined, the convective heat transfer was found to be reduced at larger magnetic field strength. In another experimental work, free convection of a magnetic fluid in the annular space between two horizontal cylinders was examined [62]. It was observed that the direction and amplitude of the magnetic field affected convective heat transfer, so that the magnetic field can be used as a control tool. Convective heat transfer features around a heated wire under a uniform magnetic field and a magnetic field gradient were experimentally investigated in ref. [63]. The orientation and strength of the magnetic field were shown to play a significant role in the heat transfer features. Both the thermal and the electrical conductivity of the base fluid change when nano-sized particles are introduced. The configuration with water (φ = 0) and without magnetic field effects (Ha = 0) is taken as the reference case, and the enhancement or deterioration of the average Nu value for different solid particle volume fractions, considering two values of the Richardson number, is shown in Figure 6. The heat transfer enhancement from nanoparticle addition is reduced at higher Hartmann numbers for all Ri numbers. Similar results have also been reported in previous convective heat transfer studies in cavities considering magnetic field effects. Deterioration of the heat transfer is observed beyond Ha = 10 for φ = 0 and at Ha = 50 for φ = 1%, for all Richardson number cases. The average enhancement of the heat transfer rate in the absence of a magnetic field is highest for Ri = 1, where it amounts to 133%, while at Ha = 50 this value is reduced to 84.3%. Thus, heat transfer enhancement remains very high with the inclusion of CNT nanoparticles even in the presence of the destructive effects of MHD on convection. In the experimental work of Sarafraz et al. [64], the performance of a COOH-functionalized multi-walled carbon nanotube-water nanofluid was tested in a double pipe heat exchanger. A small pressure drop penalty was noted, while significant enhancements in thermal performance, up to 44%, were observed for the highest mass concentration of 0.3 wt.%. In another experimental work, the thermal performance of CNT-water nanofluid in a tube with inserted helical screw louvered rods was analyzed for solid volume fractions of 0.1%, 0.2% and 0.5% [65]. The highest thermal performance index, 1.23, was obtained for 0.5% volume concentration at a twist ratio of 1.78. From these experimental studies and from the numerical studies mentioned above, it is clear that using CNT nanoparticles in heat transfer fluids yields higher thermal performance. In the current work, the highest average heat transfer reduction by the magnetic field in the absence of nanoparticles is 31.4%, at a Richardson number of 50. The aspect ratio and location of the rotating cone can be considered additional parameters that could contribute to convective heat transfer enhancement.
The aspect ratio of the cone denotes the ratio of the radii of its circular base surfaces. A higher aspect ratio denotes a higher average radius of the cone. As the value of AR increases, the gap between the hot wall and the surface of the rotating cone is reduced. A higher impact of the convective flow motion due to the rotating cone is obtained at higher AR, and the thermal gradients near the hot surface are expected to steepen. The impacts of AR and of the horizontal location of the rotating cone on the average Nusselt number of the hot surface are shown in Figure 7. AR = 1 corresponds to a rotating circular cylinder in the 3D cavity configuration. The average Nusselt number shows an increasing trend with higher values of AR. The enhancement in the average Nu value is 95%, which is significant, when the cases with the lowest and highest aspect ratios are compared. Moving the horizontal location of the rotating cone first deteriorates the average heat transfer from x0 = 0.35 H to 0.5 H and then increases it up to x0 = 0.65 H, but the variation is only 6% at most.

Conclusions
In the current work, the impacts of a rotating cone with MHD effects are considered for the mixed convection of CNT-water nanofluid in a double lid-driven 3D trapezoidal cavity. Different behaviors of the average heat transfer with respect to the Richardson number are observed depending upon the angular rotational speed of the cone. The average Nusselt number increases with higher values of the Richardson number when the cone is stationary, and it shows a decreasing trend when the cone rotates at a speed of −300. Comparisons are also made between the 3D configuration with an inner rotating cone and the 2D configuration with an inner rotating cylinder. The comparisons show that the average Nusselt number is higher for the 2D case. As the rotational speed increases, the discrepancy between the average Nu values of the 2D and 3D cases increases, with a highest value of 28.5%. The magnetic field reduces the effective convection, but with the use of highly conductive CNT particles the average Nu value is enhanced by about 84.3% even at the highest magnetic field strength, a Hartmann number of 50. The aspect ratio of the cone was found to be an effective parameter for heat transfer enhancement in the 3D configuration, with average Nu enhancements of up to 95% between the lowest and highest aspect ratios. However, the horizontal location of the rotating cone has only a slight impact on the heat transfer.
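The local-to-average Nusselt number reduction reported throughout the results can be illustrated with a short sketch. The paper's own formulas were lost in extraction, so the conventional definition, a surface average of the local Nusselt number over the hot wall, is assumed here, and all numbers are synthetic placeholders.

```python
import numpy as np

def trapz2d(f: np.ndarray, dy: float, dz: float) -> float:
    """Composite 2D trapezoidal rule on a uniform grid: weight 1 at corners,
    2 on edges, 4 in the interior, scaled by dy*dz/4."""
    w = np.full(f.shape, 4.0)
    w[0, :] = w[-1, :] = 2.0
    w[:, 0] = w[:, -1] = 2.0
    w[0, 0] = w[0, -1] = w[-1, 0] = w[-1, -1] = 1.0
    return float(np.sum(w * f) * dy * dz / 4.0)

def average_nusselt(nu_local: np.ndarray, Ly: float, Lz: float) -> float:
    """Nu_avg = (1/A) * integral of Nu_local over the hot wall of size Ly x Lz."""
    ny, nz = nu_local.shape
    dy, dz = Ly / (ny - 1), Lz / (nz - 1)
    return trapz2d(nu_local, dy, dz) / (Ly * Lz)

# Synthetic local Nusselt field on a 21 x 21 grid over a unit-square wall
y = np.linspace(0.0, 1.0, 21)
z = np.linspace(0.0, 1.0, 21)
Y, Z = np.meshgrid(y, z, indexing="ij")
nu_local = 2.0 + np.sin(np.pi * Y) * np.cos(np.pi * Z)  # placeholder data
print(average_nusselt(nu_local, 1.0, 1.0))  # ~2.0 for this symmetric example
```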
Investigation of Desert Dust Contribution to Source Apportionment of PM10 and PM2.5 from a Southern Mediterranean Coast

In order to identify the source apportionment of particulate matter PM10 and PM2.5 on the southern Mediterranean coast at Tunis (Tunisia), three different sites, characterized respectively by traffic, by industry and by being an urban background area, are studied. The chemical characterization included a gravimetric determination of the atmospheric particle mass concentrations, measurements of the major anion (SO4²⁻, NO3⁻, Cl⁻) and cation (Ca²⁺, Mg²⁺, K⁺, NH4⁺) concentrations in the aerosol samples by ion chromatography, and analysis of 18 elements by energy-dispersion X-ray fluorescence. The aerosol ion balance of the various PM10 constituents is used to identify possible sources of the particulate matter. From these analyses, the particulate masses were reconstructed from the main possible constituents: crustal matter, primary and secondary pollutants, marine aerosols and organic matter. Where both PM10 and PM2.5 were studied, crustal elements and sea salt aerosols were mainly associated with the coarse fraction, whereas primary and secondary anthropogenic pollutants as well as organic matter mostly composed the PM2.5 fraction. At all the sites, the PM10 mass was mainly composed of crustal matter (41–50%) and only weakly of sea salt (3–4%). The aerosol chemical composition is thus heavily affected by dust winds from the Sahara desert, with some contribution from local traffic and industries and only a slight direct impact of the nearby Mediterranean Sea.

INTRODUCTION

Aerosols are important components of the earth system. The composition of atmospheric aerosols impacts many parameters, among them decreased visibility (Watson, 2002), deposits of pollutants affecting ecosystems (Bytnerowicz and Fenn, 1996), tropospheric toxicity and human health (Carter et al., 1997; Peters et al., 2001; Sattler et al., 2001; Pope and Dockery, 2006; Oesterling et al., 2008; Hsieh et al., 2009), the hygroscopic and optical properties of the aerosol (Tsai and Kuo, 2005; Tsai et al., 2007), the earth's radiation budget and the global climate (Facchini et al., 1999; Charlson et al., 2001; Acker et al., 2002), and the direct and indirect effects of aerosols on the planetary energy balance (Seinfeld and Pandis, 1998; Cabada et al., 2004; De Carlo et al., 2008). Yet the mass size distributions and chemical characteristics of aerosols are still insufficiently understood.

The need for an analysis of the particulate matter (PM) sources contributing to PM10 and PM2.5 atmospheric concentrations is particularly strong in Tunis, situated in North Africa on the southern Mediterranean coast. Specific meteorological circulations and natural sources like the Mediterranean Sea and the nearby Sahara create specific patterns of aerosol concentrations that could influence not only the particulate concentrations throughout Europe but also global climate change through the migration of Saharan desert dust.

The aim of this paper is to investigate the origin and the variability of PM10 and PM2.5 in the urban area of Tunis. It involves the measurement of the mass concentrations and the different chemical compositions of atmospheric suspended particulate matter. It compares the particulate matter characteristics in three different areas and estimates the source apportionment of the particulate matter in this area.
STUDY AREA

The study area is located in Tunis city, the capital of Tunisia in Northern Africa. Three urban sites were chosen for their specificities (Fig. 1). The near-white grey level in Fig. 1 covers the areas outside Tunis; the other shades of grey darken with increasing population density inside Greater Tunis.

Site 1 (Bab Saadoun) is located at a big crossroads in the center of Tunis (latitude 36.80769°N, longitude 10.15941°E). It is a point of high-density traffic roads and congestion all day long. Site 2 (Ben Arous) is located in the Tunis suburbs, about 10 km south of the Tunis center, in an industrial area (latitude 36.74120°N, longitude 10.19161°E). The industrial activities include a fuel oil power station, a cement factory and several chemical, food and welding factories. Site 3 (El Mourouj) is also located in the southern suburbs, about 12 km from Tunis, inside a city park surrounded by a crowded dormitory area (latitude 36.74120°N, longitude 10.27442°E).

Sample Collection and Analysis

The sampling of PM10 and PM2.5 was performed in 2008 from June 6 to 27, 24 hours a day, 7 days a week. The filters were changed every day. There were 21 samples from each site for PM10 and an additional 21 samples for PM2.5 at Bab Saadoun. Of the 85 samples, only three were missing (one at Ben Arous and two at El Mourouj).

Daily sampling of PM10 and PM2.5 was carried out by a dual-channel sequential sampler operating at a flow rate of 2.3 m³/h. The sampling heads met the specifications of the European Standard (EN12341, 1998). The sampler (HYDRA Dual Sampler, FAI Instruments) has two independent simultaneous sampling lines for PM10 and PM2.5. Each sampler was equipped with two filters: a PTFE membrane and a quartz fiber membrane.

The sampler was placed inside constant-temperature housing, set at 20°C in order to reduce the loss of volatile chemical species from the collected particles (Perrino et al., 2008b, c). The gravimetric determinations of the PM10 and PM2.5 mass concentrations were carried out on the PTFE filters in the sampler by the beta-attenuation method. After these samplings and measurements, the filters were stored at 5°C for further analysis.

Chemical Composition of PM10 and PM2.5

The PM on the PTFE filters was analyzed by energy-dispersion X-ray fluorescence (ED-XRF, X-Lab 2000, SPECTRO) for Na, Mg, Al, Si, S, Cl, K, Ca, Fe, As, Cr, Cu, Mn, Ni, Pb, Ti, V and Zn, yielding results directly in mass concentrations (µg/m³). Some metals (As, Cr, Cu, Mn, Ni, Pb, Ti, V, Zn) were not used in the estimation of PM, since only the major metals were used to calculate the reconstructed masses. Afterwards, the PTFE filters were ultrasonicated two times in 5 mL of ultra-pure water. The resulting solution was analyzed by ion chromatography (DX120, Dionex) for anions (chloride, nitrate, sulphate) and cations (sodium, ammonium, potassium, magnesium, calcium). The ion chromatography conditions are described in Table 1. The insoluble fraction (Cins) of Na, K, Mg and Ca was calculated as the difference between the concentration resulting from XRF (total fraction) and the concentration resulting from ion chromatography (soluble fraction only).
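A minimal sketch of the insoluble-fraction calculation just described, assuming element-wise XRF totals and ion-chromatography soluble fractions (all values illustrative, in µg/m³):

    # Cins = C_XRF(total) - C_IC(soluble), element by element.
    xrf_total  = {"Na": 0.90, "K": 0.55, "Mg": 0.40, "Ca": 9.50}   # illustrative
    ic_soluble = {"Na": 0.70, "K": 0.30, "Mg": 0.25, "Ca": 6.80}   # illustrative

    c_ins = {el: round(max(xrf_total[el] - ic_soluble[el], 0.0), 3)
             for el in xrf_total}
    print(c_ins)  # {'Na': 0.2, 'K': 0.25, 'Mg': 0.15, 'Ca': 2.7}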
The quartz filters were analyzed for their elemental carbon and organic carbon content (EC/OC) using a thermo-optical analyzer integrating a flame ionization detector (OCEC Carbon Aerosol Analyzer, Sunset Laboratory). The filter is first heated in a helium atmosphere up to 870°C for the analysis of organic carbon compounds, then up to 900°C in a helium and oxygen atmosphere for the determination of elemental carbon compounds.

Previous studies have shown that in most environments this overall procedure allows the identification of more than 90% of the particulate mass (Perrino et al., 2009b, 2010).

The cation and anion equivalent concentrations in all the PM samples were calculated by dividing each ion concentration (in µg/m³) by its equivalent weight (molecular weight divided by charge):

cations = [Na⁺]/23 + [NH4⁺]/18 + [K⁺]/39.1 + [Mg²⁺]/12.15 + [Ca²⁺]/20.04
anions = [Cl⁻]/35.45 + [NO3⁻]/62 + [SO4²⁻]/48

Crustal matter (CM), sea salt aerosol (SSA), primary anthropogenic pollutants (PA), organic matter (OM) and inorganic secondary species (IS) cannot be measured directly; they are estimated using the previously analyzed elements (by XRF) and ions (by ion chromatography).

Crustal matter (CM) was estimated (Eldred et al., 1987; Chan et al., 1997) by the following equation, which takes into account the main elements considered as components of the earth's crust:

[CM] = oxides + carbonates

Seven elements are assumed to form the oxides; the concentration of each was multiplied by the ratio between the molecular weight of its oxide and the molecular weight of the element. Calcium and magnesium carbonates were calculated as the soluble (sol) fractions of Ca and Mg, added to the calculated CO3²⁻ concentrations:

[carbonates] = [Ca_sol] + [Mg_sol] + [CO3²⁻], with [CO3²⁻] = (60/40.1)·[Ca_sol] + (60/24.3)·[Mg_sol]

and so

[CM] = sum over elements of (MW_oxide/MW_element)·[element] + [carbonates]

Sea-salt aerosol (SSA) was estimated using sodium and chloride, accounting for the minor constituents S, Mg, Ca and K through a multiplicative factor k (Perrino et al., 2008a, b):

[SSA] = [Cl⁻] + k·[Na⁺_sol]

Primary anthropogenic pollutants (PA) were estimated as the elemental carbon concentration (EC) plus the primary organic carbon content, estimated as the elemental carbon multiplied by 1.1 (Viidanoja et al., 2002):

[PA] = [EC] + 1.1·[EC]

Secondary organic matter (OM) was estimated as the remaining amount of organic carbon multiplied by a factor α that takes into account the non-carbon component of organic molecules:

[OM] = α·([OC] − 1.1·[EC])

The factor α is the average molecular weight per carbon weight ratio studied by Turpin and Lim (2001). They state that the currently used ratio of 1.4 is the lowest reasonable ratio for an urban aerosol and that a ratio of 1.4 is too low for nonurban aerosols. Based on this evaluation, they recommend using a ratio of 1.6 ± 0.2 for urban aerosols and 2.1 ± 0.2 for aged (nonurban) aerosols. Considering the detailed study of these authors, in our calculations this factor α was set to 1.6 for the urban Sites 1 and 2 and to 1.8 for the urban background Site 3.

Inorganic secondary species (IS) were estimated (Wang and Shooter, 2001) as the sum of ammonium, nitrate and non-sea-salt (nss) sulphate:

[IS] = [NH4⁺] + [NO3⁻] + [nss-SO4²⁻]

The reconstructed mass (RM) considers all five origins:

[RM] = [CM] + [SSA] + [PA] + [OM] + [IS]
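A minimal Python sketch of the mass-reconstruction bookkeeping defined by the equations above; the input concentrations are illustrative placeholders (µg/m³), and CM and SSA are passed in directly since their element-wise inputs and the factor k come from the cited references.

    # Reconstructed PM mass from the five estimated components.
    def primary_anthropogenic(ec):
        """PA = EC + 1.1 * EC (primary OC estimated from EC)."""
        return ec + 1.1 * ec

    def organic_matter(oc, ec, alpha=1.6):
        """OM = alpha * (OC - 1.1 * EC); alpha = 1.6 urban, 1.8 background."""
        return alpha * (oc - 1.1 * ec)

    def inorganic_secondary(nh4, no3, nss_so4):
        """IS = NH4+ + NO3- + non-sea-salt sulphate."""
        return nh4 + no3 + nss_so4

    # Illustrative inputs:
    ec, oc = 2.0, 8.0
    pa = primary_anthropogenic(ec)
    om = organic_matter(oc, ec, alpha=1.6)
    is_ = inorganic_secondary(nh4=1.2, no3=3.5, nss_so4=4.0)
    cm, ssa = 24.0, 2.0                  # assumed outputs of the CM/SSA equations
    rm = cm + ssa + pa + om + is_        # [RM] = [CM]+[SSA]+[PA]+[OM]+[IS]
    print(f"PA={pa:.1f}  OM={om:.1f}  IS={is_:.1f}  RM={rm:.1f} ug/m3")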
Mass and Water-Soluble Ion Concentrations in PM10 and PM2.5

Three sites around Tunis city (Tunisia) are analyzed for their PM10. They were chosen for their characteristics: Site 1 as a traffic site, Site 2 as an industrial site and Site 3 as an urban park in a dormitory area. Site 1 is also studied for its PM2.5. For all three sites (Table 2), if the variability (SD) of the results is taken into account, the mass concentrations of PM10 are very close, around 56 µg/m³. Although the characteristics of the sites are notably different, all the concentrations of the studied water-soluble ions (chloride Cl⁻, nitrate NO3⁻, sulfate SO4²⁻, sodium Na⁺, ammonium NH4⁺, potassium K⁺, magnesium Mg²⁺ and calcium Ca²⁺) are similarly close.

The highest Cl⁻ concentrations in PM10 were found at Site 2, the site nearest to the sea. It is known that Na⁺ and Cl⁻ in atmospheric aerosols are mainly derived from the marine source, more commonly referred to as sea salts (Quinn et al., 2004; Kumar et al., 2008). This indicates that all the sites experienced a certain marine influence. In the PM10 samples, Ca²⁺, SO4²⁻ and NO3⁻ were the most abundant ions, and Ca²⁺ alone accounted for up to 35.0% of the PM10 ion mass, ranking at the top of the ions for the three sites. At Site 1 (Bab Saadoun) the ion masses were in the order Ca²⁺ = SO4²⁻ > NO3⁻ > Na⁺ > Cl⁻ = NH4⁺ > K⁺ > Mg²⁺, while at Site 2 (Ben Arous) and at Site 3 (El Mourouj) the ion masses were in the order Ca²⁺ > NO3⁻ > SO4²⁻ > Na⁺ > NH4⁺ > Cl⁻ > K⁺ > Mg²⁺. Prior studies have shown that high Ca²⁺ occurs during dust storms (Choi et al., 2001; Arimoto et al., 2004; Cao et al., 2005b; Shen et al., 2007; Matassoni et al., 2011; Galindo et al., 2013). This suggests, as a first hypothesis, that these particulate matters mainly originate from Saharan dust. The secondary ions SO4²⁻ and NH4⁺ made a comparatively low contribution to PM10 during the sampling period, accounting for 31% of the total PM10 ion concentrations at Site 1, 28% at Site 2 and 23% at Site 3.

At Site 1, the mean mass concentrations for PM10 and PM2.5 were 54 µg/m³ and 25 µg/m³ (Table 2), respectively, and the ion mass order for PM2.5 differed from that for PM10. In PM2.5, the ion mass order was SO4²⁻ > NH4⁺ > Ca²⁺ > Na⁺ > NO3⁻ > K⁺ = Cl⁻ > Mg²⁺. PM2.5 contributed 46% of the PM10 mass. The chloride, nitrate, sodium, magnesium and calcium ion mass concentrations are lower in PM2.5 than in PM10, suggesting that these ions are concentrated in the coarse fraction, between PM10 and PM2.5. This is not the case for the SO4²⁻ and NH4⁺ of PM10, which originated almost entirely from PM2.5. These two secondary ions accounted for 66% of the total PM2.5 ion concentrations and only 31% of the PM10 ion concentrations. Other authors (Wang et al., 2006; Pey et al., 2013) have also reported secondary aerosol species to be the most abundant of the ions studied in PM2.5. The relatively high SO4²⁻ concentration may be due to gas-to-particle conversion of SO2 over sea regions (Reddy et al., 2008; Zhao et al., 2011) into H2SO4 (Stockwell and Calvert, 1983), which may further react with atmospheric NH3, resulting in (NH4)2SO4 aerosols (Seinfeld and Pandis, 1998). High SO4²⁻ concentrations associated with the land breeze would also significantly affect the long-range transport of SO2. At Site 1, the dominance of SO4²⁻ and NH4⁺ in the PM2.5 chemical composition thus indicates the influence of the land breeze and of various anthropogenic activities.
When compared to other cities in the world, either on the Mediterranean coast like Bari in Italy (Amodio et al., 2008), on other coasts like Erdemli in Turkey (Kocak et al., 2004, 2007) and Mangalore in India (Hegde et al., 2007), or in industrial or high-traffic areas like Madrid in Spain (Salvadore et al., 2004), the Yampa Valley in the USA (Watson et al., 2001) and Pamplona in Spain (Aldabe et al., 2011), no prevailing pattern comes out. The only similarity that can possibly be noted is the high PM10 mass concentrations at our sites (58, 56 and 54 µg/m³), which are quite close to Calexico's (61.90 µg/m³) in the USA (Chow et al., 2001), possibly due to the proximity of both to desert land.

Particulate Matter Ion Balance

The ion balance is a useful tool to detect possibly missing ionic species. The linear regression of total cation equivalents against total anion equivalents for each size class shows a linear relationship, with significant positive correlation coefficients between the sum of the cation concentrations and that of the anions, parallel to the theoretical 1:1 line. The deviation from the theoretical line indicates a deficiency of anions, since bicarbonate, organic ions (formate and acetate), F⁻, NO2⁻, PO4³⁻ and Br⁻ were not determined in the present study.

According to the ion balance, the PM10 and PM2.5 samples are slightly dominated by cations and anions, respectively. A probable relationship between Ca²⁺ and the anion deficiency implies that CO3²⁻ is most probably the missing anion in the PM10 samples, whereas the significant correlation between SO4²⁻ and the cation deficiency in the fine fraction might be a consequence of H⁺ associated with SO4²⁻. These results are consistent with those of the PM studies conducted by Khoder and Hassan (2008).

The regression coefficient for PM10 was 0.89 at S1, 0.88 at S2 and 0.84 at S3, and 0.90 for PM2.5 at Site 1; all are lower than 1, probably due to the missing compounds. Similar values have been reported elsewhere (Tsitouridou and Samara, 1993; Karakas and Tuncel, 1997; Cheng et al., 2000; Tsitouridou et al., 2003; Kocak et al., 2007). Additionally, if the sites are ranked by their regression coefficient between anions and cations (Site 1 PM2.5 (traffic) > Site 1 PM10 (traffic) > Site 2 PM10 (industrial) > Site 3 PM10 (urban)), this ordering probably reflects a decreasing amount of undetermined organic acids and bicarbonate ions in the particulate matter of these sites. A small sketch of the ion-balance calculation follows below.
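As referenced above, here is a minimal Python sketch of the ion-balance check; the equivalent weights are molecular weight divided by charge, as in the equations given earlier, and the sample concentrations are illustrative.

    # Convert ug/m3 to microequivalents and compare cation and anion sums.
    EQ_WEIGHT = {  # molecular weight / charge
        "Na": 23.0, "NH4": 18.0, "K": 39.1, "Mg": 12.15, "Ca": 20.04,  # cations
        "Cl": 35.45, "NO3": 62.0, "SO4": 48.0,                          # anions
    }
    CATIONS = ("Na", "NH4", "K", "Mg", "Ca")
    ANIONS = ("Cl", "NO3", "SO4")

    def ion_balance(conc):
        cations = sum(conc[i] / EQ_WEIGHT[i] for i in CATIONS)
        anions = sum(conc[i] / EQ_WEIGHT[i] for i in ANIONS)
        return cations, anions

    sample = {"Na": 0.8, "NH4": 1.2, "K": 0.3, "Mg": 0.2, "Ca": 8.8,
              "Cl": 0.9, "NO3": 3.5, "SO4": 5.0}  # illustrative PM10 values
    cat, an = ion_balance(sample)
    print(f"cations = {cat:.3f}, anions = {an:.3f} ueq/m3, "
          f"anion deficit = {cat - an:.3f}")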
Percent Contribution of the Different Water-Soluble Ions

In the water-soluble extract of PM10, Ca²⁺ was found to be the major ion, with respective contributions of 33%, 35% and 29% for Sites 1, 2 and 3 (Figs. 2(b), 2(c) and 2(d)). Generally, soil is considered to be the main source of Ca²⁺; the high calcium concentration may be due to dust transport from the Sahara to the south of the country. In percent by mass concentration, NO3⁻ is the second major ion for PM10 at Sites 2 and 3, whereas SO4²⁻ is the second major ion for PM10 at Site 1.

Nitrate is a secondary aerosol formed by the combination of NH3 and nitric acid issued from the conversion of NOx; NOx is known to be the most significant precursor of nitrate. Dynamic distributions of nitrate in the gas-particle phase have been reported by many authors (Willison et al., 1985; Suzuki et al., 2008), who suggested that the main factor impacting the distribution of nitrate in the gas-particle phase is temperature (Zhao et al., 2011). Indeed, in Tunis, high temperatures, above 40°C, can be reached.

For PM2.5 at Site 1 (Fig. 2(a)), the major ion, in percent by mass concentration, is SO4²⁻ (42%). The contribution of this same ion to PM10 at the same site is only 27%. The high SO4²⁻ concentrations at Site 1 are probably due to the heavy traffic in this area. The influence of sea spray does not seem to be relevant: the Na⁺, Cl⁻ and Mg²⁺ concentrations are quite low, even though the distance between the sites and the Mediterranean Sea does not exceed 20 km.

Source Apportionment of Aerosol Particles during the Sampling Period at the Three Sites

In order to study the different sources of the PM10 and PM2.5 at the three sites and their apportionment, five origins were considered and estimated as mass concentrations: crustal matter (CM), sea-salt aerosol (SSA), primary anthropogenic pollutants (PA), organic matter (OM) and inorganic secondary species (IS). Together, these five concentrations constitute the reconstructed mass concentration. Each of the five concentrations was calculated using the ions analyzed by ion chromatography and the insoluble elements analyzed by energy-dispersion X-ray fluorescence (Table 2). This reconstructed mass (RM) is compared to the gravimetric mass (GM) issued from the experimental gravimetric determination (Table 3). The calculated and measured values are very close: the Pearson coefficient values are 0.95, 0.90 and 0.88 for the PM10 of Sites 1, 2 and 3, respectively, and 0.91 for the PM2.5 of Site 1. The data show that a very satisfactory reconstruction of the PM mass concentrations can be obtained and used to understand the particulate matter origins at all the sites.

For PM10, the mass concentration is mainly composed of crustal matter, which constitutes 41%, 44% and 50% at Sites 1, 2 and 3, respectively (Fig. 3). The higher value of the crustal matter (CM) fraction at Site 3 (the mean calcium and silicon concentrations are 8.8 and 2.1 µg/m³, respectively), compared to Site 1, is probably due to calcium oxide of local origin: Site 1 is central urban whereas Site 3 is in the urban outskirts. These values of CM are high compared to the CM registered in northern Mediterranean cities like the Lazio region (25–30%) in Italy (Perrino et al., 2008a). This is probably due to the closer proximity of the Sahara desert to Tunis, situated on the southern side of the Mediterranean Sea.

Primary anthropogenic pollutants, mainly emitted by traffic, are higher at the traffic Site 1 (14%) than at the other two sites (9% at Site 2 and 6% at Site 3). The organic matter fraction (OM) constitutes about 30% of the total PM10 mass at the three sites. This result is similar to that of Perrino et al. (2009b) at the traffic station in Rome (Italy), where this contribution decreases to 10% at the urban background station, at the urban stations in Latina and Viterbo and at the near-city station of Montelibretti.

The primary anthropogenic compounds (PA) account for 6% of the PM10 at the urban park at Site 3, 11% at the traffic Site 1 and 15% at the industrial Site 2. They probably originate from local emissions, increasing with the level of pollution.

The mass concentrations of the inorganic secondary pollutants (IS) have very close values, around 10%, at all the stations. This is the typical behaviour of secondary aerosols, which exhibit a quite homogeneous spatial distribution on a regional scale.

With 3–4% of the total PM10 mass concentration, sea salt (SSA) was the source with the least weight at the three sites of this study, even though they are located in a coastal area.
At Site 1, the primary anthropogenic compounds (Fig. 4) are almost completely (91% of the total PA) in the fine fraction (PM2.5), more so than organic matter (68%) and inorganic secondary species (71%). Crustal material is mostly in the coarse fraction (80% of the total CM), as are the sea salt aerosols (81%). During the study, an important variation was recorded for the crustal elements, which showed a relevant increase during the last two days of the sampling, exceeding 50% of the total PM10 mass. This increase was recorded for PM10 at all three stations and, to a lesser extent, also for PM2.5. This extreme event coincides with a desert dust wind event from the Sahara.

When the three sites are compared in terms of average mass composition (Fig. 5), it is perceptible that the primary anthropogenic fraction decreases from the traffic to the industrial to the urban background station. The difference in the crustal fraction between the values recorded at Site 1 and Site 3 is probably due to the role played by dust re-suspension, which is traffic-related. At Site 3 the mean mass concentrations are 10 µg/m³ for Ca and only 2.9 µg/m³ for Si. The higher value of the crustal fraction at Site 3, compared to Site 1, is due, in particular, to calcium oxide, probably of local origin.

The concentrations of secondary pollutants at the industrial site (Site 2) and the urban background station (Site 3) are very similar, about 4.9 µg/m³. The higher values at the traffic Site 1 are probably related to local emissions. The contributions of these elements to the total mass of particulate matter were similar to, but higher than, those of desert dust from North Africa found and reported in southern European countries (Aldabe et al., 2011). Even if there is some contribution of sea salt from the nearby Mediterranean Sea, the majority of the PM10 is composed of crustal matter probably originating from the nearby Sahara desert, together with primary and secondary pollutants coming from the local traffic and industries, as well as a significant fraction of organic matter of various origins. These characterizations would help stakeholders and decision makers to take action against tropospheric particulate pollution by knowing its origins. They can also be useful for the implementation of epidemiological studies and for understanding the health and environmental effects of aerosols. They also allow the evaluation of the dispersion of Sahara desert dust from the south of the country to the north, near the Mediterranean coast, and thus its movement to further destinations such as Europe. These results could also be used within the framework of studies of global climate change, which is known to be influenced by dust and its desert component. In the same context, the modelling of future climates, which needs local atmospheric data that are presently lacking in this area, could usefully exploit the above results.

Fig. 2. Ionic composition of PM10 and PM2.5, in percent by mass concentration, at the three sampling sites.
Fig. 3. Mass concentration composition of PM10 at the three sampling sites.
Fig. 4. Contribution of each source to the three size fractions at the traffic Site 1.
Fig. 5. Relative contributions of aerosol species to the PM10 fraction for the three sites.
Table 1. Ion chromatography conditions for the analysis of cations and anions in the samples.
Table 2. Mean, standard deviation (SD) and extreme event (ee) values of the PM2.5 and PM10 mass, ionic and elemental concentrations and estimated source concentrations at the three sampling sites (µg/m³). *CM: crustal matter; SSA: sea salt aerosols; OM: organic matter; PA: primary anthropogenic pollutants; IS: inorganic secondary species.
Table 3. Mean values, standard deviations (SD) and ranges (in µg/m³) of PM10 and PM2.5 at all the sampling sites. GM and RM are the gravimetric and reconstructed mass concentrations, respectively.
The Multiple Roles of the Hypothetical Gene BPSS1356 in Burkholderia pseudomallei

Burkholderia pseudomallei is an opportunistic pathogen and the causative agent of melioidosis. It is able to adapt to harsh environments and can live intracellularly in its infected hosts. In this study, transcriptional factors that associate with the β′ subunit (RpoC) of RNA polymerase were identified. The N-terminal region of this subunit is known to trigger promoter melting when associated with a sigma factor. A pull-down assay using the histidine-tagged B. pseudomallei RpoC N-terminal region as bait showed that the hypothetical protein BPSS1356 was one of the proteins bound. This hypothetical protein is conserved in all B. pseudomallei strains and present only in the Burkholderia genus. A BPSS1356 deletion mutant was generated to investigate its biological function. The mutant strain exhibited reduced biofilm formation and a lower cell density during the stationary phase of growth in LB medium. Electron microscopic analysis revealed that the ΔBPSS1356 mutant cells had a shrunken cytoplasm indicative of cell plasmolysis and a rougher surface when compared to the wild type. An RNA microarray result showed that a total of 63 genes were transcriptionally affected by the BPSS1356 deletion, with fold change values higher than 4. The expression of a group of genes encoding membrane-located transporters was concurrently down-regulated in the ΔBPSS1356 mutant. Amongst the affected genes, the putative ion transportation genes were the most severely suppressed. Deprivation of BPSS1356 also down-regulated the transcription of genes for the arginine deiminase system, glycerol metabolism, type III secretion system cluster 2, cytochrome bd oxidase and arsenic resistance. It is therefore obvious that BPSS1356 plays multiple regulatory roles on many genes.

Introduction

Burkholderia pseudomallei is an opportunistic pathogen that infects higher eukaryotes, including humans. It causes a life-threatening disease known as melioidosis, which is endemic especially in Southern Asia [1]. This Gram-negative bacterium is an environmental saprophyte that commonly resides in wet soil and stagnant water. Multiple acquisition routes and the ability to live intracellularly in its host cells, including macrophages, are distinct characteristics of B. pseudomallei in the development of the fatal disease [2]. Resistance to canonical antibiotics, the high mortality rate of infected patients and the expansion of endemic areas are amongst the major reasons why B. pseudomallei is receiving great attention [3].

RNA polymerase serves as the key catalytic enzyme of transcription. A functional assembly of an RNA polymerase consists of four types of core subunits (α, β, β′ and ω) for transcriptional elongation and a sigma factor for promoter recognition. The sigma factor is known to be an essential component in responding to various growth conditions or environmental stimuli. However, the network of protein-protein interactions of each subunit of bacterial RNA polymerase is a rather intricate system. In a global protein-protein network investigation, Arifuzzaman et al. (2006) [4] reported that bacterial RNA polymerase is a highly interactive enzyme. However, the biological purposes of many of these bindings are largely unknown. That study was conducted using a pull-down assay in which all the protein baits were recombinantly produced. A similar result was observed when the native forms of the protein baits were used [5].
The process of transcription in prokaryotes involves several stages. The initial step of transcription is the formation of an open promoter complex, in which the promoter is melted by separating the two DNA strands in the promoter region. Young et al. (2004) [6] showed that amino acids 1 to 314 of the β′ subunit N-terminal region and amino acids 94 to 507 of the σA subunit were sufficient to robustly melt the extended −10 promoter region. These two polypeptides comprise less than one-fifth of the RNA polymerase holoenzyme. This N-terminal region of the β′ subunit contains a Zn²⁺ finger domain and a coiled-coil domain, responsible for the initial promoter binding and for σ70 subunit docking, respectively [7,8]. This minimal region of the β′ subunit that causes promoter melting was recombinantly produced and later used as the bait in a pull-down assay. The interacting proteins were harvested and their identities were determined using MALDI-TOF analysis. One of the interacting proteins was identified as the hypothetical protein BPSS1356, based on the B. pseudomallei genome annotation [9]. An isogenic BPSS1356 deletion mutant was constructed to elucidate the biological role of BPSS1356 in Burkholderia pseudomallei. Comparative phenotypic characterizations as well as an RNA microarray study were conducted on the mutant and the wild type strains.

Materials and Methods

Production of the RpoC N-terminal protein (RpoC-N) and pull-down assay

The DNA fragment encoding RpoC-N was PCR amplified from the genomic DNA of Burkholderia pseudomallei K96243. This N-terminal fragment contained the minimal region of RpoC required for promoter melting during transcription initiation [6]. The genome sequence of K96243 (European Molecular Biology Laboratory accession numbers BX571965 and BX571966) reported by Holden et al. (2004) [9] was referred to in the design of the primers. The sequences of the forward and reverse primers were 5′-ATAGGATCCATCGGTCTGGCCTCGCCGGAC-3′ (the underlined nucleotides represent the BamHI recognition sequence) and 5′-TATGGTACCGACGCGCTTGCCGAGCAGGTTC-3′ (the underlined nucleotides represent the KpnI recognition sequence), respectively. The primers were designed to target the coding sequence of the N-terminal region of RpoC, corresponding to amino acids 32 to 347, at a genomic location of 3820366 to 3820363 (978 bp). PCR amplification was performed using the high-fidelity KOD DNA polymerase (Novagen, USA) according to the manufacturer's instructions. The PCR-amplified DNA fragment coding for the N-terminal region of RpoC (approximately 1 kb) was then cloned into the expression vector pQE-30 (Qiagen, Germany) using the BamHI and KpnI restriction sites. Escherichia coli JM109 was used as the cloning and expression host. The resultant plasmid was named pQE-RPOCN, and its recombinant protein contained a His-tag at the N-terminus. The plasmid pQE-RPOCN was extracted and subjected to automated DNA sequencing to verify the insert. A mid-exponential-phase culture of E. coli JM109 harboring pQE-RPOCN, growing in LB medium at 30°C, was induced with 0.5 mM IPTG for protein production. The recombinant RpoC-N produced appeared as inclusion bodies. Thus, protein denaturation and refolding were performed in order to obtain the soluble form, referring to the protocol suggested by Young et al. (2001) [7] with modifications. The inclusion bodies were denatured using urea buffer (100 mM NaH2PO4, 8 M urea, pH 8.0).
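As a small self-check of the primer design described above, the following Python sketch verifies that the forward and reverse primers (copied from the text) contain the BamHI (GGATCC) and KpnI (GGTACC) recognition sequences used for cloning into pQE-30:

    # Verify that each cloning primer carries its intended restriction site.
    PRIMERS = {
        "forward (BamHI)": ("ATAGGATCCATCGGTCTGGCCTCGCCGGAC", "GGATCC"),
        "reverse (KpnI)":  ("TATGGTACCGACGCGCTTGCCGAGCAGGTTC", "GGTACC"),
    }

    for name, (seq, site) in PRIMERS.items():
        pos = seq.find(site)
        assert pos >= 0, f"{name} primer lacks the site {site}"
        print(f"{name}: site found at position {pos}, primer length {len(seq)} nt")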
The denatured protein was subsequently dialyzed against a native buffer (50 mM NaH2PO4, 100 mM NaCl, 10% glycerol, 0.1% Triton-X, pH 8.0) using SnakeSkin (Thermo Scientific, USA) dialysis tubing of 10 kDa MWCO. The denatured sample was first dialyzed for 6 hours against a 10-fold volume of native buffer at 4°C. The procedure was repeated for another 6 hours using fresh native buffer, and a prolonged dialysis period of 16 hours was performed for a final round of dialysis. The dialyzed sample was then collected and centrifuged at 12,000 g for 20 min. The supernatant, which contained the soluble form of RpoC-N, was collected for the pull-down experiment. This soluble protein was quantified using Bradford Reagent (Sigma, USA) with bovine serum albumin as the standard.

The pull-down assay protocol suggested by Arifuzzaman et al. (2006) [4] was followed in this study with modifications. A single colony of B. pseudomallei was inoculated into 5 ml of LB medium and grown overnight at 37°C. A total of 1 ml of the overnight culture was used to inoculate 100 ml of fresh LB medium. The new culture was grown until it reached an OD600 value of approximately 1.0. The cells were centrifuged, resuspended in 5 ml of native buffer and then lysed by sonication. The supernatant was collected after centrifugation at 12,000 g for 20 min. This soluble fraction of B. pseudomallei was then used for the pull-down assay. A total of 2 mg of the His-tagged RpoC-N protein was loaded onto a 200 µl bed volume of Talon resin in a free-flow 10 ml column. The resin contained embedded cobalt ions that should immobilize the His-tagged RpoC-N. The resin was then washed twice with 2 ml of washing buffer (same ingredients as the native buffer). The B. pseudomallei protein lysate (5 ml) was then loaded into the column. The resin was then washed three times with 2 ml of washing buffer, which was the native buffer with 5 mM imidazole. Subsequently, a total of 200 µl of the first elution buffer (100 mM NaH2PO4, 8 M urea, pH 8.0) was loaded onto the resin and gently mixed for 15 min. This first eluted fraction contained the denatured RpoC-N-interacting proteins and was collected for further analysis. The second elution buffer was the same as the first with imidazole added (100 mM NaH2PO4, 8 M urea, 300 mM imidazole, pH 8.0); it was loaded onto the column to remove RpoC-N as well as proteins that bound unspecifically to the Talon resin. A negative control was performed using a 200 µl bed volume of Talon resin and 2 mg of B. pseudomallei total protein as starting materials, adhering to the manufacturer's protocol for native purification and using the washing buffer without imidazole (same ingredients as the native buffer) in the washing step and the second elution buffer in the elution step. The eluted protein samples were electrophoresed on 10% SDS-PAGE and then stained with Coomassie Blue. All the protein bands of the first elution were carefully excised using a clean scalpel and each was transferred into a microcentrifuge tube. Peptide digestion and purification were performed according to a protocol provided by the Protein and Proteomic Centre of the National University of Singapore; the complete in-gel digestion and ZipTip purification protocols are available at http://www.dbs.nus.edu.sg/research/facilities/ppc/index.htm. Trypsin was used for peptide digestion.
The digested and desalted peptides were finally analyzed using MALDI-TOF mass spectrometry to determine their identities. This procedure was also outsourced to the Protein and Proteomic Centre of the National University of Singapore.

Construction of the B. pseudomallei BPSS1356 deletion mutant

The B. pseudomallei mutant strain carrying a markerless deletion of the BPSS1356 gene was generated via homologous recombination using the non-replicative plasmid pDM-4 [10]. The mutagenesis design removed the region of the BPSS1356 open reading frame that encodes amino acids 8 to 1104. The upstream (US) and downstream (DS) homologous regions relative to the BPSS1356 ORF were PCR amplified to yield DNA fragments of 954 bp and 998 bp, respectively, using the primer pairs 1356USF/1356USR (US fragment) and 1356DSF/1356DSR (DS fragment). The US and DS fragments were digested with the restriction enzyme HindIII and the ends were subsequently ligated together. The ligated sample was used as the template to amplify the fused US-DS fragment. The amplified US-DS fragment was cloned into the pGEM-T vector (Promega, USA) via TA cloning to produce pGEM1356-USDS. The US-DS fragment was subsequently recloned into the sacB-based pDM-4 plasmid through the BglII and SacI restriction sites. E. coli S17-1 λpir was used as the cloning host. The resultant plasmid pDM4-1356 was verified by DNA sequencing from both directions using the primers NQCAT and NQREV.

The plasmid pDM4-1356 was introduced into B. pseudomallei K96243 via conjugation by biparental mating, with E. coli S17-1 λpir (pDM4-1356) as the donor and B. pseudomallei K96243 as the recipient. The merodiploid strains were selected on LB agar without NaCl supplemented with 150 µg/ml of chloramphenicol and 50 µg/ml of gentamicin (to kill the donor E. coli). At this merodiploid stage, the non-replicative plasmid pDM4-1356 should have integrated into the genome of B. pseudomallei via a single homologous recombination step, so that both the wild type and the BPSS1356 deletion alleles were present and the merodiploid was resistant to chloramphenicol. The merodiploid strain was subjected to a spontaneous second round of homologous recombination in order to generate the BPSS1356 deletion mutant of B. pseudomallei K96243. The plasmid pDM4-1356 contained the sacB gene of Bacillus subtilis, which encodes a levansucrase that synthesizes levan from sucrose. The accumulation of levan, a high-molecular-weight fructose polymer, is lethal to Gram-negative hosts. Thus, a merodiploid strain containing the whole plasmid recombined into the chromosome would be unable to grow on sucrose-supplemented media; only cells that had undergone a second homologous recombination removing the sacB gene could survive. This event produces either a wild type or a deletion mutant, depending on the location of the recombination. The merodiploid cells were plated on LB agar without NaCl but containing 10% sucrose. The resulting colonies were subjected to PCR screening using the primer pair 1356U-out/1356D-out. These screening primers anneal outside the US-DS homologous region and were therefore not involved in the generation of the mutant. The mutant strain should produce a PCR fragment of 2.2 kb in length, while the wild type strain should produce a PCR fragment of 5.5 kb. The 2.2 kb DNA fragment was subjected to DNA sequencing using the primer 1356_mutSeq to verify the deletion. The sequences of the primers used in the mutant construction are available in Table S1A.
Growth pattern of the wild type and the ΔBPSS1356 mutant

The growth curves of the wild type B. pseudomallei K96243 and the ΔBPSS1356 mutant were observed in LB broth and in M9 minimal medium. A single colony of each strain was inoculated into 25 ml of medium and grown for 24 hours at 37°C with rotation at 180 rpm. Fresh medium (250 ml) was separately inoculated with each culture to an optical density at 600 nm of 0.05, and the cultures were grown at 37°C and 180 rpm. The optical density of the cultures at 600 nm (OD600) was measured at various time intervals in a spectrophotometer (U-1900 UV/Vis spectrophotometer 200V, Hitachi, Japan); dilutions were performed when the OD600 value of an undiluted culture was higher than 0.5. Bacterial growth for each strain was followed in triplicate until the growth curve reached a plateau, and the mean OD600 values were plotted.

Electron microscopy

Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) were conducted for the wild type and mutant strains. The sample processing method was adapted from Glauert (1981). The bacterial cells were harvested from exponential cultures (OD600 0.5) for both microscopy methods.

Biofilm formation assay

The biofilm formation assay was conducted for the wild type B. pseudomallei K96243 and the ΔBPSS1356 mutant using LB broth and M9 minimal medium. The biofilm formation assay protocol was adapted from O'Toole & Kolter (1998), using spectrophotometric quantification. The absorbance was measured at 595 nm using a microplate reader (Model 680, Bio-Rad Laboratories, USA). The OD595 values of the bacterial samples were normalized against the values obtained from the negative control (uninoculated medium). The mean values of triplicate samples were used for this analysis.
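A minimal sketch of the spectrophotometric biofilm quantification described above, assuming triplicate OD595 readings (all values illustrative):

    # Normalize OD595 readings against the uninoculated-medium blank and
    # average triplicates, as in the microtitre plate assay.
    def biofilm_signal(replicates, blank_replicates):
        blank = sum(blank_replicates) / len(blank_replicates)
        corrected = [max(od - blank, 0.0) for od in replicates]
        return sum(corrected) / len(corrected)

    wild_type = [0.82, 0.79, 0.85]   # illustrative OD595 triplicates
    mutant    = [0.51, 0.48, 0.50]
    blank     = [0.09, 0.10, 0.08]

    wt = biofilm_signal(wild_type, blank)
    mut = biofilm_signal(mutant, blank)
    print(f"WT = {wt:.2f}, mutant = {mut:.2f}, "
          f"decrement = {100 * (wt - mut) / wt:.0f}%")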
Global RNA microarray analysis

The RNA samples of the wild type B. pseudomallei K96243 and the ΔBPSS1356 mutant were isolated using the easy-BLUE Total RNA Extraction Kit (Intron, Korea), strictly following the manufacturer's protocol. The bacterial cells were harvested when an OD600 value of 1.0 was reached (exponential growth phase), and a total of 3 ml of culture was used as the starting material for every extraction. The isolated RNA samples were subjected to DNase (Superase, Ambion) treatment according to the manufacturer's protocol and then to phenol/chloroform extraction. The volume of the RNA sample was topped up to 300 µl using RNase-free ddH2O; 300 µl of phenol/chloroform solution, pH 5.2 (Ambion, USA) was added and the mixture was vigorously vortexed. The sample was centrifuged at 12,000 g for 5 min and the upper aqueous layer was transferred to a new microcentrifuge tube. A total of 850 µl of absolute ethanol and 30 µl of 3 M sodium acetate were added, and the sample was frozen at −20°C for 1 hour in order to enhance RNA precipitation. Each sample was centrifuged at 12,000 g for 20 min at 4°C to pellet the RNA. The RNA pellet was then washed with ice-cold 70% ethanol, air dried and dissolved in 50 µl of RNase-free ddH2O. The RNA quantification was conducted using a NanoDrop analyzer, and the integrity of the RNA was determined using a Bioanalyzer (Agilent, USA). RNA samples with a RIN value of 10 were regarded as being of high quality and were chosen for the next step. These RNA samples were subjected to poly(A) polymerization using the Poly(A) Tailing Kit from Epicentre (USA); the 50 µl reaction contained 25 µg of RNA, 1 mM ATP, 4 U of Poly(A) polymerase and 1X reaction buffer. The reaction was conducted at 37°C for 30 min. The poly(A)-tailed RNA samples were subjected to phenol/chloroform extraction as described above, and the RNA concentration was determined using the NanoDrop analyzer. The resultant purified A-tailed RNA samples were then subjected to microarray analysis.

The Agilent microarray platform in the 8 × 15K format was used in this study. Each slide could accommodate 8 independent samples, and each single compartment contained approximately 15,000 oligonucleotide probes representing all 5721 ORFs annotated by Holden et al. (2004) [9]. Each probe was 60 nucleotides long. The design was kindly provided by Prof. Sheila Nathan (Universiti Kebangsaan Malaysia) and was custom-made by Agilent Technologies (USA) (Probe ID: 019078). The probe details are available in the NCBI GEO database under accession number GPL13233. The One-Color Microarray-Based Gene Expression Analysis (Low Input Quick Amp Labelling Kit) provided by Agilent Technologies (USA) was used in this study; all the reagents and chemicals were included in the kit, and the manufacturer's instructions were strictly followed without any modification. A total of 200 ng of poly(A)-tailed RNA sample was used as the starting amount. Briefly, the RNA sample was reverse transcribed to cDNA using the AffinityScript reverse transcriptase and a T7 promoter primer. The resultant cDNA was then subjected to in vitro transcriptional amplification using T7 RNA polymerase, during which Cyanine 3-CTP was incorporated into the resultant cRNA. The cRNA sample was purified using the Absolute RNA Nanoprep Kit (Stratagene, USA) and quantitated using a NanoDrop ND-1000 UV-VIS spectrophotometer (Thermo Scientific, USA) to determine the cRNA yield and the Cyanine 3 labelling efficiency. A total of 600 ng of the resultant cRNA was subjected to enzymatic fragmentation in a total reaction volume of 25 µl. The fragmented cRNA was then mixed with 25 µl of 2X hybridization buffer, and a volume of 40 µl was hybridized to the probe slide at 65°C for 17 hours. After the recommended washing steps, the slide was scanned using an Agilent Microarray Scanner with the green dye channel and a 5 µm scan resolution. The resultant high-resolution images were subjected to data extraction using the Agilent Feature Extraction Software.

The extracted gene expression data were analyzed using GeneSpring GX software (Agilent Technologies, USA). The raw expression data of the six samples were adjusted to a threshold value of 1, a median-shift normalization to the 75th percentile and a baseline transformation using the median of all samples. The three filtered datasets of each strain were grouped together to compare the gene expression of the wild type versus the mutant strain. The differentially expressed genes were obtained after the entity list was filtered at a p-value of ≤ 0.05 (unpaired t-test). The fold change of each gene was expressed as a log2 value, and genes with a fold change value of 2.0 and above were considered differentially expressed.
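A minimal sketch of the differential-expression filter just described (p ≤ 0.05, fold change ≥ 2); the expression table is an illustrative placeholder and the p-values are assumed to come from the unpaired t-test:

    import math

    # (gene, mean_wild_type, mean_mutant, p_value) -- illustrative values only.
    table = [
        ("BPSL0687", 420.0,  40.0, 0.003),   # glpK-like behaviour, down in mutant
        ("BPSS0766", 660.0,  20.0, 0.001),   # EriC-like behaviour, down in mutant
        ("BPSL9999", 100.0, 120.0, 0.400),   # hypothetical gene, not significant
    ]

    min_log2_fc = math.log2(2.0)
    for gene, wt, mut, p in table:
        log2_fc = math.log2(wt / mut)        # positive = higher in wild type
        if p <= 0.05 and abs(log2_fc) >= min_log2_fc:
            print(f"{gene}: log2 fold change = {log2_fc:+.2f}, p = {p}")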
Real-time PCR validation of the microarray results

The same RNA samples used for the microarray study were subjected to real-time PCR analysis in order to validate the microarray results. The one-step iScript One-Step RT-PCR Kit with SYBR Green (Bio-Rad, USA) was used, with a Bio-Rad CFX real-time thermal cycler (Bio-Rad, USA). A total of ten genes (Table S1B) were chosen for validation. Normalization was performed against the housekeeping gene BPSL2758 (glyA), which encodes serine hydroxymethyltransferase [11,12]. The real-time PCR reaction of 25 µl final volume contained 1X SYBR Green RT-PCR reaction mixture, 300 nM of each primer, 200 ng of RNA template and 1 µl of iScript reverse transcriptase. The thermal cycling steps followed the manufacturer's recommendations. A standard curve was constructed to determine the PCR efficiency (R) of each gene; to do this, a 2-fold serial dilution of the RNA template was used and the same PCR conditions were applied. The real-time RT-PCR results were analyzed using the Bio-Rad CFX Manager (Bio-Rad, USA). This quantification experiment was performed in triplicate and the manufacturer's general guidelines were followed. The log2 fold change values of all genes obtained from the real-time quantification and from the microarray study were plotted against each other, and the calculated r value (slope) was used to evaluate the RNA microarray output data; a value of 1 represents an ideal correlation between the RNA microarray and real-time PCR quantifications.

Oxidative stress sensitivity assay

The ΔBPSS1356 mutant showed reduced production of the cytochrome bd respiratory oxidase. The oxidative stress sensitivity assay was therefore performed to characterize the effect of oxidative stress on the wild type and mutant strains. The disc inhibition assay was executed as described by Tunpiboonsak et al. (2010) [13]. Briefly, a few isolated colonies of each strain were suspended in 5 ml of sterile saline solution until the turbidity of the cell suspension was equivalent to a 0.5 McFarland standard. A sterile cotton swab was used to spread the cells evenly over the entire surface of LB agar plates, and the plates were left to dry at room temperature. Subsequently, 6 mm paper discs containing 10 µl of 0%, 2.5%, 5.0%, 10%, 15%, 20%, 25%, 30% or 35% hydrogen peroxide were placed on the cell lawn. All LB agar plates were incubated overnight at 37°C, and the growth inhibition zones were measured after 24 hours of incubation. Each strain was studied in triplicate.

Growth pattern of the B. pseudomallei ΔBPSS1356 mutant in minimal medium with glycerol as the sole carbon source

The absence of BPSS1356 in B. pseudomallei down-regulated the glycerol metabolism related genes, as indicated by the microarray results. Thus, the growth kinetics of both strains were determined using glycerol as the sole carbon source. The parameters used were the same as in the growth pattern study described earlier, with M9 minimal medium supplemented with 0.2% glycerol. The optical density values were recorded at 12-hour intervals for up to 108 hours.

Growth pattern of the B. pseudomallei ΔBPSS1356 mutant in high salt medium

Based on the microarray results, the BPSS1356 gene is believed to be involved in the regulation of ion transportation. Therefore, the B. pseudomallei wild type and the ΔBPSS1356 mutant were subjected to a high-salt condition to determine their growth kinetics. Pumirat et al. (2010) [14] reported that the growth of B. pseudomallei was slightly attenuated in LB medium supplemented with 320 mM NaCl. The parameters used were the same as in the growth pattern study described earlier. The optical density values were recorded at 2-hour intervals for up to 30 hours.

Osmotic stress assay

The osmotic stress assay was performed according to Subsin et al. (2003) [15] for the wild type B. pseudomallei K96243 and the ΔBPSS1356 mutant.
The bacterial cultures were prepared by overnight growth in LB broth at 37°C with rotation at 180 rpm. The bacterial cells were then harvested, washed and resuspended in M9 medium supplemented with 4 M NaCl, and the resuspended cells were incubated at 37°C with shaking at 180 rpm. The colony forming units (c.f.u.) per ml were calculated by plating diluted cell suspensions on LA agar plates after 0, 12, 18 and 24 hours. The osmotic stress assay for each strain was performed in triplicate, and the viable counts were expressed as the percentage of survival after the osmotic shock.

Results

The interactome of RpoC-N

A total of 15 discernible protein bands were chosen for MALDI-TOF analysis, and the identities of the proteins obtained using a Mascot search are listed in Table 1. Intriguingly, BPSS1356 appeared as two isoforms with sizes of approximately 80 kDa and 120 kDa (Figure 1, Bands 2 and 3 of Lane 2). BPSS1356 is an uncharacterized conserved protein that is found only in the Burkholderia genus. The identities of four protein bands could not be determined by MALDI-TOF analysis; this could be due to peptide degradation during the sample preparation.

Generation of the Burkholderia pseudomallei ΔBPSS1356 mutant

PCR amplification using the outer primers was used to verify the deletion of the BPSS1356 gene in the mutant strain. The mutant yielded a PCR amplicon 2.2 kb long and the wild type a 5.2 kb amplicon, as shown in Figure 2. Automated DNA sequencing of this PCR amplicon confirmed the deletion of the BPSS1356 gene.

Growth curve

When LB broth was used as the growth medium, both the wild type B. pseudomallei K96243 and the ΔBPSS1356 mutant showed the same growth rate from the lag to the early stationary phases (Figure 3). However, the ΔBPSS1356 mutant showed a greater rate of decline in cell density during the stationary phase compared to the wild type: after about 16 hours of incubation, the cell density of the mutant culture started to decrease. This could possibly be due to cell lysis progressing at a higher rate than in the wild type, suggesting that BPSS1356 might play a role in maintaining cell integrity during the stationary growth phase. However, both strains demonstrated no difference in growth rate when M9 minimal medium was used (data not shown).

Electron microscopy

The cell surface of the ΔBPSS1356 mutant showed no difference from the wild type in terms of shape, width and length when examined by SEM (Figure 4). However, the cells of the mutant strain exhibited a rougher cell surface than the wild type; this rough surface architecture could possibly be due to a lower tolerance of the SEM preparation steps. TEM was also performed to examine the interior of the ΔBPSS1356 mutant in comparison with the wild type. As judged from Figure 5, the mutant showed a shrunken cytoplasmic compartment and an expanded periplasmic space. This appearance resembles bacterial plasmolysis upon exposure to a hypertonic solution [16], suggesting that BPSS1356 plays a role in maintaining the osmotic balance between the inner compartment and the outer space of the cells.

Biofilm formation assay

The role of BPSS1356 in biofilm formation was investigated using the microtitre plate assay. When LB broth was used as the growth medium, the ΔBPSS1356 mutant exhibited decreased biofilm formation.
As shown in Figure 6, the ΔBPSS1356 mutant showed a decrement of 40% (p = 0.0015; Student's t-test) in biofilm mass compared to the wild type. The decrement could be due to the reduced growth of the mutant in stationary phase; BPSS1356 may therefore not be directly involved in biofilm formation. However, there was no significant difference between the wild type and the ΔBPSS1356 mutant when M9 minimal medium was used as the culture medium (data not shown).

Global transcriptional analysis

Based on the RNA microarray analysis, the wild type B. pseudomallei and the ΔBPSS1356 mutant showed differences in global gene expression in cultures grown in LB broth. Compared to the wild type, the mutant showed down-regulation of 303 genes and up-regulation of 289 genes when variations of more than 2-fold were considered. At a higher stringency considering only expression changes of at least 4-fold, a total of 63 genes (30 genes on chromosome 1; 33 genes on chromosome 2) showed reduced expression in the B. pseudomallei ΔBPSS1356 mutant, whereas the expression of 26 genes (21 genes on chromosome 1; 5 genes on chromosome 2) was enhanced in the mutant compared to the wild type. These differentially expressed genes were classified using the COG (Clusters of Orthologous Groups) annotation system, which has four functional groups: metabolism, information storage and processing, cellular processes, and unknown function (Table 2). The complete list of differentially expressed genes with at least a 4-fold change is shown in Table S2.

Under the metabolism group, a total of 36 genes were found to be affected upon deletion of BPSS1356. These genes were further grouped into six functional categories: lipid metabolism (9 genes), energy production and conversion (6 genes), amino acid transport and metabolism (7 genes), carbohydrate transport and metabolism (6 genes), secondary metabolite biosynthesis, transport and catabolism (6 genes) and coenzyme metabolism (2 genes). The absence of BPSS1356 in B. pseudomallei therefore affected metabolism-related genes the most, with 40% (36 of 89 genes) differentially expressed compared to the wild type.

A total of 8 out of the 9 genes related to lipid metabolism were derived from 3 putative operons (BPSL0648 to BPSL0651, BPSL1954 to BPSL1955 and BPSL0473 to BPSL0493). The adjacent genes were also considered for analysis with a fold change cut-off value of 2.0; two operons were down-regulated in the ΔBPSS1356 mutant while one operon was up-regulated (Table S3; a sketch of the operon-grouping heuristic follows below). To deduce whether the lipid metabolism pathway was connected with the operons, those genes were mapped onto the KEGG pathway database [17]. The result turned out to be inconclusive, as there was no specific lipid metabolism pathway related to the operons, either because a gene was present in several pathways or because some of the genes were not present in any lipid metabolism pathway. The lack of lipid metabolism studies in B. pseudomallei and related species made the analysis more challenging.
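As referenced above, a minimal sketch of the operon-grouping heuristic: consecutive locus tags transcribed in the same direction are pooled into a putative operon (the strand assignments below are illustrative):

    # Group adjacent, co-oriented genes into putative operons.
    genes = [("BPSL0648", "+"), ("BPSL0649", "+"), ("BPSL0650", "+"),
             ("BPSL0651", "+"), ("BPSL0652", "-")]  # strands are illustrative

    def locus_index(tag):
        return int(tag[4:])  # numeric part of e.g. "BPSL0648"

    operons, current = [], [genes[0]]
    for prev, cur in zip(genes, genes[1:]):
        adjacent = locus_index(cur[0]) == locus_index(prev[0]) + 1
        if adjacent and cur[1] == prev[1]:
            current.append(cur)
        else:
            operons.append(current)
            current = [cur]
    operons.append(current)
    print([[tag for tag, _ in op] for op in operons])
    # [['BPSL0648', 'BPSL0649', 'BPSL0650', 'BPSL0651'], ['BPSL0652']]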
In the energy production and conversion category, glycerol metabolism was affected upon removal of BPSS1356. The genes glpK (glycerol kinase, BPSL0687) and glpA (glycerol-3-phosphate dehydrogenase, BPSL0688) were down-regulated in the mutant compared to the wild type, with fold change values of 10.48 and 23.28, respectively. The gene glpF (glycerol uptake facilitator, BPSL0686), located immediately upstream of glpK, was also down-regulated, with a fold change value of 10.08. These three genes are possibly coregulated as a single transcript involved in glycerol-based energy production, yielding dihydroxyacetone phosphate that enters glycolysis [18]. In addition, BPSS1356 appears to play an important role in the oxygen-based respiratory system of B. pseudomallei: the genes cydB (cytochrome bd oxidase subunit 2, BPSS0234) and cydA (cytochrome bd oxidase subunit, BPSS0235) were down-regulated in the mutant with fold change values of 6.58 and 5.77, respectively. They encode the cytochrome bd respiratory oxidase that carries out the reduction of oxygen to water [19].

BPSS1356 also influences the amino acid transport and metabolism category, possibly functioning as a positive regulator of arginine metabolism. In the ΔBPSS1356 mutant, the four consecutive genes arcD, arcA, arcB and arcC, which code for the arginine/ornithine antiporter (BPSL1742), arginine deiminase (BPSL1743), ornithine carbamoyltransferase (BPSL1744) and carbamate kinase (BPSL1745), respectively, were markedly down-regulated (fold changes of 15.96, 18.68, 9.08 and 11.62). These four genes could be coregulated as a single operon, since they are located contiguously and are transcribed in the same direction.

A total of 11 genes in the cellular processes group were affected as well: 5 genes belonged to the inorganic ion transport and metabolism category, and the remaining 6 genes belonged to 4 other categories: signal transduction mechanisms (3 genes), cell envelope biogenesis and outer membrane (1 gene), intracellular trafficking and secretion (1 gene) and cell motility and secretion (1 gene). A minimal effect was observed on the information storage and processing cluster, with a total of only 6 genes affected: 3 genes corresponding to transcription, 2 genes corresponding to translation, ribosomal structure and biogenesis, and 1 gene corresponding to nucleotide transport and metabolism.

In the inorganic ion transport and metabolism category, BPSS1433 was down-regulated in the ΔBPSS1356 mutant. It is a member of an arsenic resistance locus which comprises arsR (transcriptional regulator, BPSS1430), BPSS1431 (unknown function), arsC (arsenate reductase, BPSS1432) and arsD (arsenite efflux pump, BPSS1433). All these genes were significantly down-regulated, with fold change values of 4.83, 7.30, 14.92 and 14.27, respectively. In the COG annotation system these genes are annotated with different names; however, their affiliations were changed to ars here to avoid confusion, based on their sequence homology with arsenic resistance related genes [20]. In the same category, BPSS0766, which presumably encodes a chloride channel protein (EriC), was severely suppressed in the mutant, with a fold change value of 32.99. This indicates that BPSS1356 could function as a major regulatory factor in the ion transportation process.

The cell secretion system was also affected, as BPSS1613, BPSS1614, BPSS1617 and BPSS1618 were down-regulated at least four-fold. These are members of the type III secretion system (cluster 2), a locus consisting of BPSS1613 to BPSS1629 (16 ORFs); a total of 10 of these ORFs were found to be down-regulated at least 2-fold as well (list not shown).

A large number of poorly annotated genes were also differentially expressed between the wild type and the ΔBPSS1356 mutant: a total of 36 genes from the general function prediction category (7 genes) and the function unknown category (29 genes) were observed. The general prediction category comprises proteins with a predicted biochemical activity but without a specifically assigned function.
The proteins that have no evidence of known function are classified in the unknown category. BPSL0324 was the most heavily suppressed gene, with a fold change value of 42.87. This protein was classified in the general prediction category. It shares homology with the ubiquitous sodium bile acid symporter family, indicating that BPSS1356 might play a critical regulatory role in ion transport.

Real-time PCR validation

The standard curves of all selected and reference genes were constructed to obtain the value of the PCR efficiency (R). The R value was generated automatically by the Bio-Rad CFX Manager software; the R values are listed in Table S4. The normalized fold change of a transcript, comparing wild type over mutant, is expressed as N_w / N_m = (R^Cq(m) / R^Cq(w)) / (R_r^Cq(rm) / R_r^Cq(rw)), where N represents the initial mRNA copy number, w the wild type, m the mutant, r the reference gene, R the PCR efficiency and Cq the quantification cycle value. The fold change values of each chosen candidate are also listed in Table S4. The log2 fold change values of all genes obtained from real-time quantification and from the microarray study were plotted against each other. The calculated slope was 1.06, showing that the data obtained from both methods of RNA quantification are in good agreement (Figure 7). This indicates that the RNA microarray output data closely represented the actual transcriptomic status of the cells.
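As a worked illustration of the normalized fold-change formula and the log2 agreement check above, here is a minimal R sketch; the efficiencies, Cq values and microarray fold changes are invented for demonstration, and the function name is ours rather than part of the study's pipeline.

# Efficiency-corrected fold change (wild type over mutant), normalized to a
# reference gene, following the formula in the text.
fold_change <- function(R, Cq_w, Cq_m, R_ref, Cq_rw, Cq_rm) {
  (R^(Cq_m - Cq_w)) / (R_ref^(Cq_rm - Cq_rw))
}

# Three hypothetical transcripts quantified by qPCR
qpcr <- c(
  fold_change(R = 2.00, Cq_w = 20.0, Cq_m = 23.5, R_ref = 1.95, Cq_rw = 18.0, Cq_rm = 18.2),
  fold_change(R = 1.98, Cq_w = 22.0, Cq_m = 25.3, R_ref = 1.95, Cq_rw = 18.1, Cq_rm = 18.0),
  fold_change(R = 2.02, Cq_w = 19.5, Cq_m = 21.6, R_ref = 1.95, Cq_rw = 18.2, Cq_rm = 18.1)
)
microarray <- c(10.5, 9.0, 4.2)  # invented fold changes for the same genes

# Agreement between the two quantification methods on the log2 scale;
# a slope close to 1 indicates good concordance.
fit <- lm(log2(qpcr) ~ log2(microarray))
unname(coef(fit)[2])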
Oxidative stress sensitivity assay

The results of this assay are shown in Figure 8. The wild type and mutant strains showed no significant difference in their response to various concentrations of hydrogen peroxide. This indicates that the down-regulation of the cytochrome bd respiratory oxidase does not increase the sensitivity of the mutant cells towards oxidative stress.

Growth pattern of the ΔBPSS1356 mutant in minimal medium with glycerol as sole carbon source

The ΔBPSS1356 mutant showed a slightly higher growth rate during the first 60 hours of incubation, as shown in Figure 9. Both the wild type and ΔBPSS1356 mutant strains showed cell aggregation during the incubation. The aggregation appeared as an insoluble pellet that settled at the bottom of the cultures. However, the level of cell aggregation was markedly greater for ΔBPSS1356 mutant cells after 60 hours of incubation, and the mutant culture showed reduced planktonic cell mass compared to the wild type. This observation supports the involvement of BPSS1356 in glycerol metabolism, as deduced from the microarray analysis.

Growth pattern of the ΔBPSS1356 mutant in high-salt medium

The growth patterns of both strains in high-salt medium are shown in Figure 10. Both the wild type B. pseudomallei K96243 and the ΔBPSS1356 mutant strains showed the same growth rate for the first 8 hours after inoculation. Subsequently, the ΔBPSS1356 mutant showed reduced growth compared to the wild type and exhibited less cell mass than the wild type when the plateau phase was reached. This different pattern of post-stationary growth could be attributed to the regulatory role of BPSS1356.

Osmotic stress assay

The survival rates of the wild type and the ΔBPSS1356 mutant were similar (data not shown). This result indicates that the loss of the BPSS1356 gene did not cause any significant change in the response to hyperosmotic shock, although the cell fitness of the ΔBPSS1356 mutant was compromised when it was grown in high-salt medium.

Discussion

B. pseudomallei is the causal agent of melioidosis, a severe disease that is endemic in tropical areas of Southeast Asia and Northern Australia [21]. The bacterium has a varied biological repertoire that enables it to evade the immune system. The search for novel virulence factors and genetic adaptations that allow this bacterium to survive intracellularly within its host has been greatly facilitated by the availability of the complete genome sequence of B. pseudomallei. However, more basic studies on B. pseudomallei are still needed to enable it to become a successful model for studying invasion and dissemination.

Table 2. Numbers of genes affected upon deletion of BPSS1356, categorized using the COG functional annotation system (down-regulated/up-regulated).
Metabolism: energy production and conversion, 5/1; amino acid transport and metabolism, 6/1; secondary metabolite biosynthesis, transport and catabolism, 4/2; carbohydrate transport and metabolism, 5/1; coenzyme metabolism, 1/1.
Cellular processes: inorganic ion transport and metabolism, 3/2; signal transduction, 2/1; cell envelope biogenesis and outer membrane, 1/0; intracellular trafficking and secretion, 1/0; cell motility and secretion, 1/0.
Information storage and processing: transcription, 2/1; translation, ribosomal structure and biogenesis, 2/0; nucleotide transport and metabolism, 1/0.

The core unit of bacterial RNA polymerase comprises four different types of subunits (α2ββ'ω). Another subunit, the sigma factor (σ), binds to the core enzyme to form the holoenzyme; the RNA polymerase-bound sigma factor confers promoter recognition and melting [22]. The minimal region encompassing the promoter-RNA polymerase complex consists of a small portion of the N-terminal region of the β' subunit and a partial segment of the σ factor [6]. This assembly serves as a model for studying the mechanism of transcription initiation. In this study, we used the N-terminal domain of the RNA polymerase β' subunit (referred to as RpoC-N from here on) from B. pseudomallei as the bait in a pull-down experiment to isolate and identify protein partners that may also contribute to the transcriptional process in this bacterium. The pull-down assay captured RNA polymerase subunits (RpoA and RpoB), indicating that RpoC-N, His-tagged at the N-terminus, was folded in a way that enabled its anticipated partners to attach to it; this also demonstrated the effectiveness of the pull-down assay. Amongst the pulled-down proteins, four were ribosomal subunits (RpsA, RpsC, RplA and RpsD), which are involved in translation. This observation is in good agreement with the result reported by Butland et al. (2005) [5], who showed that there were direct protein interactions between RNA polymerase and ribosomal proteins. Arifuzzaman et al. (2006) [4] reported similar interactions, and a further study [23] showed that RNA polymerase and the ribosome work in partnership during gene expression. The interactions involved were presumed to engage direct protein-protein dockings resulting in a giant enzymatic complex [24]. In this study, the pull-down assay revealed several proteins which interacted with RpoC-N, including the product of the hypothetical gene BPSS1356. Based on the genome annotation of Holden et al. (2004) [9], approximately 25% of the total genes of B. pseudomallei were found to be hypothetical. This category of genes has no known homologs in other species, and it is unclear whether they encode actual proteins. Based on the early annotation, BPSS1356 was one of the members of this category.
From this study, BPSS1356 was found to be a real protein-coding gene whose product interacted with RNA polymerase. This relatively large protein (125 kDa) is present only in the Burkholderia genus, based on tBlastN analysis. The fact that it does not exist in non-Burkholderia bacteria indicates that it is probably not an essential gene. Amongst the 59 strains of B. pseudomallei with genome information available in NCBI, BPSS1356 is found in 58 strains. The tBlastN analysis revealed that BPSS1356 is highly conserved, with a maximum variation of only 4 amino acids amongst its homologs. Of note, the absence of BPSS1356 in strain 1258a could be because the melioidosis patient was infected by more than one strain of B. pseudomallei; this is supported by the observation that the relapse strain 1258b possesses BPSS1356 [25]. Thus, BPSS1356 could possibly assist in the development of melioidosis or the dormancy of B. pseudomallei in its infected host. The start codon of BPSS1356 was verified by integrating an in-frame C-terminal His-tag coding sequence into the chromosomal copy of BPSS1356 [26]. The C-terminally His-tagged BPSS1356 protein produced by the resultant strain was then purified and subjected to N-terminal sequencing analysis, which subsequently revealed its start codon. The validated start codon was identical to the in silico prediction. Moreover, the success of BPSS1356 purification under native conditions suggests that it is localized in the cytoplasm. This observation contradicts a finding that BPSS1356 is present in the outer membrane fraction [27]; thus, the localization of BPSS1356 requires further validation. An isogenic BPSS1356 deletion strain (ΔBPSS1356) was constructed in order to understand the biological role of this hypothetical gene by observing phenotypic differences between this mutant and the wild type. To provide insight into the factors affecting the observed shifts in physiology, a comparative transcriptomic microarray analysis was also performed between the mutant and wild type strains. The expression levels of 63 genes were significantly down-regulated and those of 26 genes up-regulated in the mutant by at least 4-fold. Both the ΔBPSS1356 and wild type strains displayed similar growth rates when grown in Luria Bertani (LB) broth and defined minimal medium (M9). However, the ΔBPSS1356 mutant exhibited a greater rate of decline in cell density during stationary phase compared to the wild type in LB broth. Based on the microarray analysis, a possible cause for the cell mass reduction might be the reduced production of the cytochrome bd respiratory oxidase, which is encoded by cydB (BPSS0234) and cydA (BPSS0235). This postulation was made based on the phenotype of a cytochrome bd oxidase-minus mutant of Corynebacterium glutamicum, which exhibited reduced cell mass during the stationary growth phase [28]. In the case of C. glutamicum, the cydAB deletion mutant showed a trend similar to the wild type strain during exponential phase when glucose minimal medium was the growth medium, but exhibited a 40% reduction of cell biomass compared to the wild type in stationary phase [28]. However, the reduced expression of cytochrome bd in the ΔBPSS1356 mutant did not increase the sensitivity of the mutant to oxidative stress; the transcriptional reduction in the mutant might not be strong enough to produce a discernible change in the cellular phenotype.
Cytochrome bd is a respiratory oxygen reductase found commonly in various prokaryotes, which carries out the reduction of oxygen to water [19]. In E. coli, it has been shown to be an integral component of the cytoplasmic membrane [29]. Its expression is induced by various environmental stresses such as low oxygen tension, alkalization of the medium, high temperature, presence of cyanide and high hydrostatic pressure (reviewed in [19]). Cytochrome bd respiratory oxygen reductase has also been stipulated to be an important virulence factor of various aerobic pathogens that can survive within a microaerobic environment upon host invasion: Mycobacterium tuberculosis in the mouse lung [30], Brucella suis in macrophage cells [31], Pseudomonas aeruginosa in the host lung [32] and the closely related species Burkholderia cenocepacia in the lung during long-term residency [33]. The microarray result showed that arcD (arginine/ornithine antiporter, BPSL1742), arcA (BPSL1743), arcB (BPSL1744) and arcC (BPSL1745) were greatly down-regulated in the mutant strain, with fold change values of 15.96, 18.68, 9.08 and 11.62, respectively. The arginine deiminase system (ADS) catalyzes the conversion of arginine to ornithine, ammonia and carbon dioxide with the production of ATP. It comprises three major enzymes: ArcA (arginine deiminase), ArcB (ornithine carbamoyl transferase) and ArcC (carbamate kinase) [34]. Chantratita et al. (2011) [35] characterized the proteomic profiles of Type I (wrinkled) and Type III (smooth) colonies and showed that two enzymes of the arginine deiminase system (ArcA and ArcC) were enhanced in the latter. A similar result was also observed when the proteomic profile of B. pseudomallei isolated from relapsing melioidosis (Type III colony morphology) was compared to its counterpart (Type I) obtained during primary infection [36]: all three major enzymes of the ADS were found to be up-regulated in bacterial cells of the smooth colony (relapse) compared to the wrinkled colony (primary). However, the colony morphology of the ΔBPSS1356 mutant appeared to be the same as that of the wild type when grown on Ashdown agar (data not shown). The ADS was found to have minimal influence on the virulence of B. pseudomallei: an ADS deletion mutant of B. pseudomallei did not become avirulent when infection studies were performed using either macrophage cell lines [35] or murine models [37]. However, in the wild type B. pseudomallei, the ADS genes (arcA, arcB and arcC) showed a higher expression level during long-term residency in the host [36]. The same observation was also made in Pseudomonas aeruginosa [38], suggesting that the ADS plays an important role in long-term adaptation within an infected host. The ADS is also involved in other forms of adaptation. It has been shown to be a key factor in acid tolerance in B. pseudomallei [35] as well as in other bacteria such as Streptococcus suis [39], Listeria monocytogenes [40], Streptococcus pyogenes [41] and oral streptococci [42]. In addition, in a mouse model experiment, the ADS of L. monocytogenes was found to be a virulence factor [40], needed to maintain bacterial survival within the macrophage phagosome. The arcA, arcB and arcC genes are induced in the presence of arginine, acidic conditions and low oxygen tension [35,41,43,44]. Down-regulation of ADS expression upon deletion of BPSS1356 indicates that the latter could presumably act as a transcription factor for the ADS-associated cellular response.
Interestingly, both the ADS and cytochrome bd respiratory oxygen reductase-related genes are induced in oxygen-limited environments. This points to the possibility of BPSS1356 serving as a common positive regulator of a subset of genes that are responsive to limited oxygen supply. The regulatory role of BPSS1356 is also reflected in glycerol metabolism. The glycerol-related energy production genes consist of glpF (glycerol uptake facilitator, BPSL0686), glpK (glycerol kinase, BPSL0687) and glpA (glycerol-3-phosphate dehydrogenase, BPSL0688), which are located side by side in the same transcriptional direction. Their expression was severely suppressed in the ΔBPSS1356 mutant, with fold change values of 10.08, 10.48 and 23.28, respectively. The down-regulation of the glycerol-related genes suggests that glycerol assimilation and metabolism are controlled by BPSS1356. The ΔBPSS1356 mutant cells tended to aggregate more than the wild type when glycerol was used as the carbon source, an observation that links BPSS1356 to glycerol metabolism. GlpF is a well-characterized membrane-bound aquaporin which allows the transport of glycerol and water through the cell membrane [45]. GlpK is a kinase that carries out the phosphorylation of glycerol to produce glycerol-3-phosphate (G3P). The resultant G3P is later oxidized by either G3P dehydrogenase or G3P oxidase, and both enzymes produce dihydroxyacetone phosphate, which enters the glycolytic pathway [18]. In E. coli, the anaerobic form, G3P dehydrogenase (GlpA), generates NADH2, whereas the aerobic form, G3P oxidase (GlpD), generates hydrogen peroxide; both enzymes share high similarity in their primary protein sequences [18]. When the B. pseudomallei BPSL0688 amino acid sequence was aligned with the E. coli K12 protein reference sequences, the result revealed that BPSL0688 matches GlpD better than GlpA, with identities of 52% and 28%, respectively. This alignment result strongly suggests that the BPSL0688 gene is the homologue of E. coli glpD. If it were truly glpA, the corresponding operon members glpBC should also be present, as in the case of E. coli; GlpABC forms the heterotrimeric anaerobic G3P dehydrogenase [46]. Thus, labelling it as glpA, as suggested by the COG annotation system, is a misannotation, and it is suggested that BPSL0688 be annotated as glpD to avoid confusion. Glycerol metabolism has never been thoroughly investigated in B. pseudomallei. In Mycoplasma pneumoniae, glycerol metabolism results in the production of hydrogen peroxide, which is crucial for infection of eukaryotes: GlpD carries out the formation of hydrogen peroxide, which has an important cytotoxic effect on eukaryotic cells [47]. In Borrelia burgdorferi, the glpFAD operon is required for normal fitness during the tick phase of the enzootic cycle [48]. The glpD mutant of this bacterium replicates at a slower rate in the tick compared to the wild type, although the mutation makes no significant contribution to virulence in a murine infection model; glycerol was denoted as an important carbon source for glycolysis during the tick phase of the infectious cycle [48]. A deeper investigation of glycerol metabolism in B. pseudomallei is needed in order to elucidate the biological role of the glp operon in this pathogen. The type III secretion system (T3SS) is an important bacterial protein secretion vehicle. In many Gram-negative pathogens, it is responsible for the delivery of secreted proteins directly into the host cytosol [49].
B. pseudomallei has three T3SS clusters. The T3SS cluster 1 (T3SS1) and T3SS cluster 2 (T3SS2) were found to play a minimal role in virulence in a hamster infection model [50]. In contrast, the T3SS cluster 3 (T3SS3) is a well-known virulence determinant in pathogenesis [3,51]. The T3SS2 cluster shows high similarity to that of Xanthomonas spp., suggesting that B. pseudomallei could also be a potential plant pathogen [52]. Lee et al. (2010) [53] tested this hypothesis by infecting tomato plants with B. pseudomallei, and the study showed that T3SS2 was required for infection, although the actual mechanism is unknown. In the B. pseudomallei K96243 genome, the T3SS2 proteins are encoded by the genes BPSS1613 to BPSS1629 [52]. In this study, the genes BPSS1613 to BPSS1618 and BPSS1622 to BPSS1626 were down-regulated, suggesting that BPSS1356 plays a role in the expression of the genes of the T3SS2 proteins. The B. pseudomallei K96243 genome also revealed the existence of an arsenic resistance operon. This putative operon encodes the homologs of a transcriptional regulator (ArsR; BPSS1430), a hypothetical protein of unknown function (BPSS1431), an arsenate reductase (ArsC; BPSS1432) and an arsenite expulsion pump (ArsD; BPSS1433). This putative operon was down-regulated in the ΔBPSS1356 mutant compared with the wild type. In E. coli, ArsC binds to arsenate, resulting in the formation of a disulfide bond between the cysteine residues and the reducing equivalents. The subsequent reduction of the disulfide bond results in arsenate reduction to arsenite, and the latter is extruded from the cells by ArsB [20]. Interestingly, GlpF also participates in the uptake of arsenite, and a glpF mutant resulted in an arsenite-insensitive phenotype [54]. The expression of glpF was down-regulated in the ΔBPSS1356 mutant as well. Coincidentally, both ArsD and GlpF are transmembrane proteins. A biofilm is a microbial consortium that adheres to biotic and abiotic surfaces, with the aggregated cells embedded in a matrix of extracellular polymeric substances. Bacteria form biofilms in response to nutrient-limited conditions, typically during the late stationary growth phase, and this physiological transition involves large-scale transcriptional reprogramming. Biofilm formation allows bacterial survival under unfavorable living conditions and is a crucial adaptation strategy of various pathogens [55,56]. The ΔBPSS1356 mutant showed decreased biofilm formation compared to the wild type: this study showed a 40% reduction when LB was used as the growth medium. This observation could also be due to the reduced fitness of the ΔBPSS1356 mutant in stationary phase. Moreover, the microarray results revealed no significant difference in the expression of biofilm-related genes between the ΔBPSS1356 mutant and wild type strains. Additionally, the RNA samples were isolated from cells undergoing exponential growth; at this stage, the expression of genes involved in biofilm formation is not yet altered. In B. pseudomallei, biofilm formation purportedly does not have a direct correlation with virulence: no mortality difference was observed when mice were challenged with biofilm-producing and biofilm-deficient mutant strains [57]. However, Sawasdidoln et al. (2010) [58] suggested that the biofilm of B. pseudomallei is involved in enhancing drug resistance and may be a possible cause of the high relapse rate of melioidosis.
Bacterial ion channels are membrane proteins responsible for the transport of ions across the membrane down their electrochemical gradients. These proteins have high selectivity for specific ions such as sodium, calcium, potassium or chloride [59]. Amongst the genes affected by the BPSS1356 deletion, the ion transport-related genes were down-regulated with the greatest magnitude. The transcription of BPSL0324 (putative sodium bile acid symporter family protein) and eriC (chloride channel, BPSS0766) was suppressed with fold change values of approximately 43 and 33, respectively, in the ΔBPSS1356 mutant compared to the wild type. BPSL0324 was annotated as encoding a putative sodium bile acid symporter, a protein family ubiquitous in eukaryotes, based on amino acid similarity with its eukaryotic counterparts as well as the presence of 9 transmembrane α-helical spanners [60,61]. In eukaryotes, this transmembrane protein functions in the liver in the uptake of bile acids from portal blood plasma, mediated by the co-transport of Na+ [62]. A homologous ileal protein of this symporter in humans is responsible for reabsorbing bile acids and taurocholate from the small intestine, and its mutation has been characterized as a possible cause of Crohn's disease [63]. The biological function of this protein family in bacteria is yet to be determined, although it is likely to play a role in Na+-dependent acid transport at the cell membrane [61]. BPSS0766 contains 7 transmembrane α-helical spanners and exhibits sequence homology to the eukaryotic Cl− channel (CLC) family that is involved in anion transport [64]. In E. coli, this family of proteins (EriC and MriT) confers survival in extremely acidic environments such as stomach acid through decarboxylation of glutamate or aspartate followed by excretion of the resulting products [65]. The EriC of E. coli is most likely an H+/Cl− exchange transporter rather than a channel with a unidirectional flow of anions [66]. It has also been shown that sycA (a homolog of eriC) is an essential gene in Rhizobium tropici CIAT899 for establishing a proficient symbiotic relationship with its legume host, although the molecular basis of this observation is still unknown [67]. In this study, transmission electron microscopy of the ΔBPSS1356 mutant displayed a shrunken cytoplasm, which could be due to an expanded periplasmic space. This observation could be the consequence of blocked NaCl intake due to the suppression of the ion transport proteins BPSL0324 and EriC (BPSS0766). The LB medium used contained 5% NaCl; this external hyperosmotic pressure possibly triggered the export of H2O from the cytoplasmic compartment of the ΔBPSS1356 mutant, mimicking a plasmolysis process. Bacterial plasmolysis is common during hyperosmotic exposure, and shrinkage of the cytoplasm is a typical observation [16,68]. The best characterized bacterial sodium channel is NaChBac from Bacillus halodurans [69]. This type of Na+ channel is not ubiquitous in bacteria and is not found in B. pseudomallei [70]; to date, no other example of a bacterial sodium channel has been reported. This suggests that BPSL0324 deserves further investigation to interrogate its possible role as a sodium transport vehicle. The plasmolysis of the ΔBPSS1356 mutant when exposed to high-salt conditions would also be the probable cause of its rougher cell exterior observed by scanning electron microscopy. The same observation was reported when E. coli was used as the model.
The wrinkling of the E. coli cell wall was due to the dehydration force upon exposure of the cells to hyperosmotic conditions [16]. The ΔBPSS1356 mutant exhibited reduced cell mass when grown in high-salt medium, and this could be due to an effect on ion transport following deletion of the BPSS1356 gene. A NaCl sensitivity assay was performed, and the result showed no viability difference between the wild type and the mutant (data not shown). This could be explained by the attenuation of the ion transport genes in the ΔBPSS1356 mutant not being strong enough to cause a viability discrepancy between the wild type and the mutant. The upstream sequences of the BPSS1356-affected genes were analyzed using Virtual Footprint V3 [71] to search for a consensus putative promoter sequence controlled by known bacterial transcription factors. This attempt failed to reveal any consensus motif. A genus-specific and yet-to-be-characterized mode of transcriptional regulation might be present in B. pseudomallei, and BPSS1356 could possibly play a role in this process. In summary, the most significantly attenuated genes in the ΔBPSS1356 mutant code for membrane proteins. These include BPSL0324 and eriC (Cl− channel), with fold change values of 42.87 and 32.99, respectively. The transport-related genes arcD (arginine transport), glpF (glycerol permease) and arsD (arsenite expulsion pump) were down-regulated at least 10-fold as well. The plasmolysis exhibited by the ΔBPSS1356 mutant serves as evidence for the role of the gene in ion transport. BPSS1356 was found to be present in the RpoC-N interactome, which suggests that it may act as a positive trans-acting regulatory element in the transcriptional regulation of ion transport-related genes. The hypothetical gene BPSS1356 is most likely necessary for B. pseudomallei survival in harsh environments. The BPSS1356 deletion mutant also exhibited down-regulation of the genes encoding cytochrome bd and the arginine deiminase system; this was further supported by reduced biofilm formation and reduced cell mass during stationary phase. BPSS1356 has also been shown to affect the transcriptional expression of genes involved in glycerol metabolism, the Type III secretion system, the arsenic resistance pathway and lipid metabolism. BPSS1356 therefore plays a role in the regulation of many genes, which explains its multiple regulatory effects. Transcription start site mapping should be performed in order to precisely map the promoter regions of the affected genes; the outcome will provide more detail on the underlying transcriptional regulation by BPSS1356. The hypothetical gene BPSL0324 merits further investigation, since it is likely involved in sodium transport via an uncharacterized machinery.

Microarray data accession number

The raw microarray data and the normalized signal intensity values were deposited in the Gene Expression Omnibus (GEO; http://www.ncbi.nlm.nih.gov/geo/) and are accessible through GEO series accession number GSE53710.

Supporting Information

Table S1 Primers used in this study. A) List of primers used in ΔBPSS1356 mutant construction. B) List of primers used in real-time PCR validation. (DOCX)
Magnetic resonance imaging-based interpretation of degenerative changes in the lower lumbar segments and therapeutic consequences

Intervertebral disc degeneration and facet joint osteoarthritis of the lumbar spine are, among others, well-known causes of low back pain. Their correlation with clinical symptoms, as well as the benefit of different therapeutic options, often remains unclear. This article briefly reviews the correlation of IDD and FJOA with clinical pain scores and discusses possible treatment options for FJOA, with a focus on the intra-articular injection of corticosteroids.

INTRODUCTION

Among others, intervertebral disc degeneration (IDD) and facet joint osteoarthritis (FJOA) have been identified as causes of low back pain (LBP). Magnetic resonance imaging (MRI) is the imaging method of choice for the evaluation of IDD and FJOA of the lumbar spine [1,2]. For the grading of IDD of the lumbar spine, Pfirrmann et al [3] proposed an MRI-based 5-point scale which is based on MRI signal intensity, disc structure, distinction between nucleus and annulus, and disc height on T2-weighted, midsagittal images. Due to its more precise demonstration of bony details, computed tomography (CT) is often the preferred modality in the evaluation of FJOA. Weishaupt et al [4] evaluated the significance of MRI in comparison to CT using an established 4-point scale; in summary, the authors concluded that an additional CT scan is not required in the presence of an MRI examination. Because nearly all lumbar structures are possible sources of LBP, it remains difficult to provide a specific diagnosis for a large proportion of patients. The Oswestry Disability Index (ODI) is the most commonly used measure to quantify disability for LBP [5] and could reflect the relationship between pain and increasing grades of IDD and FJOA. If FJOA is identified as the source of pain, multiple therapeutic options have been described and established [6]. Among the different options, the lumbar facet joint (LFJ) intra-articular injection of corticosteroids in combination with an anaesthetic solution is one of the most frequently performed interventional procedures [7]. The rationale of this particular therapeutic approach is based on the idea that there is inflammation of the synovial structures of the degenerated facet joints; thus, intra-articular steroid injection is performed to generate an anti-inflammatory effect in order to achieve pain relief. Although widely used, the clinical benefit of intra-articular steroid injections remains controversial [8]. The aim of the presented article is to highlight the relationship between increasing grades of IDD/FJOA and clinical pain scores and to discuss the therapeutic success of minimally invasive procedures, such as intra-articular steroid injections in degenerated facet joints.
FJOA and pain correlation

Since the facet joints are the only synovial joints in the spine, with hyaline cartilage overlying subchondral bone, a synovial membrane and a joint capsule, they develop degenerative changes equivalent to those of other peripheral joints. Different studies have reported contradicting results about the prevalence of FJOA at the lumbar levels. Kalichman et al [9] reported that FJOA is most prevalent at L4/5 (45.1%), followed by L5/S1 (38.2%) and L3/4 (30.6%), whereas Abbas et al [10] described a different descending order: L5/S1 (55%), L4/5 (27%) and L3/4 (16%). Additionally, Abbas et al [10] described FJOA as an age-dependent phenomenon which increases cephalocaudally, whereas they found no correlation of FJOA with sex or body mass index. For the assessment of FJOA, our group applied the 4-point scale proposed by Weishaupt et al [4] to approximately 2400 facet joints of the lumbar segments L4/5 and L5/S1. Assuming that grade I changes already represent mild degenerative changes, nearly all patients in our study group showed degenerative alterations of the facet joints (97% at L4/5; 98% at L5/S1). In 150 patients, Ashraf et al [11] classified degenerative changes of the lumbar spine on lateral radiographs according to the criteria of Kellgren and Lawrence; additionally, functional disability was measured using the ODI. They found no significant correlation between the morphological severity of osteoarthritis and ODI scores. Peterson et al [12] evaluated 172 consecutive patients with LBP. Lumbar radiographs were judged with regard to the severity of disc and facet joint degeneration, and the results were correlated with the data of the ODI. The authors described a weak correlation between the values of LBP and radiologically assessed lumbar spine degeneration. A major limitation of the mentioned studies is the fact that degenerative changes of the cervical and lumbar spine were graded on plain film radiographs, which, because of superposition, are of limited diagnostic value. Additionally, the severity of degeneration of both the intervertebral discs and the facet joints was taken into account for scoring. As already mentioned, nearly all lumbar structures are possible sources of LBP, so that an isolated consideration of anatomic structures (facet joint, intervertebral disc) and their degenerative changes with regard to clinical importance is necessary. Therefore, we correlated degenerative changes of facet joints at lumbar levels L4/5 and L5/S1 with the ODI. Our results demonstrate that there is only a weak correlation between signs of degeneration and clinical disability scores as evaluated by the ODI. Taking into account that a huge majority of patients of all ages show degenerative changes of facet joints in the lower motion segments of the lumbar spine, these results should be considered in the future evaluation of lumbar MRIs. In the presence of other degenerative changes such as IDD, osteochondrosis or Baastrup's disease, the finding of FJOA should not automatically be considered evidence of the cause of LBP. In fact, the presented results seem to prove that chronic LBP is a multifactorial disorder, which cannot be explained with a view constricted to one lumbar compartment.
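As an illustration of the kind of grade-versus-ODI analysis described above, the following R sketch correlates ordinal facet joint grades with ODI scores using Spearman's rank correlation; all values are simulated for demonstration and are not the study's data.

set.seed(42)
n <- 200

# Simulated Weishaupt grades (0-3) and ODI scores (0-100) with only a weak
# underlying association, mirroring the weak correlation reported above.
grade <- sample(0:3, n, replace = TRUE, prob = c(0.05, 0.40, 0.35, 0.20))
odi   <- pmin(100, pmax(0, 30 + 2 * grade + rnorm(n, sd = 15)))

# Spearman's rho is appropriate for an ordinal grading scale
cor.test(grade, odi, method = "spearman", exact = FALSE)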
IDD and pain correlation

It is widely accepted that IDD of the lumbar spine is one of the main causes of low back pain [13,14]. The etiology of IDD is not fully explained; heavy physical loading [15], overweight [16,17], vibrations during vehicle driving [18] and smoking [19] have been suggested to be associated with IDD. Since radiological features of IDD are almost universal in adults, it often remains unclear to what extent these changes are responsible for the clinical symptoms of the patient. From the radiological point of view, a standardized nomenclature for the evaluation of intervertebral disc alterations is needed in the first place. Pfirrmann et al [3] proposed a morphologic grading system based on T2-weighted sagittal MRI which showed good intra- and interobserver reliability. The grading system reflects the loss of proteoglycan concentration [20] in the nucleus pulposus of the lumbar disc, which goes along with decreasing signal intensity on T2-weighted imaging. The experience of our group confirms that IDD is a general finding in MRI of the lower (L4/5 and L5/S1) lumbar segments, even in young patients. The vast majority of examined patients present with Pfirrmann grade II to grade IV changes, whereas a relatively low percentage of lumbar discs present with grade V changes; only a small number of lumbar discs show no degenerative changes. These experiences impressively illustrate the dilemma of correctly rating the clinical symptoms of the patient on the basis of a pervasive imaging finding. In line with the above-mentioned results regarding the correlation of FJOA and ODI scores, the presence of IDD in lumbar MRI likewise cannot be considered evidence of the cause of LBP.

LFJ intra-articular steroid injections

LFJ intra-articular injection of corticosteroids in combination with an anaesthetic solution is one of the most frequently performed interventional procedures worldwide [7]. As outlined above, the rationale is to counteract inflammation of the synovial structures of the degenerated facet joints and thereby achieve pain relief; nevertheless, the clinical benefit of intra-articular steroid injections remains controversial [8]. Lakemeier et al [21] compared the effectiveness of intra-articular steroid injections and radiofrequency denervation in the relief of LBP associated with L3/L4 to L5/S1 FJOA. They investigated the therapeutic effect of the aforementioned interventional procedures in a cohort of 56 patients randomized into two therapeutic groups. In their double-blinded study, the authors found no significant differences in therapeutic success between the two procedures over a follow-up period of 6 mo. Ribeiro et al [22] compared the therapeutic success of intra-articular steroid injection vs intramuscular steroid application in patients with facet joint-related chronic LBP. The experimental group received bilateral intra-articular steroid injections of segments L3/4 to L5/S1 (6 injections in total), while the control group received 6 intramuscular injections at bilateral surface points of the paravertebral lumbar musculature. Both treatments were effective over the follow-up period of 6 mo compared to baseline; regarding pain relief, no significant difference between the procedures was observed.
It is well known that, besides technical modifications, many additional factors are involved in therapeutic outcome. Gryll et al [23] reported on situational factors contributing to the placebo effect during oral surgery (status of the communicator of drug effects, attitude of the dentist, attitude of the dental technician and message of drug effects). Among the four variables, only the attitude of the dentist and the dental technician led to a statistically significantly reduced fear of injection and lower ratings of pain experienced from the mandibular-block injection. Initial results of our group show that the therapist's attitude and empathy may increase the therapeutic effect of LFJ intra-articular steroid injections in patients suffering from chronic LBP. We performed a CT-guided puncture (Figure 1) of the facet joints at lumbar levels L4/5 or L5/S1, followed by an injection of a mixture of 4 mL of 0.5% bupivacaine and 1 mL of triamcinolone acetate (20 mg). After the therapeutic procedure, we encouraged the patients of an experimental group to ask questions about the procedure and showed them representative CT images. Patients of the control group left the interventional unit without further contact with the interventional radiologist. The initial results show a significant effect on pain relief during the early post-interventional phase in the experimental group as compared to the control group. It seems that in patients who better understand the therapies applied to them, an increase in therapeutic efficacy can be observed. The higher efficacy might be explained by the phenomenon of hetero-suggestion, which occurs during the post-interventional patient-radiologist dialog over the image presentation and might convey a message into the subconscious [24]. This shows how open and transparent communication can lead to a strong therapeutic alliance between patients and physicians, for the benefit of the patients.

Figure 1. Computed tomography-guided puncture of the facet joints at lumbar level L4/5 showing the needle trajectory.
Clinical Management of Women with Newly Diagnosed Osteoporosis: Data from Everyday Practice in Bulgaria

Introduction: The real duration of osteoporosis treatment in clinical practice is still not well described. The primary objective was to estimate the proportion of patients who stayed on treatment during a 4-year follow-up; the secondary objective was to estimate the proportion of patients who switched treatment and the reasons for the switch or discontinuation.

Methods: This was a national retrospective chart review based on routine clinical data. Data were collected electronically from medical records at 33 representative primary care physicians' sites. The inclusion criteria were women with postmenopausal osteoporosis who had received an initial treatment prescription following diagnosis by DXA between January 1, 2012, and December 31, 2014, and at least a 12-month database history after the index date. The exclusion criteria were women receiving treatment for osteoporosis and follow-up at secondary care physicians' sites only. All statistical analyses were performed with the R statistical package.

Results: A total of 1206 female patients with newly diagnosed osteoporosis and treatment initiation were followed for 4 years. The majority (88.3%) had no history of previous fractures. Bone mineral density data were available in 70.1%. Endocrinology was the most common specialty among the prescribing specialists (40.0%), followed by rheumatology (30.3%). Bisphosphonates (BPs) were the most common initial treatment (72.7%), followed by denosumab (20.1%). Ibandronate (70.2%) and alendronate (24.2%) constituted the majority of all prescribed BPs. A total of 731 patients remained on treatment during the second year (60.6%), 524 during the third year (43.4%) and 403 (33.4%) at study end (fourth year). In all groups except that on denosumab, the most common reason for switching to another treatment was a presumed lack of effect. The main reasons for treatment discontinuation were financial on the patient's part.

Conclusions: The duration of osteoporosis treatment in real-world clinical practice is far from optimal: < 3-4 years irrespective of fracture risk. Factors other than medical considerations are at play, mainly limitations set by the Health Insurance Fund; the health authorities should be aware of this.

Supplementary Information: The online version contains supplementary material available at 10.1007/s40744-021-00358-0.

INTRODUCTION

The prevalence of osteoporosis increases with age in both men and women [1]. Data from the largest epidemiological osteoporosis survey in Bulgaria identified 16.8% of women aged 50 years or older as having osteoporosis at the femoral neck (FN) [2]. The lifetime probability of a hip fracture in Bulgarian women above the age of 50 years was estimated at 11.2% in a more recent survey [3]. In addition, a large treatment gap has been described: 95% of Bulgarian women expected to have postmenopausal osteoporosis (PMO) remained without treatment [1]. This treatment gap is one of the largest in Europe, as a recent study described a mean gap of 74.6% in European countries, rising from 53% in Ireland to 91% in Germany [4]. In Bulgaria, osteoporosis treatment is reimbursed by the National Health Insurance Fund only if initiated by specialists (secondary care physicians, SCPs), such as endocrinologists and rheumatologists, after referral by the primary care physician (PCP) [5].
Bone mineral density (BMD) measurement by dual-energy X-ray absorptiometry (DXA), although not reimbursed, is required at treatment initiation and once yearly as per scientific guidelines [6]. PCPs are then responsible for all prescriptions and patients, to ensure continuity of care. All patients should ideally return to the SCP once annually for assessment of treatment response [6]. The PCPs would typically follow the initial recommendation, but a good proportion would subsequently stop or switch treatment without re-consultation with the SCP. A small proportion of patients may take the initiative and have regular consultations with specialists, but in this scenario the visit must be fully covered by the patient. Many factors might affect osteoporosis management led by the PCP, from the physicians' personal views on the priority of the disease to continuity in the diagnostic and therapeutic process [7,8]. No local data exist on how many patients continue their treatment after osteoporosis diagnosis by the SCP. A previous observational study included women with postmenopausal osteoporosis who visited only SCP offices on a yearly basis for consultation [9]. Its results showed that very few patients receiving denosumab discontinued therapy compared with patients receiving ibandronate. However, this study in the SCP setting was not generalizable to patients treated in primary care. We hypothesized that many patients discontinue treatment or switch to other medications without re-consultation with SCPs. In view of the lack of information about current patterns of osteoporosis treatment in primary care, the primary objective of this study was to estimate the proportion of women who stayed on the treatment prescribed by the SCP during a 4-year follow-up. The secondary objectives were: (1) to estimate the proportion of patients whose SCP-recommended therapy had been stopped or switched by the PCP, and (2) to establish the reasons for therapy switch or discontinuation.

METHODS

Study Design

The study was conducted as a national observational retrospective chart review based on routine clinical data of PMO women in Bulgaria and was therefore descriptive in nature, with no formal hypothesis to be tested. It was performed in accordance with the Helsinki Declaration of 1964 and its later amendments, was in accordance with all local legal and regulatory requirements, and followed generally accepted research practices. The study protocol was approved by the Central Ethics Medicines Committee of the Bulgarian Regulatory Agency (§HBG-0003/28.02.2019 and §ERRB/CT-0364/08.05.2019). Due to the retrospective nature of this study, informed consent was not required. The first data were entered into the electronic CRF on 12.05.2019, and the database was closed on 25.05.2020.

Participants

The study was performed in 33 representative (country-specific) sites with large PCP practices. Sites were selected from the 4200 PCPs registered in Bulgaria, following a feasibility assessment of the study in each one of them. The main site selection criterion was a high turnover of patients with osteoporosis: at least 50 per month. The sites were distributed evenly across the country to avoid possible selection bias. The patients whose data were included in the electronic database and subjected to analysis fulfilled the following criteria.
Inclusion Criteria

(1) PMO women with an initial treatment prescription by an SCP following diagnosis by DXA between January 1, 2012, and December 31, 2014; (2) patients attending the practice regularly throughout at least 12 months after treatment initiation (in order to reliably describe treatment patterns and cessation); and (3) patients who fulfilled their prescription at least during the first year after treatment initiation.

Exclusion Criterion

Women receiving treatment for osteoporosis (OP) and follow-up at SCP sites only. This criterion was introduced to focus on the management of osteoporosis specifically by PCPs (general practitioners) after the introductory consultation by the SCPs. In addition, this ensured the completeness and continuity of data, as the PCPs had documented the overall health condition of their patients (co-morbidities, etc.). Baseline characteristics were assessed on the index date of the initial treatment recommendation by the SCP. Patient data were collected from the time of diagnosis (by an SCP) for up to 4 years in the PCP setting, or until the first of death or loss to follow-up.

Data Analysis

Twelve hundred patients was found to be the minimal sample size sufficient to provide satisfactory precision of the interval estimates for the primary objective in the worst case (half CI = 2.8% for p = 0.5). Data used during the study were collected electronically from subject medical records through a dedicated study application and were then stored in a database in compliance with the requirements of FDA document 21 CFR part 11. All statistical analyses were performed using the R statistical package, version 3.6.2 [10,11]. All data were descriptive in nature. For categorical variables, the absolute (counts) and relative (%) frequency of patients in each category was presented, e.g., by groups after the first, second, third and fourth years for outcome targets. For continuous variables, the number of observations, mean, median, standard deviation, quartiles 1 and 3, range (minimum and maximum), and the number of patients with missing data were reported. Statistical significance was set at two-tailed p ≤ 0.05. Inferential analysis for comparison between treatments was not included in the study objectives; it was not performed post hoc due to the relatively small proportions of patients remaining on treatment and the frequent switches and therapy re-initiations.
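The quoted worst-case precision can be reproduced with a one-line normal-approximation calculation; the R sketch below is only an illustration of that arithmetic, not part of the study's analysis code.

# Half-width of a 95% confidence interval for a proportion, worst case p = 0.5
p <- 0.5
n <- 1200
half_ci <- qnorm(0.975) * sqrt(p * (1 - p) / n)
round(half_ci, 3)  # ~0.028, i.e. about 2.8 percentage points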
RESULTS

Primary and Secondary Care Physicians Providing Patient Data

Thirty-three PCPs were approved for study participation and granted access to their electronic databases. On average, the PCPs had 23.1 (SD 6.0) years of practice, with a range of 8 to 36 years (Q1, 19.0, Q3, 27.0 years) and a median of 20 years. Endocrinology was the most common specialty among the SCPs (40.0%), followed by rheumatology (30.3%), orthopedics (14.2%) and internal medicine (10.2%).

Patient Characteristics

A total of 1266 female patients were enrolled, 1206 of whom matched the trial criteria and entered the analysis. The mean age of the patients was 66.0 (SD 8.6) years, ranging from 39 to 93 years (Q1, 60 years, Q3, 72 years). The youngest patients (< 65 years) constituted the largest part (45.6%) of the research population, followed by the group between 65 and 75 years old (37.2%); the oldest patients (≥ 75 years) formed the smallest subgroup in the trial (17.2%). The patients' mean weight was 68.1 kg (SD 11.1; N = 1173), range 40-122 kg (Q1, 60.0, Q3, 74.0 kg). The mean BMI was 26.4 kg/m2 (SD 4.0; N = 1170), with a range from 17.4 to 44.9 kg/m2 (Q1, 23.7, Q3, 28.4 kg/m2) and a median of 25.9 kg/m2. The age at menopause was available for only 538 women: 48.8 years (SD 3.9), range 26 to 59 years (Q1, 47 years, Q3, 51 years), with a median of 50 years. Menopause had been natural in most women (90%). Further details describing the patient group (living situation, employment status, lifestyle factors) are summarized in Supplemental Table 1. The clinical risk factors of the study participants are summarized in Table 1. The majority of patients had not experienced previous fractures (88.3%). Although all patients had baseline DXA scans, mandatory for the diagnosis of PMO and treatment initiation, BMD results were available in the stored data archives for only 70.1% of the patients. Calcium supplementation was used by 67.7% of patients at baseline and vitamin D by 73.6%. Among all recorded cases of comorbidities at baseline (1578), cardiovascular diseases constituted the majority: 787/1578 (49.9% of all comorbidities) and 787/1206 (63.3% of all patients). The majority of patients (78.4%) were taking concomitant medications. A detailed list of the SCPs' specialties and treatment choices is presented in Supplemental Table 2. Figure 1 shows that, of all 1206 initiated patients, 731 remained on treatment during the second year (60.6%), 524 during the third year (43.4%) and only 403 (33.4%) at study end (fourth year).
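The persistence proportions in Figure 1 can be recomputed directly from the patient counts; the following R sketch also adds an exact binomial 95% confidence interval, which is not reported in the text and is shown purely for illustration.

# Patients still on treatment at the start of each study year (Figure 1)
on_treatment <- c(year1 = 1206, year2 = 731, year3 = 524, year4 = 403)
round(on_treatment / 1206, 3)  # 1.000, 0.606, 0.434, 0.334

# Exact binomial 95% CI for the year-4 persistence estimate
binom.test(403, 1206)$conf.int  # approximately 0.31 to 0.36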
Patient Disposition

The status of the participants at the end of the treatment period is reviewed in Table 2, and Table 3 summarizes the changes in treatment modality during the study period. The fraction of patients for whom therapy was stopped decreased over time, from 39.4% (475) in the first year to 25.4% (133) in the third year, with a large reduction in the stopping of oral BPs from the first to the third year. The most frequently stopped treatment was oral BPs (60.8%); stopping denosumab was much rarer (14.5%). Stopping treatment was much more frequent than switching. The most frequent switch between treatments across the entire trial period was from oral BPs to denosumab (5.1%); changes from denosumab to other treatments (excluding the no-treatment option) were less frequent.

Cessation and Substitution of Osteoporosis Medications

Table 4 shows the reasons for treatment change and treatment discontinuation for all patients who changed their PMO treatment during the 4-year follow-up period (N = 678). The most common reason for changing the PMO treatment was a presumed lack of effect (10.3% of all patients). This so-called presumed lack of effect was mainly documented as a lack of BMD increase on subsequent DXA scans, which reflected both poor PCP communication and the patients' wrong perceptions of treatment success. The most frequent reasons for treatment discontinuation were financial (28.9%), being lost to follow-up (18.6%) and poor adherence (10.2%). The most common financial reason was the patient's inability or unwillingness to co-pay 50% of the treatment cost.

Initial Treatment: Denosumab

The most common reason for switching from denosumab to another treatment option was an investigator's decision (9.0%). The most frequent reasons for denosumab discontinuation were financial ones (26.0%) and failure to appear (20.0%).

Initial Treatment: BPs

The most common reason for changing the oral BP treatment was a presumed lack of effect (9.4%), while the most common reason for discontinuation was again financial in origin (35.3%), followed by failure to appear (12.8%) and poor adherence (11.7%). The most common reason for switching from iv BPs to other treatment options was a presumed lack of effect (22.4%), followed by an investigator's decision (12.2%). In the iv BPs subgroup, the most frequent reasons for treatment discontinuation were failure to appear (26.5%) and poor adherence (10.2%).

Treatment Changes Occurring Due to Differing Recommendations by the PCP and SCP

There were two patients (2/1206 = 0.17%) who had their SCP-recommended therapy switched by the PCP without re-evaluation by the SCP. There was a single patient (1/1206 = 0.08%) who was prescribed denosumab by the SCP but for whom the PCP recommended an oral BP for a financial reason. Overall, the prescribing and monitoring of OP treatment could do with an overhaul, with PCPs having the leading role and SCPs a supportive one. Table 5 summarizes the percentage of patients with full supplementation (both calcium and vitamin D at the same time) over the consecutive 4 years, split by the co-treatment administered at the time of the supplementation. A common pattern, except for the iv BP group, is that the percentage of supplemented patients was higher in the first year and either increased over time or remained stable. However, the total number of patients dropped each year, and so did the number of supplemented patients, thus keeping the percentages on a stable or slightly increasing trend.

Bone Mineral Density (BMD) Data

A baseline DXA measurement had been performed in all patients; however, the BMD data of only 846 patients (70.1%) had been stored in the databases. In the second year, the number of patients with DXA measurements dropped to 108 out of the 731 still on treatment (14.8%), in the third year to 94 out of 524 on treatment (17.9%), and in the fourth year to 78 out of 403 (19.4%). Therefore, regular annual BMD controls were the exception rather than the rule. Only data from patients who had neither changes nor gaps during treatment were analyzed, in an attempt to make the T-scores more reliable. There was a large imbalance between the baseline (846) and subsequent years (78-108). There were many improvements in the T-score over time, but no patient at any test location or in any treatment group crossed the -1.5 cutoff, and only a few crossed the -2.0 cutoff. Detailed results can be found in Supplemental Fig. 1. Table 6 shows the data on reported fractures. In total, 30 fractures were recorded in 27 patients during the 4-year period; three patients experienced more than one fracture. Of the fractures, 33% occurred in the first year and 30% in the third year. The most common fracture locations were the spine (23.3%) and the wrist (20%); these two locations accounted for almost half of all fractures. The absolute numbers of fractures were too small to allow any comparisons between the different treatments.

DISCUSSION

This study analyzed 1206 female patients with newly diagnosed osteoporosis who were followed for 4 years. The majority of them (88.3%) had no history of previous fractures. BMD results were available in the documentation for 70.1%.
Calcium supplementation was used by 67.7% of patients at baseline, and vitamin D by 73.6%. Endocrinology was the most common specialty among prescribing SCPs (40.0%), followed by rheumatology (30.3%). Bisphosphonates (BPs) were the most commonly initiated treatment (72.7%), followed by denosumab (20.1%). Ibandronate (70.2%) and alendronate (24.2%) together constituted the majority of all prescriptions in the BPs group. Of all 1206 initiated patients, 731 remained on treatment during the second year (60.6%), 524 during the third year (43.4%), and only 403 (33.4%) at study end (fourth year). In all groups except the one on denosumab, the most common reason for switching the osteoporosis treatment was lack of effect; in the denosumab subgroup, it was an investigator's decision. Regarding treatment discontinuation, the main reasons were financial in origin, together with poor adherence and the patient's failure to appear. The main conclusion is that, despite all efforts, osteoporosis treatment was applied for very short and insufficient periods of time.

Timely prescription of osteoporosis medications and proper adherence to therapy are the key factors in the strategy to reduce the risk of fragility fractures, as once again highlighted in the most recent scorecard for osteoporosis in Europe [12]. Systematic and active cooperation among the medical specialists involved in the continuous process of osteoporosis diagnosis and treatment is of vital importance. In addition, factors of an administrative and financial character may play a pivotal role. In our country, the initial prescription of appropriate anti-osteoporotic medications (oriented mainly by BMD T-scores) is restricted to specialists in rheumatology and endocrinology only [5,6], and the BMD scan is not reimbursed. In addition, the patient's access to the specialists depends on the general practitioner's assessment and referral, and the number of referrals to specialists is also limited by the National Health Insurance Fund [5]. Treatment of osteoporosis may be subject to specialist follow-up once every 2 years, but the referral for this secondary assessment (again limited in number) is left in the hands of the general practitioners [5]. Sometimes it is easier for them to stop the treatment rather than to initiate a costly re-assessment. The modest level of medication reimbursement (50%) and the lack of reimbursement for DXA measurements render the situation even worse.

The role of the GP as coordinator and facilitator of the anti-osteoporotic strategy is crucial. Many studies have reported barriers and gaps in osteoporosis treatment led by GPs [13][14][15][16]. A Spanish survey reported inadequate diagnosis and treatment of osteoporosis in 63.4% of cases [13]: only 40.3% of those with indications for treatment received BPs, and 47.9% received calcium and vitamin D. In our study, about two-thirds of all treated patients received proper supplementation. A follow-up survey among Czech GPs revealed that only 60% of the respondents were adherent to the guidelines [15]; calcium supplementation was started by only 41% of the respondents and vitamin D by 40%. That study drew attention to the impossibility of prescribing selected drugs (reported by 61%) and to the financial limits introduced by the health insurance authorities (44%) [15]. All of this increased the financial burden on the patient, in addition to the low willingness of patients to pay for drugs out of pocket [15].
Two studies looking at barriers to improvement among GPs revealed that GPs considered osteoporosis far less important than other diseases and expressed uncertainty about the interpretation of BMD tests [7,16]. In another study, the presence of major osteoporotic risk factors did not alter the likelihood of diagnostic and therapeutic interventions [14].

Another crucial factor in the selection of osteoporosis medications is the specialty of the prescribing physician. Our study showed some differences in drug preferences, with endocrinologists prescribing strontium ranelate much more often than rheumatologists. Of note, the time period we analyzed was just before the classification of strontium salts as a third-line treatment option. A large registry-based study highlighted the role of the prescribing specialist [17]: specialists were more likely to prescribe a treatment other than oral bisphosphonates. Primary adherence to the prescribed treatment was higher with GPs (who prescribed primarily oral BPs), but secondary adherence did not differ between GPs and specialists [17]. In our study, the percentage of patients on denosumab increased with time, primarily because of the generally decreasing number of patients still on treatment.

The continuous use of BPs and denosumab in older adults has been explored in more than one study [18,19]. In a Canadian study of 100,000 older adults newly initiated on therapy, the duration of denosumab use was longer than that of BPs [18]; more BP users had discontinued therapy at day 365 (56.7%) than had the denosumab users (33.8%) [18]. An Irish study including 44 general practices reported 2-year persistence of 49.4% for oral BPs and 53.8% for denosumab [19]. Of note, less than 10% of the participants had been switched to other medications [19].

Patients' beliefs and concerns contribute largely to worsening adherence to osteoporosis treatment. A study following women treated with BPs documented variable reasons for discontinuation: withdrawal by another physician (40%), lack of motivation (20%), absence of BMD increase (14%), and many others [20]. In a previous retrospective, observational, multicenter chart review (with up to 24 months of follow-up), we analyzed postmenopausal women initiating 6-monthly denosumab injections or monthly oral ibandronate treatment [9]. At 24 months, 4.5% of women receiving denosumab had discontinued therapy, compared with 56.2% of women receiving ibandronate. Median time to discontinuation was longer in the denosumab group (729 days; interquartile range (IQR), 728.3-729.0) than in the ibandronate group (367 days; IQR, 354.0-484.8; p < 0.001). That previous study, however, included only two medications, and the participants were followed by specialists dedicated to the management of osteoporosis [9].

The limitations of the present study are inherent to its observational and retrospective nature. Using post hoc data extracted from registries may be a source of bias and missing data. The major strength of this study lies in the fact that it reflects the real-world situation in the management of osteoporosis in our country. Despite accumulating evidence of high fracture risk in subgroups of the Bulgarian population [3], adherence to treatment is not rising. The combination of medical and nonmedical factors (e.g., the level of and criteria for reimbursement) directly compromises the proper management of osteoporosis in our country.
Higher public awareness and better reimbursement of DXA tests and osteoporosis drugs might contribute to improving osteoporosis care and to closing the wide gap in fracture prevention. Different strategies for solving these problems have already been tried: issuing osteoporosis guidelines dedicated to GPs [21], testing priorities in specific focus groups [22], and introducing remote consultations (telemedicine), which would help a lot in the present COVID-19 era [23].

CONCLUSIONS

The duration of osteoporosis treatment in real-world clinical practice is far from optimal, usually below 3-4 years irrespective of fracture risk. Factors other than medical considerations are at play: regulations and limitations set by the Health Insurance Fund, the patient's willingness and ability to pay out-of-pocket costs, and many others. The health authorities should be aware of the barriers to the proper prevention of costly fragility fractures.

Compliance with Ethics Guidelines. The study was conducted in accordance with all local legal and regulatory requirements and followed generally accepted research practices. In agreement with national law, the study protocol was approved by the Central Ethics Medicines Committee of the Bulgarian Regulatory Agency (§HBG-0003/28.02.2019 and §ERRB/CT-0364/08.05.2019). Owing to the retrospective nature of this study, informed consent was not required. The first data were entered in the electronic CRF on 12.05.2019 and the last on 25.05.2020.

Data Availability. Qualified researchers may request data from Amgen clinical studies. Complete details are available at the following: http://www.amgen.com/datasharing.

Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/.
Yukawa Textures From Heterotic Stability Walls

A holomorphic vector bundle on a Calabi-Yau threefold, X, with h^{1,1}(X) > 1 can have regions of its Kähler cone where it is slope-stable, that is, where the four-dimensional theory is N = 1 supersymmetric, bounded by "walls of stability". On these walls the bundle becomes poly-stable, decomposing into a direct sum, and the low energy gauge group is enhanced by at least one anomalous U(1) gauge factor. In this paper, we show that these additional symmetries can strongly constrain the superpotential in the stable region, leading to non-trivial textures of Yukawa interactions and restrictions on allowed masses for vector-like pairs of matter multiplets. The Yukawa textures exhibit a hierarchy; large couplings arise on the stability wall and some suppressed interactions "grow back" off the wall, where the extended U(1) symmetries are spontaneously broken. A number of explicit examples are presented, involving both one and two stability walls, with different decompositions of the bundle structure group. A three-family standard-like model with no vector-like pairs is given as an example of a class of SU(4) bundles that has a naturally heavy third quark/lepton family. Finally, we present the complete set of Yukawa textures that can arise for any holomorphic bundle with one stability wall where the structure group breaks into two factors.

Introduction

Knowledge of the broad structure of the allowed interactions, obtained before any detailed calculation, can be extremely useful. Instead of first computing the details of a compactification, calculating the Yukawa couplings and discovering, for example, that the top quark mass vanishes, one can analyze the broad features of the allowed interactions at the start to see if the model has any possibility of being phenomenologically viable. Green-Schwarz anomalous U(1) symmetries, and the phenomenological constraints arising from them, have been used extensively in model building in Type II theories (for example, see [37,38]) and have played an important role in recent work on D-brane instantons [39]-[42]. In addition, such effects have been used to discuss Yukawa textures and hierarchies in F-theory [43,44]. However, it is important to note that the source of the anomalous U(1) symmetries in the present work, namely their origin in the global stability structure of the Kähler cone, is entirely new and provides an interesting contrast to the way that such symmetries arise in other contexts in string theory. It is also worth noting that the Yukawa textures explored in this work are distinct from those previously explored in the heterotic context [45,46]. For specificity, the explicit examples in this paper involve bundles defined by the monad construction [18]-[22] and by extension [14,28,29] over complete intersection Calabi-Yau threefolds [36]. However, our results and conclusions are completely general and apply to any holomorphic vector bundle with Kähler cone sub-structure defined on any Calabi-Yau manifold.

The paper is structured as follows. In the next section, we review general heterotic compactifications as well as the mathematics and associated effective field theories of stability walls. In Section 3, we describe the Yukawa textures that can result from the presence of the simplest kind of stability wall. Sections 4 and 5 discuss two generalizations of this: first, to stability walls with more complicated internal structure and, second, to the case where multiple stability walls are present in a single Kähler cone. A phenomenologically interesting example of these ideas is presented in Section 6.
Constraints imposed by stability walls on massive vector-like pairs of matter multiplets are analyzed in Section 7. In Section 8, we give our conclusions. The paper has two appendices. Appendix A presents a list of all possible Yukawa textures that can result from the simplest kind of stability walls. In Appendix B, we discuss some technical details associated with the phenomenologically realistic example of Section 6.

General Definitions

In E_8 × E_8 heterotic string and M-theory, compactification on a smooth Calabi-Yau threefold is not sufficient to ensure that the four-dimensional effective theory is N = 1 supersymmetric. Since heterotic compactifications necessarily include background gauge fields, supersymmetry is also dependent on the choice of gauge connection and its properties. Dimensional reduction yields the well-known result that, to preserve supersymmetry, these gauge fields must solve the Hermitian Yang-Mills equations

g^{ab̄} F_{ab̄} = 0 ,   F_{ab} = 0 ,   F_{āb̄} = 0 .   (2.1)

The latter two equations simply require that the connection be holomorphic. However, the first condition, g^{ab̄} F_{ab̄} = 0, is a notoriously difficult partial differential equation to solve, involving not only the gauge connection but also the Calabi-Yau metric - an object known only numerically at best [47]-[50]. Fortunately, the Donaldson-Uhlenbeck-Yau theorem [51,52] states that a holomorphic vector bundle admits a solution to (2.1) if and only if it is poly-stable, a criterion phrased purely in terms of the slope

μ(F) ≡ (1/rk(F)) ∫_X c_1(F) ∧ J ∧ J ,   (2.2)

where X is the Calabi-Yau manifold with Kähler form J and c_1(F) is the first Chern class of F. A vector bundle, V, is said to be stable for a given choice of the Kähler form if every sub-bundle F in V with rk(F) < rk(V) has slope strictly less than that of the bundle itself (strictly speaking, stability is defined with respect to coherent sub-sheaves F ⊂ V, but this distinction will not matter for our discussion). That is,

μ(F) < μ(V)  for all F ⊂ V with rk(F) < rk(V) .   (2.3)

A bundle is called semi-stable if μ(F) ≤ μ(V) for all proper sub-bundles F. We note that it is not stability that appears in the statement of the Donaldson-Uhlenbeck-Yau theorem, but poly-stability. A bundle is poly-stable if it is a direct sum of stable bundles, all of which have the same slope. That is,

V = ⊕_i V_i ,  with each V_i stable and μ(V_i) = μ(V) .   (2.4)

Clearly, all poly-stable bundles are semi-stable, but the converse does not hold. Hence, semi-stable bundles will be of interest to us only when they are also poly-stable.

An essential property, both mathematically and for physical applications, of the notion of stability - as well as semi-stability and poly-stability - is that it depends explicitly on the choice of Kähler form J on X. To understand the exact meaning of this, it is useful to expand J in a basis J_i, i = 1, ..., h^{1,1}(X), of (1,1)-forms as J = t^i J_i. The coefficients t^i are the Kähler moduli. Inserting this into (2.2), the slope of any sub-bundle F can be written as

μ(F) = (1/rk(F)) d_{ijk} c_1^i(F) t^j t^k ,   (2.5)

where d_{ijk} = ∫_X J_i ∧ J_j ∧ J_k are the triple intersection numbers of X and c_1(F) = c_1^i(F) J_i. That is, the slope of each sub-bundle F is a calculable function of the Kähler moduli t^i. It follows that whether or not a bundle is stable, poly-stable or semi-stable is, in general, a function of where one is in Kähler moduli space. A vector bundle V which is stable in one region of the Kähler cone of X may not necessarily be stable in another.

Stability Walls and Kähler Cone Substructure

How does one determine the regions of stability/instability of a vector bundle? We begin by noting that the stability properties of a vector bundle for a choice of Kähler class J (a choice of Kähler form is referred to as a "polarization" in the mathematics literature) will remain unchanged if that Kähler class is multiplied by a non-vanishing complex number. Hence, the stability properties of a bundle are the same along any one-dimensional ray in the Kähler cone.
It follows that for a Calabi-Yau manifold with h^{1,1}(X) = 1, a vector bundle will either be stable, or unstable, everywhere in the one-dimensional Kähler cone. We will, therefore, restrict our discussion to Calabi-Yau threefolds, X, with h^{1,1}(X) ≥ 2.

Now consider a holomorphic vector bundle, V, on X such that for at least one choice of Kähler form - and, hence, for the ray it defines - the bundle is slope-stable. In this paper, we take all slope-stable bundles to be indecomposable, with structure group SU(n) (note that decomposable vector bundles V = ⊕_i V_i can at best be poly-stable, see (2.4)). Hence, the first Chern class satisfies c_1(V) = 0. It follows from (2.2) that the slope of V also vanishes. Thus, for an SU(n) bundle to be stable for a given value of the Kähler moduli, the slope of each of its sub-bundles, calculated using the corresponding Kähler form J, must be negative. It is quite possible to find bundles for which the slopes of all sub-bundles remain negative everywhere in the Kähler cone (for example, the tangent bundle, TX, of the Calabi-Yau threefold). Any such vector bundle will admit an SU(n) connection satisfying the Hermitian Yang-Mills equations for any values of the Kähler moduli.

Now, however, consider a case where there is one particular sub-bundle F (itself stable) whose slope, while negative for the polarization where the bundle V is assumed stable, gets smaller and smaller in magnitude as one moves in the Kähler cone, eventually going to zero. The condition μ(F) = 0 is one equation restricting h^{1,1} Kähler moduli. That is, the vanishing of the slope of F defines a co-dimension one boundary - called a "stability wall" - in the Kähler cone. As we cross this wall, this sub-bundle becomes positive in slope and destabilizes the vector bundle. That is, the bundle no longer satisfies the Hermitian Yang-Mills equations and supersymmetry is broken. For such bundles, the Kähler cone has sub-structure [15,16,53]; that is, it can split into separate "chambers" with respect to the stability properties of V. In one of these chambers a solution to (2.1) can be found, and in the others it can not. This stability-induced sub-structure, and the effective field theory [15,16] description of it, will be of central importance to this work.

On the boundary between a supersymmetric and non-supersymmetric chamber of the Kähler cone, we know from the preceding discussion that there is a sub-bundle F, injecting into the bundle V, which has the same slope as the bundle itself. That is, we can write an injective morphism 0 → F → V → ... Coherent sheaves form an Abelian category and, thus, one may always write a cokernel, K = V/F, to form a short exact sequence and re-express the bundle as the extension

0 → F → V → K → 0 .   (2.6)

In other words, no matter how the bundle was originally defined, if it has a stability wall then it may be written as an extension (strictly speaking, this is true if the bundle has a stability wall caused by a single destabilizing sub-bundle; we will discuss more general cases in later sections). Given that on the stability wall F injects into V and has equal slope, the only way in which V can preserve supersymmetry, according to the Donaldson-Uhlenbeck-Yau theorem, is if it splits into a direct sum of two pieces. In other words, supersymmetry is only preserved on the wall when the sequence (2.6) splits and

V = F ⊕ K .   (2.7)

Is this always possible? To answer this, note that the set of equivalent extensions, V, in (2.6) is described by the group Ext^1(K, F).
The split configuration, (2.7), corresponds precisely to the zero element in that space [16]. Thus, as we approach a stability wall in Kähler moduli space, the system can continue to preserve N = 1 supersymmetry (mathematically, this can be understood by noting that on the wall the semi-stable bundle V is an element of an S-equivalence class [54]; since each S-equivalence class contains a unique poly-stable representative, it is always possible for the bundle to decompose as in (2.7)). The price for this, however, is a decomposition of the bundle into two pieces, V = F ⊕ K. Such a splitting of the bundle on the stability wall corresponds physically to a change in the group in which the gauge field background of the compactification is valued. If we begin with a stable SU(n) bundle V then, at the stability wall, the structure group changes to S[U(n_1) × U(n − n_1)], where n_1 is the rank of F. The exact splitting depends on the choice of structure group SU(n) in the stable chamber and on exactly which sub-bundle destabilizes the bundle at the stability wall. Generically, however, we can see that the effect of the splitting (2.7) will be to change the low-energy effective theory associated with this compactification. If we denote the commutant within E_8 of SU(n) as H - the symmetry group of the four-dimensional theory in the stable chamber - then the commutant of S[U(n_1) × U(n − n_1)] will be enhanced by one additional anomalous gauged U(1) symmetry to H × U(1).

Example: An SU(3) Heterotic Compactification

To give a concrete example of such a compactification, consider the Calabi-Yau threefold defined by a bi-cubic polynomial in P² × P²,

X = [ P² | 3 ; P² | 3 ]^{2,83} ,   (2.8)

where the superscripts are h^{1,1} and h^{1,2} respectively. On this manifold, let us define a holomorphic vector bundle, V, with structure group SU(3). The bundle is given by a two-step process. First, construct a rank 2 bundle, G, by the so-called monad construction [18]-[22] via a short exact sequence (2.9); G is defined in terms of the bundle morphism, f, in (2.9) as G = ker(f). Next, we build the rank three bundle, V, out of the line bundle O(−1,3) and G, by "extension". That is,

0 → O(−1,3) → V → G → 0 .   (2.10)

The manifold (2.8) is a complete intersection Calabi-Yau manifold [36], and there are only two independent Kähler moduli, t_1 and t_2. To find where V is stable, one must find all sub-bundles F, calculate their slopes and check that these are all negative. For such an analysis see, for example, [16,23]. Here, we simply present our results. Figure 1 shows the two-dimensional Kähler cone of the Calabi-Yau threefold (2.8). The physical Kähler cone, where the Calabi-Yau is positive in volume and non-singular, is the complete colored region. The light blue, upper region in Figure 1 is the set of polarizations for which the slope of each sub-bundle of the bundle is negative and, hence, the bundle is stable. Now note that the description of bundle (2.10) is already in the form (2.6). We can, therefore, simply read off F and K from (2.6) as

F = O(−1,3) ,   (2.11)
K = G .   (2.12)

It follows that the bundle V in (2.10) has a stability wall of the kind we have been describing. This wall is shown as the line in Figure 1. It separates the region of stability of V from its region of instability. The splitting V → F ⊕ K on the stability wall corresponds physically to a change in the group in which the gauge field background is valued.
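The location of this wall can be cross-checked directly from the slope formula (2.5). The short sympy sketch below is ours: it assumes the standard triple intersection numbers of the bi-cubic, d_{112} = d_{122} = 3 with all other components vanishing, and takes c_1(F) = (−1,3) for the destabilizing line bundle O(−1,3) of (2.11). Solving μ(F) = 0 reproduces the ray t_2/t_1 = 2 + √7 quoted in the Figure 1 caption below.

```python
# Locate the stability wall of this example by solving mu(F) = 0, with
# mu(F) ~ d_ijk c1(F)^i t^j t^k as in (2.5).  Assumptions (ours): the
# bi-cubic in P^2 x P^2 has symmetric intersection numbers d_112 = d_122 = 3
# (all other components vanish), and F = O(-1,3) so that c1(F) = (-1, 3).
import itertools
import sympy as sp

t1, t2 = sp.symbols("t1 t2", positive=True)
t = (t1, t2)
d = {(1, 1, 2): 3, (1, 2, 2): 3}       # independent symmetric components
c1F = (-1, 3)

mu = 0
for idx, val in d.items():
    for i, j, k in set(itertools.permutations(idx)):   # symmetrize d_ijk
        mu += val * c1F[i - 1] * t[j - 1] * t[k - 1]

r = sp.symbols("r", positive=True)                      # the ray r = t2/t1
print(sp.solve(sp.expand(mu).subs({t1: 1, t2: r}), r))  # [2 + sqrt(7)]
```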
For this example, these gauge fields change from being valued in SU(3) in the interior of the supersymmetric region, to being valued in S[U(2) × U(1)] on the stability wall.

Figure 1: The Kähler cone of the Calabi-Yau threefold (2.8), divided into the chamber where V is stable (light shading) and the chamber where it is unstable (dark shading). The stability wall generated by O(−1,3) in V occurs on the line with slope t_2/t_1 = 2 + √7.

The Particle Spectrum and Quantum Numbers

An analysis of the particle spectrum and the associated quantum numbers, both in the interior of the stable region of the Kähler cone as well as on a stability wall, is most easily presented in the context of an explicit example. Let us use the Calabi-Yau threefold, X, and the SU(3) vector bundle presented in (2.8) and (2.9),(2.10) above. In the interior of the stable region, the background gauge fields have structure group SU(3) and the symmetry group of the four-dimensional effective field theory is E_6. Computing the matter spectrum of this low energy theory is an exercise in group theory and bundle cohomology [2]. All matter fields in the ten-dimensional theory are valued in the 248 representation of E_8. The matter multiplets that appear in the four-dimensional spectrum are determined by the branching of this representation under E_8 ⊃ E_6 × SU(3),

248 → (78,1) ⊕ (1,8) ⊕ (27,3) ⊕ (27̄,3̄) .   (2.14)

The first number in the brackets above is the dimension of a representation of E_6 and the second the dimension of a representation of SU(3). To find the multiplicity of each term, one must compute the number of zero-modes of the associated twisted Dirac operators on the internal space [2]. This is given by the dimension of the relevant bundle-valued cohomology group.

Table 1: The group representations, field names and associated cohomologies of the generic E_6 theory. The multiplicities for the specific indecomposable rank 3 vector bundle V defined in (2.10) are given in the fourth column.
The group representations, four-dimensional field names and the associated cohomologies for a generic E 6 × U (1) theory are indicated in the first three columns of Table 2. The multiplicity is found by calculating the dimension of each cohomology. The results for the decomposition F ⊕ K associated with the explicit example (2.11),(2.12) are given in the fourth column. It is important to note here that the extra U (1) symmetry is Green-Schwarz anomalous, as described in detail in [15,16]. Thus, the usual anomaly cancellation constraints on the charges do not apply. For the general form of U (1) charges possible in the present context, see [55]. We will come back to the anomalous nature of this U (1) in the following sections. One obvious question is: what is the relationship between the particle spectrum on the stability wall, given in Table 2, and the manifestly different spectrum in the interior of the stability region, presented in Table 1? Furthermore, how does one relate their two four-dimensional field theories? Representation Field Name Cohomology To answer these questions, we construct the effective theory on the stability wall and then consider small perturbations into the interior of the slope-stable region. Connecting the Two Theories The effective theories associated with the stable bundle V and the poly-stable bundle F ⊕ K, described generically in Section 2.2, can be related by considering the vacuum near the stability wall. This relationship is most easily illustrated using the specific example in Subsection 2.3. Begin with the Kähler moduli of the E 6 × U (1) theory on the stability wall in Figure 1. Then vary them continuously, moving away from the boundary and into the stable region of the Kähler cone. This should reproduce the physics of the E 6 compactification. As shown in [15,16], the effective theory both on and near the stability wall is described by a D-term associated with the enhanced gauged U (1) factor. It is given by where the charge 6 Q 2 = −3. The first term is a Kähler modulus dependent "Fayet-Iliopoulos" (FI) term. This is a multiple of the slope of the destabilizing sub-bundle, divided by the volume V of the Calabi-Yau threefold. The constants ǫ S and ǫ R are the usual expansion parameters defining four-dimensional heterotic M-theory [8]. It follows from the discussion in Subsection 2.3 that the FI term is positive in the non-supersymmetric (dark shaded) region of Figure 1, negative in the stable (light shaded) region, and vanishes on the boundary line between the two. The second term is the usual contribution to a D-term from charged matter. The fields shown in (2.17) are the E 6 singlets C 2 in Table 2. The positive definite field space metric G P Q appears since they are not generically canonically normalized. In the explicit example of Subsection 2.3, there are no C 1 fields in the spectrum. The 27 and 27 representations in Table 2, which are also charged under U (1), should appear in this D-term as well. However, the E 6 D-terms force the vevs of these fields to vanish, and hence, they can be safely ignored in the following discussion. Using the D-term (2.17), one can concretely specify the relationship between the effective theories in the stable, poly-stable and unstable regions of Figure 1. On the stability wall, µ(F) = 0 and the D U (1) contribution to the potential is minimized for C P 2 = 0. Hence, the theory is invariant with the spectrum of massless fields given in Table 2. Strictly speaking, the U (1) factor is Green-Schwarz anomalous. 
Hence, the associated gauge boson is not massless, even on the stability wall. The mass of this gauge boson was computed in [15,16,55,56]. On the stability wall it was found to be, schematically,

m²_{U(1)} ~ (ε_S² ε_R⁴ / s) G_{ij} c_1^i(F) c_1^j(F) ,   (2.18)

where G_{ij} = −∂² ln V / ∂t^i ∂t^j and s = Re S is the real part of the dilaton. This is parametrically lighter than the compactification scale and, hence, the Abelian gauge boson must be included in the four-dimensional effective theory.

What happens to the D-term (2.17) as one moves continuously off the stability wall and into the stable region of moduli space? Here, μ(F) < 0 and the C_2 fields acquire non-zero vevs so as to set D^{U(1)} = 0 and minimize the potential. The ⟨C_2^P⟩ ≠ 0 vevs thus spontaneously break U(1), reducing the symmetry to a pure E_6 gauge theory. Specifically, the mass of the U(1) gauge boson is enhanced [15,16] from (2.18) to

m²_{U(1)} ~ (ε_S² ε_R⁴ / s) G_{ij} c_1^i(F) c_1^j(F) + Q_2² G_{PQ̄} ⟨C_2^P⟩⟨C̄_2^{Q̄}⟩ .   (2.19)

As the ⟨C_2⟩ increase in magnitude, their contribution drives the U(1) gauge boson mass above the compactification scale. It must then be integrated out and removed from the four-dimensional theory. This process, both on and off of the stability wall, is simply a Higgs effect. Expanding each field as a small fluctuation around its vev and using D^{U(1)} = 0, the D-term (2.17) is given to linear order by

δD^{U(1)} = (3 ε_S ε_R² / 16) ∂_k(μ(F)/V) δt^k − Q_2 G_{PQ̄} ( ⟨C̄_2^{Q̄}⟩ δC_2^P + ⟨C_2^P⟩ δC̄_2^{Q̄} ) .   (2.20)

From V_{potential} = (1/2s)(δD^{U(1)})², canonically normalizing the kinetic energy and using (2.19), one can extract the massive Higgs field as a linear combination of δt^k and δC_2^P fluctuations. This is explicitly discussed in [15,16]. Suffice it here to say that on the stability wall, the Higgs field reduces to the linear combination of δt^1, δt^2 perpendicular to the line in Figure 1. The associated linear combination of Kähler moduli axions acts as the Goldstone boson and is "eaten" so as to give additional mass to the U(1) gauge boson. Thus, near the stability wall one entire complex linear combination of Kähler moduli becomes heavy due to the Higgs mechanism. As one moves away from the stability wall in Kähler moduli space, the vevs of the C_2 fields adjust so as to minimize the potential. As discussed in [15,16], the δC_2^P terms quickly become the dominant contribution to the Higgs field. Thus, in the stable region far from the wall, essentially one complex C_2 field is lost to the Higgs effect.

One can now explicitly describe the transition from the massless E_6 × U(1) spectrum on the stability wall, given in Table 2, to the E_6 zero-mode spectrum in the interior of the stable region, Table 1. Of the 21 C_2 fields on the stability wall, 1 of them is lost through the Higgs mechanism as one moves into the stable region. Integrating out the heavy U(1) gauge boson, the −3 charge of the remaining 20 C_2 fields can be ignored. These combine with the 67 ψ fields of Table 2 to correctly reproduce the 87 uncharged bundle moduli of the E_6 theory in Table 1. Furthermore, when the U(1) symmetry is integrated out, the quantum numbers distinguishing the two types of 27 fields at the stability wall - 3 with U(1) charge −2 and 36 with charge +1 - no longer label the spectrum. Thus, we find the expected 39 27 fields of Table 1. This correspondence between the massless spectrum near a stability wall and that in the interior of the stable region was proven in complete generality in [16] (note that in moving between the stable region and the poly-stable wall, only the chiral asymmetry need be preserved; the actual number of 27 and 27̄ representations does not necessarily remain the same - in particular, massless vector-like pairs on the stability wall can become massive in the stable region of moduli space; we will return to the issue of massive vector-like pairs, and possible constraints on them, in Section 7).

Finally, let us start once again on the stability wall. Now, continuously vary the moduli into the unstable region, where μ(F) > 0. In principle, the C_1 fields with U(1) charge Q_1 = +3 could cancel the positive FI term.
However, for the bundle in (2.11) and (2.12), we see from Table 2 that there are no C_1 fields present in the spectrum. Therefore D^{U(1)} ≠ 0 and supersymmetry is broken, as we expect from the stability analysis.

The Charged Bundle Moduli C_i and Branch Structure

In this subsection, for specificity, we consider rank three bundles whose Kähler cone contains at least one stability wall. Furthermore, our analysis is confined to a single wall where the SU(3) structure group decomposes into S[U(2) × U(1)]. Hence, the E_6 gauge group is enhanced by a single U(1) factor, giving rise to one Abelian D-term in the effective theory. Our discussion will, therefore, be applicable to the specific example discussed in Subsections 2.3, 2.4 and 2.5, but will be considerably more general. We emphasize that the type of conclusions drawn from this analysis will remain unchanged for bundles of higher rank, and for stability walls described by more than one D-term.

Consider a general rank three bundle V, destabilized by a single sub-bundle F as in (2.6), which generates Kähler cone sub-structure of the form discussed in Section 2.2. In general, for such a bundle, there are precisely two types of bundle moduli charged under the extended U(1) symmetry. These are denoted C_1, C_2 and arise from the cohomologies shown in Table 2. These charged bundle moduli, by acquiring vevs to cancel the FI term, play a central role in controlling the supersymmetry of the theory. In the specific example of Section 2.3, only negatively charged C_2 fields appeared in the spectrum. The D-term potential generated by these fields exactly reproduced the regions of slope stability and instability shown in Figure 1. For a more general bundle, however, it is possible that both fields C_1, C_2 in Table 2 are present in the spectrum. In this case, the U(1) D-term takes the form

D^{U(1)} = (3 ε_S ε_R² / 16) μ(F)/V − Q_1 G_{LM̄} C_1^L C̄_1^{M̄} − Q_2 G_{PQ̄} C_2^P C̄_2^{Q̄} .   (2.21)

Now there are two terms available to cancel the Kähler-moduli dependent FI term. As we will see, however, they play very different roles, and C_1, C_2 can never obtain non-zero vevs simultaneously.

To show this, first note that, in addition to the D-term (2.21), one must also consider the superpotential. Again ignoring E_6 non-singlets, this can be written as

W = λ C_1 C_2 C_1 C_2 + ... ,   (2.22)

where the indices on both fields and couplings are suppressed. In the stable region of Kähler moduli space, the four-dimensional effective theories we are considering have supersymmetric, Minkowski vacua. Therefore, as we vary the Kähler moduli away from the stability wall into the μ(F) < 0 region of bundle V, we must preserve supersymmetry and avoid introducing a cosmological constant. The relevant equations, in addition to the vanishing of the D-term (2.21), are

∂W/∂C_1^L = 0 ,   ∂W/∂C_2^P = 0 ,   W = 0 .   (2.23)

With μ(F) < 0 in (2.21), one might suppose that, to preserve supersymmetry, the fields C_1 and C_2 could both get vevs such that the last two terms in D^{U(1)} cancel the FI term. However, substituting these two non-zero vevs into equations (2.23), it is clear that no such solution is possible.
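The obstruction can be made concrete in a toy model with a single field of each type. The sketch below is ours: it assumes the lowest-order invariant term of our reconstruction of (2.22), W = λ(C_1 C_2)², and imposes the conditions (2.23); every supersymmetric solution has at least one of the two vevs equal to zero.

```python
# Toy check of the claim that C1 and C2 cannot both acquire vevs.  We keep
# only the lowest gauge-invariant term of (2.22), W = lam*(C1*C2)**2, with
# a single field of each type and a generic nonzero coupling 'lam', and
# impose the F-flatness conditions of (2.23).
import sympy as sp

C1, C2 = sp.symbols("C1 C2")
lam = sp.symbols("lam", positive=True)
W = lam * (C1 * C2) ** 2

conditions = [sp.diff(W, C1), sp.diff(W, C2), W]
print(sp.solve(conditions, [C1, C2], dict=True))
# [{C1: 0}, {C2: 0}] -- every branch has C1 = 0 or C2 = 0, never both nonzero
```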
Analytically, this is most easily verified by noting that, without loss of generality, one can choose a basis of field space so that only one of the C_1 fields and one of the C_2 fields has a non-vanishing vev. Thus, to move into the stable region of V and obtain a Minkowski vacuum, the only choice available is to take all C_1^L = 0 and to choose non-vanishing C_2^P vevs so that the first and last terms in (2.21) cancel.

What happens in the chamber of V where μ(F) > 0? Here, it would appear from (2.21), (2.23) that supersymmetry could still be preserved by the reverse happening; that is, C_1 fields getting non-vanishing vevs while all C_2 vevs are zero. However, as we show in the remainder of this subsection, within the context of our chosen geometry, i.e. the bundle V defined by (2.6), only C_2 fields can have non-zero vevs. Hence, in the μ(F) > 0 chamber of the Kähler cone supersymmetry is spontaneously broken by the D-term.

The key to explaining this fact, and distinguishing the fields C_1 and C_2, can be found in the associated algebraic geometry. Although they behave as charged matter fields on the stability wall, the C_1, C_2 fields can also be viewed geometrically as the moduli which control the "mixing" of the components of F ⊕ K together to form an indecomposable bundle. To see this, recall how matter fields arise in a heterotic compactification. For dimensional reduction, the ten-dimensional E_8 gauge fields, A, on the "visible sector" fixed plane are expanded in a decomposition which is related to the bundle structure group. On the stability wall, the relevant ansatz is

A = A_{F⊕K} + C_1^p ω^{(1)}_{p,x} T^{(1)x} + C_2^P ω^{(2)}_{P,y} T^{(2)y} + ... .   (2.24)

The dots indicate terms involving other fields, such as F, from Table 2. From (2.15) we see that the adjoint of E_8 breaks up into a series of pieces, one of which is (1,2)_3 and another (1,2)_{−3}, under the branching to E_6 × SU(2) × U(1). T^{(1)} and T^{(2)} in (2.24) are precisely these gauge group generators, with the indices x and y running over the 2 representation of SU(2). The symbols ω^{(1)} and ω^{(2)} denote harmonic one-forms valued in F* ⊗ K and F ⊗ K* respectively. Hence, the number of C_2 fields is found by counting the independent one-forms valued in F ⊗ K*, while the C_1 fields arise as the independent one-forms valued in F* ⊗ K. This can be re-expressed in terms of Ext-groups [28] as

H¹(X, F ⊗ K*) ≅ Ext¹(K, F) ,   H¹(X, F* ⊗ K) ≅ Ext¹(F, K) .   (2.25)

From (2.24) we see that, when we give a C field a vev, the ten-dimensional gauge connection changes its expectation value. Equation (2.25) tells us what this change means in terms of bundle structure. The Ext-groups correspond to the moduli spaces of two different extension bundles [28,14],

0 → F → V → K → 0   (2.26)

and

0 → K → Ṽ → F → 0 ,   (2.27)

respectively. V and Ṽ are referred to as an extension and its "dual" extension. They are not in general isomorphic. It follows from (2.25), (2.26) that when ⟨C_2⟩ ≠ 0, A in (2.24) becomes an irreducible connection on V. Similarly, comparing (2.24), (2.25) and (2.27), we see that giving C_1 a vev corresponds to A becoming an irreducible connection on Ṽ. However, since V and Ṽ are not isomorphic, for a given geometry one can have either non-vanishing ⟨C_2⟩ or non-vanishing ⟨C_1⟩, but not both.
These two "branches" of the vacuum space, where C 2 ≥ 0, and C 1 ≥ 0, respectively, intersect at exactly one locus, the stability wall, where both vevs vanish and the connection in A lives on the bundle F ⊕ K. Thus, by changing the vevs of the four-dimensional fields, one can move smoothly between non-isomorphic internal gauge bundles for heterotic compactifications 9 . In the following, we will discuss the theory corresponding to only one branch at a time. A more detailed study of this stability wall induced branch structure, and transitions between such theories, will appear separately [60]. Wall Induced Yukawa Textures We can now turn to the main question of this paper -can the existence of a stability wall constrain the physics of a compactification, even when the vacuum is in the interior of the stable region? The answer, as we will see, is affirmative. In this section, we continue to illustrate the main ideas using rank three bundles whose Kähler cone contains a stability wall where the SU (3) structure group Thus, on and near this wall, the E 6 gauge group is enhanced by a single U (1) factor, giving rise to one Abelian D-term in the effective theory. The types of conclusion drawn from this analysis remain unchanged for bundles of higher rank, and for stability walls with more than one D-term. Textures Near a Stability Wall Consider a heterotic compactification associated with a bundle V of the form (2.6). On and near the stability wall, the superpotential is constrained by the gauge symmetry of the four-dimensional theory, including the extra U (1). Using Table 2, the relevant matter field superpotential consistent 9 Note that the D-term in (2.21), and its associated quantities, are only defined up to an overall sign. However, the relative sign between the C terms and the FI term in (2.21) is fixed, and arises from the choice of embedding of [16] and anomaly cancellation [55]. Since we began by describing the geometry of the bundle in (2.6) and (2.26), here we have chosen the sign conventions in (2.21) so that the FI term is equal to a positive multiple of µ(F ). Had we begun with (2.27) instead, the opposite sign convention could, of course, be taken. Note that since µ(F ) = −µ(K), whichever sign convention is chosen, the sign of the FI term will be opposite in the two branches. with gauge invariance is given by 10 Note that no quadratic terms appear, since all of these superfields are zero-modes of the compactification. Furthermore, terms of dimension six or higher in E 6 non-singlet fields are not of interest to us, so we ignore them. Finally, we have displayed only the lowest dimensional terms required in our analysis. Each term can be multiplied by any positive integer power of C 1 C 2 . Such terms do not change the subsequent analysis and, hence, in the interests of brevity, we suppress them. On the stability wall µ(F) = 0 and, hence, the FI term in (2.21) vanishes. In order to have both D U(1) = 0 and a solution to (2.23), it follows that C 1 = C 2 = 0. Substituting this into (3.1), the most general tri-linear couplings possible between E 6 families on the stability wall are Note that only one type of coupling appears. All others, such as F 3 1 , vanish. This is an extremely restrictive texture of Yukawa couplings. The fact that a Yukawa texture emerges precisely on the stability wall is, perhaps, of limited interest. Although some model building has been carried out on such a locus [27], it is more common to build standard model-like physics in the interior stable region. 
Let us analyze, therefore, what happens to the texture (3.2) as we move into this chamber. Consider a point in the stable region close to, but not on, the stability wall. Here μ(F) < 0, which implies, through the vanishing of the D-term (2.21) and equations (2.23), that ⟨C_1⟩ = 0 and ⟨C_2⟩ ≠ 0. Using this in (3.1), the allowed cubic matter couplings become

λ_1 F_1 F_2 F_2 + λ_2 ⟨C_2⟩ F_2 F_2 F_2 + λ̄_1 F̄_1 F̄_2 F̄_2 + λ̄_2 ⟨C_2⟩ F̄_1 F̄_1 F̄_2 + λ̄_3 ⟨C_2⟩² F̄_1 F̄_1 F̄_1 .   (3.3)

Note that the non-zero C_2 vevs have allowed some Yukawa couplings missing in (3.2) - the F_2³, F̄_1² F̄_2 and F̄_1³ terms - to "grow back" from higher dimensional terms. This is not true of all Yukawa couplings, however. Specifically, the F_1³ and F̄_2³ terms are still forbidden, despite the fact that the extended U(1) gauge symmetry is spontaneously broken. That is, there remains a non-trivial texture.

Thus, we have demonstrated the existence of a non-trivial Yukawa texture induced by a stability wall, even for small deformations of the moduli into the stable region. However, can one extend the analysis of this subsection to moduli deep in the interior of the stable chamber? To answer this, let us recall the effective field theory descriptions associated with 1) being on the wall, 2) near the wall and 3) far from the wall in the stable region. On the wall, ⟨C_1⟩ = ⟨C_2⟩ = 0, both C_1, C_2 are massless and the U(1) vector boson has a non-zero mass given in (2.18). Since this mass is significantly smaller than the compactification scale, the extended U(1) should not be integrated out of the low energy theory. The superpotential is then restricted by the U(1) charges to expression (3.1) and the Yukawa couplings to (3.2). Moving away from the wall, ⟨C_1⟩ = 0 and ⟨C_2⟩ ≠ 0. However, the non-zero vevs of the C_2 fields enlarge the mass of the U(1) gauge boson via expression (2.19), give an equivalent mass to a linear combination of δt^k, δC_2, and give mass to one combination of the C_1 fields. As long as the mass of the U(1) gauge boson remains controllably below the compactification scale, the U(1) should still not be integrated out of the theory, and the superpotential continues to be given by (3.1) and the Yukawa couplings by (3.3). This defines what it means to be near the wall.

What happens far from the wall? By definition, this occurs when ⟨C_2⟩ approaches a value such that the two terms in (2.19) become of equal size. It then follows from (2.19) that the U(1) gauge boson and the δC_2 masses, as well as the C_1 mass, become as large as the compactification scale and, hence, these fields must be integrated out of the effective theory. There are two consequences of this. First, the linear combination of C_2 fields with the non-zero vev is no longer in the spectrum and, hence, one cannot write higher dimension terms proportional to powers of ⟨C_2⟩ as in (3.3). Second, the 27³ couplings that do occur are no longer necessarily constrained by the U(1) quantum numbers. Hence, it would appear that the Yukawa textures found near the stability wall do not necessarily persist into the interior of the stable region. However, as we now show, the Yukawa textures do persist. To prove this, we use the notion of holomorphy.
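Before turning to the holomorphy argument, note that the selection rules behind (3.1)-(3.3) amount to simple charge counting, which can be automated. In the sketch below (ours), the charges F_1 → −2, F_2 → +1, and their opposites for the 27̄ fields, are read off from Table 2; the only spurion on this branch is ⟨C_2⟩ with charge −3. The output reproduces the coupling allowed on the wall, the couplings that grow back, and those forbidden everywhere on this branch.

```python
# Enumerate which E6-invariant cubic couplings the extended U(1) allows.
# Charges (from Table 2): F1 -> -2, F2 -> +1 for the 27s, and F1b -> +2,
# F2b -> -1 for the 27-bars.  The only spurion in this branch is <C2>, of
# charge -3 (<C1> = 0), so a coupling of total charge q survives off the
# wall iff q = 3n for some integer n >= 1, at order <C2>**n.
from itertools import combinations_with_replacement

def texture(fields, spurion=-3):
    for trio in combinations_with_replacement(sorted(fields), 3):
        q = sum(fields[f] for f in trio)
        n, rem = divmod(q, -spurion)
        if q == 0:
            verdict = "allowed on the wall"
        elif rem == 0 and n > 0:
            verdict = f"grows back at order <C2>**{n}"
        else:
            verdict = "forbidden everywhere in this branch"
        print(" ".join(trio), "->", verdict)

texture({"F1": -2, "F2": +1})    # F1 F2 F2 on the wall; F2^3 grows back; F1^3 forbidden
texture({"F1b": +2, "F2b": -1})  # F1b F2b^2 on wall; F1b^2 F2b, F1b^3 grow back; F2b^3 forbidden
```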
Holomorphy of the Superpotential and General Textures

For a generic heterotic compactification which preserves N = 1 supersymmetry, has an E_6 GUT factor in its four-dimensional gauge group, and has vanishing cosmological constant, any matter superpotential Yukawa coupling in the effective low-energy theory is of the form λ F³, where F is either a 27 or a 27̄ of E_6. Furthermore, each coefficient λ must be a holomorphic function on the complex vacuum manifold M of flat directions of the effective potential energy. Note that these couplings only depend upon the C and φ fields, which we have already encountered, and the complex structure moduli of the Calabi-Yau threefold, z^a. Now consider the following general theorem: a holomorphic function which vanishes on an open subset of a connected complex manifold vanishes identically on that manifold. Since the F_1³ and F̄_2³ coupling coefficients vanish on the open subset of M near the stability wall, and since M can be covered by open sets on which these holomorphic coefficients agree on open intersections, we see from the above theorem that both the F_1³ and F̄_2³ couplings must vanish everywhere - that is, they vanish identically in the complete vacuum space, not just near the stability wall. An open cover of this form can be found on any smooth manifold. On the other hand, any Yukawa couplings, such as F_1 F_2² or F_2³, whose holomorphic parameters do not vanish in an open region near the wall, will not vanish anywhere in the interior of the stable chamber, with the possible exception of isolated regions of higher co-dimension. We conclude that: Yukawa textures appearing near the stability wall due to invariance under the extended U(1) charge persist throughout the entire stable region, arbitrarily far from the wall, even though the U(1) has been integrated out of the theory. This result follows simply from the holomorphicity of the superpotential.

We have been considering the branch of the vacuum where, near the wall, ⟨C_2⟩ ≠ 0 and ⟨C_1⟩ = 0. In general, as discussed in Subsection 2.6, there is a second branch defined by (2.27), where μ(K) < 0 (i.e. μ(F) > 0), ⟨C_1⟩ ≠ 0 and ⟨C_2⟩ = 0. In this second branch, we see from (3.1) that there can be non-vanishing Yukawa couplings proportional to ⟨C_1⟩, such as the λ̄_4 ⟨C_1⟩ F̄_2³ term, which vanish identically on the first branch. There is no contradiction with holomorphy here, because the two branches meet only at the stability wall, which contains no open subset of the vacuum space; that is, their intersection is necessarily closed. One can, therefore, have a holomorphic function, such as the F_1³ Yukawa coupling, that vanishes everywhere on one branch of the vacuum space and is non-zero on the other - there being no overlapping open sets to "communicate" between the two. It follows that we have to make our previous conclusion more specific. That is: in a given branch of the theory, Yukawa textures near a stability wall persist in the entire stable chamber of that branch. A stable region associated with a different vector bundle separated by a stability wall, that is, in a different branch of the theory, need not have identical Yukawa textures.

A Higher-Dimensional Perspective

Before proceeding, let us analyze from a higher-dimensional perspective what is happening when Yukawa couplings "grow back". Begin on the stability wall. Expand the dimensional reduction ansatz (2.24) to include, for example, the F-fields in Table 2. Then

A = A_{F⊕K} + C_1 ω^{(1)} T^{(1)} + C_2 ω^{(2)} T^{(2)} + F_1 ω^{(F_1)} T^{(3)} + F_2 ω^{(F_2)} T^{(4)} + ... .   (3.5)

The one-forms ω are all harmonic with respect to the connection, built out of the background gauge field A, appropriate to the representation of the gauge group within which each is valued. Dimensional reduction then determines the Yukawa coupling parameters as integrals of the cubic product of these forms over the Calabi-Yau threefold. For example, the F_2³ Yukawa coupling is proportional to

∫_X Ω ∧ f_{xyz} ω^{(F_2)x} ∧ ω^{(F_2)y} ∧ ω^{(F_2)z} ,   (3.6)

where Ω is the holomorphic three-form and f_{xyz} projects the wedge product of three one-forms onto the gauge singlet.
Thus, the texture (3.2) we observed on the stability wall, arising from the extended U(1) gauge invariance, can be viewed simply as the vanishing or non-vanishing of such integrals. For example, on the wall, integral (3.6) vanishes simply as a consequence of the contraction of the one-forms with f. As one moves away from the stability wall, the C fields must acquire non-zero vevs to cancel the FI term. Near the stability wall, where these vevs are small, their contribution to the connection can be dealt with perturbatively. Expanding C_i = ⟨C_i⟩ + δC_i, instead of (3.5) one could write

A = A' + δC_1 ω̃^{(1)} T^{(1)} + δC_2 ω̃^{(2)} T^{(2)} + F_1 ω̃^{(F_1)} T^{(3)} + F_2 ω̃^{(F_2)} T^{(4)} + ... .   (3.7)

The one-forms ω̃ can now be taken to be harmonic with respect to connections built out of

A' = A_{F⊕K} + ⟨C_1⟩ ω^{(1)} T^{(1)} + ⟨C_2⟩ ω^{(2)} T^{(2)} ,   (3.8)

and the Yukawa coupling parameters become integrals of cubic products of these forms. For example,

∫_X Ω ∧ f_{xyz} ω̃^{(F_2)x} ∧ ω̃^{(F_2)y} ∧ ω̃^{(F_2)z} .   (3.9)

Thus, the texture (3.3) near the stability wall, arising from the spontaneous breaking of the extended U(1) gauge invariance, can be viewed as the vanishing or non-vanishing of these integrals. As an example, (3.9) no longer vanishes.

What happens deep in the interior of a stable chamber? Far from the stability wall, the vevs of the C fields become so large that their contribution to the connection is comparable to A. At this stage, the perturbative expansion based on (3.8) breaks down and (3.5) becomes, schematically,

A = Ã + ... + F_2 ω̃ T^{(4)} + ... ,   (3.10)

where the one-forms ω̃ are harmonic with respect to connections built out of the indecomposable connection Ã. Unfortunately, Ã is no longer related to the reducible connection A on the stability wall via a perturbative expansion. Hence, a priori, one has no idea what the texture of the cubic ω̃ integrals is. Unlike the case near the wall, the texture here cannot be found by inserting the non-zero C vevs into (3.1). However, the analysis of the preceding subsection shows that a connection between the two theories can indeed be determined using the holomorphicity of the superpotential. With these observations on holomorphy and general textures in hand, we turn next to a more complicated, and restrictive, example of wall-induced Yukawa textures.

One Wall with Two D-Terms

In the previous section, we considered the case of a single stability wall in the Kähler cone where the bundle splits into two pieces. There are two immediate generalizations of this. Here, we describe what happens in cases where the bundle splits into more than two pieces on a single wall. In the next section, we discuss the situation where multiple stability walls are present inside the Kähler cone. The simplest case where a bundle can split into more than two pieces occurs for an SU(3) structure group. Therefore, as previously, we will illustrate the main ideas using rank three bundles whose Kähler cone contains a single stability wall. Now, however, the structure group decomposes into S[U(1) × U(1) × U(1)] on this wall. The conclusions drawn from this analysis remain unchanged for bundles of any rank.

Consider an SU(3) bundle V that splits on the stability wall, not as V = F ⊕ K as in the previous section, but rather as

V = l_1 ⊕ l_2 ⊕ l_3 ,   (4.1)

where l_1, l_2 and l_3 are line bundles. In the F ⊕ K case, F was the destabilizing sub-bundle, the vanishing of whose slope defined the stability wall. For the decomposition (4.1), the wall is instead defined by the simultaneous vanishing of two independent slopes, say μ(l_1) = μ(l_2) = 0 (since c_1(V) = 0, the third slope then vanishes automatically). Given (4.1), the group theory which determines which multiplets can appear in the four-dimensional theory near the stability wall is the branching of the 248 of E_8 under E_6 × U(1) × U(1),

248 → (78)_{0,0} ⊕ 2 (1)_{0,0} ⊕ ⊕_{i≠j} (1)_{q_i − q_j} ⊕ ⊕_i (27)_{q_i} ⊕ ⊕_i (27̄)_{−q_i} ,   (4.2)

where the q_i (satisfying q_1 + q_2 + q_3 = 0) denote the pairs of U(1) × U(1) charges associated with the line bundles l_i; the bold-face number is the dimension of the E_6 representation and the subscripts are the two U(1) charges. The multiplicity of each such multiplet is determined by the number of zero-modes of the associated six-dimensional Dirac operator.
These are given by the dimensions of bundle-valued cohomology groups. The representations, field names and the associated cohomologies for a generic bundle of type (4.1) are listed in the first three columns of Table 3. Note that there are now six different types of charged bundle moduli, that is, C-fields, of the kind described in Section 2.6. As discussed in that subsection, the C fields are intimately related to the branch structure of the theory. We now generalize the analysis of Section 2.6 to the present case. On this wall the gauge symmetry is enhanced by two U(1) factors, and the FI terms of the two associated D-terms contain the slopes μ(l_1) and μ(l_2) respectively. Although both slopes vanish on the stability wall, the assumption that the associated line bundles destabilize V implies that their slopes become negative in the interior of the stable chamber.

Now note that, in addition to these two D-terms, one must consider the superpotential. Ignoring the E_6 non-singlets, this can be written, as in (4.3), as a sum of the gauge-invariant products of the six types of C fields. For simplicity, here and in the remainder of the paper, we suppress indices and the coefficients in front of each term. In addition, we work only to the dimension required for our analysis. In any stable region, the four-dimensional effective theory has a supersymmetric vacuum with vanishing cosmological constant. Therefore, as we vary the Kähler moduli away from the stability wall into the μ(l_1) < 0, μ(l_2) < 0 region, in addition to the vanishing of the two D-terms, we must set the F-terms of all the C fields, as well as the superpotential itself, to zero. Imposing these conditions, we find that there are six supersymmetric branches associated with this stability wall in the Kähler cone - each branch specified by a pair of non-vanishing C fields. For example, one branch is given by

⟨C̄_2⟩ ≠ 0 ,   ⟨C_3⟩ ≠ 0 ,   (4.5)

where all other ⟨C⟩ = 0. In terms of sequences, the different possible C field vevs correspond to the different ways of building a bundle V from the three constituent line bundles l_1, l_2 and l_3. Let us take the case (4.5), where C̄_2 and C_3 have non-zero vevs, as a specific example. Consider the two sequences

0 → l_1 → W → l_3 → 0 ,   (4.6)
0 → l_2 → V → W → 0 .   (4.7)

The moduli space of the first sequence is described by Ext¹(l_3, l_1) ≅ H¹(l_1 ⊗ l_3*). Therefore, this extension is non-trivial, that is, W ≠ l_1 ⊕ l_3, if and only if one is at a non-zero element of this cohomology group. We see from Table 3 that this corresponds, in field theory language, to ⟨C̄_2⟩ ≠ 0. Sequence (4.7) is a non-trivial extension if and only if one is at a non-trivial element in Ext¹(W, l_2) ≅ H¹(l_2 ⊗ W*). Using the dual sequence to (4.6), we find

0 → l_2 ⊗ l_3* → l_2 ⊗ W* → l_2 ⊗ l_1* → 0 .   (4.8)

Since all vevs for the C̄_1 fields vanish, it follows from Table 3 that this branch is confined to the zero-element of H¹(l_2 ⊗ l_1*). The long exact sequence associated with (4.8) then simplifies to

0 → H¹(l_2 ⊗ l_3*) → H¹(l_2 ⊗ W*) → H¹(l_2 ⊗ l_1*) → ... .   (4.9)

It follows that any non-zero element of H¹(l_2 ⊗ l_3*), the cohomology associated with the fields C_3, maps to a non-zero element of H¹(l_2 ⊗ W*), the cohomology associated with bundle (4.7). That is, the deviation of the bundle, V, away from its split point in sequence (4.7) is controlled by the ⟨C_3⟩ ≠ 0 condition in the field theory. Putting everything together, we conclude that V in (4.7) is indeed the bundle corresponding to the branch of the vacuum space where ⟨C̄_2⟩ ≠ 0, ⟨C_3⟩ ≠ 0 and all other C vevs vanish. A similar analysis can be performed for any other allowed branch.

We now turn to an analysis of the allowed Yukawa textures. All of the fields in Table 3, if present in a specific example, are massless near the wall.
Table 3: The representations, field content and cohomologies of a generic E₆ × U(1) × U(1) theory associated with a poly-stable bundle V = l₁ ⊕ l₂ ⊕ l₃ on the stability wall. The multiplicities for the explicit bundle defined by (4.16) are given in the fourth column.

The most general superpotential for cubic matter interactions invariant under the E₆ × U(1) × U(1) symmetry, including the purely C field superpotential in (4.3), is given in (4.10), where terms are shown in order of increasing dimension and we do not display any coefficients or indices. No quadratic terms appear, since all of these superfields are zero-modes of the compactification. Furthermore, interactions of dimension six or higher in the E₆ non-singlet fields f and f̄ are not relevant to the discussion, so we ignore them. Finally, only the lowest dimension terms required in our analysis are displayed. Each interaction in (4.10) can be multiplied by any positive integer power of neutral combinations of C fields. These do not change the subsequent analysis and, hence, in the interests of brevity, we suppress them.

Examining (4.10), we see that the only Yukawa couplings present on the stability wall, where all C field vevs vanish, are those listed in (4.11). This is a very restrictive texture. As in the previous section, some of the missing Yukawa couplings can "grow back" as one moves away from the stability wall into a stable chamber of the Kähler cone. Returning to (4.10), it is clear that there are several possible Yukawa textures that can result from the splitting of an SU(3) bundle into three line bundles on the stability wall. Which texture occurs depends on which C fields get non-zero vevs, that is, on which branch of the theory one is. For the representative branch discussed above, where ⟨C̃₂⟩ ≠ 0, ⟨C₃⟩ ≠ 0 and all other C vevs vanish, we find the couplings (4.12). Comparing to (4.11), it follows that there are five different types of Yukawa couplings which can "grow back" as we deform to the indecomposable bundle described by (4.6) and (4.7). These are shown in boldface. Be this as it may, it is important to note that there remain many Yukawa couplings, such as f₁³, f̄₂³ and f₂f₁², which are forbidden by the extended U(1) × U(1) symmetry. Finally, using the holomorphy analysis from subsection 3.2, we conclude that everywhere in this stable chamber the Yukawa texture is given by (4.13).

An Example

As an example of a stability wall of the type discussed in this section, consider the bundle defined by (4.16) on a complete intersection Calabi-Yau threefold. Given this explicit example, one can calculate the multiplicity of each multiplet described in (4.2). These are presented in the fourth column of Table 3. Note that there are 20 f₂ fields and no other 27 multiplets of E₆. Additionally, there are 40 f̄₃ fields but no other $\overline{27}$ anti-generations. Importantly, the only C fields which appear are C̃₂ and C₃, precisely the fields that acquired non-zero vevs in our preceding discussion. Combining this information with (4.13), we find that there are no Yukawa couplings at all, either between three 27's or between three $\overline{27}$'s. The residual symmetries left over in the interior of this stable region, as a result of the presence of the stability wall, completely remove all couplings between the matter families in this example. This is a good illustration of the importance of residual symmetries.
In some cases they can be extremely restrictive, and could forbid some, or all, of the interactions required by phenomenology.

Two Walls in the Kähler Cone

In Section 3, we considered a single stability wall in the Kähler cone where the bundle split into two pieces. This was generalized in the preceding section to the case of a bundle which split into three or more pieces, again on a single stability wall. More generally, however, a supersymmetric chamber in the Kähler cone can be surrounded by multiple stability walls. In this section, we turn our attention to this situation. The simplest examples occur in vacua with h^{1,1} = 2. In this case, there can be at most two stability walls bounding a supersymmetric region (see Figure 2). In the most basic examples, the vector bundle splits into just two pieces on each wall. We will restrict our discussion to rank three cases in order to illustrate these multi-wall scenarios and the Yukawa textures they give rise to. We emphasize, however, that the general type of conclusions drawn from this analysis remains unchanged for h^{1,1} ≥ 3, for vector bundles of any rank, and for more general decompositions of the bundle.

Clearly, the analysis of Sections 2 and 3 applies to each of the two stability walls individually. Each wall, therefore, places constraints on the terms in the four-dimensional theory. The question of exactly how these constraints interrelate, however, is not easily answered. Note that the description of the effective theory, including the labeling of the fields, and possibly even the number of vector-like pairs, changes between boundaries. For example, consider a vacuum which, in the interior of the stable region, has three chiral 27 matter families of E₆. Furthermore, assume that at the "upper" stability wall no family/anti-family pairs appear and that two families get charge q₁ and one family charge Q₁ under the extended U(1) symmetry. Now suppose that at the "lower" boundary this second stability wall gives two families of charge q₂ and one of charge Q₂ under its extra Abelian gauge group. In general then, every field in the problem can carry two additional quantum numbers, one associated with the U(1) at the upper boundary and the second with the Abelian symmetry at the lower boundary. Each U(1) gauge symmetry only appears near the stability wall which gives rise to it, and so the associated charges only have meaning in that part of field space. We have two "near wall" theories with no overlapping region of validity. Given this, when we consider the fields at a generic point in the slope-stable region, how do we correlate the charges which they acquire as we near each of the two walls? For example, do we have two charged objects which pick up charge q₁ near one wall and q₂ near the other, or perhaps simply one such object, together with fields carrying charges Q₁ and q₂? As may be expected, answering this type of question and, hence, describing the physics of multiple stability walls is, in general, example dependent. Untangling the most general constraints on the theory requires a careful examination of the chosen fields and geometry. There are some cases where the result is particularly simple, however, and we will give an illustrative example here.

To begin, consider the complete intersection manifold (5.1) and the bundle V on it defined in (5.2). A stability analysis as in [16,19,23] reveals that this bundle is destabilized by a pair of rank two sub-bundles: F₁, with c₁(F₁) = (−3, 1), and F₂, with c₁(F₂) = (2, −1).
The vanishing of the slope of the first of these sub-bundles, F₁, provides a "lower" boundary wall to the stable region of Kähler moduli space, while the vanishing of the slope of the second, F₂, provides an "upper" boundary to this region. The stability wall structure for this example is given in Figure 2. We will consider each boundary in turn, and the effective field theory associated with it, before combining our observations to find the constraints on the full theory at a generic point in the stable region.

Let us begin our analysis on the "lower" boundary wall, defined by the sub-bundle F₁. For the theory to be supersymmetric, the bundle V in (5.2) must "split" on this stability wall into the direct sum F₁ ⊕ K₁. The resulting fields, transforming under the extended E₆ × U(1) gauge symmetry, are given in Table 4. To third order in the matter fields, the invariant superpotential takes a correspondingly restricted form. As always, we ignore irrelevant higher dimension terms. Note that on the lower stability wall, all 27³ Yukawa couplings are forbidden entirely. Furthermore, the $\overline{27}^3$ couplings exhibit a very restrictive texture. For example, the f̄ᵢ³, i = 1, 2, 3 terms are absent. What happens for small deformations away from this wall into the stable chamber? Since one can describe the stable bundle V in terms of this destabilizing sub-bundle as the extension (5.6), we see that C₁ ∈ H¹(X, F₁ × K₁*) must acquire a non-zero vev in order to cancel the FI piece of the D-term associated with this boundary, see (2.17). As a result, the $\overline{27}^3$ Yukawa coupling C₁f̄₃³ can "grow back" near this stability wall. It follows from the holomorphy analysis of subsection 3.2 that one expects the texture (5.7) everywhere in the interior of the stable chamber.

An analogous analysis applies at the "upper" boundary wall, defined by F₂, where V splits as F₂ ⊕ K₂; the resulting fields, transforming under the associated extended E₆ × U(1) gauge symmetry, are given in Table 5. The relevant superpotential is now given in (5.8). Note that on the upper stability wall, the $\overline{27}^3$ Yukawa couplings exhibit a very restrictive texture. Furthermore, all 27³ terms are disallowed. What happens for small deformations away from this wall into the stable chamber? Note that while both charged moduli C̃₁, C̃₂ are present (recall that these are the moduli responsible for "re-mixing" F₂ ⊕ K₂ into V in (5.2)), as discussed in Section 2.6, only one of them can get a vev in the stable region. Since F₂ ⊂ V is the destabilizing sub-bundle at the upper boundary, it is clear that we can describe V in the stable region as the extension (5.9). As a result, it is C̃₁ ∈ H¹(X, F₂ × K₂*) which controls the movement away from the upper stability wall into the indecomposable gauge configuration. This can also be seen by inspecting the charges of the various fields in the U(1) D-term associated with the upper boundary, see (2.21). It follows from the slope of F₂ in Figure 2 that it is the C̃₁ fields that must acquire a non-zero vev. Furthermore, the (C̃₁C̃₂)² term in (5.8) ensures that the vevs of C̃₂ must be zero in the stable region. As a result, of the five C field dependent matter couplings in (5.8), three "grow back" to contribute to Yukawa couplings near this stability wall. It follows from the holomorphy analysis of subsection 3.2 that one expects the texture (5.10) everywhere in the interior of the stable chamber.

Now consider the theory deep in the stable region, away from these two boundaries. Using (5.2), one can find the spectrum of the "standard" heterotic compactification in the stable region. Since V is a stable SU(3) bundle, H⁰(X, V) = H³(X, V) = 0 [2,17].
It follows that the long exact sequence in cohomology associated with (5.2) splits into short pieces. As a result, we see that the spectrum consists of four chiral families and nine vector-like pairs of 27, $\overline{27}$ matter multiplets of E₆. How then does this general theory relate to the effective theories at each boundary wall? To answer this, consider the alternative descriptions of V in terms of F₁, (5.6), and in terms of F₂, (5.9). From the associated long exact cohomology sequences, we find how the matter cohomologies decompose in the stable region. It then follows from Tables 4 and 5 how the fields of the bulk theory match onto the charged fields of each wall theory, as recorded in (5.17) and (5.18). The key point in this example is that, on the lower stability wall, all of the families acquire the same charge. Equally, on the upper stability wall, all of the anti-families acquire the same charge. Thus, we are able to correlate the charges picked up by the matter fields at the two different walls in an unambiguous manner! Note that the number of chiral families and the number of vector-like pairs stay the same throughout moduli space.

Using this result, one can now impose the constraints from each stability wall on couplings throughout the entire Kähler cone. First, observe from (5.7) and (5.17) that the constraints from the lower wall completely forbid any 27³ Yukawa couplings in the stable chamber. It then follows from (5.17) that the couplings f̄₁²f̄₂, f̄₂²f̄₁ and f̄₂³ in (5.10), while not forbidden by gauge invariance at the upper boundary, are nonetheless vanishing everywhere due to the constraints from the lower wall. Second, observe from (5.7) and (5.18) that the lower wall constraints do allow $\overline{27}^3$ Yukawa couplings in the stable chamber, but only in a specific texture with nine terms. We see from (5.18) that this is group theoretically consistent with the existence of the f̄₃³ couplings in (5.10). However, the gauge symmetry of the upper wall alone would allow $\binom{9}{3} + 9(9-1) + 9 = 84 + 72 + 9 = 165$ such terms, the number of symmetric cubic monomials in nine fields. It follows that additional texture is imposed by the constraints of the lower wall to reduce this number to 9. Note from (5.7) and (5.10) that these nine holomorphic parameters, while non-vanishing in the interior of the stable region, must depend on the bundle moduli in such a way that they all go to zero at the upper boundary, while eight remain non-zero at the lower wall. We conclude that it is possible to trace the constraints from both boundary stability walls into the interior of the stable chamber. At a generic point of this four-generation, nine vector-like pair E₆ theory, we find that there are no 27³ couplings allowed and only 9 specific $\overline{27}^3$ couplings.

Textures in a Three Generation Model

One Heavy Family and Other Textures

Previously, we investigated Yukawa textures arising from stability walls of SU(3) bundles. In this section, we discuss stability walls and their constraints on matter textures in a more phenomenologically realistic context. Specifically, we consider the SO(10) theory associated with an SU(4) bundle. We further assume that this bundle is destabilized by a single rank two sub-bundle, which gives rise to a single stability wall in the Kähler cone and a single D-term. In the stable chamber, the structure group of V is SU(4) and, hence, the low energy theory has a gauged SO(10) symmetry. As in previous sections, along the wall of poly-stability the vector bundle splits into a direct sum, (6.1), where now both sub-bundles have rank two. The structure group then changes from SU(4) to S[U(2) × U(2)] and, as a result, the symmetry of the four-dimensional theory is enhanced by an anomalous U(1).
On the stability wall, where V decomposes as (6.1), the fields carry an extra U(1) charge in addition to their SO(10) content. We present the generic zero-mode spectrum in Table 6. Our goal is to illustrate how the stability wall can constrain Yukawa textures in a phenomenologically realistic context. Therefore, we will only consider bundles leading to three generations of chiral matter. To simplify the analysis, these bundles will be further restricted so that the multiplicities of the 16 and 10 fields on the stability wall, and, hence, in any of its branches, are those given in (6.2), and no other SO(10) non-singlets occur. The theory generically contains both C₁ ∈ H¹(X, F₁ × F₂*), with charge +2, and C₂ ∈ H¹(X, F₂ × F₁*), with the opposite charge. To cubic order in the matter fields, the SO(10) × U(1) invariant superpotential is given in (6.3). As always, we ignore irrelevant higher dimension terms.

On the stability wall, the FI piece of the associated D-term vanishes. To have an N = 1 supersymmetric Minkowski vacuum, it follows from (2.21) and (6.3) that ⟨C₁⟩ = ⟨C₂⟩ = 0. Therefore, only a single Yukawa coupling survives on the wall. This is a very restrictive Yukawa texture, giving non-vanishing mass to only one matter family. What happens for small deformations away from this wall into a stable chamber? To answer this, one has to specify which of the two rank two sub-bundles in (6.1) destabilizes V. Let us first choose this to be F₂. Then, V can be constructed from the corresponding extension sequence, and it is C₂ ∈ H¹(X, F₂ × F₁*) that controls the movement away from the stability wall into the indecomposable gauge configuration. This can also be seen by inspecting the charges of the various fields in the U(1) D-term associated with the wall, see (2.21). Since we have chosen µ(F₂) < 0 in the stable region, it is the C₂ fields that must acquire a non-zero vev. Furthermore, the (C₁C₂)² term in (6.3) ensures that the vevs of C₁ must vanish in the stable region. As a result, the two C₂ field dependent matter couplings in (6.3) "grow back" to contribute to Yukawa couplings near the stability wall. It follows from the holomorphy analysis of subsection 3.2 that one expects this enlarged texture everywhere in the interior of the stable chamber for this branch of the vacuum. Note that if the Kähler moduli were stabilized close to, but not on, the stability wall, the four-dimensional SO(10) theory would have one heavy family and a hierarchy for the remaining two generations controlled by powers of ⟨C₂⟩.

Let us now consider the second branch, where F₁ is the destabilizing rank two sub-bundle. Then, V can be constructed from the analogous extension sequence, and it is C₁ ∈ H¹(X, F₁ × F₂*) that controls the movement away from the stability wall into the indecomposable gauge configuration. Since now µ(F₁) < 0, it follows from c₁(V) = 0 that µ(F₂) = −µ(F₁) > 0. Hence, the FI term in D_{U(1)}, which is proportional to µ(F₂), is positive, and it is the C₁ fields that acquire a non-zero vev while the vevs of C₂ vanish. One must then conclude from (6.3) that no Yukawa couplings can "grow back" near the stability wall in this branch. It follows from the holomorphy analysis of subsection 3.2 that one expects the single-coupling wall texture to persist everywhere in the interior of the stable chamber for this branch of the vacuum. Therefore, stability wall constraints can provide a natural way of obtaining a single heavy family in heterotic three family vacua.

Table 6: The spectrum of a generic SU(4) bundle decomposing into two rank 2 bundles, F₁ ⊕ F₂, on the stability wall. The resulting structure group is S[U(2) × U(2)].
An Explicit Three Generation Model

In this subsection, we present an example of a three generation model with the stability wall structure described above and only one heavy family. To begin, consider a vector bundle over a simply connected Calabi-Yau threefold, X, which admits a fixed-point free, discrete automorphism group Z₃ × Z₃. This discrete symmetry allows one to construct a smooth quotient manifold X̂ = X/(Z₃ × Z₃) that is not simply connected; indeed, the fundamental group of the quotient manifold is Z₃ × Z₃. By choosing a vector bundle V over the "upstairs" manifold, X, which admits an equivariant structure under this symmetry, one can create a bundle V̂ on the "downstairs" threefold X̂. This "quotienting" process is somewhat convoluted mathematically and, since it is not the central focus of this paper, we present here only the spectrum and properties of the final bundle V̂ on X̂. The derivation of this bundle in terms of its descent from the "upstairs" theory, as well as the relevant technical details, are given in Appendix B.

The Calabi-Yau threefold is taken to be a Z₃ × Z₃ quotient of the bicubic hypersurface in P² × P². The basis of divisors on X̂ is related to that on X via the quotient map. Specifically, given the projection map q : X → X̂, a divisor Ĥ of X̂ is related to some divisor H on X via q*(Ĥ) = H. Using this, we choose a basis of divisors Ĥᵢ, i = 1, 2 on X̂ related to the Hᵢ via pull-back. Using the divisor/line bundle correspondence, the basis of Kähler forms of X̂ is then related to those on X by q*(Ĵ₁) = 3J₁ and q*(Ĵ₂) = J₁ + J₂. We define the rank four SU(4) vector bundle on X̂ via the exact sequence

0 → F̂₁ → V̂ → F̂₂ → 0 , (6.11)

where F̂₁, F̂₂ are rank two bundles constructed from the sequences (6.12) and (6.13), in which Q₁, Q₂ are rank three bundles defined via their pull-backs to sums of line bundles on X. Since c₁(F̂₁) = (−2, 4) and c₁(F̂₂) = (2, −4), we have c₁(V̂) = 0, and V̂ in (6.11) defines an SU(4) bundle over X̂. The resulting four-dimensional theory has SO(10) gauge symmetry. The matter spectrum of V̂ is derived in Appendix B and given by

n₁₆ = h¹(X̂, V̂) = h¹(X̂, F̂₁) + h¹(X̂, F̂₂) = 2 + 1 = 3 , (6.14)

together with two 10 multiplets. This is very similar to (6.2) in the preceding subsection, with the slight exception that there are two 10's.

The bundle (6.11) is not stable everywhere in the Kähler cone. By construction, V̂ is destabilized by the bundle F̂₁ in some region of Kähler moduli space. The region of stability is shown in Figure 3. This should be compared with the stability wall associated with the "upstairs" bundle on X, presented in Figure 4 of Appendix B. (The stability wall structure of a bundle V̂ on a quotient manifold is entirely determined by the stability structure of V on X. Since only those sub-bundles of V which are equivariant under the finite group action descend to sub-bundles of V̂ on X̂, the number of stability walls can at most decrease in going from X to X̂.) On the stability wall, V̂ decomposes as V̂ → F̂₁ ⊕ F̂₂ and the structure group changes from SU(4) to S[U(2) × U(2)]. The resulting field content is presented in Table 7. This is a subset of the generic spectrum of Table 6. Note that, in addition to the matter multiplicities (6.14), there are 9 C₁ type fields. However, no C₂ fields appear. Hence, this example describes the second branch of the generic vacuum discussed above. It follows that the single-coupling texture persists everywhere in the interior of the stable chamber. We conclude that this explicit vacuum naturally has one heavy family within the context of a realistic particle physics model.

Table 7: The "downstairs" field content of the explicit bundle decomposition V̂ → F̂₁ ⊕ F̂₂ defined by (6.11), (6.12) and (6.13).

Constraints on Massive Vector-Like Pairs

Extended U(1) gauge symmetry constrains the superpotential on and near any stability wall and, by holomorphicity, in the interior of each stable chamber in the Kähler cone.
So far, we have focused on the implications of this for cubic matter interactions, that is, Yukawa textures. However, the existence of stability walls constrains all terms in the superpotential, not just Yukawa couplings. In this section, we broaden our analysis to couplings involving vector-like pairs of matter multiplets. We show that extended U(1) symmetry can forbid many, and sometimes all, such pairs from gaining superpotential mass terms. This can have important implications for heterotic model building.

Generically, the zero-mode spectrum of a bundle on a stability wall arises from the cohomology of the sub-bundles into which it decomposes. In particular, matter can occur in both a non-singlet representation and its conjugate representation of the low-energy gauge group. All such matter can occur on the stability wall, its multiplicity depending on the specific vacuum chosen. As one moves away from the wall into a stable chamber, the zero-mode spectrum can change. The Atiyah-Singer index theorem [28] requires that the chiral asymmetry of the matter representations be preserved. For example, for a stable SU(3) bundle V which decomposes into V = F ⊕ K on the stability wall, the chiral asymmetry satisfies

h¹(X, V) − h¹(X, V*) = (h¹(X, F) − h¹(X, F*)) + (h¹(X, K) − h¹(X, K*)) .

However, the actual number of matter representations need not stay the same. Specifically, as one moves away from the wall, certain U(1) charged C fields get a vev so as to preserve N = 1 supersymmetry. In principle, these can induce a non-vanishing mass for any vector-like pair of matter representations. As we have already seen, however, the extended U(1) symmetry imposes serious constraints on cubic, and higher, matter couplings. We expect there to be a vector-like "mass texture" as well. As throughout this paper, we find it easiest to analyze vector-like pair masses within the context of explicit examples.

One Wall with One D-Term

Let us first consider the class of vacua discussed in Subsection 2.6 and Section 3. In this case, h^{1,1}(X) = 2 and V is an SU(3) bundle which decomposes at a single stability wall into V = F ⊕ K, where F and K have rank one and two respectively. The generic spectrum on the wall arises from the product cohomologies of F and K, and is labeled by representations of the extended E₆ × U(1) four-dimensional gauge group. This is presented in Table 2. The most general gauge invariant superpotential involving terms cubic in the F's was given in (3.1). We now extend this to include all relevant terms involving 27 · $\overline{27}$ vector-like pairs of matter multiplets. The result is given in (7.2), where terms are shown in order of increasing dimension and we have suppressed all parameters and indices. Note that no quadratic terms appear since, on the wall, all matter fields are zero-modes. Finally, each term can be multiplied by any positive power of C₁C₂. Such terms do not change the subsequent analysis and, in the interest of brevity, we ignore them. As discussed previously, on the stability wall the requirement of N = 1 supersymmetry and vanishing cosmological constant constrains ⟨C₁⟩ = ⟨C₂⟩ = 0. It follows from (7.2) that all such mass terms vanish on the wall, as recorded in (7.3), consistent with the fact that F₁, F₂ and F̄₁, F̄₂ are all zero-modes on the wall. What happens as we move into the interior of a stable region?
As discussed in Section 2.6, there are two stable branches of moduli space. These are specified by choosing either ⟨C₁⟩ = 0, ⟨C₂⟩ ≠ 0, corresponding to µ(F) < 0, or ⟨C₁⟩ ≠ 0, ⟨C₂⟩ = 0, corresponding to µ(K) < 0. Consider the first branch. In this case, it follows from (7.2) and the holomorphicity of the superpotential that everywhere in this chamber of Kähler moduli space the vector-like mass terms take the form (7.4). Note that the non-zero C₂ vevs have allowed some vector-like mass terms missing in (7.3) to "grow back". These are expressed in boldface, as were the Yukawa couplings that regrew away from the wall. We conclude that in the interior of the stable chamber specified by ⟨C₁⟩ = 0, ⟨C₂⟩ ≠ 0, the superfields F₂ and F̄₁ appear in non-vanishing mass terms. However, the extended U(1) gauge symmetry on the stability wall forbids vector-like masses for F₁ and F̄₂ from developing.

Now consider the second branch. In this case, it follows from (7.2) and holomorphicity that everywhere in this stable chamber

W_{vec-like pairs} = F₁F̄₂ . (7.5)

Hence, in the interior of the stable chamber specified by ⟨C₁⟩ ≠ 0, ⟨C₂⟩ = 0, the extended U(1) gauge symmetry on the stability wall, while allowing the superfields F₁ and F̄₂ to appear in mass terms, forbids vector-like masses for F₂ and F̄₁. This is a clear example where the stable chambers next to a stability wall exhibit non-trivial vector-like mass textures, allowing some mass terms while forbidding others.

One Wall with Two D-Terms

We now move on to consider the class of vacua discussed in Section 4. In this case, h^{1,1}(X) = 2 and V is an SU(3) bundle which decomposes at a single stability wall into V = l₁ ⊕ l₂ ⊕ l₃, where the lᵢ, i = 1, 2, 3 are line bundles. The generic spectrum on the wall arises from the product cohomologies of l₁, l₂, l₃ and is labeled by representations of the extended E₆ × U(1) × U(1) four-dimensional gauge group. This is presented in Table 3. The most general gauge invariant superpotential involving cubic couplings in the F's was given in (4.10). We now extend this result to include all relevant terms involving 27 · $\overline{27}$ vector-like pairs of matter multiplets. The result is given in (7.6). No quadratic terms appear, since all superfields are zero-modes on the wall. We have only indicated terms involving at most two different C fields; vevs of products of three or more different C fields must necessarily vanish in any branch. Finally, each term can be multiplied by any positive integer power of neutral combinations of C fields. Such terms do not change the subsequent analysis. On the stability wall, the requirement of supersymmetry and vanishing cosmological constant constrains the vevs of each C field to vanish. Hence,

W^{wall}_{vec-like pairs} = 0 , (7.7)

consistent with the fact that all f and f̄ matter fields are zero-modes on the wall. What happens as we move into a stable region? As discussed in Section 4, there are six stable branches of the moduli space, each specified by a different pair of C field vevs being non-vanishing, with all remaining vevs zero. To be specific, let us choose the branch defined by ⟨C̃₂⟩ ≠ 0, ⟨C₃⟩ ≠ 0. It then follows from (7.6) and holomorphicity that in the interior of this branch of Kähler moduli space the mass terms take the form (7.8). Note that the non-zero C̃₂, C₃ vevs have allowed some vector-like mass terms missing in (7.7) to "grow back". Therefore, in the interior of the stable chamber specified by ⟨C̃₂⟩ ≠ 0, ⟨C₃⟩ ≠ 0, the matter multiplets f₃, f̄₁ and f̄₂ appear in non-vanishing mass terms.
However, the two extended U(1) gauge symmetries on the stability wall forbid vector-like masses for f̄₃, f₁ and f₂ from developing. We conclude that the extended U(1) gauge symmetries on stability walls in the Kähler cone can lead to restrictive vector-like mass textures. Generically, these textures can disallow some vector-like pairs from having a superpotential mass term, a restriction of consequence for phenomenology. Hence, when building realistic smooth heterotic models, it is essential to include all stability walls and their associated constraints in the analysis. This makes theories with only chiral matter appear much more attractive from this perspective.

Conclusions

In previous work [53,15,16], "stability walls", that is, boundaries separating regions in Kähler moduli space where a non-Abelian internal gauge bundle either preserves or breaks supersymmetry, were explored. The four-dimensional effective theories valid near such boundaries provide us with an explicit low-energy description of the supersymmetry breaking associated with vector bundle slope stability. The central feature of a stability wall is that, near such a locus in moduli space, the internal gauge bundle decomposes into a direct sum and, as a result, the four-dimensional effective theory is enhanced by at least one Green-Schwarz anomalous U(1) symmetry. In this paper, we have used this effective theory to investigate the structure and properties of heterotic theories with stability induced sub-structure in their Kähler cones. Specifically, we have used the theory near the stability wall, with its enhanced U(1) symmetries, to constrain the form of the N = 1 superpotential W. Using the fact that the superpotential is a holomorphic function, it is possible to extend these constraints throughout the entire moduli space. As a result, deep in the stable regions of the Kähler cone, where supersymmetric heterotic compactifications are normally considered, strong constraints on the superpotential still persist. Without knowledge of the global supersymmetric properties of the vector bundle (that is, a full understanding of its slope stability), these important textures would be inexplicable or, more seriously, would go unnoticed if the Yukawa couplings were not explicitly computed.

We would like to point out that some of the couplings that are disallowed in the perturbative textures discussed in this paper may well be reintroduced by non-perturbative effects, such as membrane instantons [61,62]. Such couplings would be hierarchically smaller than those present perturbatively. This interesting possibility, which is also strongly constrained by the additional Abelian symmetries on the stability walls, will be addressed in a future publication. We stress again that the existence of stability walls, and their consequences, are the generic situation for a heterotic compactification. In most cases, vector bundles that are slope stable somewhere in moduli space are not slope stable for all polarizations. Hence, the constraints described in this paper must be considered in order to have a full understanding of the effective theory.
Indeed, all three of the main methods of bundle construction in the heterotic literature, monad bundles [20,21,19,26,18], bundles defined by extension [28,14] and the spectral cover construction, generically give rise to bundles exhibiting stability walls.

A All Textures from a Single D-term Stability Wall

In this Appendix, we present all Yukawa textures that can result from holomorphic vector bundles with a single stability wall where the bundle splits into a direct sum of two factors. These are an important sub-class of the Yukawa textures that can appear naturally within the context of heterotic string and M-theory.

A.1 An SU(3) Bundle with a Stability Wall and One D-Term

We begin with a compactification of heterotic theory on a Calabi-Yau threefold with a rank three holomorphic vector bundle V. For any Kähler form in the stable chamber, the structure group is an indecomposable SU(3), leading to an E₆ gauge group in the low-energy theory. At the stability wall, where the bundle splits into two parts, an SU(3) bundle necessarily breaks into a rank 2 and a rank 1 piece, which we will denote by F and K respectively. That is, V = F ⊕ K on the wall. The structure group of this bundle is S[U(2) × U(1)] ≅ SU(2) × U(1), leading to an enhanced E₆ × U(1) gauge group in the effective theory. This decomposition indicates which representations of E₆ × U(1) can possibly appear as fields in the four-dimensional effective theory. To find out how many of each multiplet are actually present, one must calculate the dimensions of the cohomology groups indicated in Table 8. As discussed in Section 3, only one of the two fields C₁, C₂ can get a vev. If ⟨C₁⟩ ≠ 0, then the allowed Yukawa couplings take one restricted form. As discussed in the text, we suppress the arbitrary coefficients in front of each term for simplicity. Terms allowed in the dimension three (in superfields) superpotential on the stability wall are shown in standard type. Yukawa terms that originate as higher dimensional operators involving powers of C₁, which are "grown back" upon re-entering the interior supersymmetric region where ⟨C₁⟩ ≠ 0, are indicated in boldface. On the other hand, if ⟨C₂⟩ ≠ 0, then we find the texture (A.4). Note that on the stability wall the U(1) charges strongly restrict the allowed Yukawa couplings. As one moves away from the stability wall into the stable chamber, a number of previously disallowed couplings "grow back". However, not all terms allowed by the E₆ symmetry can reappear. For example, an f̄₁³ term can never be generated in superpotential (A.4).

A.2 An SU(4) Bundle with a Stability Wall and One D-Term

Now consider a compactification of heterotic theory on a Calabi-Yau threefold with a rank four holomorphic vector bundle V. For any Kähler form in the stable chamber, the structure group is an indecomposable SU(4), leading to an SO(10) gauge group in the low-energy theory. There are now two ways in which the bundle can split at a stability wall. We treat each case in turn. We emphasize that both cases can be realized by rank four bundles with a single stability wall.

A.2.1 Case 1

The first of the two cases corresponds to the bundle splitting into a rank 3 and a rank 1 piece, which we shall denote by F and K respectively. That is, V = F ⊕ K on the wall. The structure group of this bundle is SU(3) × U(1), leading to an enhanced SO(10) × U(1) gauge group in the effective theory. This decomposition indicates which representations of SO(10) × U(1) can possibly appear as fields in the four-dimensional effective theory. To find out how many of each multiplet are actually present,
one must calculate the dimensions of the cohomology groups indicated in Table 9.

Table 9: SU(4), one D-term, case 1.

As discussed previously, only one of the two fields C₁, C₂ can have a non-zero vev. If ⟨C₁⟩ ≠ 0, then the allowed Yukawa couplings take one restricted form; on the other hand, if ⟨C₂⟩ ≠ 0, a different texture results.

A.2.2 Case 2

The second SU(4) case corresponds to the rank 4 bundle splitting into two rank 2 pieces, denoted by F₁ and F₂. That is, V = F₁ ⊕ F₂ on the wall. The cohomology associated with each field is given in Table 10. As before, only one of the C₁, C₂ fields can get a vev, and which vev is non-zero determines the structure of the Yukawa couplings. For ⟨C₁⟩ ≠ 0, we find the texture shown in (A.14), whereas for ⟨C₂⟩ ≠ 0 a different texture appears.

A.3 An SU(5) Bundle with a Stability Wall and One D-Term

Consider a compactification of heterotic theory on a Calabi-Yau threefold with a rank five holomorphic vector bundle V. For any Kähler form in the stable chamber, the structure group is an indecomposable SU(5), leading to an SU(5) gauge group in the low-energy theory. As in the SU(4) case, there are two ways in which such bundles can split at a stability wall. We treat these sequentially.

A.3.1 Case 1

First, consider the case where the bundle splits into a rank 4 and a rank 1 piece, denoted by F and K respectively. That is, V = F ⊕ K on the wall. The structure group of this bundle is SU(4) × U(1), leading to an enhanced SU(5) × U(1) gauge group in the effective theory. The multiplicities of the matter multiplets, in terms of the cohomologies of F and K obtained from the branching of the 248 representation, may be found in Table 11. As in the previous cases, there are two possible textures. For ⟨C₁⟩ ≠ 0, we find one texture, in which all of the 5 · 10 · 10, $\bar{5}$ · $\overline{10}$ · $\overline{10}$, 10 · $\bar{5}$ · $\bar{5}$ and $\overline{10}$ · 5 · 5 couplings are grouped together on different lines. For ⟨C₂⟩ ≠ 0, a second texture appears.

A.3.2 Case 2

Second, a rank 5 bundle can split into a rank 3 and a rank 2 piece, G and F respectively, at the stability wall. That is, V = G ⊕ F on the wall. The structure group of this bundle is SU(2) × SU(3) × U(1), leading to an enhanced SU(5) × U(1) gauge group in the effective theory. The multiplicities of each representation obtained from the branching of the 248, as seen in four dimensions, are given in Table 12.

A.4 Stability Wall Texture: A Three Family Mass Hierarchy

An important question in string model building is the following: is there a natural texture in a heterotic vacuum which leads, perturbatively, to one heavy and two light families? To explore this issue, let us consider an SO(10) theory of the type described in case A.2.2, where ⟨C₁⟩ ≠ 0. Choose the bundle so that the SO(10) non-singlet field content is two f₁ fields, one f₂ and one h₁ only. This is a three generation model with one pair of Higgs doublets (inside h₁). From (A.14), one sees that the only allowed perturbative Yukawa coupling of this vacuum is a single term. That is, there is one massive third family and two light generations, as desired. Therefore, one can expect one heavy family to arise naturally in some smooth compactifications of the heterotic string.

B Equivariant structures and quotient manifolds

In this Appendix, we briefly outline the procedure for constructing a vector bundle on a manifold X/Γ for some discrete group Γ. This is intended to provide only a brief introduction to the construction. For a more detailed discussion of building equivariant structures and Wilson line symmetry breaking on non-simply connected Calabi-Yau manifolds, see [18,30]; see [63] for useful numerical tools for these calculations.
To begin, we consider a rank 4 bundle, V, on the bicubic hypersurface in P² × P², (B.1). If we denote the coordinates of P² × P² by {xᵢ, yᵢ}, where i = 0, 1, 2, then a freely acting Z₃ × Z₃ symmetry, (B.2), is generated by two elements [64]: a cyclic permutation of the homogeneous coordinates, xₖ → x_{k+1}, yₖ → y_{k+1} (indices mod 3), and the phase rotation

xₖ → e^{2πik/3} xₖ , yₖ → e^{−2πik/3} yₖ .

As shown in [64], the most general bi-degree {3, 3} polynomial invariant under the above symmetry, (B.3), depends on a total of 12 free coefficients A_{j,k}, with j, k = 0, 1, 2. We shall take these coefficients to be generic (i.e., random integers) in what follows. With the choice of the invariant polynomial (B.3), we can define the smooth quotient manifold X̂ = X/G. The quotient manifold is related to X via the natural projection map, q : X → X̂. Using this relationship, we note that any vector bundle V̂ on X̂ can be related to a vector bundle V on X via the pull-back map, q*. That is, for any bundle V̂ over X̂, q*(V̂) ≅ V for some bundle V on X. This pulled-back bundle, V, is characterized by the geometric property of equivariance. A vector bundle on X will descend to a bundle V̂ on X̂ if for each element g ∈ G, g : X → X, there exists a bundle isomorphism φ_g satisfying two properties. First, φ_g must cover the action of g on X, so that the associated diagram commutes; in addition, φ_g must satisfy a so-called co-cycle condition relating φ_{gh} to the composition of φ_g and φ_h, for all g, h ∈ G. The set of such isomorphisms φ_g is referred to collectively as an equivariant structure. The morphisms φ_g form a representation of the group acting on the bundle, the so-called lifting of (B.2) to V. In addition, φ induces a representation of the group acting on the cohomology H^i(X, V), and it is precisely the invariant elements of this cohomology (under the action inferred from φ) that descend to the quotient manifold X/Γ. We will return to this later. For now, we consider the description of the bundle V on X.

B.1 The "upstairs" theory

For our current purposes, we shall choose the following equivariant bundle V on X. The bundle is defined by the short exact sequence

0 → F₁ → V → F₂ → 0 .

The bundle V is thus defined as an extension involving F₁ and F₂, where the Fᵢ are rank 2 bundles defined by monad sequences. Since c₁(F₁) = (−2, 4) and c₁(F₂) = (2, −4), we see that c₁(V) = 0 and V is an SU(4) bundle. To analyze the properties of the four-dimensional effective theory associated to V̂ (including the stability-wall induced textures in its Yukawa couplings), we must first consider the "upstairs" bundle V and use its properties to determine those of V̂, "downstairs". To begin then, the "upstairs" spectrum of V is given in (B.10). A simple analysis along the lines of [16,23] shows that F₁ and F₂ are both stable independently, but that F₁ ⊂ V destabilizes V in a part of its Kähler cone. Since c₁(F₁) = (−2, 4), there is a stability wall at t₂/t₁ = 1 + √3. We find that the regions of stability for V on X are as shown in Figure 4. At this stability wall, the poly-stable decomposition of V is the direct sum of the two rank 2 bundles, V → F₁ ⊕ F₂. As we have argued in the previous sections, this supersymmetric decomposition of V changes the structure group of the bundle from SU(4) to S[U(2) × U(2)] and, hence, an extra U(1) appears in the low energy gauge symmetry. At this locus in moduli space, the visible matter fields, the 16's and 10's in (B.10), carry a charge under the enhanced U(1), as shown in Table 13 (the general case of such a decomposition is given in Table 10 in Appendix A).
B.2 The "downstairs" theory We turn now to the final, three generation theory on the quotient manifoldX = X/Z 3 × Z 3 . By quotienting the Calabi-Yau threefold in (B.1) by the discrete symmetry in (B.2), we form the manifold X = and that the charged bundle moduli C 1 become C 1 : h 1 (X,F 1 ×F 2 * ) = 9 (B.14) From the above, it is clear that we have produced a three generation SO(10) GUT theory. However, we can go still further by introducing Wilson lines which will break SO(10) to SU (3) × SU (2) × U (1) Y × U (1) B−L . We shall not go into this breaking here, but refer the reader to [30,29,18] for details of breaking SO(10) with Z 3 × Z 3 Wilson lines.
Plastic Deformation Mechanism and Slip Transmission Behavior of Commercially Pure Ti during In Situ Tensile Deformation

The plastic deformation modes of commercially pure titanium (CP-Ti) were studied using an in situ tensile test monitored by electron-backscatter-diffraction (EBSD) assisted slip trace analysis. The plastic strain was primarily accommodated by prismatic slip, followed by deformation twins and pyramidal slip. The slip transmission between two adjacent grains was predicted using the geometric compatibility factor m′, which influenced not only the degree of stress concentration but also the activity of dislocation slip systems. Stress concentration mainly occurred at GBs with an m′ less than 0.5 and could be released by the activity of pyramidal slip or deformation twins with high critical resolved shear stress (CRSS).

Introduction

Commercially pure titanium (CP-Ti) has been widely used in the biomedical field because of its excellent corrosion resistance, high fracture toughness, and good biocompatibility [1,2]. CP-Ti has a hexagonal close-packed (hcp) structure at room temperature and exhibits complex plastic deformation mechanisms due to the low symmetry of the hcp structure. Four slip systems, namely prismatic, basal, pyramidal $\langle a \rangle$ and pyramidal $\langle c+a \rangle$ slip, and six deformation twins (DTs), namely the $\{10\bar{1}2\}\langle\bar{1}011\rangle$, $\{11\bar{2}1\}\langle\bar{1}\bar{1}26\rangle$ and $\{11\bar{2}3\}\langle11\bar{2}\bar{2}\rangle$ tension twins and the $\{10\bar{1}1\}\langle10\bar{1}\bar{2}\rangle$, $\{11\bar{2}2\}\langle11\bar{2}\bar{3}\rangle$ and $\{11\bar{2}4\}\langle22\bar{4}\bar{3}\rangle$ compression twins, have been reported [3-5]. For a particular grain, the ease of these deformation modes is generally determined by the critical resolved shear stress (CRSS) and the Schmid factor (SF).

Recently, the phenomenon of slip transmission has been observed under some specific circumstances. It has been proposed that slip transmission across grain boundaries (GBs) is an important mode of coordinated deformation and plays an important role in the damage of polycrystalline materials [6-9]. Slip transmission behavior can be predicted using the geometric compatibility factor m′, given by m′ = cos ϕ · cos γ, where ϕ is the angle between the normals of the slip planes on the two sides of the GB, and γ is the angle between the slip directions of the two adjacent grains. In general, slip is hindered by a GB when m′ is less than 0.7, whereas it can pass the GB with little obstruction when m′ is larger than 0.7 [10-14].

GB and twin boundary (TB) cracking are the major crack initiation mechanisms of Ti alloys, owing to the blocking of dislocations at GBs and TBs [15-17]. Crack initiation in a Ti alloy is delayed when the stress or strain concentration at GBs and TBs is alleviated through slip transmission. A better understanding of slip transmission is therefore of great significance for improving the mechanical properties of Ti and its alloys. In this work, CP-Ti (TA2) is selected and deformed by in situ tensile tests. The activities of slip systems and DTs are identified based on electron backscatter diffraction (EBSD) and slip trace analysis [18-21], and the emphasis is placed on slip transmission behavior and its influence on the activity of deformation modes.
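To make the m′ criterion concrete, the short sketch below (our illustration, not from the paper; the authors performed this analysis in MTEX/Matlab) computes m′ for a single pair of slip systems whose plane normals and slip directions are already expressed in a common sample frame. The 20° rotation in the example is an arbitrary illustrative value.

```python
import numpy as np

def m_prime(n1, d1, n2, d2):
    """Geometric compatibility factor m' = cos(phi) * cos(gamma):
    phi is the angle between the slip-plane normals of the two grains,
    gamma the angle between their slip directions. All vectors must be
    expressed in the same (sample) frame."""
    unit = lambda v: np.asarray(v, dtype=float) / np.linalg.norm(v)
    cos_phi = abs(np.dot(unit(n1), unit(n2)))
    cos_gamma = abs(np.dot(unit(d1), unit(d2)))
    return cos_phi * cos_gamma

# Example: grain B's system is grain A's system rotated 20 deg about z.
t = np.radians(20.0)
Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
               [np.sin(t),  np.cos(t), 0.0],
               [0.0,        0.0,       1.0]])
n_a, d_a = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(f"m' = {m_prime(n_a, d_a, Rz @ n_a, Rz @ d_a):.3f}")  # cos(20 deg)^2 ~ 0.883
```

With m′ ≈ 0.88 > 0.7, the criterion quoted above would classify such a boundary as nearly transparent to slip.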
Experimental

TA2 bars with a diameter of 10 mm were supplied by the Northwest Institute for Non-ferrous Metal Research; their chemical composition is shown in Table 1. The as-received bars were first annealed at 450 °C for 1 h with furnace cooling to achieve a low residual stress state, and dog-bone-shaped in situ tensile samples with a gauge section of 4 mm length, 3.5 mm width, and 0.5 mm thickness were then cut from the annealed bars by electrical discharge machining. The samples were mechanically ground to 1500 grit and electropolished in 10% HClO₄ and 90% C₂H₅OH to produce a mirror-like surface. In situ tensile tests were conducted by scanning electron microscopy (SEM; Gatan MTEST2000, Gatan, Pleasanton, CA, USA) at a strain rate of 2.1 × 10⁻⁴ s⁻¹ (a loading velocity of 8.3 × 10⁻⁴ mm·s⁻¹). For the observation of microstructure, the tensile tests were interrupted at strains of ~2.9% (a displacement of 0.117 mm) and ~22.5%. After the strain of ~22.5%, the tests were stopped for observation using a confocal laser scanning microscope (CLSM, ZEISS, Oberkochen, Germany). The EBSD measurements were conducted at an accelerating voltage of 20 kV with step sizes of 0.2-1 µm, chosen according to the region of interest. The grain size, crystal orientation, and SF before and after in situ tensile testing were analyzed using Channel 5 software (v5.11, Oxford Instruments, Abingdon, UK). The slip activities of the deformed grains were identified based on EBSD (Oxford Nordlys Max, Oxford Instruments, Abingdon, UK) and slip trace analysis [22,23]. The geometric compatibility factor m′ of the grains after in situ tension was calculated using MTEX 5.7.0 in Matlab R2018a (MathWorks, Natick, MA, USA), and the influence of slip transmission behavior on the activity of deformation systems was further analyzed.
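As a quick consistency check of the quoted test parameters (assuming the nominal strain rate is simply the crosshead speed divided by the 4 mm gauge length):

$\dot{\varepsilon} = v / L_0 = (8.3 \times 10^{-4}\ \mathrm{mm\,s^{-1}}) / (4\ \mathrm{mm}) \approx 2.1 \times 10^{-4}\ \mathrm{s^{-1}}$,

which matches the strain rate stated above.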
Initial Microstructure

The initial microstructural characteristics, including the band contrast (BC) image, inverse pole figure (IPF), and kernel average misorientation (KAM) map, as well as the grain size distribution, the cumulative frequency of SF for the different deformation modes, and the KAM distribution, are shown in Figure 1a-f, respectively. It is clear that the initial microstructure consisted of typical equiaxed α grains with an average grain size of 13.5 µm. The IPF suggests that the c-axes of most grains were parallel to the normal direction, as shown in Figure 1c. It can be found from Figure 1d that the frequencies of SFs greater than 0.3 for prismatic slip, pyramidal slip, $\{10\bar{1}2\}\langle\bar{1}011\rangle$ twinning, and basal slip were 87%, 82%, 82%, and 38%, respectively. Because the CRSS values of prismatic slip, basal slip, pyramidal slip, and $\{10\bar{1}2\}\langle\bar{1}011\rangle$ twinning have been measured as 96 ± 18 MPa, 127 ± 33 MPa, 240 MPa, and 494 MPa, respectively [24,25], it was inferred that prismatic slip would be the primary mode activated during in situ tensile deformation. Figure 1e,f demonstrate that the density of geometrically necessary dislocations (GNDs) in the initial state was relatively low.
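The SF values underpinning Figure 1d can be reproduced with a few lines of code. The sketch below (our illustration; the paper used Channel 5 for this analysis) evaluates the Schmid factor for one slip system, with the plane normal, slip direction, and loading axis all given in a common Cartesian frame; the specific vectors are illustrative.

```python
import numpy as np

def schmid_factor(plane_normal, slip_dir, load_dir):
    """Schmid factor m = cos(phi) * cos(lambda): phi is the angle between the
    loading axis and the slip-plane normal, lambda the angle between the
    loading axis and the slip direction. Vector magnitudes are irrelevant."""
    unit = lambda v: np.asarray(v, dtype=float) / np.linalg.norm(v)
    n, d, l = unit(plane_normal), unit(slip_dir), unit(load_dir)
    return abs(np.dot(l, n)) * abs(np.dot(l, d))

# A prismatic-type geometry: plane normal along x, slip direction along y;
# a loading axis at 45 deg to both gives the theoretical maximum of 0.5.
print(schmid_factor([1, 0, 0], [0, 1, 0], [1, 1, 0]))  # -> 0.5
```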
Dislocation Slip during In Situ Tensile Testing

The displacement-load curve recorded during in situ tensile testing is shown in Figure 2a. The SEM micrograph and the corresponding IPF at a tensile displacement of 0.117 mm (a strain of ~2.9%) along the direction parallel to the black double-headed arrow are shown in Figure 2b,c. A number of parallel slip bands and deformation twins (DTs) can be observed in some grains, which are numbered in the IPF. The deformation modes activated in the marked grains were identified by trace analysis, and the activated modes, together with their SFs, are marked in the SEM image. The red, yellow, and blue lines represent the plane traces of prismatic slip, pyramidal slip, and DTs, respectively.

The KAM maps in Figure 2d,e illustrate that, after a tensile strain of ~2.9%, the plastic strain accumulated mainly at some GBs, and some GBs could not be resolved because of the large amount of dislocation pile-up, as indicated by the white arrows in Figure 2f. Meanwhile, the average KAM increased from 0.35° in the initial microstructure to 0.49°. The density of GNDs (ρ_GND) was estimated using the following formula [26-28]:

ρ_GND = 2 · KAM_avg / (b · R) ,

where KAM_avg is the average value of the KAM (in radians), b is the magnitude of the Burgers vector of the dislocations (a = 0.295 nm and c = 0.468 nm for hcp Ti), and R is the step size (0.5 µm). We conclude that the density of GNDs increased markedly after tensile deformation.
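Plugging the quoted numbers into this estimate gives a quick sanity check (our back-of-envelope calculation; note that the factor of 2 corresponds to the commonly used tilt-boundary form of the KAM-based estimate, which we take to be the form intended here):

```python
import numpy as np

kam_avg = np.radians(0.49)  # average KAM after ~2.9% strain, degrees -> radians
b = 0.295e-9                # Burgers vector magnitude of <a> dislocations, m
R = 0.5e-6                  # EBSD step size, m

rho_gnd = 2.0 * kam_avg / (b * R)      # rho_GND = 2*KAM_avg / (b*R)
print(f"rho_GND ~ {rho_gnd:.1e} m^-2")  # ~1.2e14 m^-2, plausible for a deformed metal
```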
Further observation showed that not all GBs strongly hindered dislocations and accumulated a large amount of plastic strain. Combined with the geometric compatibility factor (m′) map in Figure 2f, it was found that plastic strain accumulation mainly occurred at GBs with an m′ less than 0.7. Because cracks initiate preferentially at GBs that have accumulated severe plastic strain during further deformation, slip transmission behavior influences the degree of plastic strain accumulation at GBs. In other words, the plastic strain accumulated at GBs decreased with increasing m′.

Deformation Twins during In Situ Tensile Testing

As an important deformation mode of hcp Ti, DTs (including primary twins and secondary twins) were also frequently observed in some grains, as shown in Figure 3a. The misorientations of the primary and secondary twins in these grains are shown in Figure 3b. The misorientation of the primary twin relative to the matrix was ~64°, that of the secondary twin relative to the matrix was ~43°, and that of the secondary twin relative to the primary twin was ~84°. The variants of the primary and secondary twins were determined from the pole figures in Figure 3c,d, respectively. The primary twin was identified as the $(\bar{2}112)[\bar{2}11\bar{3}]$ compression twin, and the secondary twin as the $(01\bar{1}2)[0\bar{1}11]$ tensile twin [29,30].

A larger deformed region was examined to quantitatively analyze the activated deformation modes using slip trace analysis, as shown in Figure 4a,b. The frequencies of the activated deformation modes are plotted in Figure 4c: prismatic slip, pyramidal slip, and DTs accounted for 55%, 21%, and 24%, respectively. This result indicates that prismatic slip is the dominant deformation mechanism, while DTs and pyramidal slip act as auxiliary plastic deformation mechanisms.
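The twin misorientations quoted above (~64°, ~84°) are minimum rotation angles computed under hexagonal crystal symmetry. A minimal sketch of that computation is given below (ours; MTEX provides this functionality directly). It uses the 12 proper rotations of the hexagonal point group and the common one-sided symmetry reduction, which suffices for extracting the angle.

```python
import numpy as np

def rot(axis, deg):
    """Rotation matrix from an axis-angle pair (Rodrigues' formula)."""
    a = np.asarray(axis, dtype=float); a /= np.linalg.norm(a)
    t = np.radians(deg)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)

# The 12 proper rotations of the hexagonal (622) point group: six rotations
# about c, each optionally followed by a 2-fold rotation about a1.
c_axis, a1_axis = [0, 0, 1], [1, 0, 0]
SYM = [rot(c_axis, 60 * k) for k in range(6)]
SYM += [rot(a1_axis, 180) @ s for s in SYM]

def misorientation_deg(g1, g2):
    """Minimum misorientation angle (deg) between two orientation matrices."""
    dg = g2 @ g1.T
    angles = []
    for s in SYM:
        cos_t = np.clip((np.trace(s @ dg) - 1.0) / 2.0, -1.0, 1.0)
        angles.append(np.degrees(np.arccos(cos_t)))
    return min(angles)

# Example: an 85 deg rotation about an a-axis, close to the ideal {10-12}
# tension-twin relation, is returned unchanged by the symmetry reduction.
print(f"{misorientation_deg(np.eye(3), rot(a1_axis, 85)):.1f} deg")  # -> 85.0
```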
Slip Transmission Behavior during In Situ Tensile Testing

Two different cases of slip transmission behavior are shown in Figure 5. In Figure 5b, slip transmission occurred between grains A and B. The activated slip systems of grains A and B were the prismatic systems $(10\bar{1}0)[1\bar{2}10]$ and $(1\bar{1}00)[11\bar{2}0]$, with SFs of 0.49 and 0.39, respectively. The m′ calculation for prismatic slip between grains A and B is highlighted with a red wireframe in Figure 5c; the maximum value of m′ was 0.94, which indicates that the GB between grains A and B presented little obstacle to dislocation slip. In contrast, since the maximum m′ value for prismatic slip between grains C and D was only 0.55, the prismatic slip in grain C could not pass through the GB between grains C and D, which resulted in the activation of pyramidal slip in grain D, as shown in Figure 5d-f. The activated slip systems of grains C and D were the prismatic system $(1\bar{1}00)[11\bar{2}0]$ and the first-order pyramidal system $(0\bar{1}11)[\bar{2}110]$, with SFs of 0.40 and 0.32, respectively. Obviously, slip transmission behavior influenced both the stress concentration and the activation of slip systems. When m′ was greater than 0.7 (for example, at the GB between grains A and B), slip transmission could effectively accommodate the macroscopic strain. When m′ was less than 0.7 (for example, at the GB between grains C and D), the stress concentration at the GB could be released by the activation of the pyramidal slip system, avoiding premature crack nucleation at GBs [31].
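A Figure 5c-type screening, reporting the maximum m′ over all prismatic-prismatic pairs across a GB, can be sketched as follows (our illustration; the paper used MTEX/Matlab, and the 25° misorientation below is an arbitrary test value). Orientation matrices are assumed to map crystal-frame vectors into the sample frame, with the crystal frame chosen so that x ∥ a₁ and z ∥ c.

```python
import numpy as np
from itertools import product

def prismatic_systems():
    """The three {10-10}<11-20> prismatic systems in an orthonormal crystal
    frame (x || a1, z || c): prism-plane normals 60 deg apart in the basal
    plane, each paired with its in-plane <a> slip direction."""
    systems = []
    for k in range(3):
        t = np.radians(60.0 * k)
        n = np.array([np.cos(t), np.sin(t), 0.0])   # plane normal
        d = np.array([-np.sin(t), np.cos(t), 0.0])  # slip direction (in plane)
        systems.append((n, d))
    return systems

def max_m_prime(gA, gB):
    """Maximum m' over all prismatic-prismatic pairs of grains A and B,
    given their crystal-to-sample orientation matrices gA and gB."""
    best = 0.0
    for (n1, d1), (n2, d2) in product(prismatic_systems(), repeat=2):
        m = abs((gA @ n1) @ (gB @ n2)) * abs((gA @ d1) @ (gB @ d2))
        best = max(best, m)
    return best

t = np.radians(25.0)  # grain B rotated 25 deg about the sample z-axis
Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
               [np.sin(t),  np.cos(t), 0.0],
               [0.0,        0.0,       1.0]])
print(f"max m' = {max_m_prime(np.eye(3), Rz):.2f}")  # -> 0.82, easy transmission
```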
A low m′ induced not only the activity of pyramidal slip but also the nucleation of DTs, as shown in Figure 6. It can be observed that there was no slip activity in grain F and only a DT occurred. The DT nucleated from the GB between grains E and F, and it was identified as (2̄112)[2̄113̄] from the IPF map and the misorientation distribution in Figure 6c,d. The calculation of SF indicated that the SF of the activated (2̄112)[2̄113̄] twin was 0.35, while the largest SF of prismatic slip was 0.45. The fact that the (2̄112)[2̄113̄] twin, with high CRSS and low SF, was activated instead of prismatic slip, with low CRSS and high SF, was related to the m′ value between grains E and F. As shown in Figure 6e, the maximum m′ of the prismatic slip between grains E and F was 0.61. The poor geometric compatibility between the two grains led to stress concentration at the GB, which was conducive to the nucleation of a DT at the GB [32-35]. The stress concentration at the GB is evidenced by the KAM map in Figure 6f: the KAM intensity at the GB between grains E and F was markedly higher than that of the other regions, as indicated by the white arrow.
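The KAM values used above as a proxy for stress concentration can be illustrated with a minimal first-neighbor kernel average misorientation calculation. The sketch below ignores the hexagonal symmetry operators, a common simplification for the small intragranular misorientations that KAM measures (typically below a 5° threshold); the Euler-angle grid is synthetic and all names are hypothetical.

```python
import numpy as np
# Reuses bunge_matrix from the earlier sketch.

def misorientation_deg(g1, g2):
    """Misorientation angle between two orientation matrices. Hexagonal
    symmetry operators are ignored, which is acceptable for the small
    intragranular misorientations that KAM measures."""
    dg = g2 @ g1.T
    cos_theta = np.clip((np.trace(dg) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def kam_map(euler_grid_deg, threshold_deg=5.0):
    """First-neighbor KAM for an (H, W, 3) grid of Bunge Euler angles;
    neighbors above the threshold (grain boundaries) are excluded."""
    h, w, _ = euler_grid_deg.shape
    g = np.empty((h, w, 3, 3))
    for i in range(h):
        for j in range(w):
            g[i, j] = bunge_matrix(*np.radians(euler_grid_deg[i, j]))
    kam = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            angles = [misorientation_deg(g[i, j], g[i + di, j + dj])
                      for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                      if 0 <= i + di < h and 0 <= j + dj < w]
            kept = [a for a in angles if a <= threshold_deg]
            kam[i, j] = np.mean(kept) if kept else 0.0
    return kam

# Tiny synthetic map: a nearly uniform grain with slight orientation noise.
rng = np.random.default_rng(0)
grid = np.full((5, 5, 3), (30.0, 45.0, 10.0)) + rng.normal(0, 0.3, (5, 5, 3))
print(kam_map(grid).round(2))
```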
Surface Topography after In Situ Tensile Testing

The surface topography after an in situ tensile strain of ~22.5% was observed by a confocal laser scanning microscope (CLSM), as shown in Figure 7. Clearly, the surface became rugged due to the coordinated deformation between different grains, and the fluctuation was mainly concentrated at the GB regions. Close observation found that surface bulges were mainly located at the GBs between grains with slip activity and grains without slip activity. This phenomenon further indicated that, when slip transmission cannot occur at a GB, there is obvious stress concentration at that point [36]. This stress concentration can be released by inducing other deformation modes that are not easy to activate, such as pyramidal slip and DTs. Otherwise, it will develop into a crack initiation site.

Conclusions

(1) Based on EBSD characterization and slip trace analysis, the active deformation modes of CP-Ti after an in situ tensile strain of ~2.9% were prismatic slip (55%), pyramidal slip (21%), and deformation twins (24%);

(2) Slip transmission had an obvious influence on the activities of the deformation modes, which were predicted using a geometric compatibility factor. Slip transmission in CP-Ti tended to occur between the same slip types (prismatic slip to prismatic slip). The stress concentration at GBs was released by slip transmission to accommodate coordinated deformation;

(3) Poor geometric compatibility between two adjacent grains led to stress concentration at the GBs, which was conducive to the activity of pyramidal slip or the nucleation of deformation twins.

Figure 1. The initial microstructural characteristics of TA2: (a,b) BC map and the distribution of grain size; (c,d) IPF and cumulative frequency of SF for different deformation modes; and (e,f) KAM map and the frequency distribution of KAM.
Figure 2. The displacement-load curve and the deformed microstructures of TA2 during the in situ tensile test: (a) displacement-load curve, (b) SEM image, (c) IPF, (d) KAM map, (e) frequency distribution of KAM, and (f) the geometric compatibility factor (m′) distribution map of the prismatic slip.

Figure 3. Variant analysis of the primary twin and secondary twin: (a) IPF; (b) misorientation between primary twin and secondary twin; (c,d) pole figures (PFs) of primary and secondary twin variants.

Figure 4. The analysis of deformation mechanisms of TA2 after a strain of ~2.9%: (a) SEM with the traces of different deformation modes; the traces of prismatic, basal, and pyramidal planes are indicated by the black, red, and blue lines, respectively; (b) IPF, (c) the fraction of different deformation modes.

Figure 5. Slip transmission behavior between different grains: (a,d) SEM with the slip traces and SFs of the activated slip systems; (b,e) IPF; (c) m′ calculation between grains A and B; (f) m′ calculation between grains C and D.
Figure 6. The nucleation of a DT induced by stress concentration at the GB: (a) IPF, (b) SEM, and (c) PF of {112̄2}; (d) the distribution of misorientation along the line in (a); (e) the calculation of m′ between grains E and F; and (f) KAM map.

Figure 7. The surface topography of CP-Ti after an in situ tensile strain of ~22.5%: (a) the optical microstructure, and (b) the corresponding two-dimensional surface topography.
2022-04-29T15:55:24.238Z
2022-04-24T00:00:00.000
{ "year": 2022, "sha1": "ec2bfc436e23db78e067c38c1bbc6e8455d88f63", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4701/12/5/721/pdf?version=1650784140", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "1e56864cf0f5898125159a313afd8aebd1efac16", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [] }
228326767
pes2o/s2orc
v3-fos-license
CHEMICAL BURNS – CASE PRESENTATION

Burns are entirely particular lesions, which must always be regarded as severe, especially at extreme ages, and which affect the body in its entirety. Both the local lesion and the general bodily reaction are dynamic and entail characteristic sequences, which can be anticipated and prevented in order to reduce the risk of complications and to provide the best possible vital, functional and aesthetic prognosis. The human skin, the largest organ of the body and the most important immune organ, consists of two layers – epidermis and dermis. The action of the thermal agent, irrespective of its aetiology, most commonly affects the epidermis and more or less deep areas of the dermis, depending on the temperature and duration of exposure. In the most severe cases, the dermis is destroyed in its entirety and sometimes sub-dermal structures are affected as well.

INTRODUCTION

Depending on the depth of the burn lesion, we distinguish:

Superficial burns (epidermal, Ist degree) – solar burns, short-term exposure to liquids or other thermal agents with temperatures below 50 degrees C. They have the following characteristics:
- only damage the epidermis;
- red and slightly edematous appearance of the skin;
- sensation of pain and local heat.
Spontaneous healing occurs in 2-3 days, without permanent consequences.

Superficial partial burns (superficial dermal, IInd A degree)
- damage the epidermis in its entirety, and the dermis and its skin appendages only partially;
- blisters, perilesional edema, pink aspect;
- local inflammation and abundant exudate;
- thirst, oliguria, if more than 10% of the body surface is burned in grown-ups and more than 5% in a small infant;
- spontaneous healing in 7-14 days, without permanent scarring damage.

Deep partial burns (deep dermal, IInd B degree)
- damage the epidermis in its entirety and the dermis in depth;
- blisters and white or burning-red eschar;
- moderate exudate, intense local inflammation;
- intense pain;
- thirst, oliguria, more marked effect on the general state;
- healing is possible (for limited surfaces), in 14-21 days, with scarred areas.

Total burns (full thickness of the dermis, subdermal, III-IV degrees)
- completely destroy the epidermis, dermis, skin appendages and sometimes the sub-dermal structures;
- broken blisters, painless white or white-gray eschar;
- significant and early-arising perilesional edema, exudate in low quantities;
- a marked effect on the general state, even when under 10% of the body surface area is burned;
- spontaneous healing over a long period of time and with permanent scarring consequences (1,2,4,5,8).

The initial evaluation of the severity of the burn is essential for establishing the therapeutic indications, as well as for the prognosis. The essential elements which need to be taken into account in the evaluation of the severity of a burn are:
- age – given the same depth and burned surface, a burn is all the more severe as the age of the patient is lower. A burn must always be regarded as severe in a nursing baby.
- burned surface – the essential element in the assessment of the severity, prognosis and treatment regimen in burns. There is a direct correlation between the burned surface and the risk of death. In adults and teenagers over 15 years of age, the burned surface is estimated based on the Wallace "rule of nines", while for young infants the estimation is based on the Lund-Browder chart.
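The Wallace rule of nines mentioned above lends itself to a simple calculation; the sketch below encodes the commonly quoted adult region percentages and sums the burned fractions. It is only an illustration: the region names are hypothetical, and for infants the Lund-Browder chart must be used instead, as the text notes.

```python
# Adult "rule of nines" percentages (teenagers over 15 and adults only).
RULE_OF_NINES = {
    "head_and_neck": 9.0,
    "right_arm": 9.0,
    "left_arm": 9.0,
    "anterior_trunk": 18.0,
    "posterior_trunk": 18.0,
    "right_leg": 18.0,
    "left_leg": 18.0,
    "perineum": 1.0,
}

def estimate_tbsa(involvement):
    """Estimate burned total body surface area (%) from a mapping of
    region -> fraction of that region burned (0.0 to 1.0)."""
    return sum(RULE_OF_NINES[region] * frac for region, frac in involvement.items())

# Example: half of one arm and a quarter of the anterior trunk burned.
print(estimate_tbsa({"right_arm": 0.5, "anterior_trunk": 0.25}))  # -> 9.0
```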
- the depth of the burn – for the same burned surface, the deeper the burn, the more severe it is. The assessment of the depth must be carried out dynamically (every 2-5 days).
- location – burns located in certain areas of the body are considered severe: airways, face, hands, legs, perineum, and burns with a circular distribution.
- etiological agent – for a child, any electrical or chemical burn must be regarded as severe, and it requires an initial assessment in a specialised hospital environment, even if on the initial exam the skin lesions appear to be minor.
- concomitant trauma
- preexisting afflictions and deficiencies
- inadequate treatment on the scene of the accident (2,3,5).

PROGNOSTIC SCORES

Severity based on the burned surface – a burn is considered severe if it affects over 5% of the body surface in a child between 0-2 years of age and over 10% in children between 3-15 years of age.

Severity based on the surface and depth – the presence of a burn throughout the thickness of the dermis, regardless of its size, in a child between 0-3 years of age, and over 2% of the body surface at any age, requires hospitalisation and surgical indication.

The Standard Burn Units Score is calculated by adding the burned surface to three times the surface burned throughout the thickness of the dermis.

The Classification of the American Burn Association:
- minor burns – can benefit from outpatient treatment;
- moderate, potentially severe burns – require assessment and hospitalisation in specialised centres;
- major, severe burns – compulsory and initial hospitalisation in burn centres.

The Abbreviated Burn Severity Index (ABSI) takes into consideration multiple parameters – sex, age, burned airways, burned surface – and is very often used internationally.

The incidence of chemical burns has increased and progressively diversified due to industrialisation. Currently, substances which cause burns are widespread both in the professional and in the domestic environment. The degree of tissue damage, as well as the level of systemic toxicity, is determined by the chemical nature of the substance, its concentration, the duration of exposure and the mechanism of action (2,3,5).

On the basis of their mechanism of action, chemical agents which can cause burns can be:
- reducing substances – act by reducing lesions, an exothermic reaction (diborane, lithium aluminium hydride);
- oxidative substances – act by adding an oxygen, sulphur or halogen atom to the structure of proteins, which alters their functionality (sodium hypochlorite, potassium permanganate, peroxides, chromic acid);
- corrosive substances – corrode the skin and cause massive protein denaturation (phenols; hydroxides of sodium, potassium, ammonium and calcium);
- toxic plasma substances – form esters with proteins or inhibit inorganic ions which are required for normal cellular function (formic, acetic, oxalic, hydrofluoric acid);
- desiccants – hygroscopic agents which extract water from the tissues in normally exothermic reactions (concentrated sulphuric acid);
- vesicants – act by DNA alkylation, producing vesicles as a result of protease release from the lysosomes of altered basal cells (2,3,5).
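The Standard Burn Units Score described above (burned surface plus three times the full-thickness burned surface) can be expressed as a one-line calculation. This is a minimal sketch of the score exactly as the text defines it; interpretation thresholds are not given here.

```python
def burn_units_score(tbsa_percent, full_thickness_percent):
    """Standard Burn Units score = TBSA% + 3 * full-thickness TBSA%."""
    if full_thickness_percent > tbsa_percent:
        raise ValueError("Full-thickness area cannot exceed total burned area")
    return tbsa_percent + 3.0 * full_thickness_percent

print(burn_units_score(10.0, 2.0))  # -> 16.0
```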
Prompt intervention at the scene of the accident is essential for reducing the severity of the injuries and diminishing the risk of systemic toxicity: fast removal of soaked clothing (a precautionary measure to avoid contamination of the environment or of the surrounding people), followed by an abundant wash of the wounds and contaminated skin. The wash dilutes the chemical agent and removes it from the skin, and it counteracts the hygroscopic effect which certain agents have on the skin. The wash needs to be conducted with large quantities of water, at a temperature of 25-30 degrees C, over the course of 15-30 minutes. The application of neutralising solutions, which in most cases produce an exothermic reaction, is contraindicated, as it can aggravate the initial lesion. The body temperature is monitored, and the systemic toxic impact of the causal agent is assessed by measuring blood gases and the serum ionogram, repeated during the first 24-36 hours or until metabolic stabilisation. Compared to thermal burns, chemical burns often require additional analgesia. The burn wound is classified and treated according to the same principles as thermal burns. Its particularities are its progressive nature and the lengthy healing process (2,4).

Treatment of minor burns

Cleaning of the burn wounds – antiseptic, non-irritating solutions (chlorhexidine in normal saline, benzalkonium chloride).

Treatment of moderate and severe burns

The patient with moderate or severe burns, regardless of age, is best looked after by complex, interdisciplinary teams belonging to burn units or compartments within the departments/clinics of plastic surgery of large hospitals. The treatment of burns "throughout the thickness of the dermis" and of deep partial burns is surgical (3,5,6,8).

The essential elements of an efficient treatment are:
- prompt, efficient and adequate hydroelectrolytic resuscitation;
- prevention and therapeutic control of acute-phase complications – systemic inflammatory response syndrome and multiple organ failure;
- pain therapy – pain prevention and control;
- nutritional and immune support;
- local treatment provided once or twice a day;
- excision – early grafting of the burns "throughout the thickness of the dermis";
- aggressive surgical treatment in extensive, predominantly deep burns;
- physio- and kinesiotherapeutic intervention, as well as early and constant psycho-social counselling, over the entire therapeutic process;
- active participation of the patient's family in the therapeutic process (2,5).

CASE REPORT

A female patient, 15 months of age, from a rural area, is admitted with burn lesions caused by contact with the veterinary medicine Vital Bro, which is a combination of organic lactic, butyric and formic acids. It is administered daily in the drinking water of chickens and adult poultry. The main effect of its administration is improving digestion, namely increasing the degree of assimilation of the fodder and, implicitly, the average daily gain. The second effect is the prevention of diseases caused by pathogenic bacteria susceptible to the acidic environment created by the medicine. The lesions are located on the right side of the anterior trunk, the right thigh and the distal 1/3 of the right forearm, of IIA-IIB degree, covering approximately 10% of the body surface, with postcombustional shock. The past medical history is insignificant, with the exception of intermittent respiratory disorders.

Physical examination upon admission

The patient, weighing 9.5 kg, presents in the emergency room with a medium general state, conscious, cooperative, with uncharacteristic facies.
Upon inspection of the skin, chemical burn lesions are identified on the anterior side of the left thigh, the anterior 1/3 of the right forearm, and the right 1/2 of the trunk and anterior abdomen, with an aspect of brown-gray, adherent, relatively supple eschar, surrounded by painful areas of erythema, with moderate lesional and perilesional edema. The physical examination of the respiratory system was normal upon admission, as was the examination of the cardiovascular apparatus, which did not reveal any pathological elements.

Initial laboratory investigations showed a patient with inflammatory syndrome and hydroelectrolytic imbalances. Cultures and an antibiogram were harvested from the burn wounds – Enterococcus faecalis, sensitive to Ampicillin, Ciprofloxacin, Gentamicin and Streptomycin, resistant to Erythromycin and Tetracycline. As a first-intention treatment in the emergency room, cleaning and excisional debridement of the wound were performed, with the removal of lesioned tissues and the application of dressings. Treatment for hydroelectrolytic rebalancing was initiated according to the age of the patient and the burned surface, using the Galveston rebalancing formula, together with nutritional support, therapeutic prevention and control of acute-phase complications, and prevention and control of infections. Daily dressing of the burn wounds with silver sulfadiazine was carried out in order to delineate the deep areas from the superficial areas of the burn. Seven days after the accident, surgery was performed to excise the postcombustional eschar on the anterior trunk and the anterior side of the right thigh, covering the resulting defects with skin grafts. Post-surgery, the treatment for infection prevention, maintenance of hydroelectrolytic balance and nutritional support was continued. Daily dressing of the wound was carried out until discharge. The evolution was favourable; the patient exhibited a good, stable general, hemodynamic and cardio-respiratory state, with preserved appetite and present diuresis. Locally, the evolution was favourable, with healing post-burn lesions, integration of the skin grafts and healing of the donor areas. It is recommended that, after discharge, the daily dressing of the wounds with epithelial bandages and the scar-tissue prophylaxis through massage with moisturizing cream, silicone foil and elastic bandage be continued.

DISCUSSION

In the paediatric population, young age (below 2 years old) is an additional risk factor. The depth of the lesions depends on the nature of the chemical agent, its concentration and the duration of contact. The particularity of this case is the relatively large affected body surface area, namely 10% of the body surface, and the young age of the patient (15 months), compared to the vast majority of chemical burn cases, which occur at older ages.

It is essential for the burn patient to immediately commence hydroelectrolytic resuscitation, according to age and burned surface. The best indicator of successful resuscitation is the assessment of diuresis. The primary goal after the acute phase is to restore and preserve tissue perfusion and prevent the ischaemia produced by post-combustion shock, with its hypovolemic and cellular disorders. Burn injuries can be intricate as far as their depth is concerned. Lesions which have a deep burn appearance (III-IV degree) need early surgical treatment, which consists of excision of the eschar, followed by the closure of the resulting soft tissue defects.
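The Galveston rebalancing formula used for this patient is commonly cited as 5000 mL per m² of burned body surface plus 2000 mL per m² of total body surface over the first 24 hours, with half given in the first 8 hours; the sketch below applies it using the Mosteller body-surface-area formula. This is an illustration under stated assumptions, not a prescription: the patient's height (75 cm) is an assumed value, and actual resuscitation must be titrated against diuresis, as the text emphasizes.

```python
import math

def mosteller_bsa_m2(height_cm, weight_kg):
    """Body surface area (m^2) by the Mosteller formula."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def galveston_24h_fluids_ml(bsa_m2, tbsa_fraction):
    """Commonly cited Galveston formula for paediatric burn resuscitation:
    5000 mL per m^2 of burned BSA plus 2000 mL per m^2 of total BSA
    over the first 24 h (half given in the first 8 h)."""
    burned_bsa = bsa_m2 * tbsa_fraction
    return 5000.0 * burned_bsa + 2000.0 * bsa_m2

# 9.5 kg patient with ~10% TBSA burned; height is assumed for illustration.
bsa = mosteller_bsa_m2(75.0, 9.5)
total = galveston_24h_fluids_ml(bsa, 0.10)
print(f"BSA ~{bsa:.2f} m^2, 24 h fluids ~{total:.0f} mL, first 8 h ~{total / 2:.0f} mL")
```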
If the defects are small, they can be closed through direct suture. Larger skin defects need skin grafts or local plasty in order to be closed. In the present case, the lesions on the forearm and trunk were of a lesser degree, so a conservative treatment could be applied (bandages and epithelial creams), in contrast to the lesions on the thigh, which required surgical treatment.

CONCLUSIONS

As far as local care is concerned, chemical burns follow the same principles used for thermal burns. Chemical burns tend to be deeper than they appear upon initial examination. The skin lesions have a progressive nature, and conservative treatment requires a longer healing time and carries a risk of disabling scarring sequelae. Under these circumstances, the therapeutic recommendation is early excision and grafting, a solution which offers the best long-term results.
2020-07-16T09:08:41.570Z
2018-09-30T00:00:00.000
{ "year": 2018, "sha1": "afdc697ccea6658b714f09f940be7e71d8104e2f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.37897/rmj.2018.3.5", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e5d9bb991861c58e369192ef91cf56c947cae594", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
262181446
pes2o/s2orc
v3-fos-license
The Utility Model Relates to a Branch Salivary Suction Device for Oral Cavity

Background: During dental diagnosis and treatment, large numbers of pathogenic microorganisms and aerosols from the patient's mouth can easily spread into the office, resulting in microbial contamination of the office air. To reduce the formation of droplets and aerosols and to limit the dissemination of blood and saliva, negative-pressure suction is usually used to remove most of the aerosols. Objective: To reduce the formation of droplets and aerosols and the dissemination of blood and saliva, a branch saliva suction device with both strong and weak suction functions is needed. Methods: A branch saliva suction device with both strong and weak suction functions was designed, comprising a grip, an extension rod, a bending pipe, a strong straw, a pressure sensor, a hard pipe, a weak straw, and a sliding block. By inserting a weak straw on the outside of the extension rod and setting a strong straw at the end of the extension rod, the strong and weak straws are combined, which improves the applicability of the device and allows the user to operate it with one hand. A sliding block on the outside of the extension rod connects to the outside of the weak straw, so the length of the weak straw can be freely adjusted as needed, which makes the weak straw convenient to adjust and to retract. Results: A branch oral salivary suction device was designed to improve the efficiency of diagnosis and treatment and the comfort of patients. Conclusion: A branch saliva suction device for oral use was designed.

Introduction

The salivary aspirator is an indispensable instrument in oral diagnosis and treatment. Due to the widespread use of high-speed turbines, ultrasonic dental cleaners and high-pressure cooling water, the large numbers of pathogenic microorganisms in the patient's mouth and the aerosols produced by cutting teeth easily spread into the clinic, leading to microbial pollution of the clinic air [1]. To reduce the formation of droplets and aerosols and the diffusion of blood and saliva, strong suction can remove most of the aerosols; however, the oral mucosa is fragile, and a strong suction head placed close to it can easily cause mucosal damage. Therefore, a weak suction tube is needed to remove the patient's saliva, cooling water and blood, keeping the oral surgical field clear. In addition, during filling or implant procedures, the existing saliva suction tube can easily suck out expensive bone powder together with the blood. At present, both strong suction and weak suction are needed in clinical operation, but it is inconvenient for the dentist to operate both with one hand, which takes time and effort. Therefore, a branch oral saliva suction device with both strong and weak suction functions was designed.
Materials and Methods

During treatment, dentists need the saliva suction tube to remove the saliva in the oral cavity. The traditional saliva suction tube has a single head, which is less efficient. The branch salivary suction device for oral use includes an extension rod inserted at the front end of the grip, and the end of the extension rod is connected with a bending pipe. One end of the bending pipe is connected with a strong straw, and a pressure sensor for detecting the suction pressure is installed at the bottom of the grip. A fan-shaped hard tube is arranged inside the extension rod, and a weak straw is inserted at the side of the hard tube. The outer wall of the weak straw is connected with an arc-shaped slide block; a chute extends up and down the outside of the extension rod, and the slide block sits inside the chute. The back end of the weak straw is connected, through a bellows, with a connecting pipe installed in the handle. This ensures that the weak straw remains connected with the connecting pipe while moving up and down, so the suction of the weak straw stays stable. The outside of the grip has anti-slip protrusions, which prevent the grip from slipping out of the operator's hand and improve the stability of the hold. The outside of the handle has a groove, the inner walls of both sides of the groove have inward-sloping guide slots, and a disk-like sliding wheel is installed between the two guide slots. With the inward-sloping guide slots, the spacing between the sliding wheel and the hard tube can be adjusted as the wheel slides.

Results

A branch oral salivary suction device was designed to improve the efficiency of diagnosis and treatment and the comfort of patients. In use, the outer end of the strong straw is placed in the user's oral cavity, and the rubber contacts set around the tip touch the inner wall of the user's oral cavity. The liquid in the patient's mouth enters the outside of the liquid inlet tube through the gaps between the rubber contacts and then passes from the liquid inlet tube into the hard tube. Other objects in the mouth cannot enter the liquid inlet tube, which prevents the liquid inlet tube from becoming blocked and improves the stability of the device in use.

Discussion

Iatrogenic infection in the stomatology department is not only a local problem of oral instrument contamination but also an important part of the contamination of the dental unit waterline system [2]. Water and aerosols emitted by handpieces and dental cleaners easily cause air pollution. The first function of saliva suction is to avoid air pollution [3][4]. The second function of saliva suction is to make the patient more comfortable: the patient's mouth constantly secretes saliva, and removing the exudate and secretions from the mouth spares the patient from sitting up frequently [5][6].
By inserting a weak straw on the outside of the extension rod and setting a strong straw at the end of the extension rod, the device combines strong suction and weak suction well, improving its applicability and allowing the user to operate it with one hand [7][8]. By setting a sliding block outside the extension rod, connected to the outside of the weak straw, the length of the weak straw can be freely adjusted as needed, which makes the weak straw convenient to adjust and to stow [9].

By setting a groove with a sliding wheel on the outside of the handle, the suction of the weak straw can be adjusted as needed, improving the applicability of the device [10]. The rubber contacts and the liquid inlet pipe arranged at the tip prevent the liquid inlet pipe from becoming blocked and improve the stability of the device in use. A rubber tube is set at the bottom of the weak suction line, opposite the sliding wheel; when the sliding wheel is squeezed inward, it reduces the air flow through the weak straw and thus the suction [11][12]. The groove is opened close to the weak straw, and the outer wall of the sliding wheel fits against the weak straw. When the weak straw is needed, the sliding block is pushed up manually so that the weak straw moves up with it and out of the extension rod [13]. The connecting tube is connected with the saliva suction unit before use, so that the suction generated by the unit acts directly on the weak straw and the hard tube; the user can then use the weak straw and the strong straw at the same time and operate the device with one hand [14].

Conclusions

The salivary suction technique refers to the use of a saliva suction device to remove water mist, debris, blood and saliva from the oral cavity during oral diagnosis and treatment, so as to maintain a clear operating field and assist the smooth progress of the operation [15]. This branch saliva suction device for oral use combines strong suction and weak suction, which makes it convenient to operate; in addition, both the length and the suction force of the weak straw can be adjusted as needed, which improves the quality of treatment.
2023-09-24T16:14:16.426Z
2023-09-18T00:00:00.000
{ "year": 2023, "sha1": "6a509a4964c9d544b6f025905bed46d95fc1ce6e", "oa_license": "CCBY", "oa_url": "https://article.sciencepublishinggroup.com/pdf/10.11648.j.ijdm.20230902.13.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "04e6fb1c2ef477acf185c7b7c5c5d1e568b63a5a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
270793629
pes2o/s2orc
v3-fos-license
Characteristics of gut microbiota and serum metabolism in patients with atopic dermatitis

Abstract

Background: Atopic dermatitis (AD) is a chronic inflammatory skin disease that affects 15%-30% of children and 10% of adults globally, with its incidence being influenced by genetic, environmental, and various other factors. While the immune system plays a crucial role in its development, the composition of the gut microbiota and serum metabolites also contribute to its pathogenesis. Subject: To study the characteristics of gut microbiota and serum metabolites in patients with atopic dermatitis. Method: In this study, we collected stool and serum samples from 28 AD patients and 23 healthy individuals (NC) for metagenomic sequencing of the gut microbiota and non-targeted metabolomic sequencing of the serum. Result: Our results revealed a lower diversity of gut microbiota in the AD group compared to the NC group. The phyla with increased relative abundance in AD patients were Bacteroidetes, Proteobacteria, and Verrucomicrobia, and the most dominant bacterial genus was Faecalibacterium. At the species level, Prevotella copri and Faecalibacterium prausnitzii were found to be the most abundant bacteria. Significant differences in serum metabolite profiles were observed between NC and AD patients, with noticeable variations in metabolite expression levels. The majority of metabolites in the serum of AD patients exhibited low expression, while a few showed high expression levels. Notably, metabolites such as cholesterol glucuronide, styrene, lutein, betaine, phosphorylcholine, taurine, and creatinine displayed the most pronounced alterations. Conclusion: These findings contribute to a further understanding of the complexities underlying this disease.

INTRODUCTION

Atopic dermatitis (AD) is a chronic, recurrent, inflammatory skin disease mediated by genetic and environmental factors, as well as immune mechanisms.1 Its primary clinical features include eczema-like skin lesions accompanied by skin dryness and itching. Patients may experience other atopic conditions such as asthma, allergic rhinitis, and allergic conjunctivitis.2 Globally, 15%-30% of children and 10% of adults have a history of AD, with over one-third of patients falling into the moderate to severe category.3,4 Currently, it is widely believed that immune abnormalities and impaired skin barrier function are pivotal in the development of AD.5,6 Nonetheless, recent studies have increasingly linked alterations in the gut microbiota of AD patients to abnormal inflammatory responses.7

The gut microbiota plays a critical role in regulating both acquired and innate immune responses, essential for maintaining immune homeostasis. Changes in the gut microbiota may activate T cells, trigger the inflammatory process, and induce immune dysregulation.8 This dysbiosis is linked to autoimmune diseases, including rheumatoid arthritis (RA), type 1 diabetes, autism, inflammatory bowel disease (IBD), and systemic lupus erythematosus (SLE). Furthermore, changes in the gut microbiota are also implicated in conditions like AD and other skin diseases, such as psoriasis.9,10 AD is a highly heterogeneous inflammatory skin disease, and previous research has focused on the skin barrier and Th2 cell-mediated inflammatory responses.11 However, there is a scarcity of research examining AD through a metabolomics lens. Metabolites, as downstream products of cellular metabolism, offer insights into biological system changes at the cellular level, aiding in the identification of novel therapeutic targets.

Study subjects

AD patients (diagnosed by two dermatologists) and healthy individuals who visited the First Affiliated Hospital of Soochow University from January 2022 to June 2023 were evaluated. We conducted a cross-sectional survey of AD patients (n = 28) and healthy individuals (n = 23) matched for age, gender, and body mass index (BMI) to compare the differences in gut microbiota and serum metabolites. This study was approved by the Ethics Committee of the First Affiliated Hospital of Soochow University and complied with the Helsinki Declaration. All subjects provided informed consent. The questionnaire included personal and clinical information, filled out by patients and clinical doctors.
To further investigate the gut microbiota composition and serum metabolite changes in AD patients, this study employed metagenomic and untargeted metabolomic methods to compare the characteristics and composition of the gut microbiota and the differences in serum metabolites between AD patients and healthy individuals. Studying the role of gut dysbiosis and metabolic changes in the pathogenesis of inflammatory diseases provides valuable insights into the origins and progression of the condition, paving the way for the development of more precise predictive and therapeutic strategies.

Inclusion criteria were as follows: 1. Not having received biologics, systemic glucocorticoids or immunosuppressive agents, or antibiotic treatment for at least 6 months (some patients had received traditional Chinese medicine, topical medications, narrowband ultraviolet therapy, etc., but not within the 6 months prior to enrollment); 2. Aged 18-60 years; 3. Han Chinese ethnicity; 4. Residing in the Suzhou area for at least 1 year.

Exclusion criteria were as follows: 1. Systemic diseases such as diabetes, autoimmune diseases, malignancies, infections, digestive system diseases, etc.; 3. Consuming yogurt, pickles, or rice wine in the week prior to sampling.

Collection of stool and blood samples

Sterile fecal specimen collection tubes were provided to the AD patients before treatment. Under professional guidance, approximately 5 g of fresh feces was collected from the middle portion of the stool, placed in a cryovial, and sealed.

Fecal DNA extraction and metagenomic analysis

Bacterial DNA from feces was obtained using the DNA PowerSoil Pro kit. Total DNA quality was evaluated using 1% agarose gel electrophoresis and a spectrophotometer. The DNA was randomly fragmented into 350 bp fragments using a Covaris ultrasonic disruptor, followed by end repair, A-tailing, adapter ligation, purification, PCR amplification, and library preparation. Subsequently, the DNA library was constructed using the KAPA HyperPlus PCR-free protocol and sequenced on the Illumina PE150 platform as per the manufacturer's instructions. The MOCAT2 software was used for quality control of all raw metagenomic sequencing data. The raw sequence reads were trimmed using the SolexaQA package, with reads shorter than 30 bp or with quality scores below 20 being removed. The DNA quality control criteria were an A260/280 ratio between 1.8 and 2.0 and an A260/230 ratio > 1.5. To remove contaminated reads, the filtered reads were aligned to the human genome using SOAPaligner to obtain clean reads. To obtain contigs for subsequent annotation and gene prediction, SOAPdenovo software was used to assemble the clean reads.

2.4 Metabolomics information analysis

Statistical analysis

Baseline characteristics were expressed as means and standard deviations. Statistical analysis was performed using the R program (version 3.5.1). Because some variables in the study exhibited heteroscedasticity or deviated from a normal distribution, non-parametric Wilcoxon rank-sum tests were applied. Significance was determined at p < 0.05. Disparities in species abundance at the phylum, genus, and species levels between the two groups were assessed using non-parametric tests, with p-values corrected using the Benjamini-Hochberg method at a threshold of 0.05. Enrichment was deemed significant when the LDA value was >2.0 and the p-value was <0.05.
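The per-taxon testing pipeline described above (Wilcoxon rank-sum tests with Benjamini-Hochberg correction at a 0.05 threshold) can be sketched as follows. Although the paper used R, the same steps in Python look like this; the abundance matrices are randomly generated placeholders, not the study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
# Hypothetical relative-abundance matrices (samples x taxa); real input
# would be the metagenomic abundance tables for the two groups.
ad = rng.dirichlet(np.ones(20), size=28)
nc = rng.dirichlet(np.ones(20), size=23)

# Wilcoxon rank-sum (Mann-Whitney U) test per taxon.
pvals = np.array([
    mannwhitneyu(ad[:, j], nc[:, j], alternative="two-sided").pvalue
    for j in range(ad.shape[1])
])

# Benjamini-Hochberg correction with a 0.05 threshold.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} taxa significant after BH correction")
```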
RESULTS

This study enrolled a total of 51 participants, comprising 28 AD patients and 23 healthy individuals (NC), who met stringent diagnostic and inclusion criteria. Fecal samples underwent metagenomic sequencing, while serum samples were subjected to non-targeted LC-MS analysis. Various microbial subgroups were screened using Kruskal-Wallis and Wilcoxon rank-sum tests together with abundance restrictions, and key gut microbiota were pinpointed using LEfSe analysis. Furthermore, the differential metabolites identified from the non-targeted LC-MS data were analyzed for metabolic pathway enrichment to highlight crucial serum metabolites in key metabolic pathways. A heatmap model was established by matching each participant's fecal and serum samples to explore the correlation between gut microbiota and serum metabolites (Figure 1).

F I G U R E 1 A schematic of the design and the experimental flow diagram.

Patient basic information

Patients were matched based on their dietary habits and basic clinical characteristics (age, gender, and BMI) to avoid confounding factors affecting group differentiation. The serum total IgE levels in AD patients were significantly higher than in the NC group (Table 1). All fecal samples were yellow and soft, with no statistical differences between the two groups. To ensure the accuracy of subsequent analyses, raw sequencing data from 51 fecal samples (28 AD, 23 NC) were summarized for statistical analysis and gene prediction. A total of 51 serum samples (28 AD, 23 NC) were included for analysis.

The base peak chromatograms (BPC) of all QC samples overlapped well, indicating good instrument status and stable signals throughout sample detection and analysis. Compounds with a relative peak area coefficient of variation (CV) of 30% or less in the QC samples accounted for more than 60% of the total compounds, indicating sufficient data quality.

AD gut microbiota characteristics

The abundance analysis of gut microbiota in AD and NC demonstrated that the species accumulation curves in each group reached a plateau, indicating that the sampling size and sequencing depth were sufficient to capture the biological diversity; increasing the sample size would not substantially enhance the observed richness (Figure 2A). Although the differences were not statistically significant (p > 0.05), the Chao1, Shannon, and Simpson indices at the species level indicated a lower α-diversity of gut microbiota in AD compared to NC (Figure 2B-D).

Based on the abundance of gut microbiota in AD patients and healthy individuals, Firmicutes, Bacteroidetes, and Proteobacteria accounted for over 75% of the total abundance and were dominant in both groups. Compared to NC, the relative abundance of Bacteroidetes, Proteobacteria, and Verrucomicrobia increased in AD, whereas the relative abundance of Firmicutes and Actinobacteria decreased (Figure 2E).

F I G U R E 2 The gut microbiome community divided into two groups: (A) rarefaction curves between the number of samples and the number of genes; in all samples, the number of genes approached saturation; (B) Chao1 index, (C) Shannon index, (D) Simpson index; (E-G) the top 10 representative phyla, genera, and species as well as their proportions in each of the two groups; (H) a Venn diagram displaying group overlaps revealed that 325,685 of the total richness of 2,123,609 genes were unique to AD. The blue circle represents AD, and the green circle represents NC.
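The α-diversity indices reported above can be computed from a species-level count vector as follows; the sketch uses the textbook definitions of the Shannon, Gini-Simpson, and bias-corrected Chao1 estimators, and the sample counts are hypothetical.

```python
import numpy as np

def shannon(counts):
    """Shannon diversity H' = -sum(p * ln p) over non-zero taxa."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def simpson(counts):
    """Gini-Simpson diversity 1 - sum(p^2)."""
    p = counts / counts.sum()
    return float(1.0 - (p ** 2).sum())

def chao1(counts):
    """Bias-corrected Chao1 richness: S_obs + F1*(F1-1) / (2*(F2+1)),
    where F1 and F2 are the numbers of singletons and doubletons."""
    s_obs = int((counts > 0).sum())
    f1 = int((counts == 1).sum())
    f2 = int((counts == 2).sum())
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

# Hypothetical species-level count vector for one sample.
sample = np.array([120, 45, 3, 1, 1, 2, 0, 30, 8, 1])
print(shannon(sample), simpson(sample), chao1(sample))
```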
At the genus level, Bacteroides and Faecalibacterium were the most dominant in both groups. Compared to NC, the relative abundance of Bacteroides increased, while the relative abundance of Faecalibacterium decreased in AD. Additionally, the relative abundance of Prevotella, Veillonella, Escherichia, and Megamonas increased in the AD group, while the relative abundance of Collinsella and Roseburia decreased (Figure 2F). At the species level, Prevotella copri had the highest abundance in AD, while Faecalibacterium prausnitzii was the most prevalent bacterium in NC (Figure 2G). Additionally, Venn plots showed that 1,886,383 out of 2,123,609 genes were shared between both groups, while 325,685 genes were unique to AD (Figure 2H).

Further identification of specific bacterial taxa with significant differences between the two groups was performed using LEfSe (Figure 3). Bacillota, Prevotella, Uroviricota, Catenibacterium, Caudoviricetes, Dorea, Oscillospiraceae, and Clostridium were significantly enriched in the gut microbiota of healthy controls, while Enterocloster bolteae and Mediterraneibacter_ruminococcus_gnavus were significantly enriched in the gut microbiota of AD patients. These identified taxa were visually represented on the evolutionary tree to demonstrate their phylogenetic distribution and the significant differences in LDA scores. These findings indicate substantial alterations in the gut microbiota composition between the two groups (Figure 4).

Non-targeted metabolomic characteristics of AD patient serum

The occurrence of AD is accompanied by changes in the metabolic status of local tissues and the circulatory system. The gut microbiota can produce metabolites or small molecules that are absorbed by the host intestine into the bloodstream. Most of the differential metabolites were downregulated in the serum of AD patients, while a smaller subset exhibited upregulation (Figure 6).

To better understand the relationship between the differential metabolites and the pathogenesis of AD, we conducted metabolic pathway enrichment analysis on the KEGG IDs of the differential metabolites. The analysis revealed metabolic pathways where differential metabolites were significantly enriched (p < 0.05), and a bubble plot illustrating these pathways was generated. Compared to the NC group, the AD group exhibited significant enrichment in various pathways, including arginine and proline biosynthesis; valine, leucine, and isoleucine biosynthesis; glycerophospholipid metabolism; metabolic pathways; shigellosis; lysine degradation; the mTOR signaling pathway; amino acid biosynthesis; choline metabolism in cancer; mineral absorption; ABC transporters; arginine and proline metabolism; D-amino acid metabolism; central carbon metabolism in cancer; aminoacyl-tRNA biosynthesis; and protein digestion and absorption (Figure 7).
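Pathway enrichment of this kind is typically assessed with a one-sided hypergeometric test over the KEGG annotation; the sketch below shows that calculation together with the Rich Factor plotted on the bubble chart. All counts are hypothetical, and the exact tool the authors used is not specified in the text.

```python
from scipy.stats import hypergeom

def pathway_enrichment(total_annotated, pathway_size, n_differential, n_hits):
    """One-sided hypergeometric test for over-representation of differential
    metabolites in a KEGG pathway, plus the 'Rich Factor'
    (hits / pathway size) shown on the bubble-plot axis."""
    p = hypergeom.sf(n_hits - 1, total_annotated, pathway_size, n_differential)
    rich_factor = n_hits / pathway_size
    return p, rich_factor

# Hypothetical numbers: 600 annotated metabolites, a 25-member pathway,
# 55 differential metabolites, 8 of which fall in the pathway.
p, rf = pathway_enrichment(600, 25, 55, 8)
print(f"p = {p:.4f}, rich factor = {rf:.2f}")
```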
Metabolite tracing analysis revealed that 48 metabolites were derived from both the microbiota and the host, 4 metabolites were exclusively derived from the microbiota, and 3 metabolites were solely derived from the host. Compared to the NC group, the metabolites in the serum of AD patients derived from both the microbiota and the host were partially upregulated and partially downregulated. Among them, the most significantly upregulated metabolites were cholesterol glucuronide and styrene, while lutein, betaine, phosphatidylcholine, taurine, and creatinine were the most significantly downregulated.

F I G U R E 7 Bubble plot of KEGG pathway enrichment: the horizontal axis represents the Rich Factor corresponding to each pathway, and the vertical axis represents the name of the KEGG metabolic pathway. The size of the circles represents the number of differential metabolites enriched in that pathway.

Correlation analysis between metagenomics and metabolomics

By conducting correlation and joint analysis of matched microbiome and metabolome data from feces and serum (28 cases in the AD group, 23 cases in the NC group), the relationship between the significantly differential microbial taxa in the gut microbiota of AD patients and the serum metabolites was explored. By calculating the Spearman correlation coefficients between the differential species and metabolites, a correlation matrix was obtained, and the top 10 differential species and metabolites with the smallest p-values in each omics layer were selected to generate a heatmap. The significantly enriched Enterocloster bolteae and Ruminococcus_gnavus in the gut microbiota of AD patients did not show a significant correlation with the significantly altered metabolites in the serum of AD patients (Figure 8).

F I G U R E 8 The correlation analysis between Enterocloster bolteae and Ruminococcus_gnavus and significantly different metabolites.

DISCUSSION

The skin and the gut share many similarities: both function as active, intricate immune and neuroendocrine organs, are regularly exposed to the external environment, and harbor diverse microbiota.13,14,15,16 The normal functioning of the skin and gut plays a crucial role in maintaining homeostasis in the organism.

In recent years, there has been growing interest in the role of gut microbiota in the pathogenesis of AD. Studies have shown that the diversity of the gut microbiota increases continuously after birth, especially in the first 5 years of life.17,18 The gut microbiota actively participates in regulating various physiological processes, including intestinal endocrine function, cell proliferation, synthesis of diverse compounds, detoxification, immune responses, and the development and upkeep of the intestinal mucosa.19 Imbalance or disruption of the gut microbiota has been linked to an increased risk of lifestyle-associated immune-mediated diseases, such as asthma and metabolic disorders. Gut bacteria can influence sugar metabolism through mechanisms such as regulating energy absorption, fat metabolism, and bile acid metabolism, and by affecting the production of short-chain fatty acids.20,21,22 The immune mechanisms of AD are intricate, prompting further investigation into the interplay between gut microbiota and chronic inflammation as well as immune system-related disorders. This study elucidates changes in the gut microbiota of AD patients through metagenomic analysis, which identifies organisms at the species or even strain level, in contrast to 16S rRNA sequencing analysis, which is limited to identifying organisms at the genus level.23
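The Spearman correlation matrix behind the heatmap in Figure 8 can be sketched as follows; the species and metabolite tables are random placeholders standing in for the matched top-10 features from each omics layer described above.

```python
import numpy as np
from scipy.stats import spearmanr
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
# Hypothetical matched data: 51 subjects x 10 differential species and
# 51 subjects x 10 differential metabolites (placeholders for real tables).
species = rng.random((51, 10))
metabolites = rng.random((51, 10))

# Spearman rho (and p-value) for every species-metabolite pair.
rho = np.zeros((10, 10))
pval = np.zeros((10, 10))
for i in range(10):
    for j in range(10):
        rho[i, j], pval[i, j] = spearmanr(species[:, i], metabolites[:, j])

fig, ax = plt.subplots()
im = ax.imshow(rho, vmin=-1, vmax=1, cmap="RdBu_r")
ax.set_xlabel("metabolites")
ax.set_ylabel("species")
fig.colorbar(im, label="Spearman rho")
plt.savefig("correlation_heatmap.png")
```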
The findings of this study indicate that the distribution of the gut microbiota in AD patients has changed, although there is no significant difference in microbiota diversity. The gut microbiota of the recruited AD patients showed an increase in the abundance of Bacteroidetes.25 In a broader context, Lin et al. examined the gut microbiota of 394 healthy individuals across seven Chinese cities and found that geographic location had a more pronounced impact on gut microbiota diversity and composition than ethnicity.17 In our study, the richness of gut microbiota at the phylum level in AD patients and healthy adults was similar, possibly because all participants were from the same region. The abundance of Proteobacteria in the gut microbiota of AD patients was found to be increased.

Currently, there is limited research on the relationship between Proteobacteria and the mechanisms of AD. Xiao et al. found that, compared to healthy individuals, patients with psoriasis have an increased abundance of Proteobacteria in the gut microbiota. Therefore, it is speculated that Proteobacteria may play a promoting role in chronic inflammation and immune-mediated diseases.26 Pseudomonas is a harmful bacterium associated with intestinal barrier function and infections. Studies have shown that Pseudomonas is related to the pathogenesis of inflammatory bowel disease and can disrupt the balance of other microbial communities.27 Currently, there is no research specifically linking Pseudomonas to AD. However, the results of this study suggest a potential correlation between the presence of Pseudomonas and the development of AD, although further investigation is needed. Prevotella, traditionally associated with a healthful plant-based diet and probiotic functions,28 exhibited higher abundance in the gut of AD patients in this study, potentially indicating a relationship between Prevotella and pro-inflammatory activities.29

At the genus level, the gut microbiota of both groups was mainly composed of Bacteroides and Faecalibacterium. Compared to the NC group, the relative abundance of Bacteroides, Prevotella, Bacteroides fragilis, Veillonella, Escherichia, and Megamonas was increased in AD, while the relative abundance of Faecalibacterium, Collinsella, and Roseburia was decreased. Research has shown that infants who develop AD in childhood have lower levels of Faecalibacterium in their gut compared to healthy newborns.30 Additionally, metabolites released by Faecalibacterium may induce anti-inflammatory cytokines and inhibit the production of pro-inflammatory cytokines.31 Studies have reported lower levels of short-chain fatty acids (SCFAs), particularly butyrate, in the feces of AD patients. This may be due to a decrease in producers of butyrate and propionate, especially within the Faecalibacterium genus.32 Collinsella has been reported to produce ursodeoxycholic acid, which has been shown to inhibit the binding of SARS-CoV-2 to angiotensin-converting enzyme 2, suppress pro-inflammatory cytokines such as TNF-α, IL-1β, IL-2, IL-4, and IL-6, and exhibit antioxidant and anti-apoptotic effects.33 The reduction in Collinsella abundance in the fecal microbiota of AD patients may therefore be linked to the inflammatory processes. Roseburia belongs to the phylum Firmicutes, class Clostridia, and order Clostridiales, typically representing 0.9%-5.0% of the total microbiota.34,35
It is one of the major producers of butyrate in human feces and has been reported to have broad anti-inflammatory and metabolic regulatory effects in various disease models.36 Therefore, Roseburia may serve as a potential probiotic that could have an impact on the onset and treatment of AD.

Many studies have shown that, compared to healthy individuals, the diversity of gut microbiota in AD patients decreases, with an increase in the proportion of Enterobacteriaceae and a decrease in the proportion of Bacteroidaceae. The results of this study show that the proportions of both Enterobacteriaceae and Bacteroidaceae increased, which may be related to the complex factors that influence the gut microbiota. In this study, the proportions of Escherichia coli and Veillonella were found to be increased in AD patients. Some studies support our findings, as these two genera have been found to be enriched in the feces of children with eczema and AD.37 As an important genus in the gut microbiota of Asians, Klebsiella was also found at an increased proportion in the gut microbiota of AD patients in this study, but research on its relationship with the disease is still in the early stages. Some data suggest its association with inflammatory bowel disease, colorectal cancer, ankylosing spondylitis (AS), obesity, and the nervous system.38,39 However, the specific causal relationships and molecular mechanisms are still under investigation.

Compared to the NC group, Enterocloster bolteae and Ruminococcus_gnavus were enriched in the gut microbiota of AD patients. A prospective study comparing twin cohorts (n = 30) with age-matched singletons (n = 14) found an increase in Ruminococcus_gnavus before the onset of allergy symptoms, which was associated with the coexistence of respiratory allergies or respiratory allergies with AD.40 Another study sequencing the fecal samples of 50 infants with eczema and 51 healthy infants found a significant enrichment of Lactobacillus in infants with eczema.41 Research by John Penders and others has confirmed that the presence of Enterocloster bolteae in infants increases the risk of developing eczema, wheezing, and allergic reactions, as well as the risk of AD.42 Compared to AD patients, the gut microbiota of healthy individuals contains a greater variety of enriched bacteria. Therefore, it can be hypothesized that the onset of AD is not only due to an increase in the abundance of some harmful bacteria but is also likely related to a decrease in the abundance and diversity of beneficial bacteria.

In our study, AD patients showed significant differences in serum metabolism compared to healthy controls. In AD patients, the most significantly upregulated metabolites included cholesterol glucuronide, styrene, and other metabolites of unknown origin, while the most significantly downregulated metabolites included lutein, betaine, phosphatidylcholine, taurine, and creatinine. Styrene is mainly metabolized by P450 enzymes in the body to form styrene oxide (SO), which has strong oxidative properties and can cause lipid peroxidation, leading to cellular oxidative damage. Styrene oxide is also an electrophilic compound that can form covalent bonds with proteins, nucleic acids, and other nucleophilic biomolecules, causing changes in protein function and genetic mutations, ultimately resulting in cell damage.43
Research by Tanaka M and others on the effects of styrene monomers in a mouse model of atopic dermatitis found that exposure to certain doses of styrene monomer can exacerbate mite allergen-related atopic dermatitis-like skin lesions, with skin changes consistent with the overall trend of histamine levels in ear tissues.44 The mechanism by which styrene promotes the development of AD may involve promoting excessive proliferation and activation of mast cells, but further research is needed on the specifics. Additionally, some epidemiological studies suggest a positive correlation between styrene exposure and asthma, indicating that styrene may have an adjunctive role in allergic reactions.45

Lutein is a major component of natural carotenoids and is an oxidized form of carotenoid. Lutein varieties include lutein, zeaxanthin, β-cryptoxanthin, capsanthin, astaxanthin, and fucoxanthin.46 The beneficial effects of lutein supplements and of the topical application of these carotenoids on human skin have been demonstrated.47,48 In a study by Lee EH et al., it was found that lutein can reduce the production of reactive oxygen species (ROS) after UVR exposure, thereby reducing UVB-induced epidermal hyperplasia and acute damage in hairless mice.49 Research by Lucas R et al. indicated that levels of lutein in the plasma of AD patients are lower than in healthy individuals and are negatively correlated with SCORAD, consistent with the results of this study.50 Therefore, it can be speculated that increasing the level of lutein in the body through dietary supplementation or other methods may reduce the severity of skin lesions in AD and provide some relief; the specific mechanism of action requires further in-depth study.

Betaine is a stable, non-toxic natural substance. Because its structure resembles glycine with three additional methyl groups, it is also known as trimethylglycine.51 Betaine is mainly derived from food in the human body and is distributed in the kidneys, liver, and brain; it can also be synthesized from choline in the body. Research supports the view that both humans and animal neonates have high concentrations of betaine in their bodies.52,53 Studies have shown that betaine can improve the metabolism of sulfur amino acids and exert an antioxidant effect.54 Additionally, betaine inhibits the NF-κB signaling pathway, which controls many genes related to inflammation, including proinflammatory cytokines such as tumor necrosis factor alpha (TNF-α), interleukin-1 beta (IL-1β), and interleukin-23 (IL-23); many inflammatory diseases are associated with chronic activation of NF-κB.40,55,56 Multiple studies have shown that betaine can inhibit the activation of the NLRP3 inflammasome.57-59 Furthermore, betaine can regulate energy metabolism, alleviate chronic inflammation, restore the balance between synthesis and oxidation, and help reduce fat accumulation.57 Adult plasma betaine concentrations are negatively correlated with body fat percentage; individuals with higher plasma betaine concentrations often have better fat distribution.60 The occurrence and development of AD are correlated to varying degrees with the NF-κB pathway, interleukins, and metabolic disorders, making the role of betaine in AD worthy of further exploration.

Taurine plays an important role in reducing lipid peroxidation (LPO) products, thereby protecting cells from tissue damage.61
In addition, taurine has an inhibitory effect on cell apoptosis. For example, taurine treatment in ischemia-reperfusion (IR) rats can eliminate mucosal damage and protect intestinal epithelial cells from apoptosis62; taurine can significantly reduce the infarct area in ischemic stroke63; and it can protect rat myocardium from mitochondrial and endoplasmic reticulum stress by interfering with mitochondria-dependent apoptosis and unfolded-protein-related apoptosis.64 In our study, decreased taurine levels in the serum of AD patients may weaken this protection against cell apoptosis and cell damage, contributing to the development of inflammation.

The correlation analysis of gut microbiota and serum metabolites revealed that the two bacteria significantly enriched in the gut microbiota of AD patients and the significantly altered metabolites in the serum of AD patients showed no obvious correlation. This may be related to the small sample size; in addition, it is possible that the microbiota does not affect these metabolites directly but influences their levels through other mediators.

In this study, we examined the variances in gut microbiota and serum metabolites between AD patients and healthy adults. We observed that the richness and diversity of gut microbiota were lower in AD patients compared to the healthy population, with Enterocloster bolteae and Ruminococcus gnavus being more abundant in the gut microbiota of AD patients. The majority of metabolites in the serum of AD patients were downregulated when compared to those in healthy individuals. However, we did not identify a significant correlation between the aforementioned bacteria and the notably altered metabolites.

There are several limitations to this study: (1) the small sample size made it challenging to assess the influence of AD disease duration, severity, complications, and treatment on gut microbiota and metabolism; (2) metagenomic sequencing technology was used to analyze the characteristics of gut microbiota, genetic functions, and associated metabolites in AD patients, but the reference gene database may not encompass information on all existing species and strains, only on most of them; (3) dietary habits were not specifically evaluated through questionnaires, which could potentially affect the gut microbiota of the participants.

ORCID
Yibin Zeng https://orcid.org/0009-0007-7873-3342

AD patients and healthy individuals who visited the First Affiliated Hospital of Soochow University from January 2022 to June 2023 were evaluated. We conducted a cross-sectional survey of AD patients (n = 28) and healthy individuals (n = 23) matched for age, gender, and body mass index (BMI) to compare the differences in gut microbiota and serum metabolites. This study was approved by the Ethics Committee of the First Affiliated Hospital of Soochow University and complied with the Helsinki Declaration. All subjects provided informed consent. The questionnaire included personal and clinical information, filled out by patients and clinical doctors.
100 µL of serum samples were transferred to EP tubes, mixed with 400 µL of extraction solution (methanol:acetonitrile = 1:1, v/v, containing a mixture of isotope-labeled internal standards), vortexed for 30 s, sonicated for 10 min in an ice bath, incubated at −40 °C for 1 h, and then centrifuged at 4 °C and 12,000 rpm (relative centrifugal force 13,800 × g, rotor radius 8.6 cm) for 15 min. The supernatant was taken for analysis, and an equal amount of supernatant from all samples was pooled to create a QC sample. The project used a Vanquish (Thermo Fisher Scientific) ultra-high-performance liquid chromatography system, and the target compounds were chromatographically separated on a Waters ACQUITY UPLC BEH Amide (2.1 mm × 100 mm, 1.7 µm) liquid chromatography column.

FIGURE 3 Key species selection by LEfSe. Differential microbial score chart: the higher the absolute value of the score, the greater the contribution of the microbe to the difference.

Metabolites of the gut microbiota can either promote health or potentially harm health, thereby affecting the host's physiological functions.12 In this study, a comprehensive analysis detected a total of 19,480 metabolites, encompassing host metabolites, microbial metabolites, and metabolites of unidentified origin across all samples. Based on the abundance of metabolites detected through non-targeted metabolomics, Orthogonal Partial Least Squares Discriminant Analysis (OPLS-DA) was performed. The scatter plot shows a clear separation between samples from the AD group and the NC group, with the permutation test indicating no signs of overfitting (generally, the closer the slopes of the R2Y and Q2Y lines are to 0, the more likely the model is to overfit). The metabolic differences between the AD and NC groups are represented in a volcano plot (Figure 5). Hierarchical clustering analysis of differential metabolites revealed substantial variations in metabolite expression levels in the serum of AD patients compared to healthy controls; a majority of metabolites were downregulated.

FIGURE 4 Cladogram generated from the LEfSe analysis indicating the phylogenetic distribution of the microbiota of AD and control groups from phylum to species.

FIGURE 5 Overview of altered serum metabolism in AD (n = 28) and NC (n = 23). (A) PLS-DA shows the differences between the groups' metabolites. (B) The two rightmost points in the figure are the actual R2Y and Q2 values of the model; the remaining points are the R2Y and Q2 values obtained by randomly permuting the samples used. (C) Volcano plot of metabolites that differ between AD and NC.
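The volcano-plot screening described above can be sketched in a few lines. The following is a minimal illustration, not the authors' pipeline: the peak-area matrix `X` and the `group` labels are hypothetical synthetic inputs, and a simple univariate Welch test stands in for the full OPLS-DA workflow used in the study.

```python
# Minimal sketch of volcano-plot screening of differential metabolites.
# Assumption: `X` is a hypothetical peak-area matrix (samples x metabolites).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_ad, n_nc, n_met = 28, 23, 200               # cohort sizes match the study
X = rng.lognormal(mean=8.0, sigma=1.0, size=(n_ad + n_nc, n_met))
group = np.array([1] * n_ad + [0] * n_nc)     # 1 = AD, 0 = NC

ad, nc = X[group == 1], X[group == 0]
log2_fc = np.log2(ad.mean(axis=0) / nc.mean(axis=0))          # AD vs NC fold change
_, pval = stats.ttest_ind(np.log2(ad), np.log2(nc), axis=0, equal_var=False)

# Common volcano thresholds: |log2 FC| > 1 and p < 0.05
hits = (np.abs(log2_fc) > 1.0) & (pval < 0.05)
print(f"{hits.sum()} candidate differential metabolites")
```

Plotting log2_fc against −log10(pval) for all metabolites reproduces the volcano layout of Figure 5C.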
2024-06-29T06:17:20.518Z
2024-06-28T00:00:00.000
{ "year": 2024, "sha1": "b2267a734a1da95ed793fb0e0ec63e16c6db1e2a", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "d656a364eb4fd99202cc3413d2d7602a116a043c", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
258344105
pes2o/s2orc
v3-fos-license
Abdominal Ecchymosis: Emergency, or Urgen-C? Scurvy is a multisystem disease caused by vitamin C deficiency, historically associated with lethargy, gingivitis, ecchymosis, edema, and death if left untreated. Contemporary socioeconomic risk factors for scurvy include smoking, alcohol abuse, fad diets, mental health conditions, social isolation, and economic marginalization. Food insecurity is also a risk factor. This report describes a case of a man in his 70s who presented with unexplained dyspnea, abdominal pain, and abdominal ecchymosis. His plasma vitamin C level was undetectable, and he improved with vitamin C supplementation. This case highlights the significance of awareness of these risk factors and emphasizes the need for a comprehensive social and dietary history to enable the timely treatment of this rare but potentially fatal disease.

Introduction

Scurvy is a disease caused by a deficiency in ascorbic acid, an essential dietary nutrient that plays a critical role in collagen synthesis. As a cofactor for proline and lysine hydroxylases, vitamin C (VC) stabilizes the tertiary structure of the collagen molecule and promotes its gene expression [1]. When VC stores are depleted and serum concentrations fall below 0.2 mg/dL (11 µmol/L), collagen synthesis falls into disequilibrium with non-VC-dependent tissue-remodeling collagenases, resulting in systemic degradation and friability of connective tissues. This underlying pathophysiologic process leads to the clinical manifestations of scurvy, which include lethargy, gingival hypertrophy and bleeding, easy bruising, edema of the extremities, and death if left untreated. Although scurvy is most commonly associated with the vitamin C-deficient diets of seafaring explorers in the 15th and 16th centuries, descriptions of the disease date back to as early as 1550 BC in Egypt [2][3][4][5]. The Scottish naval surgeon James Lind is credited with the first observation that eating lemons and oranges could produce a relatively quick cure [6]. Centuries later, scurvy is recognized as a multisystem disease resulting from VC deficiency in acute micronutrient malnutrition. However, it has a complex relationship with modern socioeconomic risk factors, including smoking, alcohol use disorder, fad diets, mental health conditions, social isolation, and low socioeconomic status [2,7]. Food insecurity is also a risk factor, as it is directly related to a suboptimal diet and can be seen as a ramification of economic marginalization [7]. Intensifying regional conflicts have caused starvation; these famine conditions occur even in areas with high soil fertility and highlight the sociopolitical dimensions of modern scurvy.

Case Presentation

A man in his 70s with a history of tobacco abuse, depression, and anxiety presented with complaints of dyspnea and abdominal pain. The patient had been experiencing progressively worsening dyspnea for the past three days, which he attributed to anxiety. He also reported experiencing generalized abdominal pain and unexplained ecchymosis, which was not associated with trauma and had an unknown time frame.

FIGURE 2: Flank Ecchymosis

The patient appeared mildly dehydrated and was tachypneic upon presentation. Examination revealed significant ecchymosis across his lower abdomen. The patient had missing teeth, which he had lost progressively over years, and changes in his lower extremity hairs that were later identified to be "corkscrew" in nature (Figure 3).
FIGURE 3: Corkscrew hair

Initial laboratory studies revealed a hemoglobin level of 10.9 g/dL (reference range 13.5-16.5 g/dL) and a mean corpuscular volume of 91.1 fL (reference range 80-100 fL). Workup for dyspnea revealed a normal alveolar-arterial gradient and no evidence of acidosis. The patient was initially diagnosed with an acute anxiety attack, and his dehydration was resolved with IV fluids. Further exploration of the unexplained abdominal ecchymosis revealed negative workup results for infection, renal failure, cirrhosis, and coagulopathy. A more detailed psychosocial history revealed that the patient was living in isolation in an apartment and was unable to care for himself adequately due to agoraphobia and deep depression triggered by the loss of his mother. He had been prescribed escitalopram four years prior to admission. Six months prior to admission, his dose of escitalopram was increased from 10 mg to 20 mg. However, at the time of admission, he was not taking any medication. For the past several years, he had been eating canned foods and meats with no fresh fruits or vegetables. Given his exam findings in the presence of a vitamin C-devoid diet, scurvy was suspected and confirmed when a serum vitamin C level was below the level of detection. He was not screened for other micronutrient deficiencies since processed grains and meat contain chromium, copper, selenium, and zinc. Dietary and social work interventions were initiated, and the patient was prescribed supplemental vitamin C. Ascorbic acid 250 mg was administered orally daily starting on the day of admission. His anxiety improved within 36 hours of administration of ascorbic acid. He was seen by a psychiatrist who noted a marked improvement in mood, though there was no documentation of any scoring for depression or anxiety. He was able to be discharged to sub-acute rehabilitation after four days. When visited approximately seven days into his sub-acute rehabilitation stay, he was engaging in psychotherapy and physical therapy, improving social interactions with family, and eating regularly. A primary care visit 22 days after supplementation noted that his motivation was better, with improved mood and interactivity. He was not seen by a psychiatrist for follow-up. He was living with a cousin, and his depressive symptoms improved. The exam at this clinic visit noted the resolution of the abdominal ecchymosis. Follow-up with his primary care physician revealed a significant improvement in the patient's cutaneous symptoms, and he had markedly improved his self-care and outlook. He was well-groomed, interactive, and had made a plan with his cousin to improve his home life.

Discussion

In the modern era of fortified diets, scurvy is rare in industrialized countries, except in patients with a history of alcohol or drug dependence with severely restricted diets devoid of fruits and vegetables. Institutionalized and socially isolated patients represent a population at increased risk of the disease [2,8]. Healthy individuals with normal diets transiently hold approximately 1,500 mg of total body VC, with the highest concentrations found in the cutaneous tissues [1]. If not replenished through diet, this store depletes within one to three months and manifestations of scurvy begin to appear [3]. Vitamin C deficiency impairs collagen synthesis, which weakens blood vessels and makes them more prone to rupture, leading to ecchymosis and other bleeding disorders.
Collagen is an important structural protein in the walls of blood vessels that provides tensile strength and resistance to deformation. Inadequate vitamin C levels lead to impaired collagen synthesis, resulting in weakened vessel walls that are prone to rupture, bleeding, ecchymosis, and other bleeding disorders such as petechiae and purpura [9,10]. Several studies have investigated the relationship between vitamin C deficiency and bleeding disorders. A study conducted on guinea pigs demonstrated that vitamin C deficiency resulted in a reduction of collagen content in their vessel walls, leading to hemorrhages and subcutaneous bleeding [11]. In another study, vitamin C supplementation was shown to improve platelet function and reduce the risk of bleeding in patients with uremia [12]. A systematic review of studies on vitamin C and bleeding disorders concluded that vitamin C deficiency is a significant risk factor for bleeding and that vitamin C supplementation can improve bleeding time and reduce the incidence of bleeding [13]. Clinical manifestations resulting from the friability of tissues and vasculature include gingival hyperplasia and hemorrhage, "corkscrew" body hair with perifollicular hyperkeratosis and perifollicular petechial hemorrhage, and ecchymosis with minor trauma [14]. The mechanism behind the formation of corkscrew hair in scurvy is not well understood. However, it has been suggested that vitamin C plays a role in the regulation of hair follicle development and maintenance, including the production of collagen, a major component of the hair shaft. Corkscrew hair is a characteristic feature of scurvy; it is caused by the weakening of the hair shaft due to vitamin C deficiency, which leads to abnormal keratinization of the hair follicles [15]. The hair becomes thin and brittle, and the hair shafts lose their normal elliptical shape and become twisted, resembling a corkscrew. Vitamin C is required for the hydroxylation of proline and lysine residues in collagen, which stabilizes the triple-helix structure of collagen molecules and promotes the formation of strong and resilient tissues [16]. It is worth noting that corkscrew hair is not specific to scurvy and can also be seen in other conditions that affect the hair shaft, such as trichothiodystrophy, a rare genetic disorder [17]. Prolonged cases of vitamin C deficiency may eventually progress to the notorious breakdown of previously well-healed scars. Additional systemic effects of scurvy are seen in advanced cases, such as edema, diarrhea, xerosis, and anemia, as well as neuropsychiatric prodromal symptoms early in the disease process, including lethargy, classically described as lassitude, and weakness that can be mistaken for depression. Advanced neurologic effects seen later in the disease include peripheral neuropathy and seizures [3]. Anemia is frequently associated with scurvy and may be the result of reduced iron absorption in the absence of dietary VC but also of hemolysis in advanced cases [2,3]. Scurvy is diagnosed clinically and is supported by confirming low serum levels of ascorbic acid. A preferred alternative to plasma vitamin C levels is the measurement of leukocyte vitamin C concentration, as it is a more accurate reflection of long-term intake of vitamin C and tissue stores; however, this test is less commonly available [18].
Treatment of scurvy is by oral or IV vitamin C supplementation, which can show symptomatic improvement over the course of approximately seven days, though initial clinical improvements can be seen in as little as 24 hours [2]. A wide range of replacement doses has been shown to be efficacious; therefore, replacement should be individualized to the patient and can range from 300 mg to 1,000 mg daily for one month [2,14,19]. Scurvy remains an important cause of mortality in the global context despite the perceived ease of prevention and cure. The World Food Programme (WFP) released an update to the Global Report on Food Crises to draw attention to a sharp increase in economic and political instability caused by the COVID-19 pandemic that has exacerbated ongoing climate-related food crises, of which scurvy remains an issue [20].

ChatGPT was used to assist with this case report. The background, case, discussion, and conclusions represent the original work of the authors; ChatGPT was used to find keywords (Figure 5).

FIGURE 5: ChatGPT helps to find keywords

We did not use it for any writing of this article other than to help with the writing style. ChatGPT was not used to make any diagnosis or to develop conclusions (Figure 6).

FIGURE 6: ChatGPT helps find a better phrase when prompted

The limitations of open artificial intelligence are still significant, so all medical case reports will need authors to guide and confirm what is written. ChatGPT uses a database that is several years old, which poses a limitation. Another limitation and risk would be the introduction of any sensitive patient information or sensitive scientific information; in fact, many companies have asked employees not to enter any sensitive or proprietary information into ChatGPT.

Conclusions

It is important to know about and recognize scurvy, as it is still seen in modern times. Indeed, epidemic scurvy is still reported in some areas of the world. Specifically, intensifying regional conflicts have caused catastrophic famine conditions, wherein an extreme lack of food is leading to widespread starvation and death for an estimated 8 million people, with an additional 20 million nearing catastrophic famine conditions. Paradoxically, these famine conditions increasingly occur in regions of the world with the highest-rated soil fertility and are an important indicator of the transcendent sociopolitical dimensions of ancient and modern scurvy. Our patient presented to a North American hospital in an urban setting with acute though non-specific complaints; the initial workup did not clarify the diagnosis until suspicion of scurvy arose after a more thorough psychosocial history was explored. Our patient's case serves as a reminder that scurvy is not limited to the past and can manifest in modern times when mental health conditions and food insecurity intersect.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work.
Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2023-04-27T15:20:12.960Z
2023-04-01T00:00:00.000
{ "year": 2023, "sha1": "2a56274ae513de4cd42cacd23bb01d3ce9819538", "oa_license": "CCBY", "oa_url": "https://assets.cureus.com/uploads/case_report/pdf/142579/20230425-15848-822xoe.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "eacedc9f751240c5a7c12596e58d003492b30c15", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2361350
pes2o/s2orc
v3-fos-license
On the generalized unitary parasupersymmetry algebra of Beckers-Debergh An appropriate generalization of the unitary parasupersymmetry algebra of Beckers-Debergh to arbitrary order is presented in this paper. A special representation realizing the even arbitrary order unitary parasupersymmetry algebra of Beckers-Debergh is analyzed by one dimensional shape invariance solvable models, as well as by 2D and 3D quantum solvable models obtained from the shape invariance theory. In particular, in the special representation it is shown that the isospectrum Hamiltonians consist of the two partner Hamiltonians of the shape invariance theory.

Introduction

Supersymmetry (symmetry between the fermionic and bosonic degrees of freedom) plays an important role [1,2] in analyzing quantum mechanical systems, since it captures remarkable properties including the degeneracy structure of the energy spectrum, the relations among the energy spectra of the various Hamiltonians, etc. In particular, in this theory the energy eigenvalues are necessarily non-negative, and a non-zero (zero) ground state energy is related to broken (unbroken) supersymmetry. Rubakov and Spiridonov [3] were the first to extend supersymmetry to what is called parasupersymmetry, which describes an essential symmetry between bosons and parafermions. It was later realized that the parasupersymmetry presented in Ref. [3] is of order p = 2, and Khare [4,5] generalized the Rubakov-Spiridonov (R-S) parasupersymmetry to arbitrary order p ≥ 1:

$$Q_1^{p+1}=0,\qquad [H,Q_1]=0,\qquad Q_1^{p}Q_1^{\dagger}+Q_1^{p-1}Q_1^{\dagger}Q_1+\cdots+Q_1^{\dagger}Q_1^{p}=2p\,Q_1^{p-1}H. \tag{1}$$

Here, Q_1 and H stand for a parasupercharge and the bosonic Hamiltonian. The relations (1) for p = 1 and p = 2 reduce to the supersymmetry algebra and the Rubakov-Spiridonov (R-S) parasupersymmetry algebra [3], respectively. In addition, the relations (1) describe the unitary parasupersymmetry algebra of arbitrary order p, since Q_1^† is the Hermitian conjugate of the parasupercharge Q_1. So it is evident that similar relations are satisfied under interchange of Q_1 and Q_1^† in (1). Before the extension of the R-S parasupersymmetry algebra to arbitrary order p, remarkable and interesting discussions had been made for the p = 2 case [6][7][8][9]. For instance, the motion of a spin-1 particle along the z-axis in the presence of a magnetic field can be described by a parasupersymmetric Hamiltonian which is obtained from the simple harmonic oscillator and Morse potentials [10], and the problem was also generalized [11] to spin-p/2 particles. Meanwhile, in the context of quantum field theory, the R-S parasupersymmetry algebra of order p = 2 leads to infinite bosonic and parafermionic variables [12]. In a special formulation of R-S parasupersymmetric quantum mechanics [13], the Hamiltonian, for which the energy spectrum cannot be negative, is expressed as an explicit function of the parasupercharge Q_1. However, in general, for the Rubakov-Spiridonov parasupersymmetric theory of arbitrary order, the bosonic Hamiltonian cannot be obtained directly in terms of the parasupercharge Q_1, and the energy eigenvalues are not necessarily non-negative. Moreover, there is no connection between a non-zero (zero) ground-state energy and broken (unbroken) parasupersymmetry. Also, it has been shown [5,14] that there are p − 1 other conserved parasupercharges and p bosonic constants in the R-S parasupersymmetric theory of arbitrary order p.
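For orientation, the p = 1 reduction mentioned above can be written out explicitly. This is a short check based on the Khare form of relations (1) as quoted above, not an additional result of the paper:

$$Q_1^{2}=0,\qquad [H,Q_1]=0,\qquad Q_1Q_1^{\dagger}+Q_1^{\dagger}Q_1=2H,$$

i.e. $H=\tfrac{1}{2}\{Q_1,Q_1^{\dagger}\}$, which is the familiar supersymmetric quantum mechanics up to the normalization of $H$. The non-negativity of the spectrum follows immediately from this anticommutator form — and it is precisely this form that is lost for general $p$, which is why the general theory admits negative eigenvalues.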
In an early work, Infeld and Hull [15] studied the factorization and algebraic solutions of bound state problems, and later Gendenshtein et al. considered the subject in the framework of the shape invariance symmetry as an important aspect of solvability of a wide range of 1D quantum mechanical models. It must be mentioned that factorization and the shape invariance symmetry have provided a very helpful approach to the representation of the supersymmetry theory [16][17][18][19][20][21][22][23][24]. Recently, most of the one dimensional shape invariant solvable quantum mechanical models have been classified into two bunches. The first bunch [25] includes models for which the shape invariance parameter is the main quantum number n. On the other hand, in the second bunch [11] the shape invariance parameter of the models is the secondary quantum number m. Meanwhile, it has been shown in Ref. [11] that the R-S parasupersymmetry algebra of arbitrary order p is realized by the shape invariant quantum mechanical models, so that the algebra can be represented by the quantum mechanical states of the models. For realizing the algebra, it has also been shown that the bosonic Hamiltonian involves p + 1 isospectrum Hamiltonians. This fact has been studied in detail for the second bunch of shape invariant models in Ref. [11]. In fact, the p + 1 isospectrum Hamiltonians are obtained by adding p appropriate constants to the factorized Hamiltonians, and by adding the corresponding constant of the last Hamiltonian to its partner Hamiltonian, i.e. (1/2)B(p)A(p), as well. Here, the operators B(l) and A(l), l = 1, 2, ..., p, are the raising and lowering operators of the quantum states of the shape invariance theory, respectively. One of the other successes of the R-S parasupersymmetry theory of arbitrary order p is the fact that it can be realized [26][27][28] by the 2D and 3D solvable quantum mechanical models obtained from the shape invariance symmetry. In 1990 another formulation of the unitary parasupersymmetry algebra of order p = 2 was introduced by Beckers and Debergh [29] as follows:

$$Q_1^{3}=0,\qquad [H,Q_1]=0,\qquad \big[Q_1,[Q_1^{\dagger},Q_1]\big]=2\,Q_1 H. \tag{2}$$

The appropriate form of the parasupersymmetric Hamiltonian appearing in Eqs. (2) was constructed in Ref. [29] by using the generators of the simple harmonic oscillator, and the algebra (2) was represented by the quantum states of the simple harmonic oscillator. But a generalization of the Beckers-Debergh (B-D) parasupersymmetry algebra to arbitrary order p ≥ 2 which is satisfied by quantum mechanical systems has not been found yet. In general, the R-S parasupersymmetry algebra has been much more successful than the B-D parasupersymmetry algebra. Nevertheless, for the B-D parasupersymmetry algebra of order p = 2, useful discussions have been carried out. For example, Mostafazadeh has proved [30] that for both the Rubakov-Spiridonov and Beckers-Debergh formulations of parasupersymmetric quantum mechanics of order p = 2, the degeneracy structure of the energy spectrum can be derived using a thorough analysis of the parasupersymmetry algebra. He showed that the result is independent of the details of the Hamiltonian; for example, the degeneracy structure is not related to the dimension of the coordinate manifold on which the Hamiltonian is defined. Moreover, it has been shown that in general the Rubakov-Spiridonov (R-S) and Beckers-Debergh (B-D) systems possess identical degeneracy structures [30].
Also, similar to the Witten index of supersymmetric quantum mechanics, a new set of topological invariants has been obtained for the two kinds of (p = 2) parasupersymmetric systems (B-D and R-S) [30,31]. In this paper we intend to generalize the unitary parasupersymmetry algebra of B-D. Also, we introduce a special representation of the B-D unitary parasupersymmetry algebra of even arbitrary order by 1D shape invariance solvable models, and by some 2D and 3D quantum solvable models as well. Meanwhile, the B-D unitary parasupersymmetry algebra of even arbitrary order p = 2k is introduced with 2k independent conserved parasupercharges.

Towards an appropriate generalization of the B-D parasupersymmetry algebra

In order to generalize the B-D unitary parasupersymmetry algebra, we shall take into account the parafermionic operators b and b† of arbitrary order p which have been used by Khare [4,5]. In fact, in moving from statistics to parastatistics, the parafermionic operators are represented by the following (p + 1) × (p + 1) matrices:

$$(b)_{\alpha\beta}=C_{\alpha}\,\delta_{\beta,\alpha+1},\qquad (b^{\dagger})_{\alpha\beta}=C_{\beta}\,\delta_{\alpha,\beta+1},\qquad \alpha,\beta=1,\dots,p+1, \tag{3}$$

where the coefficients C_j are given by

$$C_j=\sqrt{j(p-j+1)}. \tag{4}$$

It can be easily shown that

$$b^{p+1}=(b^{\dagger})^{p+1}=0. \tag{5}$$

Using the parafermionic operators b and b†, one may obtain a spin-p/2 representation of the group SU(2). Indeed, by defining the operators J± and J3 as

$$J_{+}=b^{\dagger},\qquad J_{-}=b,\qquad J_{3}=\tfrac{1}{2}\,[b^{\dagger},b], \tag{6}$$

the following commutation relations corresponding to the Lie algebra SU(2) may be derived:

$$[J_{3},J_{\pm}]=\pm J_{\pm},\qquad [J_{+},J_{-}]=2J_{3}. \tag{7}$$

It is noticed that the commutation relation of the parafermionic operators b and b† is proportional to the third component of the spin-p/2 representation of the Lie group SU(2). One of the well-known and important properties of the parafermionic operators is [4,5]

$$b^{p}(b^{\dagger})^{p}\,b^{p}=(p!)^{2}\,b^{p}. \tag{8}$$

Now it can be verified that the parafermionic operators b and b† of order p possess new multilinear structural relations (9) for p ≥ 2, in which the constants C_p^l, l = 0, ..., p, are the Newton binomial expansion coefficients. Using the Baker-Hausdorff formula for two arbitrary operators A and B, which is given by

$$e^{A}Be^{-A}=B+[A,B]+\frac{1}{2!}\,[A,[A,B]]+\frac{1}{3!}\,[A,[A,[A,B]]]+\cdots, \tag{10}$$

the structural relations (9) between the operators b and b† may be rewritten as the relations (11). It is seen that the multilinear expressions on the left-hand sides of the relations (11) are expressed in terms of the parafermionic operators b and b† on the right-hand sides. Actually, the multilinear relations (11) between the parafermionic operators b and b† suggest similar relations between the parasupercharges of parasupersymmetric quantum mechanics of order p. The relations (8) and (11) indicate that there exist conserved parasupercharges Q_1 and Q_1^† of order p, and a bosonic Hamiltonian H, which generate the parasupersymmetry algebra given as the relations (12). It can be easily verified that the relations (12) for any arbitrary p are closed under Hermitian conjugation. Meanwhile, the algebraic relations (2) are a special case (p = 2) of the relations (12). Therefore, the relations (12) are an appropriate generalization of the B-D unitary parasupersymmetry algebra with the bosonic Hamiltonian H and the parafermions of arbitrary order p, i.e. Q_1 and Q_1^†.

3 A special representation for realizing the even arbitrary order unitary parasupersymmetry algebra of B-D by quantum solvable models

In this section we analyze a special representation of the B-D quantum mechanical unitary parasupersymmetry algebra of even arbitrary order p = 2k by a wide range of 1D, 2D and 3D solvable models. In fact, the 1D models are the solvable models obtained from the two approaches of factorization with respect to the main and secondary quantum numbers, i.e. n and m.
On the other hand, the 2D and 3D models representing the algebraic relations (12) for p = 2k are some known quantum mechanical models on the homogeneous manifolds SL(2, c)/GL(1, c) and the group manifolds SL(2, c). Now, in order to obtain the mentioned representations, we first introduce only the two bunches of shape invariance models which have been classified before [11,25]. In master function theory, a function A(x), which is at most of second order in x, and a non-negative weight function W(x) defined in an interval (a, b) may be chosen so that (1/W(x))(d/dx)(A(x)W(x)) is a polynomial of at most first order. For a given master function A(x) and its corresponding weight function W(x), it has been shown that the eigenvalue equations of the one dimensional partner Hamiltonians corresponding to the first bunch of superpotentials, obtained from the factorization with respect to the main quantum number n, are [25]

$$B(n)A(n)\psi_{n}(\theta)=E(n)\psi_{n}(\theta),\qquad A(n)B(n)\psi_{n-1}(\theta)=E(n)\psi_{n-1}(\theta), \tag{13}$$

where the variable θ is introduced by means of solving a first order differential equation (14). The energy spectrum E(n) and the wave functions ψ_n(θ) are given by the relations (15); note that the prime symbol indicates the derivative with respect to x. Moreover, the explicit forms of the raising and lowering operators corresponding to the main quantum number n are given by Eqs. (16) and (17), respectively. The change of variable x = x(θ) is substituted in the relations (16) and (17) by solving the first order differential equation (14). Now, by choosing suitable normalization coefficients a_n for the wave functions ψ_n(θ), one may write down the shape invariance equations (13) as the raising and lowering relations (18). The potentials like Coulomb, Rosen-Morse I, Rosen-Morse II and Eckart are involved in the first bunch of solvable models.

In master function theory, the eigenvalue equations of the one dimensional partner Hamiltonians corresponding to the second bunch of superpotentials, obtained from the factorization with respect to the secondary quantum number m, are given by [11]

$$B(m)A(m)\psi_{n,m}(\theta)=E(n,m)\psi_{n,m}(\theta),\qquad A(m)B(m)\psi_{n,m-1}(\theta)=E(n,m)\psi_{n,m-1}(\theta). \tag{19}$$

In the factorization equations (19), the variable θ is introduced by solving a first order differential equation (20). The energy spectrum and the eigenfunctions of the partner Hamiltonians obtained from the factorization with respect to the secondary quantum number m are given by the relations (21), where m = 0, 1, 2, ..., n. The explicit forms of the raising and lowering operators corresponding to the secondary quantum number m are given by Eqs. (22) and (23). This time, the change of variable x = x(θ) is substituted in the Eqs. (22) and (23) by solving the first order differential equation (20). Similarly to Eqs. (18), the shape invariance equations (19) can be written as raising and lowering relations (24). The potentials like 3D harmonic oscillator, Scarf I, Scarf II, Natanzon and generalized Pöschl-Teller are included in the second bunch of solvable models. Moreover, the simple harmonic oscillator and Morse potentials belong to both bunches of solvable models. Now let us analyze a special realization of the B-D unitary parasupersymmetry algebra of even arbitrary order p = 2k by means of the one dimensional quantum mechanical solvable models obtained from the factorization with respect to the secondary quantum number m. A similar procedure can be carried out by means of the one dimensional quantum mechanical models obtained from the shape invariance with respect to the main quantum number n.
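Before the explicit construction below, it helps to see the parafermionic building blocks of Sec. 2 at the lowest even order. The following is a minimal worked example, assuming the normalization C_j = √(j(p−j+1)) of Eq. (4); it is an illustration only, not part of the paper's derivation. For p = 2 the operators are 3 × 3 matrices:

$$b=\begin{pmatrix}0&\sqrt{2}&0\\0&0&\sqrt{2}\\0&0&0\end{pmatrix},\qquad b^{\dagger}=\begin{pmatrix}0&0&0\\\sqrt{2}&0&0\\0&\sqrt{2}&0\end{pmatrix},$$

so that $b^{3}=(b^{\dagger})^{3}=0$, and a direct computation gives

$$J_{3}=\tfrac{1}{2}\,[b^{\dagger},b]=\mathrm{diag}(-1,0,1),\qquad [J_{+},J_{-}]=2J_{3},\qquad [J_{3},J_{\pm}]=\pm J_{\pm},$$

which is the spin-1 (i.e. spin-p/2 with p = 2) representation of SU(2) referred to above.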
In order to realize the B-D unitary parasupersymmetry algebra of even arbitrary order p = 2k, one may define the parafermionic generators Q_1 and Q_1^† of order p = 2k, and the bosonic operator H, as the (2k + 1) × (2k + 1) matrices of Eqs. (25), where m = 1, 2, ..., n. It is evident that by choosing the definitions (25a) and (25b) for Q_1 and Q_1^†, the relations (12c) are satisfied automatically. To satisfy the Eqs. (12a) and (12b), using the definitions (25) in them we obtain the Eqs. (26). Considering the relevant operator identities, the Eqs. (26) then give the results (27). Meanwhile, by using the definitions (25), the Eqs. (12d) and (12e) lead to the relations (28a) and (28b), respectively. It is noticed that the Eqs. (27) satisfy the relations (28a) and (28b); additionally, in order to determine the remaining components of the bosonic Hamiltonian H, it is sufficient to substitute the relations (27) into the recursion Eqs. (28a) and (28b). Then one obtains consistent solutions, which are the same for the Eqs. (28a) and (28b). This result shows that the isospectrum Hamiltonians of the B-D unitary parasupersymmetry theory of even arbitrary order p = 2k are the two partner Hamiltonians (1/2)A(m)B(m) ((k + 1) times) and (1/2)B(m)A(m) (k times) of the shape invariance theory, with the energy spectrum (1/2)E(n, m). Clearly, the (2k + 1) × 1 column matrix (30), built from the eigenfunctions ψ_{n,m}(θ) and ψ_{n,m−1}(θ), serves as a representation basis. The representation of the parafermionic generators Q_1 and Q_1^† on the basis (30), obtained by using the Eqs. (24), has the forms given in the relations (32). It is easily seen that the states Q_1^l Ψ(θ) and (Q_1^†)^l Ψ(θ) (l = 1, 2, ..., p = 2k) are eigenfunctions of the bosonic Hamiltonian H. If we consider the first bunch of shape invariance models, we are able to construct the parafermionic generators by means of the raising and lowering operators B(n) and A(n); therefore, we can obtain the bosonic Hamiltonian H including the two independent partner components (1/2)A(n)B(n) and (1/2)B(n)A(n) with the same energy spectrum (1/2)E(n). In this case, the basis of the representation of the B-D unitary parasupersymmetry algebra of even arbitrary order p = 2k is constructed by the eigenfunctions ψ_n(θ) and ψ_{n−1}(θ). The generators L_+, L_−, L_3 and I satisfy the commutation relations of the Lie algebra gl(2, c) (Eqs. (34)). The change of variable x = x(θ), which is used in the Eqs. (34), is obtained by solving the differential Eq. (20). It has been shown in Ref. [27] that the Casimir operator of the generators L_+, L_−, L_3 and I is the Hamiltonian of a charged particle on the homogeneous manifolds SL(2, c)/GL(1, c) in the presence of a magnetic monopole, with degeneracy group GL(2, c). The wave functions ψ_{n,m}(θ, φ), which represent the Lie algebra gl(2, c), describe the two dimensional quantum states of the charged particle on the homogeneous manifolds SL(2, c)/GL(1, c) in the presence of the magnetic monopole, and they are given by ψ_{n,m}(θ, φ) = e^{imφ} ψ_{n,m}(θ). Now it can easily be shown that the quantum states ψ_{n,m}(θ, φ) also represent the B-D unitary parasupersymmetry algebra of arbitrary order p = 2k. In order to show this, it is sufficient to define the parafermionic generators Q_1 and Q_1^† of order p = 2k and the bosonic operator H as in Eqs. (39a) and (39b), together with

(H)_{ll′} := δ_{l,l′} H_l ,  l, l′ = 1, 2, ..., 2k + 1.  (39c)

Once again, the relations (39a) and (39b) satisfy the relation (12c) automatically. Using the definitions (39a), (39b) and (39c), the Eqs. (12a) and (12b) lead to the results (40).
The Eqs. (12d) and (12e), by using the definitions (39) and the Eqs. (40), lead to the corresponding relations. Therefore, the operator H of the B-D unitary parasupersymmetry algebra of arbitrary order p = 2k has the two Hamiltonian components (1/2)L_−L_+ ((k + 1) times) and (1/2)L_+L_− (k times) on the homogeneous manifolds SL(2, c)/GL(1, c), with the same energy spectrum (1/2)E(n, m). The representation basis of the parasupersymmetry algebra is the (2k + 1) × 1 column matrix built from the quantum states ψ_{n,m}(θ, φ), which describe the motion of the particle on the homogeneous manifolds SL(2, c)/GL(1, c); the eigenvalue equation of the parasupersymmetric Hamiltonian follows accordingly. The representation of the parafermionic generators Q_1 and Q_1^†, using the representation of the Lie algebra gl(2, c) given in the relations (37), is the relations (32); the only difference is that we must substitute the quantum states ψ_{n,m−1}(θ, φ) and ψ_{n,m}(θ, φ) for ψ_{n,m−1}(θ) and ψ_{n,m}(θ), respectively. By choosing the generators of the Lie algebra gl(2, c) in terms of three variables, given in Ref. [28], we have taken into account the solvable quantum models on the group manifolds SL(2, c). In this case, as for the two dimensional models, it can also be shown that the three dimensional solvable quantum models on the group manifolds SL(2, c) described in Ref. [28] represent the B-D unitary parasupersymmetry algebra of arbitrary order p = 2k. It is also noticed that the commutation relations of the parasupercharges with the bosonic constants close on the parasupercharges, that is, [I_s, Q_r] = Σ_l d_l Q_l with s = 1, 2, 3, ..., 2k + 1 and r = 1, 2, 3, ..., 2k, where the coefficients d_l are constants (the relations (49)). Similar relations exist for the Hermitian conjugates of the parasupercharges, obtained by taking the Hermitian conjugate of the relations (49). The bosonic constants I_s and the parasupercharges Q_r satisfy mixed multilinear relations which are a generalization of the relations (45a) and (45b). For example, one may introduce the B-D unitary parasupersymmetry algebra of order p = 2 with two parasupercharges and three bosonic constants. Clearly, the three generators H, Q_1, Q_1^† and also the three generators H, Q_2, Q_2^† satisfy separately the algebraic relations (45) with k = 1. Moreover, we have
2014-10-01T00:00:00.000Z
2002-03-26T00:00:00.000
{ "year": 2002, "sha1": "a7e61e721a33842474227ad907bcac535d13c777", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/0203240", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "32c337c3e8e5aab8b057832cc53b5c57d782990b", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
119463204
pes2o/s2orc
v3-fos-license
A comparative study of different exchange-correlation functionals in understanding structural, electronic and thermoelectric properties of Fe$_{2}$VAl and Fe$_{2}$TiSn compounds Fe$_{2}$VAl and Fe$_{2}$TiSn are full Heusler compounds with a non-magnetic ground state. The two compounds are good thermoelectric materials. PBE and LDA (PW92) are the two most commonly used density functionals to study the Heusler compounds. Along with these two well-studied exchange-correlation functionals, the recently developed PBEsol, mBJ and SCAN functionals are employed to study the two compounds. Using the five functionals, the equilibrium lattice parameter and bulk modulus are calculated. Obtained values are compared with experimental reports wherever available. Electronic structure properties are studied by calculating dispersion curves and total and partial density of states. For Fe$_{2}$VAl, a band gap of 0.22 eV is obtained from the mBJ potential, which is in reasonable agreement with the experimental value, while for Fe$_{2}$TiSn a band gap of 0.68 eV is obtained. Fe$_{2}$VAl is predicted to be semimetallic, with different values of negative gaps, from the LDA, PBEsol, PBE and SCAN functionals, whereas Fe$_{2}$TiSn is found to be semimetallic (semiconducting) from the LDA and PBEsol (PBE and SCAN) calculations. From the dispersion curves, effective mass values are also computed to assess the contribution to the Seebeck coefficient. In Fe$_{2}$TiSn, a flat band is present along the $\Gamma$-X direction, with a calculated effective mass $\sim$36 times the electron mass. The improvements or inadequacies among the functionals in explaining the properties of full Heusler alloys for thermoelectric application are thus observed through this study.

I. INTRODUCTION

The electronic band structure of a material describes the occupation of electrons at different energy levels in that material. The electronic structure can be studied through experimental methods like photoemission spectroscopy as well as through theoretical methods. Electrical conductivity, thermal conductivity and the Seebeck coefficient are transport properties of a material. For a thermoelectric material, the efficiency is determined by the figure of merit (ZT), which is governed by these transport properties.1,2 The transport properties can be well explained by understanding the electronic structure of the material. To improve and modify a material into an efficient thermoelectric of practical application, a good understanding of its electronic structure is essential. So the methods of electronic structure analysis should be accurate enough to model the material with physical accuracy. The first-principles density functional theory (DFT)3 method is the most widely used method for the theoretical evaluation of the electronic structure of periodic solids. In the Kohn-Sham (KS)4 form of DFT, the KS equation is solved self-consistently for the one-electron wave functions. In the KS equation, the electron-electron interaction part is approximated by the exchange-correlation potential, and the limitation of DFT in exactly modelling the electronic structure is introduced from this part. Consequently, there is extensive research into the development of electron exchange-correlation functionals with better approximations. Thus, many density functionals exist today, each with its own merits and demerits. A limitation of DFT functionals is that they may be quite accurate for some physical properties while inaccurate for other physical properties of the same material.5
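For reference, the KS scheme just described amounts to solving the standard one-electron equation self-consistently (textbook form in Hartree atomic units, quoted here for orientation and not specific to the present work):

$$\Big[-\tfrac{1}{2}\nabla^{2}+v_{\mathrm{ext}}(\mathbf{r})+v_{\mathrm{H}}(\mathbf{r})+v_{\mathrm{xc}}(\mathbf{r})\Big]\varphi_{i}(\mathbf{r})=\varepsilon_{i}\,\varphi_{i}(\mathbf{r}),\qquad v_{\mathrm{xc}}(\mathbf{r})=\frac{\delta E_{\mathrm{xc}}[n]}{\delta n(\mathbf{r})},$$

where the external, Hartree and exchange-correlation potentials enter additively; the five functionals compared in this work differ only in how they approximate $E_{\mathrm{xc}}[n]$ and hence $v_{\mathrm{xc}}$.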
Some functionals are constructed for specific applications. The local density approximation of Perdew and Wang (1992) (LDA-PW92)6 and the generalized gradient approximation of Perdew-Burke-Ernzerhof (GGA-PBE)7 are the two most widely used functionals in first-principles DFT calculations. In the LDA, the local exchange-correlation potential is defined as the exchange potential for a spatially uniform electron gas with the same density as the local electron density.5 LDA is less suitable for predicting the properties of atoms and molecules, since the density is not slowly varying in atoms and molecules. In GGA functionals, the electron density is described using both the local electron density and the gradient of the electron density. PBE is an improved description of the local spin density approximation for atoms and molecules.7 GGA calculations are found to improve upon LDA for atomization energies of molecules and enthalpies of formation derived from the atomization energy.8 But for nonmolecular solids, the lattice parameters calculated by PBE are not found to improve. Also, it is known that LDA underestimates the lattice constant, while PBE overestimates it. The PBEsol functional was proposed through a restoration of the density-gradient expansion in PBE.9 This functional is intended to provide accurate values of equilibrium properties for solids and their surfaces. Also, PBEsol is supposed to give better values of the lattice constant in densely packed solids and in solids under pressure. The mBJ potential was proposed by Tran and Blaha by modifying the exchange potential originally put forth by Becke and Johnson. This semilocal potential is claimed to yield accurate band gaps for semiconductors and insulators, and it is less expensive than hybrid and GW calculations.10 SCAN is a fully constrained semilocal meta-GGA approximation.11 This functional is expected to be a significant improvement over the PBE, PBEsol and LDA functionals at nearly the same computational cost. A number of studies on full Heusler alloys using the LDA, PBE or mBJ functionals are available in the literature, but a comparative study of full Heusler alloys with different exchange-correlation functionals, which influence the predicted thermoelectric behavior, is missing. Fe2VAl and Fe2TiSn are compounds belonging to the class of full Heusler alloys with formula unit of the form X2YZ, where X and Y are transition metal elements and Z is a main group element.12 The two compounds have a non-magnetic ground state, with the Slater-Pauling rule for these full Heusler alloys giving zero magnetic moment.13 The two compounds are good thermoelectric materials; their properties have been studied experimentally as well as using first-principles calculations. Experimentally, Nishino et al. reported that Fe2VAl-based full Heusler alloys show a large power factor (PF = S²σ, where S is the Seebeck coefficient and σ is the electrical conductivity), considerably more than that of the conventional thermoelectric material Bi2Te3.14 In first-principles DFT based studies of full Heusler compounds, the LDA of Perdew and Wang (1992) and PBE are the more commonly used functionals to investigate the thermoelectric properties. Markus Meinert investigated the properties of full and half Heusler alloys using the modified Becke-Johnson potential.12 Sharma et al. used the PBEsol exchange-correlation functional to study the thermoelectric properties of full Heusler alloys and showed the possibility of their synthesis in the laboratory.15
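For completeness, the transport quantities mentioned above combine into the standard dimensionless figure of merit (textbook definition, stated here for orientation):

$$ZT=\frac{S^{2}\sigma\,T}{\kappa_{e}+\kappa_{l}},$$

so a large power factor $S^{2}\sigma$ raises $ZT$ only if the electronic ($\kappa_{e}$) and lattice ($\kappa_{l}$) contributions to the thermal conductivity remain small.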
In the present work, taking up Fe2VAl and Fe2TiSn as representatives of the non-magnetic class of full Heusler alloys, we are interested in checking the suitability of these exchange-correlation functionals for the calculation of the properties of full Heusler alloys with non-magnetic ground states. The properties approximated from the new functionals are compared with those from the well-established functionals (LDA, PBE) used to study this kind of compounds. We have employed five exchange-correlation functionals to study: (i) structural properties of the two Heusler compounds; lattice constant and bulk modulus values are extracted from energy versus volume curves, and the obtained values are compared with the available experimental data; (ii) dispersion curves and total and partial density of states, calculated to study the electronic structure. General features and differences in the electronic structure predicted from different functionals are discussed. For the two compounds, effective mass values are computed from the dispersion curves using the parabolic approximation. The values of the effective mass are used to give an idea of the contribution to the Seebeck coefficient.

II. COMPUTATIONAL DETAILS

The calculations are performed using the full-potential linearized augmented plane wave (FPLAPW) method as implemented in the WIEN2k16 program for calculating crystal properties within density functional theory. For the exchange-correlation part, five different functionals are used, viz. the LDA of Perdew-Wang 1992 (LDA),6 the GGA of Perdew-Burke-Ernzerhof (PBE),7 and the newly developed PBEsol,9 mBJ,10 and SCAN.11 In the case of mBJ, LDA is used for the correlation part together with the modified Becke-Johnson (mBJ) potential for the exchange part. The muffin-tin radii R_MT used for the volume optimization calculations are (i) for Fe2VAl: 2.26 bohr for Fe, 2.15 bohr for V and 2.04 bohr for Al; (ii) for Fe2TiSn: 2.32 bohr for Fe, 2.26 bohr for Ti and 2.32 bohr for Sn. A k-mesh of size 10×10×10 is used for both the volume optimization and the electronic structure calculations. Self-consistency in the total energy/cell is achieved by setting a convergence criterion of 0.1 mRy. The equilibrium lattice constants are computed by fitting the total energy versus volume data of the unit cell to the Birch-Murnaghan (BM) equation of state.17 The third-order BM isothermal equation of state is given by the formula

$$E(V)=E_{0}+\frac{9V_{0}B_{0}}{16}\left\{\left[\left(\frac{V_{0}}{V}\right)^{2/3}-1\right]^{3}B_{0}^{\prime}+\left[\left(\frac{V_{0}}{V}\right)^{2/3}-1\right]^{2}\left[6-4\left(\frac{V_{0}}{V}\right)^{2/3}\right]\right\},$$

where E is the energy, V is the volume, E_0 is the equilibrium energy, B_0 is the equilibrium bulk modulus, V_0 is the equilibrium volume of the unit cell and B_0′ is the pressure derivative of the bulk modulus at the equilibrium volume. The volume optimization process is carried out by varying the lattice parameters in a fixed ratio. Fe2VAl and Fe2TiSn have the space group Fm-3m and are found to crystallise in the cubic L21 structure. Denoting these two full Heusler compounds as Fe2YZ, where Y = V, Ti and Z = Al, Sn in that order, the Fe atoms occupy the Wyckoff position 8c (1/4, 1/4, 1/4), the Y atoms occupy the Wyckoff position 4a (0, 0, 0) and the Z atoms occupy the Wyckoff position 4b (1/2, 1/2, 1/2). We employed five different exchange-correlation functionals to study these compounds. The results are discussed in the sections below.

A. Structural properties evaluation

In order to find the theoretical lattice constants of Fe2VAl and Fe2TiSn, calculations are carried out for several volumes corresponding to different lattice constants, employing the five exchange-correlation potentials mentioned in section II. The obtained values of the total energy are plotted as a function of volume.
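A fit of this type can be sketched in a few lines. The following is a minimal illustration with synthetic data, not the authors' script; the numbers are hypothetical (chosen merely to resemble a Fe2VAl-like primitive cell), and only the fitting logic matters:

```python
# Minimal sketch: extract equilibrium volume V0 and bulk modulus B0 by
# fitting synthetic E(V) points to the third-order Birch-Murnaghan EOS.
import numpy as np
from scipy.optimize import curve_fit

RY_PER_BOHR3_TO_GPA = 14710.5   # 1 Ry/bohr^3 expressed in GPa
BOHR_TO_ANG = 0.529177

def birch_murnaghan(V, E0, V0, B0, B0p):
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * B0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
    )

# Hypothetical total energies (Ry) vs primitive-cell volumes (bohr^3)
V = np.linspace(300.0, 350.0, 9)
E = birch_murnaghan(V, -13000.0, 323.0, 0.0145, 4.5)

popt, _ = curve_fit(birch_murnaghan, V, E, p0=[E.min(), V.mean(), 0.01, 4.0])
E0, V0, B0, B0p = popt
# fcc lattice: the conventional cubic cell is 4x the primitive-cell volume
a0 = (4.0 * V0) ** (1.0 / 3.0) * BOHR_TO_ANG
print(f"a0 = {a0:.3f} Angstrom, B0 = {B0 * RY_PER_BOHR3_TO_GPA:.0f} GPa")
```

With these made-up inputs the fit simply recovers the input parameters, a0 ≈ 5.76 Å and B0 ≈ 213 GPa; real WIEN2k E(V) points would be substituted for the synthetic array E.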
The calculated data are fitted using the Birch-Murnaghan (BM) parameters, and the fitted curve gives the equilibrium lattice constant and bulk modulus value. The reported experimental lattice constants are 5.762 Å18 and 6.074 Å19 for Fe2VAl and Fe2TiSn, respectively. These experimental values of the lattice constants are used to construct the initial crystal structures of the two compounds. The energy versus volume curves for the two compounds computed using the five exchange-correlation functionals are shown in Fig. 1. In the figure, the symbols correspond to the calculated values of the energy as a function of volume and the lines to the BM fit to the calculated data. The dashed line perpendicular to the volume axis corresponds to the volume of the experimental lattice constant. The energy axis is E − E0, the difference between the volume-dependent energy E and the energy E0 corresponding to the equilibrium volume. From the figure, the qualitative shifts of the calculated values from the experimental value can be made out. For the two compounds, the large deviation of the LDA lattice constants is noticeable. The PBE calculated values of the optimized lattice constants lie close to the experimental lattice constant (nearer to the dashed line). For the other three potentials, the energy-volume parabolas lie in between the PBE and LDA calculated values, and the values are close to each other. The calculated values of the optimized lattice constant a0 and bulk modulus B0 for the compounds are tabulated in Table 1, which supports the behavior of the curves in Fig. 1.20 The PBE calculated value agrees with the experimental value to within 1 % and, out of all five functionals, shows the best agreement with the experimental value. PBEsol, mBJ and SCAN give 1.98, 1.92 and 1.93 % reductions from the experimental value, respectively. The PBEsol approximation,9 which is specially constructed for the calculation of the lattice constant and dependent properties of solids and solid surfaces, is found less suitable for full Heusler compounds, because the lattice constant it yields for this compound is smaller than the PBE, mBJ and SCAN calculated values. For Fe2TiSn, the lattice constant calculated using LDA is 2.70 % less than the experimental value; the LDA value is also the lowest of all the values found using the functionals. The PBE calculated value of the lattice constant (6.0423 Å) is close to the experimental value, with a deviation of only 0.52 %. For both compounds, it can also be observed that the last two functionals in the table produce nearly the same values and show an improvement over the PBEsol value with respect to the lattice constant. The table confirms the suggestion of Sun et al.11 that the SCAN functional is an improvement over LDA and PBEsol, but our calculated values show no improvement over PBE for the full Heusler alloys, as the PBE functional gives a very good value of the lattice constant even though it produces the lowest value of the bulk modulus. The values of the equilibrium bulk modulus calculated for these two compounds are also tabulated in Table 1; here we report the bulk modulus for the three new functionals as well. It can be observed that, as in the case of the lattice constant, the B0 values from the last three functionals in the table lie close to each other for both compounds. Experimental values of the bulk modulus for these two compounds are not yet available for comparison.
Thus, by observing the trends in the lattice constant and bulk modulus values of these two full Heusler compounds, we can say that the relatively new functionals mBJ and SCAN are nearly equivalent approximations for the structural-properties evaluation of full Heusler compounds. The reason for the overestimation of B_0 by LDA and its underestimation by PBE is as follows. The bulk modulus is calculated from B = V (d^2E/dV^2), where V is the volume of the unit cell and E is the energy per unit cell. The term d^2E/dV^2 gives the curvature of the energy-versus-volume parabolas of Fig. 1. In the figure, the curvature of the LDA curve is larger than that of the PBE curve for both compounds; inserting the curvature and volume values into the expression for the bulk modulus makes the reason for this behavior clear.

B. Electronic structure analysis

To assess how the five exchange-correlation potentials describe the non-magnetic ground-state properties of the full Heusler compounds, we have carried out an electronic-structure analysis. Using the optimized lattice parameters, dispersion curves and densities of states are computed for each compound with all five exchange-correlation functionals. The dispersion curves calculated along the high-symmetry k-points in the first Brillouin zone for selected functionals are shown in Fig. 2; the first and second rows of the figure show the dispersion curves for Fe2VAl and Fe2TiSn, respectively. Fig. 2(a) shows the dispersion curves for Fe2VAl obtained using the PBEsol functional. The conduction band (CB) bottom at the X-point crosses E_F and is lower in energy than the valence band (VB) at the Γ-point, suggesting that the compound is semimetallic in nature. The direct gap is ~0.36 eV above the VB maximum at the Γ-point. Also, the CB minimum is very close to the second VB maximum, with an energy gap of ~20 meV. The negative band gap is defined as E_neg = E_c,min - E_v,max, where E_c,min and E_v,max are the conduction- and valence-band extrema, respectively [23]. The value of the negative gap (or pseudogap) from the PBEsol calculation is -0.20 eV. The curves are triply degenerate at the top of the VB. Along the Γ-X direction this degeneracy is lifted: the bands become doubly degenerate and then non-degenerate at the X-point; similar behavior occurs along the Γ-L direction. The bands are also doubly degenerate at the CB bottom at the Γ-point; this degeneracy is lifted along the Γ-X direction but maintained along the Γ-L direction. The LDA band structure of the compound shows similar behavior (not shown in the figure), with an observed negative band gap of -0.24 eV, again implying a semimetallic nature. Thus, LDA and PBEsol give similar band structures, and no substantial improvement of PBEsol over LDA is found; both LDA and PBEsol are known to underestimate the band gap [24,25]. In Fig. 2(c), the dispersion curves of Fe2VAl calculated with the SCAN exchange-correlation functional are presented. A pseudogap forms through the overlap of the VB maximum at the Γ-point and the CB minimum at the X-point; its value in this case is -0.16 eV. The direct gap above the VB maximum at the Γ-point is 0.41 eV. The PBE calculation gives a pseudo band gap of -0.13 eV, close to that obtained from SCAN, and the general features of the PBE bands (not shown in the figure) are similar to those of SCAN.
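The sign convention for the (pseudo)gap used above is easy to make concrete: the sketch below evaluates E_neg = E_c,min - E_v,max from arrays of band-edge energies. The sample values are illustrative and merely mimic the PBEsol result quoted above.

```python
# Sketch: classify a band structure as gapped or semimetallic from the
# conduction-band minimum and valence-band maximum along a k-path.
import numpy as np

def gap_from_bands(valence, conduction):
    """valence/conduction: band energies (eV) relative to E_F."""
    e_v_max = np.max(valence)
    e_c_min = np.min(conduction)
    return e_c_min - e_v_max  # negative => band overlap (semimetal)

# Illustrative numbers mimicking the PBEsol result for Fe2VAl:
# VB maximum at Gamma ~ +0.10 eV, CB minimum at X ~ -0.10 eV.
vb = np.array([0.10, -0.35, -0.80])
cb = np.array([-0.10, 0.55, 1.20])

e_neg = gap_from_bands(vb, cb)
print(f"gap = {e_neg:+.2f} eV ->", "semimetal" if e_neg < 0 else "semiconductor")
```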
Thus, the dispersion curves calculated using both PBE and SCAN predict a semimetallic nature for Fe2VAl. The dispersion curve for Fe2VAl is also calculated using the mBJ potential, which is constructed to give accurate band gaps of semiconductors and insulators [10]; it is shown in Fig. 2(b). The VB edge is at the Γ-point and the CB bottom is at the X-point, so that mBJ opens an indirect band gap of 0.22 eV for this compound. For Fe2TiSn, the mBJ potential is likewise found to enhance the band gap: the enhanced indirect band gap compared to the other two functionals is seen in Fig. 2(e), and the indirect band gap obtained from mBJ is 0.68 eV. We have not come across any experimental band-gap value for the Fe2TiSn Heusler compound. Thus, PBE, SCAN and mBJ open the band gap, shifting it in energy to a higher value by different amounts, while PBEsol and LDA underestimate the gap, producing a zero gap for Fe2TiSn. For the Fe2VAl Heusler compound, Bilc et al. used the B1-WC hybrid functional to study the electronic and thermoelectric properties and reported an indirect band gap of ~1 eV, claiming that the B1-WC hybrid functional is accurate for the calculation of electronic and thermoelectric properties [27]. However, this gap is ~0.78 eV higher than both the mBJ value and the experimental value. It is important to note that thermoelectric properties are sensitive to the temperature-dependent band-gap value; hence, the description of the thermoelectric behavior of the compound using this approach may not be correct. Likewise, for Fe2TiSn the same group reported a band gap of ~1 eV using the hybrid functional [27], whereas our mBJ value is 0.68 eV. Meinert, using the mBJ potential and the FPLAPW method, obtained band-gap values of 0.31 eV for Fe2VAl and 0.69 eV for Fe2TiSn. For the compounds Fe2VAl and Fe2TiSn, the total density of states (TDOS) and partial density of states (PDOS) are calculated using the five exchange-correlation functionals. Fig. 3 shows the TDOS and PDOS for Fe2VAl calculated using the PBEsol, mBJ and SCAN functionals. In the TDOS plot obtained with mBJ, a clear band gap is seen, with zero density of states at E_F; the mBJ TDOS thus indicates that Fe2VAl is a semiconductor with a band gap of 0.22 eV, as predicted by the dispersion curves. In order to understand the contributions of the constituent atoms, the PDOS are calculated and shown in Fig. 3 for the Fe, V, and Al atoms. For the Fe and V atoms, the three-fold degenerate states (d_xy, d_yz, d_zx) are denoted t2g and the two-fold degenerate orbitals (d_x2-y2, d_3z2-r2) are denoted e_g. It is clearly seen from the figure that near the Fermi level, in the valence-band region from -1.2 to 0 eV, the Fe t2g states (Fig. 3(d)-(f)) are the main contributors. The same behavior is observed for all three functionals, except that mBJ gives a larger number of contributing states. From -2.5 to -1.2 eV, both the V and Fe t2g states contribute to the valence band. The V e_g states contribute little to the valence-band region, as seen in Fig. 3(g)-(i), while the contribution of the Fe e_g states to the valence band is larger than that of the V e_g states. In the range 0 to 2 eV, high-intensity peaks of the V and Fe e_g states are observed, and in this region the intensity of the t2g states of the V atoms is higher than that of the Fe atoms. This indicates that the valence-band top has predominantly Fe t2g character, while the bottom of the conduction-band region has predominantly V e_g character.
The formation of the pseudogap is due to the overlap of these two bands in the vicinity of the Fermi level. The contributions to the DOS from the Al 3p orbitals for the three functionals are shown in Fig. 3(j)-(l); the Al 3p peaks are most intense in the valence-band region from -6 to -1 eV. The intensity of the PDOS peaks calculated with the mBJ functional for the V and Fe atoms is higher than that from the other functionals, whereas for the Al atom mBJ and the other functionals give nearly the same contribution to the PDOS. The DOS obtained from the LDA and PBE functionals (not shown in the figure) closely agree with the PBEsol and SCAN characterizations of the electronic states, respectively. The total density of states (TDOS) and partial density of states (PDOS) calculated for the Fe2TiSn Heusler compound using three exchange-correlation functionals are shown in Fig. 4. Fig. 4(a)-(c) presents the TDOS plots from the PBEsol, mBJ and SCAN calculations for Fe2TiSn, from which it can be observed that Fe2TiSn is a semiconductor. The width of the gap is enhanced, and its value is larger, in the mBJ calculation; the states in the conduction-band region are shifted higher in energy by ~0.5 eV in the mBJ DOS (Fig. 4(b)). Fig. 4(d)-(f) and (g)-(i) show the PDOS of the Fe and Ti atoms calculated with the three functionals. Near the Fermi level, the valence-band region from -1 to -0.5 eV is mostly of Fe t2g character, and the small but finite density of states at the Fermi level is due to the overlap of the Fe t2g orbitals. It is clear from these figures that the contribution to the conduction band comes mainly from the e_g states of the Fe and Ti atoms, although the mBJ functional shows an equal contribution to the conduction-band region from the Ti t2g states. All three functionals suggest that the contribution to the valence-band region comes mostly from the t2g states of both the Fe and Ti atoms. In the lower-energy region from -6 to -2 eV, the contribution to the DOS comes from the Sn 5p orbitals (Fig. 4(j)-(l)). The TDOS plots also show that the value of the TDOS produced by the mBJ potential in the higher-energy region of the valence band is relatively high compared to the other functionals. These observations lead to the understanding that, for the two compounds, mBJ approximates the ground-state electronic structure with a notable difference from the other functionals: for both compounds, mBJ opens a sizeable gap compared to the other functionals, with the band-gap value matching the experimental value in the case of Fe2VAl. The effective mass is a property that depends on the shape of the bands and is important in explaining transport properties; the band structure is therefore important in explaining the carrier dynamics at different energy levels. The DOS plots show sharp peaks near the Fermi level, a desirable feature for a thermoelectric material [28]. To understand how the use of different exchange-correlation functionals for a single compound affects the effective mass, we have calculated the effective mass m* of the charge carriers (holes and electrons) along the high-symmetry directions in the first Brillouin zone. The bands that contribute significantly to the transport properties in Fe2VAl and Fe2TiSn are labelled with numbers in Fig. 2(a) and (d). The effective mass m* is calculated for the charge carriers in these bands and expressed in terms of the electron mass m_e. The calculated m* values for the Fe2VAl and Fe2TiSn compounds are tabulated in Tables 2 and 3, respectively.
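A minimal version of the parabolic-fit procedure used for Tables 2 and 3 is sketched below: a quadratic is fitted to E(k) near a band edge and m* = hbar^2/(d^2E/dk^2) is evaluated in units of the electron mass. The E(k) samples are placeholders, not points from the calculated dispersion curves.

```python
# Sketch: effective mass from a parabolic fit to a band edge,
# m* = hbar^2 / (d^2E/dk^2), in units of the electron mass.
import numpy as np

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg
EV = 1.602176634e-19     # J per eV

# Illustrative band-edge samples: k in 1/nm around a high-symmetry point,
# E in eV; replace with points read off the calculated dispersion curve.
k_nm = np.linspace(-0.5, 0.5, 11)
E_ev = 0.30 + 0.019 * k_nm ** 2   # made-up edge corresponding to m* ~ 2 m_e

c2 = np.polyfit(k_nm, E_ev, 2)[0]             # k^2 coefficient, eV nm^2
curvature = 2.0 * c2 * EV * 1e-18             # d^2E/dk^2 in J m^2
m_star = HBAR ** 2 / curvature
print(f"m* = {m_star / M_E:.2f} m_e")
```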
In the tables, for instance, Γ-ΓX denotes the effective mass calculated at the Γ-point along the Γ-X direction; the other high-symmetry directions are labelled analogously. The symbols B1, B2, ..., B5 stand for the bands numbered 1, 2, ..., 5, respectively. The effective mass is calculated from the formula m* = hbar^2 / (d^2E/dk^2) under the parabolic approximation [29]. According to this formula, the value of the effective mass at a k-point is determined by the shape of the dispersion curve; in Fe2TiSn, a large effective mass is therefore expected because of the presence of a flat conduction band along the Γ-X direction. In Fe2VAl, the bands B1, B3 and B4 are triply degenerate at the Γ-point. At the X-point, B2 and B3 are doubly degenerate in the mBJ dispersion curves (Fig. 2(b)), whereas for the other functionals the double degeneracy shifts to the B3 and B4 bands at a lower energy position. The effective masses at these points are calculated by fitting the band edges with a parabola. In Fe2TiSn, the bands B1 and B2 are doubly degenerate and B3, B4 and B5 are triply degenerate at the Γ-point. The shapes of bands 4, 5 and 1 at the X-point closely resemble a cone; this feature is observed for the LDA, PBEsol, PBE and SCAN functionals, and the parabolic approximation cannot be applied in these cases to obtain the effective mass. The edges of the degenerate bands at the Γ-point of the mBJ dispersion curves, however, are parabolic in nature and are fitted with a parabola to obtain the effective mass.

C. Thermoelectric properties

We have studied the Fe2VAl and Fe2TiSn full Heusler compounds using five exchange-correlation functionals. It is well known that both compounds are good thermoelectric materials, and it is important to understand the electronic structure in order to explain the thermoelectric properties, since the Seebeck coefficient S is related to the effective mass m* and the carrier concentration n. In the free-electron approximation, the relation between the Seebeck coefficient and the effective mass is given by [30]

S = (8 π^2 k_B^2 / (3 e h^2)) m* T (π / 3n)^(2/3),

where k_B is the Boltzmann constant, e the electronic charge, h the Planck constant and T the temperature. The degenerate bands at the Γ-point discussed earlier contribute differently to the transport properties, since the shapes of the bands are different (Fig. 2(b)). This can be confirmed from Table 2, as the effective mass of band 4 at the Γ-point along the X and L directions is smaller than that of bands 2 and 3. The carriers in bands B2 and B3 are the main contributors to the Seebeck coefficient, with effective masses larger than m_e at the Γ-point and the X-point; except for mBJ, the effective mass is more than three times m_e at the X-point along the X-Γ direction. Qualitatively, this can also be understood from the curvatures of the bands. The VB top at the X-point is doubly degenerate (bands 2 and 3), and at higher temperatures the fraction of electrons jumping across the direct gap of 0.39 eV will be larger, so the effective mass at the X-point is also calculated. The difference in energy between the first and second VB at the X-point is 0.17 eV, which corresponds to a temperature of ~2000 K; the contribution from this band (B4) will therefore be negligible. The calculations from the PBEsol, SCAN, LDA and PBE functionals do not show any real gap in the compound. Since the mBJ calculation reproduces the experimental band gap, if we artificially create a band gap equal to the mBJ gap in the band structures obtained from the other functionals, we can then use the effective masses calculated from those functionals to explain the transport properties of Fe2VAl.
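To make the S ∝ m* dependence concrete, the sketch below evaluates the free-electron relation quoted above for assumed values of m*, n and T; the carrier concentration is a placeholder, not a value computed in this work.

```python
# Sketch: free-electron (degenerate limit) Seebeck coefficient
# S = (8 pi^2 kB^2 / (3 e h^2)) * m* * T * (pi / (3 n))^(2/3).
import math

KB = 1.380649e-23           # J/K
E_CHARGE = 1.602176634e-19  # C
H = 6.62607015e-34          # J s
M_E = 9.1093837015e-31      # kg

def seebeck(m_star_rel, n, T):
    """m_star_rel: m*/m_e; n: carrier concentration (1/m^3); T: K."""
    m_star = m_star_rel * M_E
    pref = 8.0 * math.pi ** 2 * KB ** 2 / (3.0 * E_CHARGE * H ** 2)
    return pref * m_star * T * (math.pi / (3.0 * n)) ** (2.0 / 3.0)

# Assumed inputs: m* = 3 m_e (heavy band), n = 1e26 m^-3, room temperature.
print(f"S ~ {seebeck(3.0, 1e26, 300.0) * 1e6:.0f} uV/K")
```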
In the dispersion curves of the other functionals, as explained in the previous section, the band features are different; therefore, the effective mass computed for the various bands differs from functional to functional. After shifting the bands, the band gaps and the positions of the bands at the various k-points will also differ, which implies that the numbers of electrons and holes created by transitions between the bands will no longer be the same across the functionals. Therefore, using this concept together with the calculated effective masses, one can determine which functional better explains the thermoelectric properties. The effective masses calculated for Fe2TiSn are listed in Table 3. The effective mass of the holes along the X-Γ direction is very high, due to the presence of the flat band along that direction. This means that holes are the major contributors to the Seebeck coefficient S in Fe2TiSn, and a high value of S is expected in this compound. The high effective mass of the flat band (B2) in Table 3 supports this argument: it is more than 36 times the electron mass along the X-Γ direction. The indirect gap observed from the mBJ calculation is 0.68 eV, compared with 0.028 eV from SCAN and 0.033 eV from PBE. Thus, at higher temperatures the probability of thermally excited electrons occupying the lowest vacant conduction band is higher for the SCAN- and PBE-predicted properties than for the mBJ calculation. This means that the Seebeck coefficient approximated from these functionals should be smaller, as the Seebeck coefficient is inversely proportional to the carrier concentration. The experimental band gap for the compound is not yet known. Thus, mBJ should better explain the thermoelectric behavior of Fe2TiSn, with its larger band gap and large effective mass, provided its band-gap value matches the experimental one. Effective mass is therefore an important quantity in the calculation of the Seebeck coefficient and hence of the figure of merit of a thermoelectric.

IV. CONCLUSIONS

In this work, we have studied Fe2VAl and Fe2TiSn, two experimentally well-characterized full Heusler alloys, using five exchange-correlation (XC) functionals: LDA, PBE, PBEsol, mBJ and SCAN. The structural properties of both compounds are evaluated using the five density functionals. It is observed that, for both compounds, the bulk modulus is underestimated by PBE and overestimated by LDA, while the lattice constant is underestimated by LDA. Of all five functionals, the PBE-calculated lattice constant is closest to the experimental value; PBEsol, mBJ and SCAN give lattice constants and bulk moduli between the LDA and PBE values. To understand the electronic ground-state properties of the two compounds, dispersion curves and densities of states are calculated. For Fe2VAl, the mBJ calculations yield an indirect band gap of 0.22 eV, in reasonable agreement with the experimental value, while for Fe2TiSn the indirect gap from the same functional is 0.68 eV. The general features of the dispersion curves from the LDA and PBEsol calculations are similar to each other, as are those from the PBE and SCAN calculations, while the mBJ features show larger changes with respect to the other functionals for both compounds. The LDA and PBEsol functionals are found to underestimate the band gaps in these compounds.
The effective masses of the charge carriers are calculated from the dispersion curves using the parabolic approximation, and their contributions to the transport properties are discussed. A very high effective mass for holes is obtained for the Fe2TiSn compound. For the structural-properties calculations, we find that the SCAN functional is an improvement over LDA and PBEsol for these two compounds. If the description given by any one of the five exchange-correlation functionals is the most appropriate, then the transport properties calculated using that functional should agree well with the experimental observations. We would like to investigate this aspect in our next work by calculating the transport coefficients and thermoelectric properties of the two Heusler compounds with non-magnetic ground states and comparing them with the experimental values. Thus, this work is intended to give a hint as to which functional gives a better description of the electronic structure of full Heusler compounds for thermoelectric applications.
A Hybrid Power-Rate Management Strategy in Distributed Congestion Control for 5G-NR-V2X Sidelink Communications

The accelerated growth of 5G technology has facilitated substantial progress in the realm of vehicle-to-everything (V2X) communications. Consequently, achieving optimal network performance and addressing congestion-related challenges have become paramount. This research proposes a unique hybrid power and rate control management strategy for distributed congestion control (HPR-DCC) focusing on 5G-NR-V2X sidelink communications. The primary objective of this strategy is to enhance network performance while simultaneously preventing congestion. By implementing the HPR-DCC strategy, more fine-grained and adaptive control over the transmit power and transmission rate can be achieved, enabling efficient control by dynamically adjusting transmission parameters based on network conditions. This study outlines the system model and methodology used to develop the HPR-DCC algorithm and investigates its stability and convergence characteristics. Simulation results indicate that the proposed method effectively limits the maximum CBR value to 64% during high-congestion scenarios, a 6% performance improvement over the conventional DCC approach. Furthermore, this approach extends the signal reception range by 20 m while maintaining a 90% packet reception ratio (PRR). The proposed HPR-DCC contributes to optimizing the quality and reliability of 5G-NR-V2X sidelink communication and holds great promise for advancing V2X applications in intelligent transportation systems.

Within the ITS context, the exchange of cooperative awareness messages (CAMs) and event-triggered decentralized environmental notification messages (DENMs) plays a vital role [4]. Periodically broadcast CAMs convey essential information such as vehicle position, speed, acceleration, and other relevant data, which are crucial for safety and traffic-efficiency applications. To address hazardous road situations promptly, DENMs are also required. Both CAMs and DENMs are transmitted on the control channel (CCH), dedicated to cooperative road safety [5]. However, the use of a shared control channel can lead to radio congestion in scenarios with high vehicle density. To address this issue, the DCC framework is introduced as a solution for alleviating control channel congestion. The DCC framework achieves this by regulating the message rate, transmission power, and data rate of periodic messages. Operating as a cross-layer mechanism, the DCC encompasses congestion control at both the network and MAC layers, utilizing information from the physical and network layers to manage congestion effectively. The protocol employs a sliding window mechanism, dynamically adjusting the window size based on the network congestion level. The DCC protocol features two distinct approaches to congestion control: adaptive and reactive [3]. The DCC Adaptive approach, prescribed by ETSI, adaptively adjusts DCC parameters based on real-time evaluations of network conditions [5]. By assessing the current network congestion status using gathered metrics, this method modifies communication parameters according to the evaluation results. In contrast, the DCC Reactive approach is designed to respond to specific network events or triggers, as shown in Figure 1.
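As a rough illustration of the reactive approach elaborated below, the following sketch maps a measured CBR to a DCC state with fixed per-state transmission parameters. The thresholds and parameter values are invented for illustration and are not the ETSI-specified tables.

```python
# Sketch of a reactive DCC state machine: the measured channel busy ratio
# (CBR) selects a state, and each state fixes transmit power and rate.
from dataclasses import dataclass

@dataclass
class DccState:
    name: str
    tx_power_dbm: float
    tx_rate_hz: float

# (upper CBR bound, state) pairs; thresholds/values are illustrative only.
STATES = [
    (0.30, DccState("RELAXED",     23.0, 10.0)),
    (0.40, DccState("ACTIVE_1",    15.0,  5.0)),
    (0.50, DccState("ACTIVE_2",    10.0,  2.5)),
    (1.01, DccState("RESTRICTIVE",  5.0,  1.0)),
]

def reactive_dcc(cbr: float) -> DccState:
    for upper, state in STATES:
        if cbr < upper:
            return state
    return STATES[-1][1]

for cbr in (0.10, 0.45, 0.80):
    s = reactive_dcc(cbr)
    print(f"CBR={cbr:.2f} -> {s.name}: P={s.tx_power_dbm} dBm, R={s.tx_rate_hz} Hz")
```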
Instead of proactively monitoring and adjusting communication parameters based on real-time network conditions like the DCC Adaptive approach, the DCC Reactive approach operates through a set of predefined rules or policies. Upon detecting a triggering event, this reactive method promptly takes action to mitigate congestion by adjusting communication parameters such as transmission power, packet size, or transmission intervals. To achieve more reasonable resource allocation, it is also possible to refine each state's parameters by adding multiple Active states [6]. Upon analyzing existing congestion control schemes, this research introduces a hybrid congestion control scheme named the HPR-DCC for V2X communications by integrating the DCC framework with a rate-power-based control mechanism. The hybrid approach leverages the complementary strengths of TPC and TRC, mitigating the limitations of each technique: TPC helps to maintain reliable communication by adjusting the transmission power based on channel conditions, while TRC regulates the data transmission rate to prevent congestion and ensure fair resource allocation. By combining the advantages of TPC with TRC, the HPR-DCC scheme offers enhanced performance in terms of congestion mitigation, reliable communication, and fair resource allocation. Furthermore, the HPR-DCC enhances adaptability to dynamic network conditions by dynamically switching between TPC and TRC based on real-time feedback and network metrics. Thus, the hybrid TPC-TRC DCC algorithm provides a robust and efficient solution for congestion control in wireless networks, addressing the challenges of varying channel conditions and network congestion while optimizing resource utilization. The primary contributions of this paper are twofold:
• Proposing a novel method that combines the strengths of both TPC and TRC to achieve efficient and reliable communication;
• Evaluating the performance of the proposed scheme through simulations, comparing it to existing congestion control schemes and demonstrating its effectiveness in alleviating congestion across various traffic situations.
This paper offers valuable insights into designing efficient and reliable congestion control schemes for 5G-NR-V2X, which are essential for implementing future intelligent transportation systems. The rest of the paper is organized as follows: Section 2 provides an overview of congestion control in 5G-NR-V2X sidelink communications, explores transmission power and rate control schemes in wireless networks, and introduces the HPR-DCC algorithm.
Section 3 presents the system model and assumptions, elaborates on the proposed algorithm's design, and analyzes the control scheme's stability. Section 4 outlines the simulation setup and scenarios, defines performance metrics and evaluation criteria, offers experimental results and analysis, and compares the proposed scheme to alternative congestion control strategies. Finally, the concluding section recapitulates the principal contributions and outcomes, discusses practical implications and applications, and recommends directions for further research.

Related Works

In recent years, significant attention has been given to the development and implementation of DCC methods in vehicular ad hoc networks (VANETs). The primary goals are to manage network congestion and enhance the reliability of V2X communications [5,7]. Comparing various DCC methods is challenging due to the diverse control strategies and performance metrics in the literature. To tackle this issue, standardization organizations like ETSI have developed performance evaluation standards and proposed a unified cross-layer DCC framework. This research focuses on DCC methods that optimize two primary metrics: CBR [8] and packet reception ratio (PRR) [9]. This study aims to provide a comprehensive understanding of the current state of the art in DCC for VANETs and to identify potential avenues for further research and development. A lower CBR value indicates more efficient utilization of the communication channel, contributing to reduced network congestion. PRR, in turn, is calculated by dividing the number of received data packets by the number of sent data packets [9], thereby serving as a measure of successful communication within the network. A higher PRR value signifies better signal reception, ensuring that critical safety and traffic information is effectively communicated among vehicles. Our study pays particular attention to these optimization objectives and the resulting control strategies. Through the investigation of these approaches and their impact on network efficiency, our objective is to offer in-depth insight into contemporary advancements in DCC for VANETs, while pinpointing possible directions for future exploration and progress. The authors of [10] present the linear message rate integrated control (LIMERIC) algorithm, an effective solution that addresses fairness concerns and shows adaptability in various complex scenarios. Building upon the LIMERIC algorithm, the error model based adaptive rate control (EMBARC) algorithm is introduced in [11], which improves the LIMERIC approach by dynamically adjusting the transmission rate according to vehicular movement. In [12], the researchers suggest integrating beaconing into the vehicular networks framework and modifying the beacon frequency as well as the transmission rate to efficiently manage traffic congestion. To ensure that DCC is applicable to a broader array of scenarios, including those with a heightened focus on security, a substantial number of broadcast beacons is necessary; consequently, TRC mechanisms alone might prove insufficient to fulfill all operational security requirements. Torrent-Moreno and colleagues have shown that effective TPC is crucial for optimizing channel utilization while mitigating security concerns arising from channel saturation. To address this issue, they propose a solution called distributed fair power adjustment for vehicular environments (D-FPAV) [13].
This control scheme adheres to stringent fairness principles, ensuring prioritized transmission for high-priority data and equitable transmission conditions for other vehicles based on the prevailing channel conditions. The design of TPC is characteristically complex, as it must cope with rapidly evolving networks; nevertheless, it is particularly well adapted to streamlined, linear network topologies such as those found in platooning scenarios. In [14], the authors explore various communication strategies for platooning by employing synchronized communication slots in conjunction with TPC techniques. They then compare their proposed method with alternative beaconing solutions, including static beaconing and conventional ETSI DCC, for automated platooning applications. The simulation results indicate that the suggested approach can effectively reduce collisions. Moreover, the researchers examined a mixed scenario wherein some vehicles simultaneously accessed the channel using ETSI DCC, and found that the performance of their proposed solution remained unaltered, while the ETSI DCC performance was significantly impacted. In [15], the authors present a DCC algorithm that integrates a priority model and adjusts the beacon transmission rate; it effectively manages congestion's influence on vehicle safety by guaranteeing the reliable and timely reception of safety information. In [16], the authors present a novel approach to enhance the object-filtering process of collective perception by considering DCC awareness. This approach dynamically adjusts the message size based on DCC constraints and, consequently, the message generation rate; a comparison with the existing ETSI design shows that it improves the perceived quality and reduces the message generation rate. In [17], the authors introduce a traffic density-based congestion control algorithm (TDCCA) that incorporates vehicle IDs into their respective CAMs and utilizes TRC-based DCC to enhance model parameter efficiency. The algorithm considers a range of network conditions, from non-saturated to saturated, as well as sparsely dispersed and congested networks, and demonstrates improved performance in terms of PRR and latency. While the majority of conventional DCC algorithms rely on CBR as the criterion for determining control parameters, a single-measurement approach is insufficient given the myriad factors influencing channel load, which consequently leads to issues such as unfairness. In [18], the authors introduce the Rate-OPT and Power-OPT algorithms for dedicated short-range communication (DSRC), demonstrating that their coordinated and alternating application results in enhanced channel utilization and packet transmission rates. This integrated congestion control algorithm optimizes channel-load usage by dynamically allocating transmission range and rate in response to vehicle density. In [19], the authors introduce a combined transmission power and rate control strategy, which deprioritizes the use of TPC as the primary response mechanism. In contrast to a purely TPC-based approach, this strategy lessens the reliance on precise transmission power adjustments and can efficiently approximate the optimal control-parameter configuration for load adaptation across individual channels.
A joint power and rate algorithm in [20], which can set different priorities for different vehicles, introduces fairness into V2V communication and is verified through simulations of multiple scenarios. In [21], the authors propose a perception-based hybrid beacon algorithm that utilizes the driver's state as a reference condition, broadcasts it to nearby vehicles, and then makes joint decisions to adjust the transmission range and power in order to enhance safety. In [22], the authors propose an approach that integrates real-time traffic flow sensing with the channel congestion status, utilizing distributed network utility maximization to improve channel utilization. In [23], the authors introduce the POSACC algorithm, prioritizing location accuracy and communication reliability as paramount metrics; they effectively manage the beacon rate and transmission power, resulting in enhanced efficiency. The HPR-DCC integrates the merits of both TRC and TPC, facilitating improved congestion control and maintaining the CBR within the convergence range. TPC allows for power-efficient communication by reducing transmit power when channel conditions permit, while TRC optimizes the transmission rate to maximize throughput when the channel quality is favorable. This system offers an adaptable balance between energy consumption and network efficiency. However, the optimal coordination and equilibrium between TPC and TRC depend on the specific system requirements and on the characteristics of the adaptive scenario in which they operate. Consequently, the full potential of the HPR-DCC is realized through careful consideration and customization of the system according to the specific needs and conditions of the intended application. While it holds promising benefits, it is important to note that the coordination between TPC and TRC can be a complex process; the HPR-DCC's potential limitations thus lie in the inherent complexity of achieving the optimal balance between power and throughput efficiency. Several factors affect the quality of channel communication, such as vehicle speed, vehicle density, and signal transmission distance. Relying solely on CBR as the foundation of the control algorithm proves inadequate for ensuring equitable communication among all users. This study therefore incorporates additional parameters alongside CBR to holistically assess state transitions, guaranteeing a more equitable allocation of resources for vehicles experiencing identical states.

System Model and Assumptions

The proposed system model aims to estimate vehicular parameters, with an emphasis on adjusting the transmission power and transmission rate. This is achieved through the assimilation of received messages and distance information from neighboring vehicles, as illustrated in Figure 2. The model incorporates critical parameters, such as neighboring vehicles' transmission power, the distance of received messages, and the estimated path loss (PL), which are instrumental in determining the optimal transmission power necessary for achieving the desired level of awareness within the target vehicle's awareness range. The system model operates on several fundamental assumptions. Primarily, it assumes that the transmission power is modulated based on the estimated PL value, with the goal of achieving the target awareness percentage within the awareness range, while not considering the impact of the frame error rate.
Subsequently, the model presumes that the PL value estimation relies on messages from a sufficient number of neighboring vehicles, enabling target-aware transmission for vehicles that have not received any messages. Lastly, under extreme circumstances where the distance between vehicles is minimal and the path loss is substantial, the transmission power is maintained at a prominent level to ensure preservation of the target awareness range. The control mechanism embedded within the proposed system model incorporates the calculation of the CBR, received messages, and distance information, as well as the computation of the transmission power necessary to attain the target awareness percentage. Moreover, the estimation of the PL value is employed to adjust the transmission power for target-aware transmission within the awareness range, ensuring that the vehicle maintains awareness of the target vehicle throughout the communication process.

HPR-DCC Design and Strategies

In the design of the HPR-DCC algorithm, a three-fold approach is employed to optimize performance and efficiency. Firstly, power adaptation for awareness control is facilitated through the power-adaptation component, which dynamically adjusts the transmission power in accordance with the target awareness range dictated by the application context. By estimating the PL, the algorithm is able to modulate the transmission power to satisfy awareness prerequisites, even for vehicles in the 'worst' channels that have not exchanged messages. Secondly, the HPR-DCC incorporates rate control by leveraging the state machine. Owing to its ability to converge towards fair and efficient channel utilization, the state machine regulates the forthcoming message rate to sustain the CBR beneath the predetermined threshold.
Lastly, the HPR-DCC combines both power and rate control, adjusting the subsequent transmission power on the basis of the current path loss for each message obtained from neighboring vehicles. This process considers the target awareness percentage when determining the appropriate transmission power level. Simultaneously, the algorithm adapts the rate by considering the existing message rate and the channel load, represented by the CBR. This comprehensive approach allows for enhanced adaptability and performance in the HPR-DCC algorithm design, while retaining a simple structure. To adjust the transmission power in response to congestion control and vehicle demand, the proposed algorithm makes a joint decision by calculating the PL and the current-state CBR for state switching. The specific explanation is as follows. Assume the vehicle transmit power at time t is P_Tx,i(t), the target awareness range of the vehicle is TR_e(t), the target awareness percentage is TA_e(t), and the shadowing coefficient is S. For each received message, calculate d_ij(t), the distance between the vehicle and the ith neighbor at time t when message j is received, and compute PLE_ij(t), the path-loss exponent for message j from that neighbor, by Equation (1), where λ is the signal wavelength and PL(t) is calculated by Equation (2), where P_Tx,i(t) represents the transmit power of neighbor i and P_Rx,ij(t) denotes the receive power of the jth message from neighbor i. Then, calculate the required received power from Equation (3):

P_r(t + 1) = P_t · G_t · G_r · S / PL(t), (3)

where P_t is the transmitted power, G_t is the transmitter antenna gain, and G_r is the receiver antenna gain. Then, set the transmission power for the next time step (t + 1) using

P_sorted_Tx,e = sort_{∀i,j∈N}(P_r(t + 1)), (4)

P_r(t + 1) = P_sorted_Tx,e[round(TA_e · N)]. (5)

Equations (4) and (5) sort the transmission power necessary to reach each neighboring node and select the appropriate power level for transmission. In the proposed algorithm, state switching is jointly determined by the PL and the CBR; this joint decision-making helps ensure fairness in the policies. Multiple states are established, with each active state allocating the appropriate transmission power and transmission rate according to the current channel conditions [24]. The detailed state transitions are presented in Table 1.

Proposed HPR-DCC Algorithm

Algorithm 1 outlines the steps of the proposed HPR-DCC, a hybrid power and rate control DCC algorithm that can dynamically adjust the transmission power and rate of vehicular communication systems. The transmission power control component of the proposed HPR-DCC assigns an appropriate transmit power for neighboring vehicles that were already connected to the node in the previous time step; this assignment is based on the current path loss and path-loss exponent. If vehicles were not neighbors in the previous time step, a default value is used as the transmission power. The HPR-DCC then sorts these power values in ascending order and selects the smallest value that meets the target awareness percentage as the next transmission power. In terms of rate control, the HPR-DCC adjusts the communication rate according to the current channel load (CBR), calculated as the ratio of channel-busy time to the measurement interval. The state-machine rate control mechanism can also adjust to diverse congestion scenarios and assign suitable transmission rates to vehicles. To maintain efficiency, the power and rate control decisions collaboratively "share the load" under high CBR conditions.
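A condensed sketch of the power-selection step of Algorithm 1 is given below: per-neighbor transmit-power demands are derived from estimated path loss, sorted, and the value covering the target awareness percentage is chosen. The link-budget form, antenna gains and numeric inputs are assumptions made for illustration, not the paper's exact parameterization.

```python
# Sketch of the HPR-DCC power-selection step: compute the transmit power
# demanded by each neighbor from its estimated path loss, sort the demands,
# and pick the smallest value that still covers the target awareness TA_e.

def required_tx_power_dbm(pl_db, rx_sens_dbm=-94.0, g_t_db=3.0, g_r_db=3.0,
                          shadow_margin_db=3.0):
    """Link budget in dB: P_tx = sensitivity + PL + margin - gains.
    Gains and margin here are illustrative assumptions."""
    return rx_sens_dbm + pl_db + shadow_margin_db - g_t_db - g_r_db

def select_tx_power(neighbor_pl_db, ta_e, p_max_dbm=23.0):
    """neighbor_pl_db: estimated path loss per neighbor (dB);
    ta_e: target awareness percentage in (0, 1]."""
    demands = sorted(required_tx_power_dbm(pl) for pl in neighbor_pl_db)
    idx = min(len(demands) - 1, max(0, round(ta_e * len(demands)) - 1))
    return min(demands[idx], p_max_dbm)  # clip to the 23 dBm limit

# Example: five neighbors with assumed path losses, 80% target awareness.
pl = [85.0, 92.0, 98.0, 104.0, 110.0]
print(f"next Tx power = {select_tx_power(pl, 0.8):.1f} dBm")
```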
The balance between the target and current beacon rate and awareness is determined by the coefficient γ, which is currently set to 1, meaning that the Tx power and Tx rate carry equal weight. The coefficient γ is used to coordinate the control modules: when the detected transmission-power error δ_P surpasses the rate term γδ_R, a higher transmission power level is employed to alleviate it; conversely, if δ_P is below γδ_R, the current transmission power level is maintained without adjustment.

3: if Neighbor_{e→i}(t) ∈ Neighbor_e(t − 1) then
4:   P_r(t) = P_t · G_t · G_r · S / PL(t)
5: else
6:   P_sorted_Tx,e = sort_{∀i,j∈N}(P_r(t + 1))
7:   P_r(t + 1) = P_sorted_Tx,e[round(TA_e * N)]
Congestion detection process
8: if ∑ P_ri > S_busy then the channel is sensed busy
9:   Record T_busy
10: Calculate CBR every T_CBR = 100 ms
11: CBR = T_busy / T_CBR
Transmission power allocation
12: if CBR(t) < CBR_Th then
13:   Apply P_r(t + 1)
14: else
15:   if P_r(t + 1) ≤ P_r(t) then
16:     Apply P_r(t + 1)
Transmission rate allocation
17: if new Tx power applied then
18: ...
24: if δ_P ≥ γδ_R then
25:   Apply P_r(t + 1)
26: if δ_P < γδ_R then
27:   Keep Tx power the same

Additionally, the HPR-DCC is designed to manage the channel load by preventing significant increases caused by a sudden growth in the target awareness range. The proposed HPR-DCC nevertheless allows safety-critical messages generated during hazardous events to be transmitted at high power and rate, bypassing the standard restrictions. This ensures that crucial information is promptly communicated in emergency situations, thereby enhancing the overall safety and reliability of the vehicular communication system. In this study, the parameter settings of the proposed HPR-DCC algorithm are calibrated to account for the multifaceted aspects of V2V communication systems, thereby ensuring an accurate representation of scenarios, with the main parameters shown in Table 2. The time-step duration is set to 200 milliseconds, providing a suitable time resolution for capturing the dynamic interactions within V2V networks. Both the target range and the target awareness are defined as context-dependent variables, typically fluctuating between 20 and 500 m and between 50% and 100%, respectively, depending on the specific application context. This flexible approach allows the algorithm to adapt to various V2V communication scenarios and capture the nuances of different vehicular environments. Furthermore, the maximum transmission power is confined to a range of 0 to 23 dBm [25], adhering to the standard constraints for V2V communication radios; this restriction ensures that the proposed algorithm operates within the acceptable power limits established for vehicular communication systems, mitigating potential interference or signal-degradation issues. The maximum beacon rate (BR) is specified within a range of 1 to 10 Hz, a typical range for cooperative messages in V2V communication systems. By constraining the beacon rate within this range, the algorithm effectively accommodates the requirements of V2V communication, facilitating efficient and reliable data exchange between vehicles.
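The γ-weighted coordination between the power and rate modules described above can be written compactly. In the sketch below, δ_P and δ_R are modeled as normalized deviations from target values, which is one plausible reading of the description rather than the paper's exact definitions.

```python
# Sketch of the gamma-weighted coordination between TPC and TRC.
# delta_p / delta_r are modeled as normalized deviations of power and rate
# from their targets -- an assumption, not the paper's exact definitions.

def coordinate(p_now, p_next, r_now, r_target, gamma=1.0):
    delta_p = abs(p_next - p_now) / max(abs(p_now), 1e-9)
    delta_r = abs(r_target - r_now) / max(abs(r_now), 1e-9)
    if delta_p >= gamma * delta_r:
        return p_next   # power error dominates: apply the new Tx power
    return p_now        # otherwise keep the current Tx power

# With gamma = 1, power and rate errors carry equal weight (as in the text).
print(coordinate(p_now=10.0, p_next=14.0, r_now=10.0, r_target=9.0))
```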
Finally, the subcarrier spacing is set to 15 kHz to provide better time-domain resolution, reduce the influence of multipath fading on the signal, and improve the anti-interference capability of the system.

Feasibility Analysis of the Proposed HPR-DCC Algorithm

The feasibility of using a hybrid control strategy that merges TPC and TRC for congestion management is supported by multiple factors. First, TPC provides precise control over the wireless nodes' transmission power, enabling adjustments in communication range and link quality. This capability effectively mitigates congestion by reducing interference and contention within the network. Additionally, TPC ensures efficient power allocation, optimizing energy consumption and extending the network's operational lifetime. Second, TRC allows for regulation of the data transmission rate, enabling dynamic channel-capacity control and ensuring optimal utilization of network resources. Moreover, TRC offers flexibility in managing traffic, as it can promptly adapt to changing network dynamics. By combining TPC and TRC in a hybrid control strategy, the limitations of the individual control methods, such as exclusive reliance on power or rate control, are effectively addressed: the hybrid approach uses the precision of power control and the adaptability of rate control, resulting in a more versatile and resilient congestion-management solution. The HPR-DCC is expected to improve network throughput, decrease packet loss, and minimize delays, making it a promising solution for congestion management in wireless networks.

Evaluation Parameters

To manage channel congestion, the 3GPP standard defines a metric named CBR, as well as potential mechanisms for leveraging this metric to mitigate channel congestion [20]. The CBR is a measure of the portion of time the channel is busy transmitting data. It is useful for quantifying the level of channel congestion and can be utilized to implement congestion control mechanisms: by monitoring the CBR, the network can dynamically adjust the allocation of channel resources and regulate data transmission to avoid congestion. The CBR is calculated every T_CBR = 100 ms as

CBR = T_busy / T_CBR. (6)

In Equation (6), the channel occupancy T_busy is dynamically updated at the beginning or end of each transmission for every vehicle. The calculation of T_busy is determined by whether the channel is sensed as busy: specifically, the channel is considered busy if the received power level P_ri is greater than the sensitivity threshold S_busy, which is set to −94 dBm [26]. The degradation of the PRR is a common issue in vehicular networks as the vehicle density increases; a higher PRR guarantees more reliable communication. It is calculated as

PRR = P_r / (P_r + P_SL + P_TL), (7)

where P_r, P_SL and P_TL represent the total received packets, the packets lost due to SINR, and the packets lost in transmission, respectively. DCC algorithms are commonly used to mitigate the PRR reduction, typically by reducing the CBR; however, lowering the CBR could result in reduced throughput performance.
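A direct implementation of the CBR and PRR measurements of Equations (6) and (7) is sketched below: the channel is flagged busy whenever the sensed power exceeds S_busy = −94 dBm, and the busy time is accumulated over each 100 ms window. The power-sample trace is synthetic.

```python
# Sketch of the CBR and PRR measurements: the channel counts as busy when the
# sensed power exceeds S_busy = -94 dBm; CBR = T_busy / T_CBR per 100 ms window.
import numpy as np

S_BUSY_DBM = -94.0
T_CBR_MS = 100.0

def cbr_from_samples(power_dbm, sample_period_ms):
    """power_dbm: sensed power samples covering one T_CBR window."""
    t_busy = np.count_nonzero(power_dbm > S_BUSY_DBM) * sample_period_ms
    return t_busy / T_CBR_MS

def prr(p_r, p_sl, p_tl):
    """PRR = received / sent, with sent = received + SINR loss + Tx loss."""
    return p_r / (p_r + p_sl + p_tl)

# Synthetic trace: 1000 samples at 0.1 ms, roughly 60% above the threshold.
rng = np.random.default_rng(0)
samples = np.where(rng.random(1000) < 0.6, -80.0, -100.0)
print(f"CBR = {cbr_from_samples(samples, 0.1):.2f}")
print(f"PRR = {prr(900, 60, 40):.2f}")
```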
Hence, the aim of this study is to determine, through the HPR-DCC algorithm, the packet transmission power and packet transmission rate that maximize the aggregate PRR of the vehicle user equipment (VUE) while keeping the CBR at a predetermined target, even in high-vehicle-density scenarios.

Simulation Setup and Scenarios

Consider a C-V2X network consisting of a number of VUEs that are spatially distributed using a 1-D Poisson point process with variable density. The highway length is set to 2 km with three lanes in each direction, and the width of each lane is 4 m [25-27]. In this scenario, the VUEs move at a predefined speed, and when they reach the end of the road they loop around and enter the opposite direction, as shown in Figure 3. The VUEs periodically broadcast CAMs via V2V sidelink communication. For the C-V2X system, single-carrier frequency-division multiple access (SC-FDMA) is used in a 10-MHz-wide channel. Each VUE autonomously selects radio resources using the allocation procedure of SB-SPS, with the reselection counter randomly selected between 5 and 15 [27-30]. The self-interference cancellation coefficient is set to −110 dB to effectively eliminate interference between a VUE's own transmission and reception. The average interval for the CBR calculation is set to 100 ms to provide more real-time channel performance metrics. To evaluate the efficacy of the HPR-DCC under varying traffic-congestion conditions, we set the vehicle density to 40, 80, and 120 vehicles/km while maintaining an average speed of 140 km/h, representing low-, medium-, and high-traffic scenarios, respectively. We further set the standard deviation of vehicle speeds at 3 km/h to approximate real-world traffic conditions. At the initialization of the simulation, under low-stress conditions, we establish the initial transmission power at 10 dBm and the initial transmission rate at 10 Hz [31-34]. As traffic conditions change, the HPR-DCC makes the necessary adjustments: the transmission power fluctuates within a 0-23 dBm range, and the packet transmission frequency varies between 1 and 10 Hz [31]. The main simulation parameters are listed in Table 3. This simulation scenario, implemented in the enhanced LTEV2Vsim simulator [35], allows us to evaluate the proposed DCC and verify its effectiveness in maximizing the packet reception rate and minimizing the CBR while still maintaining a low collision rate.
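The vehicle placement described above can be reproduced with a one-dimensional Poisson point process. The sketch below draws positions for the 2 km highway at the high-traffic density; the lane-assignment and seeding details are assumptions made for illustration.

```python
# Sketch: drop vehicles on a 2 km highway using a 1-D Poisson point process
# (3 lanes per direction), with speeds ~ N(140, 3) km/h as in the setup above.
import numpy as np

rng = np.random.default_rng(42)
ROAD_KM, LANES = 2.0, 6      # 3 lanes per direction
DENSITY = 120.0              # vehicles per km (high-traffic scenario)

n_total = rng.poisson(DENSITY * ROAD_KM)
positions_km = rng.uniform(0.0, ROAD_KM, size=n_total)  # PPP given the count
lanes = rng.integers(0, LANES, size=n_total)            # assumed uniform lanes
speeds_kmh = rng.normal(140.0, 3.0, size=n_total)

print(f"{n_total} vehicles; mean speed {speeds_kmh.mean():.1f} km/h")
```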
Experimental Results and Analysis

The curves representing the CBR performance indicate that the proposed HPR-DCC consistently surpasses both the case without DCC and the original DCC. Nevertheless, the extent of improvement over the original DCC scheme fluctuates depending on the specific congestion scenario. Figure 4a demonstrates that the maximum convergence value of the proposed DCC is 30% when the vehicle density is 40 veh/km, which is not a significant enhancement over the original DCC's value of 32%. Figure 5a, for a vehicle density of 80 veh/km, reveals that although the proposed HPR-DCC marginally outperforms the original DCC in congestion control, their maximum convergence values are nearly identical. In a highly congested setting with a vehicle density of 120 veh/km, as illustrated in Figure 6a, the proposed DCC attains a maximum CBR of 64%, while the original DCC scheme reaches a maximum CBR of 70%, exhibiting the most considerable gain of about 6%. In this research, we adopt a hybrid control approach that combines the TPC and TRC mechanisms for congestion control and employs multiple active states to enhance channel-load mitigation and vehicular communication. The proposed HPR-DCC leverages a hybrid and distributed control design, offering enhanced control capabilities in high channel-load scenarios. Through the adaptive transmission facilitated by TPC, the transmit power is dynamically adjusted to compensate for channel-quality variations, ensuring optimal signal reception even in challenging environments. As a complementary feature to TPC, TRC effectively prevents network overload and packet loss while maintaining a balance between throughput and reliability. As a result, when compared with the conventional DCC method, our proposed DCC strategy consistently produces lower CBR values throughout the simulation. Since the PRR serves as a critical indicator of the packet transmission success rate, only PRR results exceeding 90% are considered, to ensure a fair comparison. In comparison with the non-DCC scheme, the HPR-DCC scheme displays enhanced PRR performance across the various test environments. In Figure 4b, in the low-congestion scenario, when the PRR is equal to 90% the distance is extended by approximately 10 m relative to the original DCC scheme.
A comparable outcome is depicted in Figure 5b, suggesting that the performance improvement of the HPR-DCC is not evident in low- and medium-congestion situations. In the high-congestion test scenario, the results presented in Figure 6b show that the effective reception distance of the proposed DCC is extended by 20 m compared to the original DCC, thereby significantly improving the system's signal-reception performance. This enhancement arises from the incorporation of the distance between the control vehicle and neighboring vehicles into the state-switching strategy of the proposed HPR-DCC algorithm. When the gain in effective transmission distance per unit of transmit power surpasses a certain threshold, TRC is employed as an optimization supplement; consequently, channel conditions improve for vehicles located beyond the range of the original DCC. The results demonstrate that the proposed HPR-DCC scheme offers several advantages over the original schemes: it performs better in highly congested environments, exhibiting an improved transmission range and a higher packet reception rate, while maintaining low channel occupancy.

Conclusions and Future Work

This paper presents an optimized DCC scheme that utilizes a hybrid approach, combining TRC and TPC algorithms to create a more effective and robust congestion control solution. The HPR-DCC method dynamically allocates transmission power and rate according to the degree of congestion, which improves overall network performance and efficiently allocates the available bandwidth. By incorporating the PL and CBR metrics, the DCC scheme can make joint state-switching decisions, enhancing its flexibility and adaptability to various network scenarios and congestion situations. Achieving such flexibility entails real-time monitoring and detection of current network conditions, which allows for improved responsiveness to network demands and enhanced network performance. By introducing additional state-switching conditions, the DCC algorithm becomes more adaptive, enabling it to better manage network instability and fluctuations. Simulation results substantiate that the HPR-DCC effectively keeps the maximum CBR value within 64%, a 6% improvement compared to the original DCC approach, and extends the effective signal-reception distance by 20 m while maintaining a PRR of 90%. In our future research, we intend to place greater emphasis on algorithmic complexity, because the coordination and synchronization between TPC and TRC necessitate implementation and tuning that can increase the computational and processing overhead compared with existing methods. Subsequently, we will evaluate the performance of the proposed hybrid control scheme in more intricate network environments, such as urban scenarios. Additionally, we aim to explore the possibility of incorporating machine-learning techniques into the design of the hybrid control scheme, which could potentially enhance its adaptability and robustness. Furthermore, we will investigate the integration of other advanced technologies to improve the performance and security of the hybrid control scheme.
2023-07-28T15:21:36.228Z
2023-07-25T00:00:00.000
{ "year": 2023, "sha1": "4b97774f3c5a392b5a5f00c2dd561a8d2b0d5ab1", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "3b30761d396c3872b5eff5ec3f03a0678ef6d9b4", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
214750329
pes2o/s2orc
v3-fos-license
Comparison of Aneurysm Patency and Mural Inflammation in an Arterial Rabbit Sidewall and Bifurcation Aneurysm Model under Consideration of Different Wall Conditions. Background: Biological processes that lead to aneurysm formation, growth and rupture are insufficiently understood. Vessel wall inflammation and degeneration are suggested to be the driving factors. In this study, we aimed to investigate the natural course of vital (non-decellularized) and decellularized aneurysms in a rabbit sidewall and bifurcation model. Methods: Arterial pouches were sutured end-to-side on the carotid artery of New Zealand White rabbits (vital [n = 6] or decellularized [n = 6]), and into an end-to-side common carotid artery bifurcation (vital [n = 6] and decellularized [n = 6]). Patency was confirmed by fluorescence angiography. After 28 days, all animals underwent magnetic resonance and fluorescence angiography followed by aneurysm harvesting for macroscopic and histological evaluation. Results: None of the aneurysms ruptured during follow-up. All sidewall aneurysms thrombosed, with histologically inferior thrombus organization observed in decellularized compared to vital aneurysms. In the bifurcation model, half of all decellularized aneurysms thrombosed, whereas the non-decellularized aneurysms remained patent with a relevant increase in size compared to baseline. Conclusions: Poor thrombus organization in decellularized sidewall aneurysms confirmed the important role of mural cells in aneurysm healing after thrombus formation. Several factors such as restriction by neck tissue, small dimensions and hemodynamics may have prevented aneurysm growth despite pronounced inflammation in decellularized aneurysms. In the bifurcation model, rarefication of mural cells did not increase the risk of aneurysm growth but led to a tendency to spontaneous thrombosis. Introduction In intracerebral aneurysms, the risk of growth and rupture is associated with larger aneurysm size, a larger aneurysm height to neck aspect ratio and irregular configuration of the aneurysm [1][2][3]. However, the biological mechanisms behind these morphological characteristics are poorly understood. There is a growing body of evidence that chronic vessel wall inflammation and loss of aneurysm mural cells are crucial factors in the pathogenesis of aneurysm growth and rupture [4][5][6]. Aneurysms with vital vessel walls may be able to recruit smooth muscle cells that are able to organize thrombus, to build a neointima and, by phenotype switch, to synthesize extracellular matrix. On the other hand, aneurysms with a rarefication of cells in their vessel wall seem to be unable to promote aneurysm healing after intraluminal thrombosis. Instead, intra-aneurysmal thrombus may promote chronic inflammation and further weakening of the vessel wall, finally leading to aneurysm growth and rupture [6,7]. This difference becomes fundamentally crucial with endovascular aneurysm treatments, which are conceptually based on a biological healing process, rather than just mechanical flow obstruction [7][8][9]. The abovementioned putative pathophysiological mechanism was first observed in human samples [10] and later confirmed in an experimental setting in rat saccular sidewall aneurysms [11]. Rabbits stand higher up in the translational chain than rats and allow for the experimental creation of complex, more physiological bifurcation aneurysms in terms of rheology and hemodynamics [12,13].
Rabbit models are considered ideal for testing novel endovascular devices, because the rabbit carotid artery is accessible with endovascular devices of the same size as in humans. Therefore, this study aims to investigate the natural course of vital and decellularized aneurysms in a rabbit sidewall and bifurcation aneurysm model with an emphasis on aneurysm patency, growth and mural inflammation. Materials and Methods New Zealand white rabbits aged 4 months (weighing 3750 ± 293 g) received care in accordance with institutional guidelines. The Committee for Animal Care of the Canton Bern, Switzerland (BE 108/16) approved the experiments. An a priori power analysis was performed, revealing that n = 6 animals per group were needed to reach statistical significance with an estimation of a 30% difference between groups. Two animals served as pilots. All animals were randomly allocated to either the vital or the decellularized aneurysm group. For each group, 6 aneurysms were created. For sidewall aneurysm creation, two animals were used as tissue donors. Two aneurysms (one on each common carotid artery) were created in one animal. For bifurcation aneurysms, only one aneurysm was created per animal. Graft interpositions were taken from the same animal, with no need for additional donor animals. Creation of Sidewall Aneurysms Female rabbits were premedicated with an intramuscular injection of Ketamine HCL 30 mg/kg (Ketalar, 50 mg/mL, Pfizer AG, Zürich, Switzerland) and Xylazine 6 mg/kg (Xylapan 20 mg/mL). An auricular vein was then catheterized and a continuous infusion of anesthesia solution (10 mL Ketalar and 1.6 mL Xylapan in 50 mL NaCl) was installed with a flow rate of 4-14 mL/h. Furthermore, Fentanyl 1 mg/kg (Fentanyl, Janssen-Cilag, Zug, Switzerland) was applied for analgesia. Animals breathed spontaneously through an oxygen mask. During the operation, animals were located on a heating panel and physiological variables such as heart rate, blood pressure and temperature were continuously monitored. After local infiltration of the pectoral musculature with lidocaine (Lidocaine 1%, Streuli & Co, Uznach, Switzerland), the common carotid artery was dissected on both sides and a previously prepared donor graft (either vital or decellularized) was sutured in an end-to-side configuration, to form a sidewall aneurysm. Nimodipine (Nimotop 0.2 mg/mL, Bayer, Leverkusen, Germany) was locally applied to prevent vasospasm. A fluorescence angiography was then performed [14,15], to ascertain aneurysm perfusion and patency of the underlying vessel. Afterwards, incised tissues (musculature, subcutaneous tissue and skin) were readapted and closed. Postoperative analgesia was ascertained with transdermal fentanyl application (12 µg/72 h). All animals received postoperative antibiotics by intramuscular injection of terramycin (60 mg/kg), vitamin B12 (Novartis, Basel, Switzerland) 100 mcg subcutaneously and prophylactic low-molecular-weight heparin (250 units/kg) subcutaneously. Creation of Bifurcation Aneurysm Due to an internal periodic veterinarian re-evaluation of the standards, anesthesia protocols were slightly adapted for bifurcation models. Premedication comprised subcutaneous application of Ketamine 20 mg/kg, Dexmedetomidine (Novartis, Basel, Switzerland) 100 mg/kg and Methadone (Novartis, Basel, Switzerland) 0.3 mg/kg. Animals were then preoxygenated through a facial mask, before installation of peripheral catheters in the auricular vein and in the contralateral auricular artery.
Then, propofol (1-5 mg/kg) (Novartis, Basel, Switzerland) and 0.2-1 mg/kg midazolam (Novartis, Basel, Switzerland) were intravenously administered, followed by intubation with an endotracheal tube (3 mm). A circle breathing system was chosen, which could be switched from ventilation to spontaneous breathing at any time. A heating pad was continuously used to keep the animals warm during the procedure. For monitoring, a continuous electrocardiogram, a rectal temperature probe and a bispectral index were installed. Analgesia was ascertained by local subcutaneous infiltration with ropivacaine (Novartis, Basel, Switzerland), followed by a continuous rate of infusion of 50 mcg/kg/min lidocaine (Novartis, Basel, Switzerland) and fentanyl boli of 3-10 mcg/kg/h. Postoperatively, Meloxicam 0.5 mg/kg (Novartis, Basel, Switzerland), Vitamin B12 100 mcg (Novartis, Basel, Switzerland) and Clamoxyl 20 mg/kg (Novartis, Basel, Switzerland) were administered subcutaneously. For the first three days, low-molecular-weight heparin (250 units/kg) and meloxicam were administered subcutaneously (methadone was likewise administered if additional analgesia was needed). The detailed surgical technique for creation of bifurcation aneurysms has been presented elsewhere [16]. Briefly, bifurcation aneurysms were created by end-to-side anastomosis of the right common carotid artery to the left common carotid artery and interposition of an arterial pouch, either vital or decellularized. A Protocol for Decellularization Untreated donor arterial grafts with a standardized length of 3-4 mm were taken from a segment of the common carotid artery of a donor animal, ligated with a 6-0 suture on one end and immediately reimplanted in a recipient animal or stored in phosphate buffered saline (PBS) at −4 °C for a maximum of 3 days. All aneurysm pouches were measured and photo-documented at creation and again at follow-up. For decellularization, a modified protocol of a previously described methodology was performed [11,17]: First, grafts were frozen in PBS at −4 °C for several days. Later, they were thawed, rinsed with distilled water and incubated in 1% sodium dodecyl sulphate (SDS) for 6 h at room temperature. The SDS-treated grafts were then washed with gentle shaking, refrozen and kept in PBS at −4 °C until reimplantation. To establish these modifications of the original protocol, various SDS concentrations (0.1% and 1%) and several time spans for decellularization (6 h, 9 h, 12 h, 15 h and 2 h, 4 h, 6 h, 8 h, respectively) were assessed. All samples were histologically cut and stained with 4′,6-diamidino-2-phenylindole (DAPI) to count nuclei and with hematoxylin-eosin (HE) to assess the integrity of extracellular matrix components such as elastic fibers. Cell nuclei were counted three times for three randomly selected cuts, in each slice specifically for the following wall layers of the vessel: endothelium, media and adventitia. Digital photographs of the microscopic images were taken and analyzed while blinded to the treatment. Near-complete graft decellularization with extracellular fibers still intact was documented after 6 h of 1% SDS treatment. Outcome Measurements After creation, sidewall aneurysms were followed with color-coded duplex sonography (SonoSite 180 PLUS, SonoSite, Bothell, WA, USA) on post-operative day 1, day 3 and every 7 days thereafter. After a follow-up period of 28 days, all animals underwent MRI with MR-angiography (MRA).
Immediately afterwards, aneurysms were surgically re-exposed and a fluorescence angiography was performed before euthanasia with an overdose of thiopental (Esconarkon ad us. vet, Streuli & Co, Uznach, Switzerland) and tissue harvesting. Aneurysms were macroscopically inspected and measured. Aneurysm volume was calculated on the basis of a = length, b = width and l = height. Afterwards, fixation in formalin (4% weight/volume solution, J.T. Baker, Arnhem, The Netherlands) and embedding in paraffin for histological analysis followed. Histological staining included HE, Masson-Goldner trichrome, smooth muscle actin, and von Willebrand factor (F8) staining. Stained slices were digitalized (omnyx VL120, GE Healthcare, Chicago, IL, USA) and evaluated with the JVS viewer (JVS view 1.2 full version, University of Tampere, Finland). Histologic scoring was performed blinded to treatment allocation. A 4-scale grading system ("none", "mild", "moderate", "severe") was applied to characterize histology, according to the previously presented neointima score [11]. Statistics Data were analyzed and visualized using GraphPad Prism statistical software 8.3.1 for Windows (GraphPad Software, San Diego, CA, USA). The unpaired Mann-Whitney test was used to calculate differences between vital and decellularized aneurysms according to histological analysis with non-parametric values. Values are presented as median with interquartile range and arbitrary units 0-3 representing the categories ("none", "mild", "moderate", "severe") according to the neointima score [11]. A p-value of <0.05 was considered statistically significant and a p-value of <0.001 was considered highly significant. Results During the study period, no aneurysm ruptured. All sidewall aneurysms (vital and decellularized) thrombosed spontaneously during follow-up. Histologically, inferior thrombus organization was observed in decellularized aneurysms when compared to healing characteristics in vital aneurysms. In the arterial pouch bifurcation model, three out of six aneurysms with decellularized walls thrombosed spontaneously, whereas all vital aneurysms (six out of six) stayed patent, with a relevant growth pattern in two cases. Study and Animal Characteristics In total, 22 New Zealand white rabbits were included in this study, weighing 3750 ± 293 g. No animals had to be excluded due to severe comorbidities and no animal died prematurely before planned euthanasia on follow-up day 28. See Figure 1 for an overview of the experimental setting. For histological evaluation, one vital aneurysm in the sidewall constellation and one decellularized aneurysm in the bifurcation constellation were excluded from the final analysis due to insufficient detection of the relevant structures after histologic processing of these heavily scarred aneurysms.
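Two details of the methods above deserve a brief technical note. First, the explicit aneurysm volume formula was lost from the source text (it was overwritten by a journal page header); given the stated inputs a = length, b = width and l = height, a plausible reconstruction, offered here only as an assumption rather than the authors' confirmed formula, is the standard ellipsoid approximation used in comparable aneurysm studies:

% Assumed ellipsoid approximation for aneurysm volume (a, b, l in mm):
V \approx \frac{\pi}{6}\, a\, b\, l
% e.g., a 4 x 2.4 x 2.8 mm pouch gives V ≈ 14.1 mm^3, the same order
% of magnitude as the baseline volumes reported below.

Second, the group comparison described under Statistics can be illustrated with a minimal sketch; the score vectors below are hypothetical 0-3 ordinal histology scores (n = 6 per group) invented purely for illustration:

# Minimal sketch of the unpaired Mann-Whitney comparison described above,
# applied to hypothetical 0-3 ordinal histology scores (n = 6 per group).
from scipy.stats import mannwhitneyu

vital_scores = [2, 3, 2, 3, 3, 2]            # hypothetical scores
decellularized_scores = [1, 0, 1, 1, 2, 0]   # hypothetical scores

stat, p_value = mannwhitneyu(vital_scores, decellularized_scores,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")      # p < 0.05 -> significant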
Aneurysm Patency All sidewall aneurysms showed initial flow upon creation but thrombosed within the first two weeks after creation and were not detectable thereafter with either ultrasound or MR angiography. Intraoperative fluorescence angiography confirmed flow obliteration in all these cases. The calculated volume, based on the measured aneurysm size, was significantly smaller for scarred aneurysms at follow-up (7.55 ± 2.73 mm³) than at creation (11.27 ± 3.27 mm³), p = 0.0033 (Figure 2). In bifurcation aneurysms, only three out of six aneurysms with decellularized walls thrombosed, and all of those with vital vessel walls remained patent until follow-up (exemplary illustration in Figure 3; the figure shows flow in both the aneurysm and the parent artery at the time of aneurysm creation, with one-month patency confirmed by magnetic resonance angiography in panel (c)).
Furthermore, these aneurysms showed a pattern of growth from 6.48 ± 1.81 mm³ at creation to 19.48 ± 6.40 mm³ at follow-up (p = 0.037). Histological Analyses Overall, there was more inflammation in decellularized aneurysms than in those with vital vessel walls. In sidewall aneurysms, this was reflected by significantly more neutrophil invasion into the thrombus in decellularized than in vital aneurysms (p = 0.0065) (Figure 4). In bifurcation aneurysms, there were significantly more inflammatory cells (neutrophils) in the wall of decellularized aneurysms compared to vital aneurysms (p = 0.013). Periadventitial fibrosis was higher in vital aneurysms than in decellularized ones (p = 0.013). All histological characteristics are summarized in Figure 5. In both models, aneurysm wall cellularity was significantly lower in decellularized aneurysms than in vital aneurysms, confirming a successful experimental decellularization (a). Spontaneous thrombosis and neointima formation (b) were strong in the sidewall constellation, but not in the bifurcation model. In the bifurcation model, aneurysm wall inflammation was significantly more pronounced in decellularized aneurysms when compared with vital aneurysms (c). However, there was no difference in terms of aneurysm wall inflammation in the sidewall model. On the other hand, there were significantly more inflammatory cells, i.e., neutrophils, in the thrombus of decellularized sidewall aneurysms, a difference not as distinctly observed in the bifurcation constellation (d). In turn, periadventitial fibrosis was significantly higher in vital than in decellularized bifurcation aneurysms, but not in sidewall aneurysms (e). There were no relevant differences in periadventitial inflammation (f), aneurysm wall dissection (g) or aneurysm wall hematoma (h) between the different wall conditions for either aneurysm model. A 4-scale grading system (0 = none, 1 = mild, 2 = moderate, 3 = severe) was applied to characterize histology [11]. *: p < 0.05, **: p < 0.001. Discussion The results of this study demonstrate that all arterial pouch (decellularized and non-decellularized) sidewall aneurysms thrombose spontaneously during follow-up without an increase in size. Poor thrombus organization in decellularized sidewall aneurysms confirms the important role of mural cells in aneurysm healing. In the bifurcation aneurysm model, removal of mural cells did not increase the risk of aneurysm growth. In our experiments, all sidewall aneurysms thrombosed spontaneously without any treatment. This is opposed to the natural course of saccular sidewall aneurysms which were sutured as standardized arterial pouches on the abdominal aorta in a rat model [18]. In that model, the authors found a clear pattern of growth in decellularized aneurysms [11]. We hypothesize that the pressure of surrounding muscular tissues in the rabbit neck may keep the artificially created saccular aneurysms from growing. Furthermore, the base dimensions of these aneurysms are given by the diameter of the carotid artery of the donor animal. This size was usually smaller (approximately 1-1.5 mm) than in a rat aorta (2-3 mm). Together with the different hemodynamics between the rat aorta and the rabbit carotid artery, these aneurysms may have been simply too small for a sufficient perfusion, particularly since the relatively thick and muscular arterial walls may have a tendency to self-contract or to develop increased fibrosis after transplantation. Ding et al.
found a patency rate of 95% after 3 weeks in venous pouch sidewall aneurysms on rabbit carotids [19]. In order to overcome these limiting factors, as a next step we repeated the series with non-decellularized and decellularized arterial pouch aneurysms in a hemodynamically more challenging bifurcation constellation. Previous experiments demonstrated in various species (rats, rabbits and dogs) that spontaneous thrombosis occurs less frequently in bifurcation than in sidewall venous pouch aneurysms [20][21][22]. In contrast to these earlier findings, however, half of the decellularized arterial pouch aneurysms thrombosed even in the setting of an artificial bifurcation. Most previous studies with degenerated vessel walls used elastase eradication of the cells [23][24][25][26]. Sodium dodecyl sulfate (SDS) is a detergent that destroys cells but leaves extracellular matrix intact. Its use for experimental decellularization worked well in a previously established rat model, where decellularized aneurysms have been shown to grow over time and eventually rupture, in contrast to aneurysms with vital vessel walls [6,9,11]. However, the completely decellularized graft (including eradication of endothelial cells) after SDS treatment may exhibit prothrombogenic properties. This may explain the high rate of thrombosis in decellularized bifurcation aneurysms. The growth pattern of bifurcation aneurysms with vital vessel walls indicates that the hemodynamic constellation (in a vessel bifurcation) is an important factor for aneurysm enlargement/growth. When comparing vital and decellularized aneurysms histologically, there was a clear pattern of more pronounced inflammation in decellularized aneurysms for both sidewall and bifurcation aneurysms. This is in line with previous findings [27][28][29]. For aneurysm healing, intraluminal thrombus needs to undergo gradual organization into a mature thrombus and a neointima needs to form. This process is mediated by smooth muscle cells and myofibroblasts, which migrate into the thrombus, presumably originating in the vessel wall. If there is a substantial diminution in the pool of these cells (i.e., after decellularization), the intraluminal thrombus will undergo cycles of lysis and re-thrombosis instead of scarification [6,11]. This unstable thrombus formation causes local inflammatory reactions which promote further vessel wall weakening. In summary, there was more pronounced inflammation in decellularized aneurysms than in those with vital walls. However, decellularized aneurysms did not show any pattern of growth or rupture, in either a sidewall or a bifurcation constellation. Therefore, the presented aneurysm models need further refinements to allow for meaningful experiments with a translational focus. Further experiments should use other degrading substances like elastase, test anti-platelet medications to prevent spontaneous thrombus formation, or relocate the experimental aneurysms into the abdominal cavity to allow for more unrestricted growth. However, with all these issues addressed, still no animal model will ever perfectly match all aspects of the human condition of the disease [30,31]. For instance, there are relevant differences in thrombus formation and endothelial cell coverage between rabbits and humans. In addition, we used healthy arteries in which cells, but not the extracellular matrix, were destroyed to form aneurysms. However, the elastin content of real aneurysms is inferior to that of healthy arteries [32].
Furthermore, the aneurysm angioarchitecture influences hemodynamic characteristics, and with that the rate of spontaneous thrombosis. Despite all efforts made to standardize aneurysm dimensions and geometry (the latter specifically for either the sidewall or the bifurcation constellation), we could not avoid differences of a few millimeters in the size of the vessel pouches and thus in hemodynamics. Further limitations include the relatively small sample size of n = 6 animals per group. Lastly, a follow-up longer than 28 days would probably better characterize hemodynamically induced changes in the bifurcation constellation. Conclusions The results of poor thrombus organization in decellularized rabbit arterial sidewall aneurysms confirm the important role of mural cells in aneurysm healing after intraluminal thrombus formation. Several factors such as restriction by neck tissue, small dimensions and hemodynamics may have prevented aneurysm growth despite pronounced inflammation in decellularized aneurysms. Even in the bifurcation aneurysm model, removal of mural cells did not increase the risk of aneurysm growth but resulted in a higher rate of spontaneous thrombosis. Future studies should examine the role of less thrombogenic degenerated aneurysm wall pouches in a rabbit artificial bifurcation model. Funding: This work was supported by research funds of the Research Council, Kantonsspital Aarau, Aarau, Switzerland (FR 1400.000.054). The authors are solely responsible for the design and conduct of the presented study and declare no competing interests.
2020-04-02T09:13:24.812Z
2020-03-27T00:00:00.000
{ "year": 2020, "sha1": "5efc82257d23c2c0d55c0a4f84beb52401fa6569", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3425/10/4/197/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4149710138571b0e629fea85314a84b2f4ea3abf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
158291720
pes2o/s2orc
v3-fos-license
Women, Water Resource Management, and Sustainable Development: The Turkey-North Cyprus Water Pipeline Project Women's role in water resource management is recognized, yet the implementation of methods and strategies to get beyond gender-based obstacles to women's equal participation in water resource management related projects remains vague. Mainstream considerations on the gender aspects of development and environment focus on women as having an intrinsic relationship with the environment. Women are perceived as a natural reflection of their responsibilities for the household and the comfort and security of future generations. Contrary to mainstream environmental and political ecology research, this paper sees gender as relevant within policy and practice across multiple levels, and within institutions related to natural resource governance. Based on this, the paper looks at sustainable development and water governance issues with the help of a specific case: the Turkey-North Cyprus Water Pipeline Project. Through broad reviews of project documentation, and interviews with people who were directly involved with the project and with women's organizations, the paper draws insights on the gender aspect of the decision-making mechanisms related to water governance. The results indicate that participation by women in resource management is marginal in North Cyprus. The paper argues that this is a reflection of a broader problem, which is gender inequality. In conclusion, one can argue that future water projects need to realize more sustainable outcomes and greater effects on gender equality in North Cyprus. Introduction There are various factors which are influential in determining patterns of natural resource use and governance, and all have multifaceted effects on our lives. These factors range from the choice of neo-liberal programs over a social welfare agenda, policies made as responses to climate change, responses in the face of population growth, increased geographical mobility, and urbanization, among others [1]. Gender engages with all these fields as an important analytical tool, which helps us to understand and follow policies and changes in natural resource governance. It is along those lines that the link between gender, environment, and natural resource management can be best understood, through careful analysis of theory, policy, and practice in these distinct fields of research. Gender can be seen as "a political, negotiated and contested element of social relationships" [1]. Gender, from that perspective, provides essential insights for understanding the social dynamics of environment and natural resource management in the sense that it is a critical analytical concept for exploring the social and political aspects of natural resource management and governance across various empirical settings. At this point, it is essential to state that sustainable natural resource management entails the involvement of numerous social actors and stakeholders. Among others, active and meaningful participation of women in decision-making processes regarding the use and management of natural resources is important [2]. Natural resource governance as a process involves the production, employment, and struggle of gender subjectivities, ideologies and identities. Not many may notice such practices, but exclusions and possibilities are shaped by gender discourses within environment and development processes [1]. Based on the points mentioned above, it is noteworthy that developing
gender-sensitive environmental policies is quite difficult for policy-makers in the relevant fields. Perhaps more important than that would be to question the degree to which such policies will reduce the production and struggle of gender subjectivities, ideologies, and identities in a particular society. It is possible to say that focusing on women out of the context of their wider social relationships is unproductive. The main assumption is that women are disadvantaged and subordinated in their relations with men, and these relations shape access to resources, participation in decision-making, and the exercise of power within communities [3]. A gender perspective views relations between men and women as socially constructed differences in roles and expectations. From that perspective, gender approaches to water resource management underpin the notion that a more equitable division of labor and of power between men and women is possible with a meaningful intervention, which may facilitate changes in gender relations within communities. More specifically, for more equitable relations among women and men, a gender approach to water resource management suggests a more balanced division of labor between men and women, especially in the areas of "access to information; physical work; contributions in time and cash; decision-making; and access to and control of resources and benefits" [3]. A gender perspective, in that respect, aims to reveal differences between men's and women's interests and how these interests overlap, conflict, and are negotiated. The approach questions the underlying dynamics of the subordination of women in each and every unit of social life, and is therefore concerned with norms, values and hierarchies that determine these conditions at large. Such an approach also attempts to reveal gender differences based on various demographic characteristics such as wealth, age, and ethnicity. It traces the ways in which social and economic trends influence gender roles and relations within society [3]. In this regard, the approach at hand provides possible avenues for the embracement of a gender equality perspective in water resource management. In this paper, in more specific terms, it is argued that a gender approach is necessary as women are central to water governance, and it is possible to see that gendered hierarchies exist within that structure in North Cyprus, as they do in other countries. Theorizing Gender and Environment Gender, environment, and natural resource management can be linked via three broad themes. First, one can explore gender and environment within the changing global context by analyzing macroeconomic policies and alterations in governance linked with neo-liberalism. Second, one can attempt to understand the ways in which 'gender' has been incorporated in sustainable development goals. Finally, one can examine gendered agency in environmental actions by addressing the realm of knowledge and authority [1]. The last point may provide a more critical perspective, but all are invaluable in terms of providing a versatile gendered approach to natural resource management.
The literature on gender and environment is broadly divided into two main strands: (1) a liberal attempt to incorporate the gender aspect into developmental policy and practice, and (2) relational perspectives that lay emphasis on binary power relations between women and men. In both literatures, the main assumption is that men and women experience the environment differently from each other because of their materially different daily work activities and responsibilities [1]. Since men and women have different roles, responsibilities, and knowledge with regard to the environment, they have different interests in natural resource management. This notion reflects more the liberal perspective mentioned above, which sees gender as an important variable in determining ecological change and sustainable development. Relational perspectives on gender, on the other hand, argue that it is more valuable to examine power relations and power struggles between women and men over accessing and controlling natural resources, and their real expressions in conflict, cooperation, and coexistence over nature [1]. To understand these perspectives better, one may need to look at the scholarship that explores gender in the framework of ecological change, economic circumstances, and a legal normative perspective. Similarly, the topic of gendered property (water and land) rights is extensively researched by [4][5][6][7][8], among others. Women's participation in local development programs and community-led bodies is also an important area of research in the domain of gender and environment [9][10][11]. Gendered environmental knowledge is another important field of research that can be considered enlightening in terms of understanding the link between gender and natural resource management [12][13][14][15]. Exploring the dynamics of gender in policy discourses also appears to be an important field of study in the same vein. In all these studies, gender dynamics are seen as constituted through norms and institutions and reproduced through individuals [1]. Other works in the areas of feminist and post-colonial theories reconsider 'gender' as a central analytical category. They attempt to constitute gender through other domains of social differentiation and power struggle among different races and classes [16][17][18][19][20].
A Gender Approach to Water Governance Women and women's role in the analysis of natural resource management in broader terms are reviewed above. This section aims to explore the more specific link between gender, water governance, and sustainable development as important themes of this research. It is possible to argue that, because of their gendered socioeconomic roles, in other words their disadvantaged political and economic position in society, women are more vulnerable to ecological and water-related problems [21]. Underrepresentation is one of the main problems, where women in many cases are excluded from environmental decision-making processes. Despite this fact, it is rare to find mention of water issues in the world of gender policies. In the world of water policies, on the other hand, gendered approaches are available, but their depth and breadth can be questioned [22]. One of the contexts in which gender and water are linked is the domain of household (domestic) water use and irrigation. Identifying and addressing gender concerns in these fields requires an in-depth understanding of both contexts. Household (domestic) water usage related issues are contextualized in the framework of social rights and welfare, as well as health and hygiene [22]. Irrigation, on the other hand, is a matter of economic efficiency and production. Adaptation of the "basic needs/social welfare" approach in the domain of domestic water acknowledges women's needs for water, but does not specifically address women having a greater say in water management. Still, in this approach, drinking water and sanitation policy agendas are linked to women. In irrigation policy, women are almost non-existent because this topic is considered to be part of a male world of production. Farming and irrigation are considered male-dominated jobs, and women's opinions are thus seen as irrelevant in these topics [22]. Lynch, cited in [22], also argues that women's roles are more recognized and valued in the domestic water sector.
As discussed above, it is possible to say that there is a general problem of a lack of integration of women in environmental decision-making processes. At this point, a mention of gender and sustainable development can be helpful in addressing that problem in particular. There are several definitions of sustainable development, but the most often cited one is the definition proposed by the Brundtland Commission [23][24][25][26]. That broad definition highlights the significance of intergenerational equity. The concept of conserving resources for future generations marks the core of sustainable development policy. The long-term stability of the economy and environment can be described as the main objective of this framework. Integrated decision-making is thus at the core of the sustainable development understanding [25,26]. It is only through the integration of economic, environmental, and social concerns throughout the decision-making process that sustainable development can be achieved [27]. The concept of integration differentiates sustainability from other forms of policy. In practice, encountering a comprehensive and highly integrated problem requires the mixing of economic, environmental, and social objectives across sectors, territories, and generations. Thus, any fragmentation of these objectives can be viewed as a stumbling block in the decision-making processes and can hinder development that can be regarded as truly sustainable. Within that perspective, one of the crucial aspects of accelerating sustainable development is to end all forms of discrimination against women [28]. Empowering women by giving them equal rights to economic resources such as water is vital in realizing sustainable development. For that matter, the strengthening of policies and legislation for greater gender equality is essential. Nevertheless, notions of gender possibly prompt reactions that this is fundamentally 'women's stuff'. It is noteworthy that gender is not a synonym for women. It, in principle, mirrors the dynamic relationship between male and female. It builds the first political order within a society. That political order affects the ways in which we relate to others, and by what means we govern ourselves. In other words, those struggling for gender equality are not merely advocating for a special interest group that will only have an impact on women [29]. "The male/female relationship is created and sustained by all within the society." [29].
It is possible to find many references made to women's role in improved water governance in particular and to sustainable development in general [30]. For instance, Agenda 2030 [31] connects women's empowerment (SDG 5) and the importance of "water and sanitation" (SDG 6), which provides a guide on the interlinks between the two [30]. The UN Water Synthesis Report (UN Water, SDG 6 Synthesis Report) and the UN Women and Global Water Partnership Action Piece [32] each make the connection, and they propose venues for action. It is important to highlight that it is not clear whether these reports and action plans are referring to women in their individual/professional capacity or to women's organizations. In both roles, women's potential must be given attention. Especially on that matter, [33] indicates that the involvement of women in water resources development, management, and usage means an involvement of strong social networks that are characterized by norms of trust and reciprocity. This means new solutions to water-related problems. In the same place, it is argued that projects can be more sustainable and infrastructure development can generate maximum social and economic returns with the involvement of women [33]. This is because women, being trusted in their communities, have the ability to reach down to different segments of society. They can inform and engage community members, and this may result in more locally owned projects and programs. Evidence shows that project effectiveness, efficiency, and sustainability can be enhanced by supporting gender equality and women's empowerment in infrastructure operations [34]. Infrastructure projects can be made more gender-responsive by addressing the needs and constraints of women and men. Introducing measures such as providing quotas for projects to boost women's opportunities in employment, and enabling decision-making roles for women, can be helpful. Among the stated measures are interventions to provide more gender-responsive infrastructure projects and to maintain women's and men's full access to project benefits. Since this paper is mainly a review of a water infrastructure project, understanding the dynamics discussed above in the context of North Cyprus is of utmost importance for drawing some important conclusions about gender inequality as well. As [21] rightly pointed out, we need to pay attention to "the exclusiveness of role distribution and its implications for resource allocation and the distribution of power" for gender equality in water resource management. This can only be done once it is appreciated that women and men assign different priorities to water, and this certainly influences their knowledge bases in terms of water use. This means women and men may have differing ideas regarding project designs. It is important to indicate that policies in water management draw largely on men's experiences, needs, and priorities. Therefore, these policies are inadequate in addressing the needs and priorities of the rest of the community. One should recognize that the right to water is an inherent social right for everyone. Easy access to water supplies for all members of a community is one dimension of the matter at hand. Another important point is to have a consistently sufficient supply of water. Without any doubt, the quality of water for health and hygiene is another crucial dimension. In that respect, the following part moves on to review the work, efforts, and skills that women devote to the governance of water as an essential resource of
life. Policy Entry Points As mentioned above, water is a basic human need and a social right. Nevertheless, certain limitations arise for women largely in terms of access to water, resource control, participation in decision-making structures, and capacity building in that specific field. It is possible to argue that women experience an extra burden in times of water scarcity and pollution [35]. Gender equality can be strengthened by providing more integrated and sustainable access to water and water services. It is seen that more inclusive water policies are being developed, which recognize the different needs and demands of women and men [35]. The most critical aspect of these efforts is the acknowledgment that both formal and informal women's networks can play vital roles in water management. This part aims to highlight some of the inclusive water policies recognized by various declarations at different times in various places. "The International Conference on Water and Environment", Dublin, 1992, acknowledged the fundamental role that women have in the provision, management, and safeguarding of water (Dublin Principle 3) [35]. In the same conference, positive policies were also suggested, which may address women's specific needs. This conference also highlighted that the key is to empower women and to define at what level they may participate in water resource programs. In that vein, this can be considered one of the policy entry points for women. The Hague Ministerial Declaration issued by the second World Water Forum in 2000 also highlights empowering women through a more participatory process of water management [36]. The 2001 Ministerial Declaration of the Bonn International Conference on Freshwater also emphasizes a participatory approach in water resources management where both men and women should have an equal standing in managing water resources [37]. The Johannesburg Plan of Implementation was issued by the World Summit on Sustainable Development in 2002, and in that document gender sensitivity is again highlighted, this time in the context of the realization of the Millennium Development Goal on safe drinking water and sanitation [38]. In 2003, the third World Water Forum was held in Kyoto, Japan. In the Ministerial Declaration of the forum, the gender aspect is again mentioned in the context of water resource management [35]. These efforts have been important milestones in acknowledging the intrinsic link between gender and water governance. All these declarations, in one way or another, call for tangible results beyond mere talk in the framework of the link between women and water. Within that background, the rest of the paper moves on to explore the dynamics in North Cyprus through careful analysis of a water infrastructure project, namely the Turkey-North Cyprus Water Pipeline Project.
Materials and Methods This paper entails an in-depth review of the water infrastructure project in North Cyprus and includes an extensive treatment of gender equality concerns with respect to this project. It is a qualitative paper based on a single case study. It aims to review in depth the process in which women take part in natural resource management, in particular water governance, through the analysis of a specific water infrastructure project. It is well established that the qualitative case study method in International Relations is helpful in explaining complex phenomena [39]. The role of women in water governance in the context of North Cyprus is a relatively unstructured field of research; thus, employing a qualitative case study method provides invaluable insights. With regard to data collection, secondary resources such as textbooks, magazine articles, and commentaries were used, and documents were sourced in North Cyprus. Supplementary to these, primary data collection was undertaken. Archive searches, key informant interviews, field observation and informal interviews were used for primary data collection. In-depth and semi-structured interviews were conducted. Interviewees' personal interpretations of key issues with regard to water governance in North Cyprus and women's place in the overall context were elicited with open-ended questions. In terms of the interviewees' selection procedure, a 'purposeful' sampling [40] was employed. Those actors/informants who were considered to be information-rich cases were engaged with until no new research themes emerged. Interviews were held both with those who manage the water infrastructure project in relevant decision-making bodies, to understand whether they view women's participation as imperative, as well as with women's organizations, to get their position and understanding of the matter at hand. Interview questions were organized around certain topics (themes). Project coordinators and directors were asked whether they had thought of any activities to reduce gender inequality, for instance rules necessitating women's participation on project committees. Another theme was whether the project took direct measures to empower women, mainly by working with or mobilizing local women's organizations. They were also asked whether the different perspectives, needs, and priorities of women and men were identified during the design of the project. All these questions were designed to understand whether women in general inform policy implementation of the water infrastructure project under scrutiny. Interviews with women's organizations were also held. They were asked whether they had held any positions in the management and usage of the island's water resources prior to the project. The questions that were submitted to them then explored their understanding of water management after the implementation of the project. Another theme in this set of questions posed to women's organizations was whether they had received any invitations to take part in any of the decision-making bodies related to the project. They were also asked whether their perspectives, needs and priorities were sought throughout the design and the implementation of the project. Another question directed to women's organizations was about their interventions in curbing corruption (if they think there was any) in the water project. They were also asked about their interventions in preventing conflicts among different interest groups shaped alongside the institutional arrangements for the distribution, maintenance and
management of the water brought from Turkey. Their suggestions for increasing resource efficiency in North Cyprus were also sought. A cross-checking of the information provided was possible by talking to officials and relevant people in the field. Field observations of the water project were also undertaken. Interviews were conducted in Nicosia, Cyprus, and were recorded and catalogued with the consent of the interviewees. In terms of data analysis, thematic analysis was employed [41] with the aim of discovering patterns and developing themes in the field. Identifying systematic patterns and interrelationships across themes was important for the research at hand. The research, in that sense, was a process of grouping the data into themes to be able to see the role and the participation of women in a significant policy-making field. Water Governance in North Cyprus There are a number of problems related to water governance in North Cyprus. The over-usage of aquifers is one of the main water-related problems in that sense (Interviewee 1: Göze in Appendix A). Due to overuse, existing aquifers have become salinized. As underlined by [42], overuse and salinization of coastal aquifers in North Cyprus is mainly the outcome of inefficient water governance. In retrospect, the island was always in need of forestation and improved irrigation. When, in 1878, British colonial officers realized these conditions, they attempted to improve Cyprus's climate by reforestation and improving irrigation [42]. In the 1950s, finding alternative water resources turned out to be a part of their endeavors with regard to water governance of the island. In that respect, the British colonial administration initially explored the possibility of water purification, but they eventually decided that this might not be a feasible project [42]. In its place, the British administration decided to import water from elsewhere, and they considered importing water from Turkey. Since these years were witnessing intercommunal conflict between Greek Cypriots and Turkish Cypriots, Turkey was not considered to be a reliable source for Greek Cypriot experts (Interviewee 1: Göze in Appendix A; [42]). Nonetheless, the politically more positive atmosphere of the 1959 London and Zurich agreements presented new avenues for an underwater pipeline project for bringing water from Turkey. In Nihat Erim's (Erim in Appendix A) memoirs, the idea was to construct a 45-mile underwater pipe at a cost of $10 million at 1959 prices [42]. With the establishment of the Republic of Cyprus in 1960, the interest shifted from Turkey to Syria as a possible source of water. Nevertheless, due to the unsuitable political conditions in Syria, the idea was dropped. Then, the RoC, until the 1963 intercommunal conflict, focused instead on dam construction and water conservation. When the 1974 coup d'etat and the following Turkish military operation resulted in the de facto division of the island, two separate and non-cooperative administrations were established. Since then, both administrations have their own separate infrastructure and natural resource management. The RoC, as the internationally accepted sovereign of the island, became an EU member state in 2004, but the acquis communautaire is suspended in the north of the island until a political settlement. As a result, the island's north is not a part of any international regulatory structures. Mostly due to that reason, natural resource management in North Cyprus is most of the time characterized as
disorganized, short-term, and ad hoc. Water management issues nevertheless extend beyond Cyprus's de facto borders, since there are a number of underground aquifers shared by the two sides. Nonetheless, conflict has been an important part of the inefficient water resource management. A lack of coordinated water infrastructure development after 1974 resulted in grave mismanagement of the trans-boundary resources of the island (Interviewee 4: Hüdaoğlu in Appendix A). In the north of the island, total annual freshwater resources are 90 million cubic meters (mm³) and over 90% of this is supplied by groundwater resources. The annual demand is 105-110 mm³ (Interviewee 1: Göze in Appendix A). Some 60-80% of the water is allocated for agricultural use [42]. It is possible to say that there is no water policy regarding a more efficient supply of agricultural water. As mentioned elsewhere, over-extraction of groundwater resources has led to the depletion of all aquifers. Salt water intrusion is the main problem of all coastal aquifers in the north. With all these problems, it is clear that North Cyprus is in need of an efficient water governance strategy. Within that framework, it has always been important to find alternative water resources for North Cyprus. The Turkey-North Cyprus Water Pipeline Project can be seen as part of that endeavor. The Turkey-North Cyprus Water Pipeline Project With the aim of pumping water from the south Turkish coast to North Cyprus, an undersea water pipeline was constructed and began to operate in 2015. The construction of the pipeline took two years and was managed by the General Directorate of State Hydraulic Works (DSİ) of Turkey. The Anamur River is the source of the project, and the Alaköprü Dam was built to collect water. The water travels to North Cyprus, a total of 107 km, to the Panagra Dam. The pipeline is anticipated to transfer 75 mm³ of water annually. It is important to note that the project intends to share water equally between domestic use and agricultural use. Beyond these technicalities, the pipeline project was interrupted by discussions over who will manage and control the water flowing from Turkey. This tense atmosphere was the result of a lack of planning on behalf of the Turkish Cypriot authorities. It is apparent that almost no plans were made with regard to water management until the time of the inauguration.
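The figures quoted above allow a quick back-of-the-envelope balance. It is offered here only as an arithmetic illustration of the cited numbers; the equal domestic/agricultural split is the project's stated intention, not a measured outcome:

% Annual water balance for North Cyprus, using only figures cited above:
\text{pre-pipeline deficit} \approx (105\text{--}110) - 90 = 15\text{--}20\ \text{mm}^3/\text{year}
\text{pipeline transfer} = 75\ \text{mm}^3/\text{year} \approx 37.5\ (\text{domestic}) + 37.5\ (\text{agricultural})

On these nominal numbers, the pipeline's capacity exceeds the pre-project deficit several times over, which underlines that the contested questions discussed next were about governance and distribution rather than raw supply.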
In 2010, an agreement was signed between Turkey and the administration in the north for the transfer of the water for 30 years. A renewal for a further five years is also mentioned in the agreement. After Turkey finalized the construction of the underwater pipeline and key arteries, a tendering process for internal water distribution and management began. From that moment, discussions about the privatization of water governance, the price of the water, and the tendering process for management of the water all turned out to be matters of concern for the Turkish government, for the Turkish Aid Commission, which financed the construction phase of the pipeline, and for the administration in North Cyprus (especially for the Turkish Cypriot public). Therefore, the tendering process was very tough, mostly due to the Turkish Cypriot municipalities' fear of losing control over water governance. Beyond other alternatives like desalination, it is stated that for many Turkish Cypriots, all the conditions regarding the pipeline project were imposed by Turkey as a way to tie the island more closely to itself [42]. Amid all these discussions one can see no long-term plan for water management, only great confusion over whether the water will be governed by a private Turkish company together with the administration in the north or whether its management will be given directly to the municipalities. Considering the acknowledged necessity for gender equality in water governance related matters and decision-making processes, the next part looks for traces of women's involvement in such a critical issue area in the case of the water pipeline project. It is seen that this project has many shortcomings throughout the design and the distribution processes. One way to start a discussion on these would be to examine the aspect of gender equality.
Women's Involvement in the Overall Project

Following the analysis of the broader topic of water governance and of the pipeline project in particular, this part looks at women's involvement in the overall project. Although water has to be seen as an economic, social, and environmental good of the whole community, in the specific case of the pipeline project it is not possible to speak of a coordinated development of water resources with all-encompassing participation of the community and beneficiaries in the management of water resources. On that point, one of the interviewees (Interviewee 3: Derya in Appendix A) mentioned that it is not only women who have been excluded from the whole project; in practice, none of the civil society organizations were incorporated, because the project adopted a top-down perspective. Beyond this project, water policies focus merely on water provision, and women are not part of the provision, management, and safeguarding of water in any related project, commission, or institution in North Cyprus. In this context, when the relevant legislation, policies, and programs concerning the pipeline project are considered, it is seen that they were all drafted and finalized without any participation of women. Interviewee 1 (Göze in Appendix A) stresses that, within an uncoordinated and sectoral approach to water governance, there is no women's involvement in the Water Commission established under the roof of the Union of the Chambers of Cyprus Turkish Engineers and Architects. There are, however, two women engineers on the sub-committee of the same commission. The main reason cited for this is a lack of women's expertise on these highly technical matters (Interviewee 1: Göze in Appendix A). Considering that women's involvement is not merely about counting the number of women involved in these processes, one needs to understand whether gender-related perspectives, such as women's needs and ideas, were incorporated into the project. Both Interviewee 2 (Atlı) and Interviewee 3 (Derya) (Appendix A) stress that their ideas as women's organizations were not considered significant during the planning and implementation process. It is possible to argue that the project adopted a top-down perspective without any feedback from the community; FEMA (Feminist Atelier), in particular, published a number of articles criticizing such a perspective, dating back to 2015, when the construction of the pipeline was finalized and a public debate began on how the water should be distributed (Interviewee 3: Derya in Appendix A).
On the side of the women's organizations, it is observed that natural resource management, and water in particular, is not among the most important items on the agendas of women's associations in North Cyprus. Since issues like violence against women and women's rights seem to have a more direct effect on women, those issues receive more vocal attention (Interviewee 3: Derya in Appendix A), and the management of water resources occupies a comparatively secondary place. One of the interviewees responded that they leave such issues to environmental organizations (Interviewee 2: Atlı in Appendix A) because they do not have enough capacity and expertise to address such a broad range of topics and problems. Interviewee 2 (Atlı in Appendix A) also describes the project as one in which a top-down approach was pursued in the design and implementation process, so that it substantially lacks a transparent framework that would include different segments of the community. It can be argued that the lack of a bottom-up approach in the overall project resulted in the rather marginal involvement of women.

Above all, there is a lack of emphasis on community ownership of the project. In addition, the project had no mechanisms for involving women in the design of water initiatives. The project design fell short of including a strategy and action plan stating how women would take part in the design and management of water initiatives linked to the project; this could have been done, for example, by reserving positions for women on the project committees. The project, in that sense, lacks gender-inclusive community planning processes, such as separate women's meetings, gender equality in water committees and project processes, and gender-inclusive facilitation. Besides lacking gender-inclusive components, there is also no implementation of any policy frameworks or strategic plans to encourage the long-term sustainability of the water supply in North Cyprus (Interviewees 1, 2 and 3 in Appendix A).

In the light of these observations, the following part provides some recommendations on how to increase the equal participation of women and men in decision-making processes on water-related matters.
Discussion

From a gender equality perspective, as mentioned above, the aim is to overcome patterns of inequality between women and men. Attaining greater equality between the two genders nevertheless necessitates changes at various levels of society: attitudes and relationships, institutions and legal frameworks, economic institutions, and political decision-making structures all need to change in order to achieve greater gender equality [43]. To commence a dialogue for negotiating shared objectives in sustainable water supply, local women's and other non-governmental organizations (NGOs) are valuable partners. Nevertheless, women need institutional strengthening to play a greater role in influencing priorities in infrastructure projects in North Cyprus. This problem deserves further attention also because it reflects the greater problem of gender inequality, which is apparent in all aspects of life in North Cyprus, from the economic to the political domain. According to [44], the non-recognition and isolation of North Cyprus due to the Cyprus Problem has been a major obstacle to political and economic development and to gender equality. The reasons cited include "the earnings differentials, occupational segregation, unequal distribution of unpaid work, attitudes towards working women, and the gender gap and segregation in education" [44]. Moreover, the Turkish Cypriot authorities lack policies to prevent discrimination against women, and there are no mechanisms to raise women's awareness of non-discrimination and equality principles (Cyprus Dialogue Forum, Gender Equality). As a result of non-recognition, international treaties and legal frameworks on gender equality are only unilaterally ratified by the parliament in the north of the island, and thus are not recognized by the relevant international actors [44]. For example, to improve the legal framework, the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW) and the Istanbul Convention have been unilaterally ratified by the Turkish Cypriot authorities; still, the implementation of certain responsibilities under these international and regional instruments is rather slow. Under these conditions, it is civil society stakeholders who can be considered the primary actors for addressing and improving gender equality in the north of the island, and they are expected to ensure and monitor the implementation of such frameworks. Although a similar expectation exists in the framework of water resource management and water infrastructure projects, in practice, as seen in the Turkey-North Cyprus Water Pipeline Project, women do not have equal access to these projects. Their access is hindered by (a) the lack of adequate policies to represent them in water-related initiatives and decision-making bodies, and (b) the fact that their own scope of interests as women's organizations is very limited.
Likewise, for North Cyprus, tailoring unique implementation strategies and activity areas to increase the active participation of women in water management is critically important. To begin with, women's participation and leadership have to be strengthened. At the local level, activities must be devised that aim to increase women's representation in water-related decision-making bodies. For example, a gender-balance quota can be instituted in key positions such as the boards of the local water management organizations. Nonetheless, this has to go beyond symbolic positions on boards, and extra effort has to be made to increase the influence of women on such boards. Linked to that, activities can be organized to help women strengthen their leadership skills for enhanced influence. Leadership training, the strengthening of women's networks and organizations, and capacity-building on rights and gender roles are among the activities which can be viewed as helpful [45]. Studies show that such activities help to boost women's participation in leadership positions at the local, municipal, and provincial levels [46,47]. Beyond these, policies and plans which can address gender inequality in its various forms are needed to strengthen women's economic empowerment and to encourage women's organizations to include water and sanitation issues in their agendas [45]. Developing strategies to include women in the field of natural resource governance can thus be seen as a helpful tool for addressing gender inequality in North Cyprus in broader terms, as well as contributing a different point of view on water governance matters.

Conclusions

Access to water and related services is essential for survival. The paper has highlighted the significant role of women in the management and use of this vital resource. Notwithstanding their critical role, there are structural stumbling blocks for women in terms of resource control, participation, and capacity in North Cyprus, where there is a general lack of effective water governance. In that context, the issue of women and water is almost non-existent. North Cyprus urgently needs integrated and sustainable water resource management with a specific emphasis on gender equity. Experiences around the world have shown that women themselves have to define clearly what their interests and concerns are in the sphere of water governance and to become part of water-related issues at various levels. In the case of North Cyprus, both formal and informal women's networks must develop a framework to ensure that their concerns and interests become an indispensable dimension of related programs, projects, policies, and legislation. In the case of the Turkey-North Cyprus Water Pipeline Project, unfortunately, women have as yet no role, no capacity, and no say at any stage of the project. As women are also major beneficiaries, they need to be effectively represented in water-related initiatives and decision-making bodies. Related infrastructure projects like the Turkey-North Cyprus Water Pipeline Project must be designed to provide women with economic opportunities; to make the proper services available to women; to actively incorporate and empower women; and to encourage women to engage in decision-making and leadership roles in related initiatives and bodies. This research attempted to link together two seemingly distinct fields of research, gender equality and natural resource management, within North Cyprus, where even natural resource management as a policy field is still very immature. Further studies are certainly needed to detail the stumbling blocks that remain in front of women's participation, specifically in the context of North Cyprus.

Funding: This research received no external funding.

Conflicts of Interest: The author declares no conflict of interest.

Appendix A

A1. Interviewee 1. Göze, B., personal communication, 19 February 2018. Bektaş Göze is the president of the Water Commission formed under the roof of the Union of the Chambers of Cyprus Turkish Engineers and Architects, Nicosia, North Cyprus.

A2. Interviewee 2. Atlı, Mine, personal communication, 27 July 2018, Nicosia, North Cyprus. She is a lawyer and, at the same time, a project coordinator of KAYAD (a women's organization). Her responses were solely in her own capacity.

A3. Interviewee 3. Derya, Doğuş, personal communication, 28 July 2018, Nicosia, North Cyprus. Doğuş Derya is a Member of the Parliament and a FEMA (Feminist Atelier) activist.

A4. Interviewee 4. Hüdaoğlu, A., personal communication, 18 February 2018. Ahmet Hüdaoğlu, an electrical engineer, is the former president of the Union of the Chambers of Cyprus Turkish Engineers and Architects, Nicosia, North Cyprus. He initiated the formation of the Water Commission under the roof of the Union during his presidency. He also participated in various negotiations which took place at different times on the Turkey-North Cyprus Water Pipeline Project.

5. Nihat Erim was the 13th Prime Minister of Turkey. He was a renowned Turkish politician who participated in the negotiations on Cyprus in London, England, in 1959.
2019-01-22T14:01:58.521Z
2018-08-10T00:00:00.000
{ "year": 2018, "sha1": "eecbf8bb6fa0a0474ed8c7907ef02726156f0682", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-9276/7/3/50/pdf?version=1533872026", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "eecbf8bb6fa0a0474ed8c7907ef02726156f0682", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Political Science" ] }
56258525
pes2o/s2orc
v3-fos-license
INVESTIGATION OF RECEPTIVITY PRINCIPLES IN THE ENTRANCES OF CITIES (CASE STUDY OF ARDABIL ENTRANCE FROM HEYRAN ROAD)

Nowadays, urban growth has been so fast that proper urban standards are largely neglected. These standards include particular attention to the quality of the city's entrances and exits. The entrance spaces of cities connect their exterior domain to the artificial atmosphere inside them. Uncontrolled urban development and inattention to appropriate spatial development patterns have led to weaknesses in the form and content of urban spaces. The present study is an attempt to investigate and analyze the entrance of Ardabil from the Heyran road. This is a descriptive analytical study, and the data have been collected through library studies, Internet searches, and field observations. In this article, beyond analyzing the above-mentioned entrance, some suggestions are presented to eliminate its disadvantages. The results indicate that urbanism principles are not met at this entrance and that it has problems in terms of receptivity.

INTRODUCTION

Perspectives have an objective-subjective nature. Human perception of perspectives depends on experiences, mental models, culture and history, the physical features of the environment, and time. The urban landscape is an urban perspective that can be seen when approaching the city. What makes people perceive and understand the entrance to a city is a series of visual signs which, despite their physical and material existence, carry connotations (8). Urban entrance landscapes are among the most effective and most attractive urban areas, leaving lasting memories in the mind of the viewer (6). The entrances of a city should be representative of the city, a showcase of its identity and underlying values. Unfortunately, urbanism criteria and standards are not met at the entrances of cities, and the only factor that lets the driver and passenger know they are approaching a city is the sign announcing the beginning of the urban area. Ardabil is one of the cities where design criteria are not met in its entrance design, and this has caused some disorder at the entrance of the city.

OBJECTIVES

This study seeks to achieve the following objectives: 1. Investigation and analysis of the city's entrances and identification of their advantages and disadvantages, in order to improve the visual quality of the main entrance of the city of Ardabil. 2. Provision of suggestions and solutions for redesigning the city's entrances.

CITY

According to the classic definition, a city is a relatively large, dense, and permanent settlement composed of people who are socially heterogeneous (9). According to the broadest definition, a city is a place of permanent human accommodation and activity (3). Lynch defines the city as a house whose residents constitute its most important body. Presence in this house depends on the existence of an active relationship between people and their surroundings, that is, on the meaningfulness and effectiveness of what is spread before them, rather than on the adornment and capacity of the ambient environment. This meaningfulness can be found in every place and is most likely to be found in cities (7). However, different views of the city have produced separate definitions:
1. Sociological definition of the city: from the sociological perspective, Louis Wirth's famous definition is provided here: "for sociological purposes, a city may be defined as a relatively large, dense, and permanent settlement of socially heterogeneous individuals" (4).

2. Definition of the city from the economic point of view: from the economic perspective, the city can be defined as a place of activity for social groups who mostly hold non-agricultural jobs; a variety of livelihoods and service occupations, along with manufacturing and commerce, are among the key economic features of urbanization (5).

3. Definition of the city from the capitalist perspective: from this perspective, the city is an arena where various forces fight one another in order to maintain their dominance and influence over the life of the city. These forces belong to particular structures of power, such as churches, governments, and multinational corporations, and the city is a reflection of the balance, whether relative or absolute, among these powers (1).

CITY ENTRANCES

In the past, the first image of a city that occurred in the minds of travelers was the image of its entrance. After passing through desert or mountains for a long time, reaching the farms and gardens around the city began to give a sense of approaching an inhabited complex. After passing these farms and gardens, travelers reached the gate, which looked like a door in the walls of the city and was the entrance to the city. Sometimes this gate merely defined a particular domain, without any wall. Today, however, we normally encounter two scenes at city entrances: (A) a spontaneous scene which is illegally created by the residents of the area and does not follow any pre-determined pattern, but has turned into a pattern due to its high frequency; (B) the scene of entrances which have been established on the basis of an unknown pattern through the interference of city managers (2).

Today, the entrances of cities receive less attention, which may be due to fast vehicles that keep passengers and newcomers from noticing them. City entrances can nevertheless be reinforced through compliance with design principles, so that good images may form in the minds of people entering the city. City entrances, just like any other entrance, should have the necessary receptivity, and one should feel welcome on arrival without any uncomfortable feeling. Receptivity is therefore the first thing a city is expected to offer (2).

RECEPTIVITY IN THE ENTRANCES OF CITIES

On arrival in any city, the first expectation that occurs in one's mind is receptivity; in other words, one expects to face a space that welcomes one on arrival. But entering a city is surely different from entering other domains. Grace or attraction is the first thing a person expects from a city entrance. The presence of natural elements turns this domain into a pleasant one and paves the way for the transition from natural space to an artificial one. That is why the roadsides should be lined with vegetation, so that drivers may reduce their speed and the view of the entrance road may improve as drivers approach the city.
METHODS AND MATERIALS

INTRODUCTION OF THE STUDY'S SCOPE

Ardabil province, located in the northwest of the Iranian plateau with an area of 25,951.5 square kilometers, makes up a share of the country's total area. The province is located in the part of the triangular Iranian plateau lying in the eastern section of the Azerbaijan Plateau; about two-thirds of the province is mountainous, with large height differences, and the rest consists of flat areas. Sabalan Mountain, with an altitude of about 4811 meters, is the highest point of the province. The city has several entrances in different directions. One of these is the entrance from the Heyran mountain road, which is the subject of the present study. The maps presented below show the full territory of the study (Figure 1 and Figure 2). After surveying the territory and conducting field observations, the advantages and disadvantages of this entrance in terms of receptivity were evaluated.

CONCLUSIONS

According to previous studies, the spaces of city entrances, as the joints linking the inner and outer city spaces, follow some common patterns of spatial communication in establishing this link. In order to have an attractive and effective city entrance, urbanism principles must be followed in the design of this urban space. As mentioned before, receptivity is one of the qualities that any entrance should have. Atmosphere development, vegetation, the use of natural elements, fulfilment of passengers' needs, and so on are the issues taken into consideration in discussions of receptivity at city entrances. In the investigations and field observations of the entrance to the city of Ardabil (the entrance from the Heyran mountain road), some important points were identified, including disorderliness in spatial organization, traffic at the entrance, poor quality of the bordering areas, and so on. Another noteworthy point in the analysis of this entrance is the absence of any clear plan to address and eliminate its problems.

Suggestions

Based on the investigation of the advantages and disadvantages of this city entrance from the Heyran mountain road, the following suggestions are proposed for its redesign: 1. Increasing the width of the roadsides and the entrance. 2. Development of separate high-speed and low-speed lanes. 3. Elimination of inappropriate controls from the sidelines of the entrance.

Figure 1. The territory designated as the space required for the design of the city entrance from Astara.

Figure 2. The provided territory with full details.
2018-12-17T22:08:54.610Z
2016-04-10T00:00:00.000
{ "year": 2016, "sha1": "f9794c0cc97734a27426d3688da0c32198c4edfb", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.7456/1060ase/027", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "f9794c0cc97734a27426d3688da0c32198c4edfb", "s2fieldsofstudy": [ "Geography" ], "extfieldsofstudy": [ "Geography" ] }
208234022
pes2o/s2orc
v3-fos-license
Deficiency of Urokinase Plasminogen Activator May Impair β Cell Regeneration and Insulin Secretion in Type 2 Diabetes Mellitus

Background: The relationship between urokinase-type plasminogen activator (uPA) and the development of type 2 diabetes mellitus (T2DM) was investigated in this study using mouse and cell models, as well as patients with T2DM. Methods: In the mouse models, wild-type and uPA knockout (uPA-/-) BALB/c mice were used for induction of T2DM. In the cell models, insulin secretion rate and β cell proliferation were assessed in normal and high glucose after treatment with uPA siRNA, uPA, or anti-uPA antibody. In our clinical study, patients with T2DM received an oral glucose-tolerance test, and the relationship between uPA and insulin secretion was assessed. Results: Insulin particles and insulin secretion were mildly restored one month after induction in wild-type mice, but not in uPA-/- mice. In the cell models, insulin secretion rate and cell proliferation declined in high glucose after uPA silencing by siRNA or blocking by anti-uPA antibody. After treatment with uPA, β cell proliferation increased in normal glucose. In the clinical study, patients with T2DM and higher uPA levels had a better ability to secrete insulin than those with lower uPA levels. Conclusion: uPA may play a substantial role in insulin secretion, β cell regeneration, and the progressive development of T2DM. Supplementation of uPA might be a novel approach for the prevention and treatment of T2DM in the future.

Introduction

In recent decades, cases of type 2 diabetes mellitus (T2DM) have been rapidly increasing worldwide. While the causes of T2DM include insulin resistance and impaired insulin secretion, clinical evidence has shown progressive impairment of insulin secretion over time in patients with T2DM [1][2][3]. The cause of β cell mass decline in T2DM is an increasing rate of apoptosis rather than decreased neogenesis [4]. However, the factors associated with apoptosis and neogenesis in β cells are still unknown. Preventing the development of T2DM and β cell dysfunction is an important issue in controlling T2DM.

Urokinase-type plasminogen activator (uPA) is well known as a serine protease which activates plasminogen, converting it to plasmin in the fibrinolytic processing of thrombosis and the extracellular matrix. Patients with T2DM are prone to the formation of intravascular thrombosis, and an imbalance between coagulation and fibrinolysis is implicated in the pathogenesis [5]. In addition to triggering the fibrinolytic cascade, uPA is a pleiotropic functional protein linked to the innate immune response, regulating immune cell migration and recruitment [6,7]. uPA independently enhances the release of some inflammatory cytokines and activates matrix metalloproteinases [8]. Moreover, uPA also plays an important role in tissue remodeling and atherosclerosis [9,10]. Clinically, intravenous uPA is commonly used for thrombolytic therapy of peripheral artery occlusion in patients with T2DM. High glucose reduces uPA activity and extracellular matrix degradation in mesangial cells [11]. In addition, the expression of uPA declines during the development of adipose tissue in mice fed a high-fat diet, whose blood glucose rises in a time-dependent manner [12]. Furthermore, local application of exogenous uPA accelerates wound healing in the diabetic mouse model [13], and islet surface modification with uPA, instead of heparin, improves β cell survival in islet transplantation [14].
However, beyond these weak associations, the role of uPA in T2DM is unknown. In this study, we investigated the causal relationship between uPA and T2DM using wild-type and uPA knockout (uPA-/-) mice, indexing the rate of diabetes induction, insulin resistance, and insulin secretion. We also explored β cell proliferation and insulin secretion after silencing uPA expression or treating with uPA, respectively, in in vitro cell models. In the clinical cases, we analyzed the relationship between uPA levels and insulin secretion in patients with T2DM. The aim of this study was to explore the role of uPA in T2DM.

uPA-/- Mice Were Prone to Hyperglycemia and Development of T2DM

Twenty-two wild-type and 12 uPA-/- BALB/c mice received induction of T2DM. While only half of the wild-type mice developed T2DM within one week (success rate: 50%), all the uPA-/- mice developed T2DM (success rate: 100%). The uPA-/- mice developed significantly higher hyperglycemia after induction, and the difference in levels persisted for one month (Figure 1B). The uPA-/- mice non-significantly lost a little weight during the development of hyperglycemia, which may have been due to hyperglycemia-related polyuria and mild dehydration (Figure 1C). These results suggest that uPA-/- mice are prone to T2DM. Interestingly, immunohistochemical (IHC) staining of uPA on the islets of wild-type mice showed that uPA expression on the islet declined remarkably on D3 (DX means X days after successful development of T2DM) after induction (Figure 1D). The data indicate that hyperglycemia was associated with suppression of uPA expression on the islet.

Figure 1. Changes in uPA, blood glucose, body weight, uPA expression on islet, insulin, glucagon, insulin resistance (HOMA-IR), and insulin secretion (HOMA-β) before and after development of type 2 diabetes mellitus (T2DM) in both wild-type and uPA-/- mice (n ≥ 4). (A) The uPA levels in wild-type (solid line) mice were significantly higher than those in uPA-/- (dashed line) BALB/c mice. ** p < 0.01. (B) On D0, blood sugar levels in uPA-/- and wild-type mice were similar. After induction, blood glucose in uPA-/- BALB/c mice was significantly higher than in wild-type mice. *** p < 0.001. (C) Body weight in wild-type and uPA-/- BALB/c mice showed no significant difference. (D) The expression of uPA on islets before and after induction of T2DM was compared by immunohistochemical staining. In wild-type mice, uPA expression obviously decreased after induction of T2DM. Low expression of uPA persisted both before and after induction in uPA-/- mice. (E) The insulin levels in wild-type mice were significantly higher than in uPA-/- mice. With the high-fat diet, insulin levels increased progressively in both wild-type and uPA-/- mice. (F) The glucagon levels declined gradually, along with the elevation of insulin levels, in wild-type and uPA-/- mice. (G) The HOMA-IR in wild-type mice was significantly higher than in uPA-/- mice. With a high-fat diet, the HOMA-IR increased significantly and progressively in both wild-type and uPA-/- mice. (H) The HOMA-β in uPA-/- mice was lower than in wild-type mice. After induction, the HOMA-β significantly decreased in both wild-type and uPA-/- mice. ** p < 0.01 (Kruskal-Wallis H test).

Figure 1E-H shows the changes in insulin, glucagon, insulin resistance, and insulin secretion during the course of T2DM development. It was predictable that insulin resistance in the mice increased with the feeding of a high-fat diet.
The insulin levels and Homeostasis model assessment-insulin resistance (HOMA-IR) increased progressively in the wild-type mice, whereas glucagon levels, the reciprocal hormone of insulin, declined gradually with the elevation of insulin. While Homeostasis model assessment-β (HOMA-β) was significantly lower on D3 in wild-type mice, it increased mildly on D30, although the difference did not reach significance. We hypothesized that insulin secretion in wild-type mice might partly revive after streptozotocin (STZ) destroyed the β cells. On the other hand, the increments of insulin levels and HOMA-IR in uPA-/- mice were lower than in wild-type mice, and the glucagon levels in uPA-/- mice were higher than in wild-type mice. It was noteworthy, however, that the same induction protocol caused the HOMA-β of uPA-/- mice to decline abruptly on D3 and remain low until D30, where the D30 value was even lower than that of D3. The data suggest that the β cells of uPA-/- mice failed to regenerate after being destroyed by STZ.

In the pathologic morphology (Figure 2A), the IHC stain of insulin in the wild-type mice showed that insulin particles increased obviously on D30. A similar finding was not noted in uPA-/- mice, indicating less β cell mass as compared with the wild-type mice. On quantitative analysis (Figure 2B), the intensity scores of insulin particles in the islets of wild-type mice were significantly higher than those in uPA-/- mice on D0 and D30. The proliferating cell nuclear antigen (PCNA) of islets in wild-type mice on D30 increased mildly, although it did not reach statistical significance. This demonstrates that the regeneration of β cells after injury is poor in uPA-/- mice, and it accords with the lower levels of insulin secretion in uPA-/- mice on D30 (Figure 1H).

After Treatment with uPA Plasmid, uPA-/- Mice Were Less Likely to Develop T2DM

After the injection of uPA plasmids in uPA-/- mice, the uPA level was significantly higher than in those injected with control plasmid, and it remained steady until the end of the study (Figure 3A). The most interesting observation was that not one mouse developed T2DM after STZ/nicotinamide (NTM) induction (Figure 3C). Conversely, all uPA-/- mice injected with control plasmids developed T2DM. Accordingly, the expression of insulin and PCNA on islets in plasmid-treated uPA-/- mice increased as in the wild-type mice (Figure 2A). Based on these findings, we presume that the ability to regenerate β cells in uPA-/- mice recovered after supplementation with uPA plasmid.

The uPA Secretion Rate Decelerated and uPA Expression in β Cells Declined in High-Glucose Conditions

The uPA secretion rate and intracellular uPA expression of β cells after treatment with normal or high glucose are shown in Figure 4A. Compared with normal glucose, the uPA secretion rate of β cells, adjusted for cellular proliferation (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) assay), significantly decelerated in high glucose at both 24 and 48 hours (h). In addition, intracellular uPA expression in β cells significantly decreased in high glucose at 48 h. These findings accorded with the results of the in vivo mouse model (Figure 1D).

Figure 2. (A) The H&E stain of islets showed no pathologic structural change in either wild-type or uPA-/- mice. The expression of glucagon on islets increased after induction in both wild-type and uPA-/- mice.
Insulin particles of islets 30 days after induction (D30) obviously increased in wild-type mice as compared with uPA-/- mice. Meanwhile, PCNA expression of islets increased on D30 in wild-type mice. After treatment with uPA plasmids in uPA-/- mice, the glucagon, insulin, and PCNA expressions are similar to those on D30 in wild-type mice. (B) The quantitative intensity scores of the expression of glucagon, insulin, and PCNA on islets on the induction day (D0) and on D3 and D30 after induction were assessed and scored by pathologists. The intensity scores of glucagon in uPA-/- mice were significantly higher than those in wild-type mice on D3. The intensity scores of insulin in wild-type mice were significantly higher than those in uPA-/- mice on D3 and D30. The intensity score of PCNA in wild-type mice on D30 was higher, but did not reach statistical significance. ** p < 0.01 (Kruskal-Wallis H test).

Insulin Secretion Rate Decelerated and Replication of β Cells Declined in High Glucose after Silencing uPA

The role of uPA in insulin secretion was examined in a uPA-silenced β cell model (Figure 4B). In normal glucose conditions, the rate of insulin secretion in mouse β cells peaks in the first hour and then falls gradually, with no significant difference among the normal, scramble siRNA (S-siRNA), and uPA siRNA groups. In high-glucose conditions, the insulin secretion rate rises rapidly in the first hour and then progressively accelerates with time in the normal and S-siRNA groups. In the uPA siRNA group, however, the rate of insulin secretion in the first hour was lower than in the normal and control groups, and it did not accelerate, remaining significantly lower than in the normal and S-siRNA groups. The results indicate that a deficiency of uPA in β cells impaired insulin secretion in high-glucose conditions, suggesting that uPA may contribute to regulating the insulin secretion of β cells after high-glucose stimulation.

To assess the role of uPA in β cell regeneration, β cell proliferation in the normal, S-siRNA, and uPA siRNA groups was compared by MTT assay (Figure 4C). β cell proliferation was significantly inhibited after uPA silencing by siRNA, while proliferation was inhibited only mildly in the S-siRNA group. After eliminating the chemical effect caused by the transfection reagent, our results clearly indicated that a deficiency of uPA could lead to impairment of β cell regeneration.

Treatment with uPA Enhanced β Cell Proliferation in Normal Glucose, but the Insulin Secretion Rate Decelerated in High Glucose after Treatment with Anti-uPA Antibody

The insulin secretion rate and cell proliferation of β cells after treatment with different concentrations of uPA and anti-uPA antibody are shown in Figure 5. The insulin secretion rate of β cells significantly decelerated after treatment with uPA antibody in high glucose at 2 h, and this persisted to 4 h. In normal glucose, the insulin secretion rate of β cells treated with anti-uPA antibody declined in the second hour but was restored thereafter. In addition, the insulin secretion rate at 4 h significantly increased after treatment with uPA and co-treatment with uPA and uPA antibody. This finding was similar to the results for β cells with silenced uPA. However, β cells treated with high levels of uPA did not show an accelerated insulin secretion rate in either normal or high glucose. In addition, cell proliferation of β cells significantly increased after treatment with uPA in normal glucose.
In addition, the effect of treatment was dose-dependent. In high glucose, β cells treated with uPA did not significantly increase proliferation, whereas β cells treated with anti-uPA antibody showed inhibited proliferation.

Better Insulin Secretion Capability after Oral Glucose Challenge in Patients with T2DM and Higher uPA Levels

There were 112 subjects with T2DM enrolled in our study; they were divided into four quartile groups based on their plasma uPA levels, and their general characteristics by uPA quartile are shown in Table 1. The uPA levels of the different groups differed significantly. While the HgbA1c of groups uPA4 and uPA2 was significantly lower than that of uPA1 and uPA3, the fasting plasma glucose of group uPA4 was significantly lower than that of uPA3, and the LDL of group uPA4 was significantly higher than that of uPA1. C-peptide, adiponectin, and free fatty acid showed no significant differences between the uPA quartile groups, although C-peptide in the uPA4 group was non-significantly higher than in the other three groups. The correlation between uPA and the insulin-to-glucose area under the curve (AUC) ratio of all subjects is shown in Figure 6A, which demonstrates a significant linear relationship (r = 0.308). After adjusting for age, sex, and BMI, uPA was also independently and significantly associated with the insulin-to-glucose AUC ratio (β = 0.289, 95% Confidence Interval: 0.128-0.604, p = 0.003). This result indicates a weak relationship between uPA levels and insulin secretion ability in patients with T2DM. After dividing the subjects into quartile groups according to their uPA levels, subjects with the highest level of uPA were found to have statistically significantly better insulin secretion ability than the three lower uPA groups (Figure 6B). Based on the clinical data, we again ascertained that the uPA level is associated with insulin secretion in T2DM, especially for men with a higher uPA level.

Figure 5. Insulin secretion rate and β cell proliferation in normal glucose or high glucose after treatment with uPA or anti-uPA antibody. (A) Insulin secretion rate of β cells treated with control (black circle), uPA (open circle), anti-uPA antibody (inverted black triangle), or uPA + anti-uPA antibody (open triangle). In normal glucose, the insulin secretion rate of β cells treated with uPA antibody significantly decreased at 2 h; however, the insulin secretion rate at 4 h significantly increased after treatment with uPA and co-treatment with uPA and uPA antibody. The insulin secretion rate of β cells significantly decelerated in high glucose after treatment with uPA antibody. * p < 0.05, ** p < 0.01 as compared with the control group (Kruskal-Wallis H test). (B) Proliferation of β cells treated with uPA, anti-uPA antibody, or uPA + anti-uPA antibody for 24 or 48 h. β cell proliferation increased after treatment with uPA in normal glucose. In high glucose, β cell proliferation decreased after treatment with anti-uPA antibody. * p < 0.05 (Kruskal-Wallis H test).

Discussion

According to the existing literature and our data, BALB/c mice are refractory to induction of T2DM using a high-fat diet and STZ/NTM. In our study, the uPA-/- BALB/c mice were sensitive to induction of T2DM. However, after treatment with intramuscular uPA plasmids, uPA-/- BALB/c mice were again resistant to induction of T2DM.
In the IHC stains of insulin and PCNA, the islets of uPA-/- BALB/c mice showed poor regeneration after induction of T2DM. In our in vitro study, we found a deceleration of the insulin secretion rate and a decline in replication of the β cell line in high-glucose conditions after silencing uPA by siRNA. After treatment with uPA, β cell proliferation increased in normal glucose. After blocking circulating uPA with anti-uPA antibody, the insulin secretion rate decelerated and β cell proliferation declined in high glucose. At the same time, hyperglycemia also suppressed uPA expression in β cells, both in BALB/c mice and in the in vitro cell model. Clinical subjects with higher levels of uPA had a better ability to secrete insulin. Our results indicate that uPA is an important factor in insulin secretion and β cell regeneration, and that deficiency of uPA may contribute to the development of T2DM.

As compared with C57BL/6 mice, Hayashi K et al. found that BALB/c mice needed a larger dose of STZ for induction of diabetes [15]. Similarly, BALB/c mice had a lower fasting blood glucose level and a smaller increment of blood glucose after being fed a high-fat diet [16]. According to these results, BALB/c mice with increased insulin resistance after a high-fat diet have a better compensatory ability to secrete more insulin to maintain glycemic homeostasis. These findings could explain the lower induction rate in our BALB/c mice. It is most interesting that the induction rate of T2DM in BALB/c mice rose abruptly after their uPA gene was knocked out. This suggests that uPA is an important factor in the development of T2DM, which motivated us to undertake this study of the underlying causes.

The results of our in vivo study showed that uPA-/- mice had lower HOMA-β and HOMA-IR, which oppose each other in the development of hyperglycemia, than did the wild-type mice, both before and after induction of T2DM. However, the uPA-/- mice had higher hyperglycemia after induction. It is reasonable to suggest that a deficiency of uPA has a more prominent effect on insulin secretion than on insulin resistance, resulting in more severe hyperglycemia in uPA-/- mice. The impact of uPA deficiency on insulin secretion was also demonstrated in our in vitro cell model: the rate of insulin secretion in the uPA-silenced β cell line decelerated in high-glucose conditions, which was consistent with the findings of our in vivo study. A possible explanation is that insulin particles increased in islets while HOMA-β did not increase on D30 in wild-type mice because HOMA-β was not evaluated after a glucose challenge. Similar results were seen in the cellular model: we found that the insulin secretion rate in normal glucose decreased at 2 h in the uPA antibody group and then mildly increased at 4 h, whereas in high glucose the insulin secretion rate significantly decreased at both 2 and 4 h. In the human investigation, uPA was positively related to the ability for insulin secretion during the oral glucose tolerance test (OGTT). These results may verify our presumption that uPA contributes to insulin secretion in high glucose. However, the exact mechanism by which uPA affects insulin secretion in high glucose is still unclear. Christow et al. reported that, in human promyelocytic cells, uPA stimulates intracellular calcium release by activating inositol 1,4,5-trisphosphate, a crucial signal in the insulin secretion of β cells after glucose stimulation [17]. However, no study has shown a similar mechanism in β cells.
An excess of β cell apoptosis over regeneration may be the rationale for impaired insulin secretion in the development of T2DM. Our in vivo and in vitro studies indicated that deficiency of uPA is associated with decreased regeneration of β cells. The most potent evidence is that insulin particles in the islets of wild-type mice increased one month after the insulin-producing β cells were injured by injection of STZ. Similarly, Bonner-Weir et al. found that euglycemia developed, and observed partial β cell regeneration, 10 days after STZ induction of diabetes in mice [18]. Thereafter, several studies explored the mechanisms of β cell regeneration for potential therapy in T2DM [19,20]. However, a similar result was not found in uPA-/- mice. The proliferation biomarker PCNA in the islets of wild-type mice on D30 showed a non-significant increase, which may be due to the wide variation introduced by individual subjective judgement. However, we found that cellular proliferation in the β cell line decreased after silencing uPA by siRNA and after treatment with anti-uPA antibody in high glucose, which supports our findings in the mouse model. In addition, treatment with uPA in the in vitro study increased β cell proliferation in normal glucose, but not in high glucose. Generally, persistent hyperglycemia induces glucotoxicity, which inhibits β cell replication; in our in vitro study, glucotoxicity appeared to neutralize the proliferative effect of uPA on β cells. Meanwhile, the inhibitory effect of glucotoxicity on β cell proliferation was amplified by treatment with anti-uPA antibody in high glucose. These concordant results of the in vivo and in vitro studies emphasize the important role of uPA in β cell proliferation and the close relationship between lack of uPA and development of T2DM.

The pathophysiology of uPA in β cell regeneration is still unclear. Some studies have shown that uPA contributes to regeneration in certain kinds of cells. Shimizu et al. found that hepatocyte regeneration was impaired in plasminogen activator-deficient mice, which is associated with activation of hepatocyte growth factor [21]. Moreover, uPA also contributes to the regeneration of skeletal muscle cells by increasing hepatocyte growth factor [22,23]. However, our study is the first paper to identify the role of uPA in β cell regeneration. Teramura et al. found that uPA modification of the surface of transplanted islets increased β cell survival, and suggested that the fibrinolytic effect of uPA improved blood clot-related inflammation [14]. It is possible that a uPA-modified surface on β cells may contribute to cellular regeneration. Also noteworthy is that when uPA-/- mice were treated with uPA plasmid, their uPA levels subsequently became elevated, and none of the uPA-/- mice receiving uPA plasmids developed hyperglycemia, whereas all uPA-/- mice receiving control plasmid developed hyperglycemia after induction. The β cell regeneration of the islets in treated mice resembled that of wild-type mice. In addition, IHC stains of insulin and glucagon on islets in uPA-/- mice treated with uPA plasmid showed a simultaneous increase, which may imply that simultaneously increased apoptosis (an effect of the uPA-/- background) and regeneration (an effect of uPA plasmid supplementation) of β cells led to euglycemic homeostasis in these mice. This interesting finding suggests that failure of β cell regeneration might be reversible after supplementation of uPA.
Although treatment with uPA has not yet been applied in clinical studies, our findings may provide a new direction for the use of uPA in the prevention of T2DM in the future. We also found that uPA expression on the islet was reduced in wild-type mice after the development of T2DM. However, there has been no study exploring uPA expression in β cells in T2DM. Fisher EJ et al. found that uPA levels and activity declined in mesangial cells in high-glucose conditions [11]. Based on our results, we presume that hyperglycemia after induction may suppress uPA expression on the islet.

Using the above findings, we delineated a possible model of β cell failure and the development of T2DM (Figure 7). Uncertain genetic or environmental factors may contribute to uPA deficiency. The deficiency of uPA impairs insulin secretion and β cell regeneration, and it promotes hyperglycemia. With the superimposition of insulin resistance, hyperglycemia becomes more severe. Hyperglycemia then suppresses uPA expression in β cells. Consequently, a vicious cycle of T2DM results from the initial uPA deficiency. Furthermore, the vicious cycle leads to progressive loss of β cell mass, so that patients with T2DM eventually need insulin supplementation for glycemic control.

In our study, the relationship between uPA and insulin secretion identified in the mouse and cell models was further examined by clinical exploration. In the clinical investigation of patients with T2DM, we found that uPA levels were mildly positively related to insulin secretion after the oral glucose challenge. On further analysis, the patients in the highest quartile of uPA had significantly better insulin secretion than did the three lower quartile groups. This relationship suggests a threshold uPA level for better insulin secretion: if the uPA level is above the threshold, the patient's insulin secretion would be optimal. Nevertheless, optimal insulin secretion still needs to be defined. Our clinical results suggest a novel direction for the administration of uPA in the treatment of patients with T2DM or the prevention of β cell failure. However, several studies have also shown that uPA is linked to cancer invasion and metastasis [24,25]. Supplementation with an overdose of uPA may increase the risk of potential cancer spread; thus, the adequate balance of uPA supplementation needs further study.

The receptor of uPA, the urokinase plasminogen activator receptor (uPAR), is composed of three extracellular domains (D1, D2, and D3) and may be shed from the cell surface by several proteases as a soluble bioactive peptide in blood and body fluid (soluble uPAR) that has multiple biological properties. We analyzed soluble uPAR in healthy subjects and patients with T2DM in a previous study [26]; the levels of soluble uPAR in healthy subjects and T2DM patients without nephropathy showed no significant difference. Nonetheless, a large body of studies has shown that soluble uPAR is associated with various complications of T2DM, such as nephropathy, retinopathy, peripheral artery disease, and cardiovascular events [26][27][28][29]. Whether deficiency of uPA and high levels of uPAR synergistically contribute to insulin secretion impairment and complications of T2DM needs further exploration in the future.

There were some limitations in our study. First, we did not have human islet pathology with which to test our theory in clinical subjects. The relationship between uPA and β cell regeneration in human subjects is still unknown.
However, the oral glucose test for evaluation of insulin secretion is commonly used in clinical research worldwide. Second, the mechanisms of uPA in insulin secretion and β cell regeneration are still unknown. One possible explanation may be associated with the activation of hepatocyte growth factor by uPA [21][22][23], but this is still not a satisfactory explanation of the real mechanism of β cell regeneration. In this study, we explored the influence of uPA on the development of T2DM; the detailed cellular pathophysiology of uPA in β cells requires further study. Moreover, we used the MTT assay, which evaluates mitochondrial dehydrogenase activity, to assess cellular proliferation and apoptosis. However, the result was in line with our in vivo study, and some other studies have also used the MTT assay for assessing cellular proliferation or apoptosis [30,31]. Thus, this does not diminish our study's contribution. Finally, we enrolled subjects with T2DM for the clinical investigation. Many factors affect glycemic and lipid control, such as diet control, exercise, and drug compliance; thus, there was some variation between the uPA quartile groups. The relationship between uPA and insulin secretion in healthy subjects needs to be analyzed in the future.

Materials and Methods

Backcross Breeding

In this study, the B6.129S2-Plautm1Mlg/J mouse strain (Jackson Laboratory; GA, USA) was backcrossed onto a pure BALB/cByJNarl background (the National Laboratory Animal Breeding and Research Center; Taipei, Taiwan). The first cross was between a female BALB/cByJNarl (recipient inbred partner) and a male B6.129S2-Plautm1Mlg/J carrying the donor allele. The procedure was then repeated for five to ten backcross cycles before the experiment, since a strain is considered an incipient congenic after five to nine backcross cycles (N5-N9). The mice used in the present study were at least generation N5. The genotype was confirmed by polymerase chain reaction.

DNA from Tail Biopsy and Genotyping Identification

A piece of murine tail (~0.5 mm) was cut and placed in a polypropylene microfuge tube when the mouse was 3-4 weeks old. After incubation with DNA digestion buffer and proteinase K at 55 °C overnight, the supernatant was collected by centrifugation at 13,500 rpm for 15 min, followed by precipitation and washing of the murine DNA with absolute and 75% ethanol at 4 °C, respectively. The obtained DNA was dried at room temperature and then dissolved in sterilized water prior to analysis. The uPA-/- genotype was identified by polymerase chain reaction and genetic sequence identification, respectively. The primer sequences (5′ to 3′) were CCGGTTCTTTTTGTCAAGACCG, CGGCAGGAGCAAGGTGAGAT, TCTGGAGGACCGCTTATCTG, and CTCTTCTCCAATGTGGGATTG.

Induction of the T2DM Mouse Model

The male wild-type and uPA-/- BALB/c mice were fed a high-fat diet (40% fat, Research Diets Inc., NJ, USA) from the age of 5 weeks. NTM (200 mg/kg) was injected intraperitoneally 15 min before the first dose (75 mg/kg) of intravenous STZ. Five days later, the mice received a second dose of NTM and STZ [32,33]. Murine blood sugar and glucosuria were measured daily. Successful induction of T2DM was defined as blood sugar above 11.1 mmol/L (200 mg/dL) together with a urine strip presenting four positives simultaneously. Groups of wild-type and uPA-/- BALB/c mice were euthanized before induction (D0) and at D3 and D30 after the occurrence of T2DM.
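As a concrete illustration of the dosing and diagnostic arithmetic in this protocol, here is a minimal Python sketch. The doses and the 11.1 mmol/L (200 mg/dL) threshold come from the text; the body weight used and the function names are hypothetical, and the conversion factor of roughly 18 mg/dL per mmol/L of glucose is the standard one implied by the threshold pairing above.

```python
# Illustrative dosing and diagnosis arithmetic for the STZ/NTM induction protocol.
NTM_DOSE_MG_PER_KG = 200     # nicotinamide, intraperitoneal, 15 min before STZ
STZ_DOSE_MG_PER_KG = 75      # streptozotocin, intravenous
T2DM_THRESHOLD_MMOL_L = 11.1
GLUCOSE_MG_PER_MMOL = 18.0   # approximate conversion factor for glucose

def doses_for(weight_kg: float) -> tuple[float, float]:
    """Return (NTM mg, STZ mg) for one injection at a given body weight."""
    return NTM_DOSE_MG_PER_KG * weight_kg, STZ_DOSE_MG_PER_KG * weight_kg

def is_diabetic(glucose_mmol_l: float) -> bool:
    """Apply the blood-sugar half of the study's T2DM definition."""
    return glucose_mmol_l > T2DM_THRESHOLD_MMOL_L

ntm_mg, stz_mg = doses_for(0.025)  # a hypothetical 25 g mouse
print(f"NTM {ntm_mg:.1f} mg, STZ {stz_mg:.2f} mg per injection")
print(f"Threshold: {T2DM_THRESHOLD_MMOL_L * GLUCOSE_MG_PER_MMOL:.0f} mg/dL")  # ~200
print(is_diabetic(12.4))  # True
```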
Blood samples were collected before euthanasia, and the pancreatic tissues were harvested after euthanasia. The experimental animal protocol was approved by the Institutional Animal Care and Use Committee at the National Defense Medical Center, Taipei, Taiwan (IACUC12-058), and performed in accordance with the relevant guidelines and regulations.

uPA-/- Mice Treated with uPA Plasmid

The uPA-/- BALB/c mice received weekly intramuscular injections of purified uPA plasmids (100 mg) or control plasmids from three weeks before induction of T2DM. The levels of uPA were measured weekly. Mice having a successful uPA boost and T2DM induction were subjected to all the foregoing tests, including the induction rate.

Measurement of Murine uPA, Insulin, Glucagon, Insulin Resistance, and Secretion

The plasma and serum were separated from blood by centrifugation within one hour of being drawn and were stored at −80 °C prior to analysis. uPA was measured using the Mouse uPA Total Antigen Assay enzyme-linked immunosorbent assay (ELISA) kit (Molecular Innovations, Novi, MI, USA). Serum insulin was measured using the Mouse Insulin ELISA Kit (Mercodia AB, Uppsala, Sweden), with intra- and inter-assay coefficients of variation of 3.4% and 3.6%, respectively. Serum glucagon was measured using the Human/Mouse/Rat Glucagon ELISA Kit (RayBiotech, Norcross, GA, USA), which has intra- and inter-assay coefficients of variation of <10% and <15%, respectively. HOMA-IR and HOMA-β were calculated to assess insulin resistance and insulin secretion ability, respectively [34].

Hematoxylin and Eosin (H&E) and IHC Stains of Islets for Insulin, Glucagon, uPA, and PCNA

The pancreatic tissue was fixed in 10% formaldehyde fixative solution and embedded in paraffin. Sections of formalin-fixed pancreatic tissue were immersed in xylene for 5 min, three times, to remove paraffin, followed by rehydration in graded ethanol and staining with H&E. For IHC staining of insulin in islets, the slices were incubated with a 1:500 dilution of the primary antibody, rabbit anti-insulin antibody (Cell Signaling Technology, Danvers, MA, USA), at room temperature for 1 h, after removing paraffin and rehydrating. The slices were then incubated with a 1:200 dilution of the secondary antibody, goat anti-rabbit IgG-HRP (Invitrogen, Life Technologies, Grand Island, NJ, USA), for 1 h before washing. The stain was visualized using DAB chromogen. For IHC staining of glucagon in islets, the procedure was the same as above, except that anti-mouse glucagon antibody (1:5000; Abcam, Cambridge, MA, USA) was used as the primary antibody and goat anti-mouse IgG-HRP (1:200; Santa Cruz Biotechnology, Santa Cruz, CA, USA) as the secondary antibody. For IHC staining of PCNA in islets, rabbit anti-PCNA (1:200; Santa Cruz Biotechnology) was the primary antibody and goat anti-rabbit IgG-HRP (1:200; Invitrogen, Life Technologies) the secondary antibody. For IHC staining of uPA in islets, anti-mouse uPA antibody (1:200; Abcam) was the primary antibody and goat anti-mouse IgG-HRP (1:200; Santa Cruz Biotechnology) the secondary antibody. Slides were examined using an optical photomicroscope (Olympus Corporation, Tokyo, Japan) and used for quantitative analysis. At least 5 islets per slide were examined and scored for positively stained areas by at least two well-trained pathologists. For every islet examined, a score of 0 to 4 was given (0, no positive staining; 1, minor staining; 2, moderate staining; 3, prominent staining with expanded positively stained areas; 4, prominent staining of large areas).
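As an illustration of how such semi-quantitative scores might be aggregated and compared across groups, here is a minimal Python sketch. The paper does not state the exact aggregation rule, so averaging over islets and raters, and the example scores themselves, are assumptions made only for illustration.

```python
# A minimal sketch of aggregating semi-quantitative IHC intensity scores (0-4)
# per mouse and comparing groups nonparametrically, as in the paper's figures.
from statistics import mean
from scipy.stats import kruskal

def mouse_score(islet_scores_by_rater):
    """islet_scores_by_rater: one list of islet scores (0-4) per rater.
    Assumed aggregation: mean over islets, then mean over raters."""
    return mean(mean(scores) for scores in islet_scores_by_rater)

# Hypothetical insulin intensity scores on D30 (>=5 islets, two raters, 4 mice/group).
wild_type   = [mouse_score([[3, 3, 4, 3, 3], [3, 4, 3, 3, 4]]) for _ in range(4)]
upa_ko      = [mouse_score([[1, 2, 1, 1, 2], [2, 1, 1, 2, 1]]) for _ in range(4)]
upa_plasmid = [mouse_score([[3, 2, 3, 3, 3], [3, 3, 2, 3, 3]]) for _ in range(4)]

# Kruskal-Wallis H test across the three groups (the test used in the paper).
stat, p = kruskal(wild_type, upa_ko, upa_plasmid)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")
```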
For every islet examined, a score of 0 to 4 was given (0, no positive staining; 1, minor staining; 2, moderate staining; 3, prominent staining with expanded positively stained areas; 4, prominent staining of large areas).

Assessing uPA Expression and Secretion of Murine β Cells in Normal or High Glucose Medium

Murine pancreatic β cells, NIT-1 cells (American Type Culture Collection, Manassas, VA, USA), were seeded in culture plates and treated with normal (7 mM) or high (25 mM) glucose culture medium for 24 or 48 h, respectively. Culture media from each well were collected, and uPA levels were measured by the Mouse uPA Total Antigen Assay ELISA kit (Molecular Innovations, Novi, MI, USA). Thereafter, equal amounts of protein (30 µg) from each well, after harvesting cells, were separated by gel electrophoresis. The gel was electroblotted onto a nitrocellulose membrane, which was then incubated with a 1:1000 dilution of anti-uPA antibody (Abcam, Cambridge, MA, USA) at 4 °C overnight. After incubation with a horseradish peroxidase-conjugated anti-rabbit antibody, the membrane-bound antibody was detected with a Western blot detection system and captured on X-ray film.

Measurement of Insulin Secretion of β Cells with and without uPA Silencing

NIT-1 cells were prepared and mixed with 0.4 µL of TurboFect transfection reagent (Thermo Fisher Scientific Inc., Waltham, MA, USA) containing 5 nM of siRNA (Santa Cruz Biotechnology), and then seeded in culture plates for 8 h. As a control, equal amounts of scrambled siRNA (S-siRNA) were used. NIT-1 cells with and without uPA silencing were first incubated with 10% fetal bovine serum and 25 mM of glucose for 40 h to exhaust intracellular insulin. The NIT-1 cells were then stabilized in normal glucose (7 mM) culture medium for 1 h. Subsequently, the NIT-1 cells were treated with culture medium containing 25 mM or 7 mM of glucose, respectively. Insulin levels in the culture media were measured at 1, 2, and 4 h.

Measurement of β Cell Proliferation with and without uPA Silencing in High Glucose

NIT-1 cells with and without uPA silencing were incubated in high-glucose (25 mM) cell medium for 40 h. Then, 10 µL of 5 mg/mL MTT solution (Invitrogen, Life Technologies) was added to each well to label live cells. After incubation, spectrometric absorbance at 570 nm was read as cell viability.

Assessment of Insulin Secretion Rate and β Cell Proliferation after Treating with uPA

NIT-1 cells were incubated with normal (7 mM) or high glucose (25 mM), respectively. The first group of NIT-1 cells was treated with 0.01 IU/mL or 1 IU/mL of uPA (Yao Chih Hsiang Inc., Taiwan) in culture medium. The second group was treated with 0.6 nM or 60 nM of anti-uPA antibody (Abcam, Cambridge, MA, USA) in culture medium. The third group was treated with uPA and anti-uPA antibody at the same time. The insulin secretion rate and proliferation of β cells were measured as described above.

Subjects

Subjects aged 20-70 years who had been diagnosed with T2DM within 5 years were enrolled from a local teaching hospital in Taiwan. Subjects with a history of major medical diseases, including coronary heart disease, myocardial infarction, stroke, renal failure, or type 1 diabetes, were excluded. Three days prior to the study, subjects were asked to maintain a stable diet. On the day of the study, subjects visited the clinic at 8 a.m. after a ten-hour fast. A complete physical examination was performed, and body mass index was calculated as weight/height² (kg/m²).
Blood pressure was measured by nursing staff using standard mercury sphygmomanometers on the right arm of seated participants. The study protocol was approved by the institutional review board and ethics committee of Cardinal Tien Hospital, Xindian, New Taipei City, Taiwan (IRB Approval Number: CTH-97-2-4-035), and performed in accordance with the relevant guidelines and regulations. Written informed consent was obtained from all study subjects.

OGTT

On the study day, an intravenous catheter was placed in the antecubital vein. Fasting blood samples were drawn for biochemistry analysis. Subjects then orally consumed a standard 75 g dose of glucose. Plasma glucose and insulin concentrations were measured before and 5, 10, 20, 30, 45, 60, 80, 100, 120, and 180 min after the oral glucose challenge.

Laboratory Measurement

Human plasma and serum were separated from blood within 1 h of collection and frozen at −80 °C until analysis. Insulin was measured by the Coat-A-Count solid-phase radioimmunoassay kit (Diagnostic Products, Los Angeles, CA, USA); intra- and inter-assay coefficients of variation for insulin were 3.3% and 2.5%, respectively. Plasma glucose was measured using a YSI 203 glucose oxidase analyzer (Yellow Spring Instrument, Yellow Spring, OH, USA). Serum triglyceride was measured by a dry multilayer slide method on the Fuji Dri-Chem 3000 analyzer (Fuji Photo Film, Tokyo, Japan). Serum low-density lipoprotein-cholesterol and high-density lipoprotein-cholesterol levels were determined by an enzymatic cholesterol assay after dextran sulphate precipitation. Glycated hemoglobin A1c levels were evaluated by ion-exchange high-pressure liquid chromatography (Bio-Rad Variant II, Hercules, CA, USA). C-peptide was measured by a C-peptide ELISA kit (10-1141-01, Mercodia); intra- and inter-assay coefficients of variation for C-peptide were 4.3% and 6.2%, respectively. Adiponectin was measured by an Adiponectin DuoSet (DY1065, R&D Systems, Minneapolis, MN, USA); intra- and inter-assay coefficients of variation for adiponectin were 4.3% and 6.2%, respectively. Free fatty acid was measured by a free fatty acid ELISA kit (KA1667, Abnova); the inter-assay coefficient of variation for free fatty acid was 2.23%. Human uPA concentrations were measured in duplicate by the Human uPA ELISA kit (R&D Systems, Minneapolis, MN, USA).

Statistical Analysis

The SPSS version 13.0 statistical package for Windows (SPSS, Chicago, IL, USA) was used for data analysis. Continuous variables are expressed as mean ± SE. In the animal and cell models, the nonparametric Mann-Whitney U test was used for comparisons of two groups, and the nonparametric Kruskal-Wallis H test was applied for comparisons of more than two groups. The ratio of the insulin AUC to the glucose AUC during the OGTT was calculated and taken to represent the ability of insulin secretion after the glucose challenge. For data analysis, all subjects with T2DM were divided into four quartiles, uPA1-uPA4, from the lowest to the highest plasma uPA level. The correlation between uPA and the insulin/glucose AUC ratio was evaluated with Pearson's correlation. The independent effect of uPA on the insulin/glucose AUC ratio was assessed by multivariate linear regression after adjusting for age, sex, and BMI. One-way ANOVA with the Bonferroni post hoc test was applied to compare differences among quartiles. All statistical tests were two-sided, and p-values less than 0.05 were considered statistically significant.
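For readers who want to reproduce the insulin/glucose AUC ratio described above, the following is a minimal sketch in Python; the array names, the trapezoidal-rule choice, and the quartile helper are our own illustrative assumptions, not part of the original analysis pipeline (which used SPSS).

```python
import numpy as np

# OGTT sampling times (minutes), matching the protocol described above
TIMES = np.array([0, 5, 10, 20, 30, 45, 60, 80, 100, 120, 180])

def auc_trapezoid(values, times=TIMES):
    """Area under the curve by the trapezoidal rule."""
    return np.trapz(values, times)

def insulin_glucose_auc_ratio(insulin, glucose):
    """Insulin/glucose AUC ratio, used here as an index of insulin
    secretion ability after the oral glucose challenge."""
    return auc_trapezoid(np.asarray(insulin)) / auc_trapezoid(np.asarray(glucose))

def upa_quartiles(upa_levels):
    """Assign each subject to uPA1-uPA4 based on plasma uPA quartiles."""
    q1, q2, q3 = np.percentile(upa_levels, [25, 50, 75])
    return np.digitize(upa_levels, [q1, q2, q3]) + 1  # groups 1..4
```

The resulting per-subject ratios could then be fed into any correlation or regression routine; the quartile labels simply mirror the uPA1-uPA4 grouping used in the analysis.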
Conclusions

In conclusion, a lack of uPA impairs insulin secretion and the regeneration of β cells in mouse and cell models and induces hyperglycemia. Hyperglycemia in turn reduces uPA expression in the islet, resulting in a vicious cycle that promotes the development of T2DM. Patients with lower uPA levels have a reduced ability of insulin secretion after an oral glucose challenge. A link between uPA deficiency and the development of T2DM is therefore suggested, offering a new direction for the study and treatment of T2DM. uPA appears to be an important factor in the development of T2DM, and further study of the underlying mechanisms is needed.

Acknowledgments: All authors acknowledge the help of Mary Goodwin, English Department, National Taiwan Normal University, in manuscript editing.

Conflicts of Interest: The authors declare no conflicts of interest.

Ethics Approval and Consent to Participate: The study protocol was approved by the institutional review board and ethics committee of Cardinal Tien Hospital, Xindian, New Taipei City, Taiwan (Approval Number: CTH-97-2-4-035) and performed in accordance with the Declaration of Helsinki and the relevant regulations. Written informed consent was obtained from all study subjects.

Availability of Data and Material: The datasets used and analyzed during the current study are available from the corresponding authors upon reasonable request.
Examining the relationship between nutritional status and wound healing in head and neck cancer treatment: A focus on malnutrition and nutrient deficiencies

Abstract

The research was conducted to examine the correlation between nutritional status and wound healing in individuals who were receiving treatment for head and neck cancer. Specifically, this study sought to identify crucial nutritional factors that influenced both the recovery process and the efficacy of treatment. From February 2022 to September 2023, this cross-sectional study was undertaken involving 300 patients diagnosed with head and neck cancer who were treated at Tianjin Medical University Cancer Institute and Hospital, Tianjin, China. To evaluate nutritional status, body mass index (BMI), serum protein levels and dietary intake records were utilized. The assessment of wound healing was conducted using established oncological wound healing scales, photographic documentation and clinical examinations. After treatment, we observed a noteworthy reduction in both BMI (p < 0.05) and serum albumin levels (p < 0.05). There was a slightly increased prevalence of head and neck cancer among males (61.0%, p < 0.05). Over the course of 6 months, significant enhancement in wound healing scores was noted, with an overall improvement of 86% in the healing process. An inverse correlation was identified between nutritional status and wound healing efficacy through multivariate analysis. A logistic regression analysis revealed a significant positive correlation (p < 0.05) between elevated levels of serum protein and total lymphocytes and enhanced wound healing. Conversely, a negative correlation (p < 0.05) was observed between larger wound size at baseline and healing. The research findings indicated a noteworthy association between malnutrition and impaired wound repair among individuals diagnosed with head and neck cancer. The results underscored the significance of integrating nutritional interventions into the therapeutic protocol in order to enhance clinical results. This research study provides significant contributions to knowledge of the intricate nature of head and neck cancer management by advocating for a multidisciplinary approach that incorporates nutrition as a critical element of patient care, and it highlights the importance of ongoing surveillance and customized dietary approaches to optimize wound healing and treatment efficacy.
cancer nutrition, head and neck oncology, treatment efficacy, wound healing

Key Messages

• Nutritional decline, marked by reduced body mass index and serum albumin levels, significantly correlates with impaired wound healing in head and neck cancer patients.
• Males exhibited a marginally higher incidence of head and neck cancer, with specific cancer types influencing nutritional status and wound recovery.
• Enhanced wound healing was observed with increased total lymphocyte count and serum protein levels, while larger initial wound sizes negatively impacted healing outcomes.
• The study underscores the importance of integrating nutritional assessment and intervention in the treatment of head and neck cancer to improve clinical outcomes.

| INTRODUCTION

The term 'healing in head and neck cancer' pertains to the complex progression of mechanical restoration and tissue healing that individuals experience as a result of undergoing surgical interventions, chemotherapy or radiation therapy. 1 This healing process is especially difficult in cases of head and neck cancer, where critical structures and functions, such as respiration and speech, are involved and where treatments may have adverse effects on these functions. 2 Healing efficacy is critical for the successful continuation of cancer treatment and restoration of quality of life; it is therefore a primary concern in the management of patients with head and neck cancer. 3,4

The nutritional status of individuals is critical for their overall health and recovery, particularly for those who are combating cancer. The location of head and neck cancer exponentially increases the significance of nutrition, as it can have a direct influence on a patient's nutrient absorption and consumption. 5,6 Malnutrition or inadequate nutritional status may manifest as an outcome of the disease or as a factor that contributes to its severity. 7-9

The correlation between nutritional status and wound recovery holds notable importance in the context of treating head and neck cancer. These patients frequently sustain wounds, which may be the consequence of surgical procedures or of the cancer itself. 10 The complex process of wound healing necessitates sufficient quantities of protein, vitamins and minerals. Insufficient nutrition may result in compromised treatment outcomes, elevated susceptibility to infection and postponed wound healing. 11 Efficient wound recovery is a critical factor to be taken into account, as it directly impacts the sustainability of cancer treatments and the well-being of patients. 12

Nevertheless, investigating this correlation presents numerous obstacles. To begin with, the evaluation of nutritional status in individuals diagnosed with cancer is a multifaceted task, necessitating consideration of treatment adverse effects, tumour-induced alterations and variations among patients. 13 Furthermore, the interplay between nutritional status and wound healing is mediated by a multitude of biological processes. This calls for an interdisciplinary research approach that integrates knowledge and perspectives from oncology, nutrition, surgery and additional disciplines. 11
Comprehending this correlation carries substantial ramifications for the field of clinical practice. It has the potential to facilitate the creation of nutritional interventions that are specifically designed to enhance wound healing and treatment results. One example of how recuperation could be facilitated is by identifying particular nutrient deficiencies in patients and rectifying them via dietary modifications or supplementation. Furthermore, this understanding can assist in the development of all-encompassing therapeutic strategies that fundamentally incorporate nutritional assistance. 14

The main purpose of this research was to examine the relationship between nutritional status and wound healing in individuals receiving treatment for head and neck cancer. Specifically, the study sought to identify nutritional factors that affect recovery and the effectiveness of treatment. Furthermore, the study aimed to formulate nutritional guidelines supported by empirical evidence, with the aim of enhancing clinical outcomes and improving the overall quality of life of the individuals involved.

| Study design

A cross-sectional investigation was undertaken to examine the correlation between nutritional status and wound healing in head and neck cancer treatment. The main aim of this study was to examine the relationship between patients' nutritional status and the effectiveness of wound repair while undergoing treatment for head and neck cancer.

| Setting and period

The investigation was conducted at Tianjin Medical University Cancer Institute and Hospital, Tianjin, China, which specializes in oncology-related treatment. The research investigation spanned from February 2022 to September 2023, which afforded sufficient duration for comprehensive data gathering and analysis.

| Sample size

A total of 300 patients who had received a diagnosis of cancer of the head and neck comprised the study population. The individuals included in this study had to meet rigorous inclusion requirements, including a definitive diagnosis of head and neck cancer, ongoing treatment and informed consent to participate.

| Inclusion and exclusion criteria

Patients who were 18 years of age or older, had been diagnosed with head and neck cancer and were undergoing treatment modalities such as surgery, chemotherapy or radiation throughout the study period were eligible to participate. Patients who were undergoing palliative care, those who had concurrent cancers or those who were unable to provide informed consent due to cognitive impairment or language barriers were excluded from the study.

| Data collection methods

Extensive documentation was compiled regarding the nutritional status of the patients, encompassing body mass index (BMI), serum protein levels and dietary intake records. The progression of wound healing was methodically monitored and documented via routine clinical evaluations, which included wound size measurements and observations of healing rates.

| Method of nutritional assessment

Standardized instruments, including laboratory tests and the Subjective Global Assessment (SGA), were utilized in the nutritional status evaluation to quantify albumin and pre-albumin levels, among other vital nutritional markers. In addition, dietary evaluations were carried out via interviews with patients and food frequency questionnaires.
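As a small illustration of the kind of rule-based screening that can run alongside instruments such as the SGA, the sketch below computes BMI and flags common laboratory cut-offs in Python; the threshold values (BMI < 18.5 kg/m² for underweight, albumin < 35 g/L for hypoalbuminemia) are widely used conventions, not cut-offs reported by this study.

```python
from dataclasses import dataclass

@dataclass
class NutritionRecord:
    weight_kg: float
    height_m: float
    albumin_g_per_l: float

def bmi(rec: NutritionRecord) -> float:
    """Body mass index: weight (kg) divided by height squared (m^2)."""
    return rec.weight_kg / rec.height_m ** 2

def screening_flags(rec: NutritionRecord) -> dict:
    """Flag conventional cut-offs; thresholds are illustrative, not study-specific."""
    return {
        "underweight": bmi(rec) < 18.5,                 # WHO underweight convention
        "hypoalbuminemia": rec.albumin_g_per_l < 35.0,  # common laboratory cut-off
    }
```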
| Methods of wound healing evaluation

Photographic documentation and comprehensive clinical examinations were periodically utilized to assess wound healing. Prominent scoring systems and established scales, which have been validated in oncological wound healing research, were applied to quantitatively evaluate the progression of healing.

| Statistical analysis

Statistical analysis was performed on the gathered data using SPSS software Version 26.0. To evaluate the association between nutritional status and wound healing, regression analysis and correlation coefficients were applied. Additionally, subgroup analyses were performed to ascertain the effects of various treatment modalities and nutritional interventions on the process of wound healing. p-values were employed to ascertain the significance of results, with the predetermined threshold for statistical significance set at p < 0.05.

| Ethical determinations

Upon evaluation and approval by the institutional review boards of the participating centres, the study protocol was implemented. Written informed consent was obtained from all participants, and the research was carried out in adherence to ethical standards and guidelines specific to human subjects.

| RESULTS

The findings of our research provide an exhaustive examination of a range of factors pertaining to patient characteristics, nutritional status and wound healing indicators. The findings provide substantial knowledge regarding the interaction of these variables within the framework of head and neck cancer therapy. It was observed that males had a marginally higher incidence of head and neck cancer than females (61.0%, p < 0.05). According to the distribution of cancer categories, laryngeal and hypopharyngeal cancers were the most prevalent at 20.0% (p < 0.05); the frequency of the remaining types did not vary significantly. Significantly, there was a notable correlation between the type and location of cancer, with particular emphasis on the oral cavity and pharynx (33.0%, p < 0.05) and the larynx and hypopharynx (20.0%, p < 0.05) (Table 1). A decline in nutritional status during treatment was indicated by the statistically significant decreases in BMI (p < 0.05) and serum albumin levels (p < 0.05) from baseline to post-treatment (Table 2). Wound healing was also shown to improve gradually over time: from 1 month to 6 months after treatment, the mean wound healing score increased substantially (p < 0.05), indicating a noteworthy overall enhancement of 86% in the healing process (Table 3). A comprehensive analysis of wound characteristics is presented in Table 4. During the course of treatment, notable decreases in both lesion size and depth (p < 0.05) were observed, in addition to improvements in pain intensity and infection status. The impact of multivariate factors on wound healing is explicated in Table 5. The negative coefficients for nutritional status (p < 0.05) and BMI (p < 0.05) indicated an inverse relationship between these variables and wound healing efficacy. Additionally, age and modality of treatment emerged as significant factors. The results of the logistic regression analysis are presented in Table 6, demonstrating that increased total lymphocyte count and serum protein levels were statistically substantially correlated with improved wound healing outcomes (p < 0.05). Conversely, larger wound size at baseline had a negative impact on healing (p < 0.05). Thus, the significance of nutritional status and particular clinical parameters in the wound healing process for patients
undergoing treatment for head and neck cancer was clearly highlighted by these findings.

| DISCUSSION

The investigation conducted in this study regarding the relationship between nutritional status and wound healing in patients with head and neck cancer has produced noteworthy findings that support and expand current understanding in the fields of oncological nutrition and wound care.

Our results indicated a significant reduction in BMI and serum albumin concentrations following treatment, consistent with previous investigations that have identified malnutrition as a prevalent concern among patients with head and neck cancer. 15 The substantial decline in dietary intake scores serves to emphasize the difficulty of sustaining sufficient nutrition throughout the course of treatment. The significance of these findings lies in the fact that they support the concept that nutritional decline is not simply an adverse effect of the illness, but rather a causal element that affects the efficacy of treatment. 7

The pivotal finding of our study is the progressive improvement in wound healing scores that was observed over the period of 6 months. The observed association between enhanced nutritional indicators (such as lymphocyte count and serum protein levels) and wound healing is consistent with the findings of Wang et al. (2022), who demonstrated the critical role that nutrition assumes in the processes of wound healing. 16 The unexpected correlation between nutritional status and wound healing effectiveness, specifically the negative coefficients for BMI, represents a fresh perspective that proposes wound recovery dynamics may be influenced not only by the existence but also by the severity of malnutrition.

The higher prevalence of head and neck cancer observed in males, as confirmed by our study, is consistent with global epidemiological data that also indicate a male preponderance in the incidence of head and neck cancer. 17 The correlation that exists between the type and location of cancer, specifically in the larynx and hypopharynx and the oral cavity and pharynx, provides additional insight into the complex ways in which the location of cancer may impact nutritional status and wound healing.

The multivariate analysis that uncovers the influence of nutritional status, age and treatment modality on wound recovery is especially enlightening. The results of this study indicate that personalized treatment strategies that take these factors into account may be advantageous. The detrimental effect of a larger initial lesion size on healing outcomes, as demonstrated by our logistic regression analysis, emphasizes the significance of aggressive wound management and early intervention in the therapy of head and neck cancer.

Moreover, the correlation between distinct wound healing trajectories and the utilization of diverse treatment modalities indicates the necessity for integrated care strategies. The discovery that radiation therapy and chemotherapy have distinct effects on wound healing in these patients supports the claims of Deptula et al. (2019) regarding the complexity of wound management in cancer patients and calls for a more individualized approach to wound care. 18

Significant implications for clinical practice result from our findings. They suggest that proactive nutritional interventions, such as dietary counselling and nutritional supplementation, should be incorporated into the standard of care for patients with head and neck cancer. This aligns with the suggestions put forth by Ravasco et al. (2019), which endorse the provision of ongoing and timely nutritional assistance to individuals diagnosed with cancer. 19 The observed association between enhanced nutritional status and improved wound healing outcomes suggests that addressing nutritional deficiencies may represent a crucial approach to improving treatment efficacy and, consequently, wound healing.

Although our research offers valuable insights, it is not devoid of constraints. Because of its cross-sectional design, it is difficult to establish causality. In order to enhance comprehension of the temporal associations between nutritional status and wound healing, longitudinal studies are imperative. Additionally, the concentration of the study on patients diagnosed with head and neck cancer in Tianjin may restrict the generalizability of the results. Further investigation should strive to incorporate a wide range of populations in order to augment the practicality of the findings.

Our research concluded by emphasizing the critical importance of nutritional status in the recovery of wounds in patients with head and neck cancer. This highlights the importance of incorporating nutritional assessment and intervention into cancer treatment as an integral part of comprehensive care.

| PRACTICAL IMPLICATIONS

By proactively addressing malnutrition and specific nutrient deficiencies, healthcare providers can significantly improve patient recovery outcomes and quality of life.

| CONCLUSION

The present study investigates the intricate relationship between nutritional status and wound repair in patients diagnosed with head and neck cancer. The results of our study demonstrate a significant association between compromised wound healing and deteriorating nutritional indicators, such as serum albumin levels and BMI. This study highlights the importance of incorporating nutritional assessment and targeted interventions into the treatment plan for patients with head and neck cancer in order to improve wound healing outcomes, treatment efficacy as a whole and quality of life for patients. These observations facilitate the development of innovative and all-encompassing strategies for cancer treatment.

Table 1. Participant demographics and clinical characteristics.

Table 3. Wound healing assessment. *Indicates the significant values.

Table 5. Multivariate analysis of factors influencing wound healing. *Indicates the significant values.
A Mature Tertiary Lymphoid Structure with a Ki-67-Positive Proliferating Germinal Center Is Associated with a Good Prognosis and High Intratumoral Immune Cell Infiltration in Advanced Colorectal Cancer

Simple Summary

Tertiary lymphoid structures (TLSs) arise in non-lymphoid tissues due to inflammation or cancer and play a key role in adaptive immune responses. In this study, we analyzed TLS maturity in 78 patients with pathological T4 colorectal cancer (CRC). Mature TLSs, identified by organized T (CD3+) and B (CD20+) lymphocytes with Ki-67-positive B cells, were linked to microsatellite instability and improved cancer-specific and post-recurrence survival. High tumor Ki-67 expression correlated with poorer outcomes. The absence of mature TLS independently predicted poor survival. Tumors with mature TLS showed a higher infiltration of CD3+ T cells, FOXP3+ T cells, and CD86+ immune cells, including M1-like macrophages. Focusing on the Ki-67 expression pattern, the simultaneous evaluation of TLS maturity and tumor proliferation potency is suggested as a potential prognostic indicator in CRC.

Abstract

Tertiary lymphoid structures (TLSs) are complex lymphocyte clusters that arise in non-lymphoid tissues due to inflammation or cancer. A mature TLS with proliferating germinal centers is associated with a favorable prognosis in various cancers. However, the effect of TLS maturity on advanced colorectal cancer (CRC) remains unexplored. We analyzed the significance of TLS maturity and tumor Ki-67 expression in surgically resected tumors from 78 patients with pathological T4 CRC. Mature TLS was defined as the organized infiltration of T and B cells with Ki-67-positive proliferating germinal centers. We analyzed the relationship between TLS maturity and intratumoral immune cell infiltration. Mature TLS with germinal center Ki-67 expression was associated with microsatellite instability and improved survival; however, high tumor Ki-67 expression was associated with poor survival in the same cohort. Multivariate analysis identified the absence of mature TLS as an independent predictor of poor post-recurrence overall survival. Intratumoral infiltration of T lymphocytes and macrophages was significantly elevated in tumors with mature TLS compared to those lacking it. High Ki-67 levels and the absence of mature TLS were identified as poor prognostic factors in advanced CRC. Mature TLS could serve as a promising marker for high-risk CRC patients.

Introduction

Colorectal cancer (CRC) is one of the leading causes of cancer-related deaths globally [1]. However, the prognosis of advanced CRC, with its therapeutic resistance and complex heterogeneity, is often poor despite advancements in surgical techniques and adjuvant chemotherapy [2,3]. Consequently, there is an urgent need to identify reliable biomarkers to predict patient outcomes and guide therapeutic strategies more effectively. This approach is expected to resolve important issues in the improvement of clinical CRC care.
Recently, many researchers have highlighted the significance of the tumor microenvironment (TME) in cancer progression, patient prognosis, and therapeutic resistance [4]. Among the various components of the TME, tertiary lymphoid structures (TLSs) have gained attention for their potential role in predicting patient prognosis and enhancing antitumor immunity through antigen presentation and the production of tumor-specific antibodies [5,6]. These structures, which resemble secondary lymphoid organs such as the spleen and lymph nodes, are composed of B and T cell zones and are found in chronically inflamed or cancerous tissues [7]. The presence of TLS in tumor tissues has been reported to be associated with good prognosis in several cancers, including CRC [8][9][10][11]. In contrast, not only TLS density but also maturity has been reported to be important in the regulation of antitumor immunity, which contributes to the prognosis and treatment resistance of cancer patients [12,13].

Mature TLSs are characterized by the presence of well-developed germinal centers with proliferating B cells [10,14]. These structures have been reported to facilitate robust antigen presentation and antibody production, thereby enhancing immunotherapy sensitivity and antitumor immunity within the TME. In cancer research, high expression levels of the proliferation marker Ki-67 in cancer cells are well known to be related to cancer aggressiveness and poor prognosis in various cancers [15][16][17][18][19]. Therefore, evaluating Ki-67 within the TME may distinguish not only TLS maturity but also cancer cell aggressiveness, providing a possibility for a clearer understanding of the immune landscape and cancer characteristics simultaneously. However, the prognostic value of the simultaneous evaluation of tumor Ki-67 expression and mature TLS with germinal center Ki-67 expression in patients with advanced CRC remains unclear.

This study aimed to clarify the relationship between TLS maturity, tumor Ki-67 expression, clinicopathological factors, and tumor-infiltrating T lymphocytes, macrophages, and B lymphocytes in patients with advanced CRC. Therefore, we performed immunohistochemical staining for the T lymphocyte marker CD3, the B lymphocyte marker CD20, and the proliferation marker Ki-67 to identify the maturity of peritumoral TLS in surgically resected specimens from patients with pathological T4 (pT4) CRC.

Clinical Samples

This study enrolled 78 patients diagnosed with pT4 CRC who underwent curative resection at Gunma University Hospital between July 2013 and February 2020. Of the 78 patients, 50 received adjuvant chemotherapy after surgery. The exclusion criteria were preoperative treatment and non-curative resection due to distant metastasis. Relevant clinical data were retrieved from the medical and surgical records.

Evaluation of Peritumoral Mature TLS

Sequential sections of surgical specimens from pT4 CRC were immunohistochemically stained for CD20, CD3, and Ki-67 to assess the presence and maturity of TLS in the TME. TLSs were identified by the clustering of both B cells (CD20+) and T cells (CD3+). The maturity of a TLS was determined by evaluating nuclear Ki-67 expression in immune cells within the germinal center: a TLS containing a germinal center with nuclear Ki-67-expressing immune cells was classified as mature. If at least one Ki-67-positive TLS was found in the peritumoral area, specifically within 1 mm of the invasive margin, the case was considered positive (Figure 1).
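To make the decision rule above concrete, here is a minimal sketch of the mature/immature classification expressed in Python; the data structure and field names are our own illustrative assumptions, and in practice this call is made by pathologists reading the stained sequential sections rather than by code.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TLS:
    has_cd20_b_cluster: bool                # organized B cell zone (CD20+)
    has_cd3_t_cluster: bool                 # surrounding T cells (CD3+)
    germinal_center_ki67: bool              # nuclear Ki-67 in germinal center cells
    distance_to_invasive_margin_mm: float   # location relative to the invasion front

def is_mature(tls: TLS) -> bool:
    """Mature TLS: organized T/B clusters plus a Ki-67-positive germinal center."""
    return (tls.has_cd20_b_cluster and tls.has_cd3_t_cluster
            and tls.germinal_center_ki67)

def case_is_mature_tls_positive(tls_list: List[TLS]) -> bool:
    """A case is positive if at least one mature TLS lies within 1 mm
    of the invasive margin (the peritumoral area)."""
    return any(is_mature(t) and t.distance_to_invasive_margin_mm <= 1.0
               for t in tls_list)
```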
Evaluation of Tumoral Ki-67 Expression in Tumor Tissues

Tumoral Ki-67 expression was assessed in sections of surgical specimens immunohistochemically stained for Ki-67. Images of five representative fields were captured at 200× magnification using a microscope (BZ-X700; Keyence, Osaka, Japan). Ki-67 expression in these images was manually quantified using Java-based image processing software (ImageJ 1.53; National Institutes of Health, Bethesda, MD, USA). We specifically evaluated nuclear Ki-67 expression in 100 cancer cells per image, totaling 500 cells per sample, and computed the average value across the five fields for each patient. Ki-67 expression levels were then classified into two categories based on the ROC curve for disease-free survival, with a cut-off value of 22.8 (Ki-67 low, n = 68; Ki-67 high, n = 10).

Image Acquisition and Quantitative Evaluation of Immune Cells

To count tumor-infiltrating immune cells (CD3+, CD8+, FOXP3+, CD86+, and CD163+), we captured four fields from the tumor section, encompassing 36 images covering 9.070624 mm², using a microscope (BZ-X700; Keyence, Osaka, Japan). A Hybrid Cell Count System (Keyence, Osaka, Japan), a semi-automatic image analysis software, was used to count immune cells in the digital images. The density of tumor-infiltrating immune cells was calculated by dividing the number of cells by the total area (mm²), yielding cell density per mm².

Statistical Analysis

Chi-squared and Fisher's exact tests were used to examine the relationships between categorical values. The Mann-Whitney U test was used to compare the means of continuous variables across different groups. Survival curves were visualized using Kaplan-Meier curves, with the log-rank test used to assess differences between groups. Univariate and multivariate Cox regression analyses were conducted to identify independent predictors of post-recurrence overall survival. Statistical analyses were performed using JMP Pro 15 (SAS Institute, Cary, NC, USA) and GraphPad Prism 10 (Dotmatics, Boston, MA, USA). Statistical significance was defined as p < 0.05.
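The Ki-67 scoring and immune cell density calculations described above reduce to simple arithmetic once the per-field counts are exported. The sketch below reproduces that arithmetic in Python; the per-field counts, the 22.8 cut-off, and the 9.070624 mm² total area follow the text, while everything else (function names, example numbers) is our own assumption.

```python
import numpy as np

KI67_CUTOFF = 22.8  # ROC-derived cut-off for disease-free survival (see text)

def ki67_index(positive_counts, cells_per_field=100):
    """Percent Ki-67-positive cancer cells, averaged over five fields
    of 100 cells each (500 cells per sample)."""
    positive_counts = np.asarray(positive_counts, dtype=float)
    return float(np.mean(positive_counts / cells_per_field * 100.0))

def ki67_category(index):
    return "Ki-67 high" if index > KI67_CUTOFF else "Ki-67 low"

def immune_cell_density(total_cells, total_area_mm2=9.070624):
    """Tumor-infiltrating immune cell density (cells per mm^2)."""
    return total_cells / total_area_mm2

# Hypothetical example: five fields with 20, 25, 30, 22, 18 positive cells of 100
idx = ki67_index([20, 25, 30, 22, 18])  # -> 23.0, classified as Ki-67 high
```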
Evaluation of Distribution and Maturity of TLS in pT4 CRC Samples

To clarify the significance of TLS maturity in pT4 advanced CRC, we defined a 1 mm area from the invasion front line of the tumor tissue as the peritumoral area. Next, we immunohistochemically evaluated the distribution of TLS in this area as previously described (PMID: 37016103) (Figure 1a). The histological characteristics of TLS were identified as a lymphoid tissue structure with T cells surrounding the germinal center B cells, using the T cell marker CD3 and the B cell marker CD20. This study defined TLS with germinal center Ki-67 in the peritumoral area as mature TLS and TLS without Ki-67 as immature TLS (Figure 1b). Among the 78 pT4 CRC samples, 76 (97.4%, 76/78) exhibited the presence of at least one peritumoral TLS, with an average of 8.7 (±6.6) TLSs observed in each specimen. Additionally, 30 (38.5%, 30/78) demonstrated the coexistence of mature and immature TLSs. Based on the presence of mature TLS, 30 samples (38.5%, 30/78) were classified into the mature TLS group and 48 (61.5%, 48/78) into the immature TLS group (Table 1).

Association of Mature TLS with the Clinicopathological Features and Survival of Clinical Advanced CRC Patients

Table 1 shows the relationship between mature TLS with germinal center Ki-67 and patient clinicopathological characteristics. Positivity for mature TLS was significantly associated with microsatellite status, which has been reported to play a significant role in activating antitumor immunity (p = 0.0081) (Table 1).

We explored the prognostic impact of mature TLS with germinal center Ki-67 expression using survival analyses of overall survival, cancer-specific survival, and disease-free survival in 78 patients with surgically resected pT4 CRC. Mature TLS was significantly associated with prolonged cancer-specific survival (p = 0.0104) (Figure 2a). Moreover, patients with recurrent CRC (n = 29) and mature TLS had better post-recurrence overall survival than did those without mature TLS (p = 0.0068) (Figure 2a). The differences were not significant in terms of disease-free and overall survival (Figure 2a).

High expression levels of tumoral Ki-67 were related to poor overall survival and disease-free survival in our cohort (p = 0.0328 and p = 0.0051, respectively) (Figure 2b). However, the difference was not significant in terms of cancer-specific survival and post-recurrence overall survival (Figure 2b).

Table 2 shows the results of the multivariate analysis of post-recurrence overall survival using the Cox regression model. Multivariate analysis revealed that negativity for mature TLS was an independent predictor of shorter post-recurrence overall survival (HR = 32.546, 95% CI: 2.8759-368.31, p = 0.0049) (Table 2).

Discussion

This study clarified that the presence of peritumoral mature TLS was associated with microsatellite instability and improved cancer-specific and post-recurrence overall survival; however, high expression of tumoral Ki-67 was related to poorer disease-free and overall survival in the same cohort. Moreover, multivariate analysis identified the negativity of peritumoral mature TLS as an independent predictor of poor post-recurrence overall survival in patients with advanced pT4 CRC. In addition, intratumoral infiltration levels of T lymphocytes, FOXP3+ regulatory T cells, and CD86+ immune cells, including M1-like macrophages, were significantly higher in tumors with mature peritumoral TLS than in those without mature TLS.

In this study, patients with pT4 CRC and mature TLS had better cancer-specific and overall survival rates than did those without mature TLS, although there was no significant difference in disease-free survival. Moreover, we showed that patients with CRC and mature TLS had better post-recurrence survival compared to patients with CRC but without mature TLS. Therefore, it has been proposed that local tumor immune activation by mature TLS contributes minimally to preventing recurrence in locally advanced CRC. However, TLS status is associated with the response to treatment following recurrence. Evaluating TLS maturity may be useful for predicting the therapeutic response in patients with advanced CRC who experience recurrence.
In various gastrointestinal cancers, a higher density and maturity of TLS in tumors have been linked to favorable outcomes [20][21][22]. Previously, researchers have evaluated TLS density and maturation within both the tumor and peritumoral regions. However, among patients with CRC without metastatic regions, peritumoral TLS, rather than intratumoral TLS, demonstrated a strong predictive value for patient prognosis [8]. The authors also showed that intratumoral TLSs were detected in a minority of cases and that their presence or density did not correlate with patient prognosis. Similarly, Ding et al. showed that peritumoral TLSs have favorable prognostic implications in intrahepatic cholangiocarcinoma [23]. Thus, this study focused on peritumoral TLS to analyze the relationship between TLS maturity and clinicopathological significance in our CRC cohort. Furthermore, the originality and novelty of this study lie in its focus on the relationship between peritumoral TLS maturity and intratumoral immune cell infiltration in a unique cohort of pT4 locally advanced CRC specimens rather than in typical Stage I-III CRC specimens.

Ki-67 is a widely recognized marker used to assess the proliferation rate of tumor cells [24]. In CRC, elevated Ki-67 expression is associated with increased tumor aggressiveness and poor prognosis [25,26]. Consistent with previous findings, higher Ki-67 expression was associated with significantly shorter disease-free survival and overall survival in our patients with pT4 CRC. In addition to providing information on tumor proliferation and aggressiveness, Ki-67 can be used to assess the local immune response against cancer. Based on these findings, Ki-67 evaluation is a simple and informative method that can serve as a biomarker for assessing cancer aggressiveness and TLS maturity in CRC.
Mature TLSs play a crucial role in influencing immune cell infiltration into the TME [9]. As an integral part of the TME, the TLS regulates the trafficking and recruitment of T cells, B cells, and macrophages through several mechanisms, such as the secretion of peripheral node addressin, mucosal addressin cell adhesion molecule-1 (MAdCAM-1), and L-selectin ligands [27][28][29]. For instance, MAdCAM-1 acts as a potent recruiter of immune cells through its interaction with the α4β7 integrin present on immune cells, including T cells and macrophages [30][31][32]. In this study, mature TLSs with Ki-67 expression were associated with a significantly higher intratumoral infiltration of CD3-, CD8-, FOXP3-, and CD86-positive cells. These findings suggest that mature TLS may have a significant impact on peritumoral immune cell infiltration through several mechanisms, including the formation of venules and the secretion of adhesion molecules, selectins, and addressins. Interestingly, although we found a significant correlation between the presence of peritumoral mature TLS and a higher number of CD86+ cells (including M1-like macrophages) in tumors, there was no significant difference in CD163+ cells (a marker for M2-like macrophages). Cytokines such as interferon (IFN)-γ and tumor necrosis factor (TNF)-α, secreted by the abundant intratumoral T cells within mature TLS, promote pro-inflammatory M1 phenotype polarization [33,34]. These findings indicate that mature TLS might regulate the M1-like polarization and infiltration of intratumoral macrophages via cytokines such as IFN-γ and TNF-α secreted by T cells, together with addressin/L-selectin interactions. Taken together, these results suggest that the TLS may not only play a role in immune cell recruitment but may also be crucial for delicately balancing the TME.

This study has certain limitations. First, it was conducted retrospectively at a single institution and focused on patients with surgically resected pT4 CRC. Moreover, the sample size of the cohort was limited to 78 patients; this relatively small cohort may have introduced sampling bias. Our findings may not fully capture the importance of mature TLS in all patients with CRC, including those with inoperable disease. Second, we did not evaluate the regulators related to TLS formation, such as CXCL13 and MAdCAM-1, which are primarily involved in the mechanisms that attract immune cells to the TLS. Investigating the levels and dynamics of these regulators may provide valuable insights into the formation and regulation of TLS in CRC. Future research should consider comprehensive analyses to elucidate the biology of TLS in CRC. Moreover, additional research, including patients with recurrent/unresectable CRC, is required to assess whether detecting mature TLS in pretreatment biopsy samples could predict the response to chemotherapy.

Conclusions

Our findings have potential implications for future CRC treatments. By focusing on the Ki-67 expression pattern, we identified two poor prognostic factors in advanced CRC: high tumor Ki-67 expression and a lack of mature TLS with germinal center Ki-67 expression. The presence of mature TLS is associated with high infiltration of intratumoral immune cells and better survival in patients with pT4 CRC. This suggests that TLS maturity is an important regulator of intratumoral immune cell infiltration, and that the evaluation of mature TLS could serve as a promising predictor for identifying high-risk CRC patients.
Figure 1.Evaluation of mature TLS with germinal center Ki-67 in pathological T4 (pT4) colorectal cancer (CRC) samples.(a) The area 1 mm from the invasion front of the pT4 CRC was defined as the peritumoral area.This study evaluated the significance of TLS in the peritumoral area.(b) Representative images of pan-T cell marker CD3, B cell marker CD20, and proliferation marker Ki-67 in pT4 CRC specimens.Upper panel: TLS without Ki-67-positive proliferating B cells in the germinal center was defined as immature TLS.Lower panel: TLS with Ki-67-positive proliferating B cells in the germinal center was defined as mature TLS.Images were captured at 200× magnification.Scale bar, 100 µm. Figure 1 . Figure 1.Evaluation of mature TLS with germinal center Ki-67 in pathological T4 (pT4) colorectal cancer (CRC) samples.(a) The area 1 mm from the invasion front of the pT4 CRC was defined as the Table 1 . The relationship of clinicopathological factors and TLS with germinal center Ki-67 in 78 patients with pT4 CRC. Table 2 . Multivariate analysis for post-recurrence overall survival in pT4 CRC patients with recurrence.
Unsupervised Single-Image Super-Resolution with Multi-Gram Loss

Recently, supervised deep super-resolution (SR) networks have achieved great success in both accuracy and texture generation. However, most methods train on datasets with a fixed kernel (such as bicubic) between high-resolution images and their low-resolution counterparts. In real-life applications, pictures are often disturbed by additional artifacts, e.g., a non-ideal point-spread function in old film photos or compression loss in cellphone photos. How to generate a satisfactory SR image from a specific single low-resolution (LR) image is still a challenging issue. In this paper, we propose a novel unsupervised method named unsupervised single-image SR with multi-gram loss (UMGSR) to overcome this dilemma. There are two significant contributions: a new network architecture that leverages the internal information of the input image, and a multi-gram loss that improves texture generation.

Introduction

Super-resolution (SR) based on deep learning (DL) has received much attention from the community [1][2][3][4][5][6][7]. Recently, convolutional neural network (CNN)-based models have consistently produced significant improvements in SR generation. For example, the first CNN-based SR method, SRCNN [4], generated more accurate SR images compared with traditional methods. In general, many high-resolution (HR)-low-resolution (LR) image pairs are the building blocks for DL-SR methods in a supervised setting: the SR training uses the HR image as the supervision to guide the learning process. Nevertheless, in practice, we can barely collect enough external information (HR images) for training under severe conditions [8][9][10], e.g., medical images, old photos, and disaster monitoring images. On the other hand, most DL-SR methods train on datasets with a fixed kernel between HR and LR images. In fact, this fixed-kernel assumption creates a fairly unrealistic situation limited to certain circumstances. When a picture violates the fixed spread kernel of the training data, the final performance drops by a large margin. This phenomenon is also highlighted in ZSSR [11]. In addition, if there are artifacts, e.g., kernel noise or compression loss, a pre-trained DL model with a fixed kernel relationship will generate rather noisy SR images. As a result, we claim that synthesizing the SR image from a single input may become a solution to the problematic situations mentioned above.

Theoretically, SR is an ill-posed inverse problem: many different SR solutions are suitable for one LR input. Intuitively, the more internal information of the LR input is involved in the generation process, the better the result that can be expected. The development of DL-SR shows that various carefully designed strategies have been introduced to improve learning ability. However, as a typical supervised problem, supervised DL-SR models train on limited HR-LR image pairs, and the model is restricted by the training data. In contrast, our method is designed for single-input SR, i.e., designing an SR model for the one-image-input condition. We define this special condition as the unsupervised SR task, following [11]. A new structure is proposed in our model. Moreover, to learn global features [12][13][14], we introduce the style loss to the SR task, i.e., the Gram loss used in style transfer. Experimental results show that a well-designed integrated loss can contribute to better performance in visual perception, as depicted in [15].
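Since the Gram loss is central to what follows, here is a minimal PyTorch sketch of a Gram matrix and a multi-layer ("multi-gram") style loss over feature maps; the normalization, weights, and function names are illustrative assumptions rather than the exact UMGSR implementation.

```python
import torch

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a feature map: channel-by-channel inner products,
    normalized by feature size. feat has shape (N, C, H, W)."""
    n, c, h, w = feat.shape
    f = feat.view(n, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def multi_gram_loss(sr_feats, ref_feats, weights=None):
    """Sum of Gram-matrix MSEs over several feature layers.
    sr_feats/ref_feats: lists of (N, C, H, W) tensors from chosen layers."""
    weights = weights or [1.0] * len(sr_feats)
    loss = 0.0
    for w, fs, fr in zip(weights, sr_feats, ref_feats):
        loss = loss + w * torch.nn.functional.mse_loss(
            gram_matrix(fs), gram_matrix(fr))
    return loss
```

Because the Gram matrix discards spatial arrangement and keeps only channel correlations, matching it across layers pushes the generated image toward the global texture statistics of the reference, which is exactly the property borrowed from style transfer here.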
Taking advantage of the new structural design and loss functions, we can acquire considerably high-quality SR images in terms of both accuracy and texture detail. Specifically, accuracy refers to pixel alignment, which is commonly measured by the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) [2,4,5,7,16,17]. Moreover, texture details are highlighted in some SR methods, such as [3,8,18,19], which try to generate satisfying images in visual perception by minimizing the feature distance between the SR image and its HR counterpart in specific pre-trained CNN layers.

To sum up, in this paper, we propose a new unsupervised single-image DL-SR method with multi-gram loss (UMGSR) (our code is available at https://github.com/qizhiquan/UMSR). To address the aforementioned issues and improve visual performance, we introduce three main modifications to existing approaches. Firstly, we implement a specific unsupervised mechanism. Based on the self-similarity in [20], we denote the original input image as G_HR. Then, a degradation operation is applied to obtain the corresponding G_LR counterpart. The training dataset is constituted of the G_HR-G_LR pairs. Secondly, we build a highly efficient framework with residual neural network blocks [21] as building blocks and introduce a two-step global residual learning to extract more information. The experimental results confirm that our approach performs well at texture generation. Thirdly, we introduce the multi-gram loss following [22], which is commonly used in texture synthesis. Accordingly, we form the loss function in UMGSR by combining the MSE loss, the VGG perceptual loss, and the multi-gram loss. Benefiting from these modifications, our model eventually achieves better performance in visual perception than both existing supervised and unsupervised SR methods. A comparison of SR images from different DL-SR methods is shown in Figure 1.

There are two main contributions in this paper:
• We design a new neural network architecture, UMGSR, which leverages the internal information of the LR image in the training stage. To stably train the network and convey more information about the input, UMGSR combines residual learning blocks with a two-step global residual learning.
• The multi-gram loss is introduced to the SR task, cooperating with the perceptual loss. In detail, we combine the multi-gram loss with the pixel-level MSE loss and the perceptual loss as the final loss function. Compared with other unsupervised methods, our design obtains satisfying results in texture details and achieves SR image generation comparable to supervised methods.

Figure 1. A comparison of some SR results. The figure shows the generations of ZSSR (an unsupervised DL-SR method), EDSR (a supervised method with the best PSNR score), SRGAN (a method good at perceptual learning), ResSR (the generator of SRGAN), and our proposed method with three different loss functions. From the details, we can infer that more pleasant details are shown in the last pictures. The generations with different loss functions further illustrate how the details change.

Related Work

SR is one of the basic computer vision tasks. In the realm of SR, there are mainly three distinct regimes: interpolation-based methods [23,24], reconstruction-based methods [25], and pairs-learning-based methods [1][2][3][4][5][7][11][20][26]. A lot of work has been done to address this issue, such as [27][28][29].
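As a companion to the loss design summarized above, the following sketch shows how an MSE term, a VGG-feature perceptual term, and the multi-gram term might be combined into one training objective; the weighting coefficients and the reuse of multi_gram_loss from the earlier sketch are our own assumptions, not the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def total_loss(sr, hr, sr_feats, hr_feats,
               w_perceptual=0.1, w_gram=0.01):
    """Combined objective: pixel MSE + perceptual loss (feature MSE on
    one deep layer) + multi-gram style loss over several layers.
    sr_feats/hr_feats are lists of pre-trained VGG feature maps."""
    pixel = F.mse_loss(sr, hr)
    perceptual = F.mse_loss(sr_feats[-1], hr_feats[-1])
    gram = multi_gram_loss(sr_feats, hr_feats)  # defined in the earlier sketch
    return pixel + w_perceptual * perceptual + w_gram * gram
```

In practice, the relative weights trade off pixel accuracy (PSNR) against texture fidelity, which is the tension the paper highlights between accuracy-oriented and perception-oriented SR methods.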
Recently, DL models have achieved great success in many CV areas, e.g., [14,[30][31][32]. In the SR area, DL-SR methods have become hugely successful in terms of both accuracy and perceptual quality. Most recent achievements build on outstanding DL-based approaches and can be divided into three branches: supervised SR methods, unsupervised methods, and Generative Adversarial Network (GAN)-related methods.

Supervised SR methods. After AlexNet [33] first demonstrated the enormous advantage of DL over shallow methods in image classification, a large body of work applied deep CNNs to traditional computer vision tasks. Regarding SR, the first DL-SR method was proposed by Dong et al. in [4,34]; it is a predefined upsampling method that scales the LR image up to the required size before training. Firstly, a traditional SR method (bicubic) is used to get the initial scaled SR image. Then, a three-layer CNN is employed to learn the non-linear mapping between the scaled SR image and the HR one. Although only three convolutional layers are involved, the result demonstrates a massive improvement in accuracy over traditional methods. Later, researchers succeeded in building sophisticated SR networks that strive for more accurate performance with relatively reasonable computation resources. For example, a new upsampling framework, the Efficient Sub-Pixel Convolutional Neural Network (ESPCN), a five-layer network, is proposed in [7]. Information from different layers is mixed to obtain the SR result. Meanwhile, the training process works with the small-size LR input, and the scale-up layer is based on a simple but efficient sub-pixel convolution mechanism. Because most layers deal with small feature maps, the total computational complexity of ESPCN is considerably reduced. The sub-pixel scaling strategy is widely used in subsequent algorithms, such as SRGAN [3] and EDSR [1]. On the other hand, as mentioned in SRCNN, while it is common wisdom that a deeper model comes with better performance, increasing the number of layers might result in non-convergence. To bridge this gap, Kim et al. designed a global residual mechanism following the residual neural network [21] to obtain a stable and deeper network. This mechanism eventually developed into two approaches: Very Deep Convolutional Networks (VDSR) [5] and the Deeply Recursive Convolutional Network (DRCN) [35]. Due to the residual architecture, both networks can be stacked with more than 20 convolutional layers while the training process remains reasonably stable. Subsequent SR research mostly focuses on designing new local learning blocks. To build a deep and concise network, the Deep Recursive Residual Network (DRRN) is proposed in [6], which replaces the residual block of DRCN with two residual units to extract more complex features. Similar to DRCN, by rationally sharing the parameters across different residual blocks, the total number of parameters of DRRN is kept small, while the network can be further extended to a deeper one with more residual blocks. In DenseSR [36], new feature-extracting blocks from DenseNet [37] contribute to fairly good results. To leverage hierarchical information, Zhang et al. propose the Residual Dense Block (RDB) in the Residual Dense Network (RDN) [17]. Benefiting from the learning ability of local residual and dense connections, RDN achieves state-of-the-art performance.
Besides, the Deep Back-Projection Networks (DBPN) [2] employ mutual up- and down-sampling stages and an error-feedback mechanism to generate more accurate SR images. Features of the LR input are precisely learned by several repetitive up and down stages. DBPN attains stunning results, especially for large scale factors, e.g., 8×.

Unsupervised SR methods. Instead of training on LR-HR image pairs, unsupervised SR methods leverage the internal information of a single LR image. A large body of classical SR methods follows this setting. For example, [38,39] make use of many LR images of the same scene that differ in sub-pixel shifts. If the images are adequate, the point-spread function (PSF) can be estimated to generate the SR image. The SR generations come from a set of blurred LR images, where pixels in a fixed patch follow a given function. However, in [40], the maximum scale factor of these SR methods is proved to be less than 2. To overcome this limitation, a new approach trained with a single image is introduced in [20]. As mentioned in that paper, there are many similar patches of the same size or across different scales in one image. These similar patches build the LR-HR image pairs, according to the single input and its scaled derivatives, for PSF learning. The data pre-processing in our work is similar to their idea; however, we adopt a DL model to learn the mapping between LR and SR images. In addition, Shocher et al. introduced "Zero-Shot" SR (ZSSR) [11], which combines a CNN with the single-image scenario. Firstly, the model estimates the PSF as in traditional methods. Then, a small CNN is trained to learn the non-linear mapping from the LR-HR pairs generated from the single input image. In the paper, they show that ZSSR surpasses other supervised methods in non-ideal conditions, such as old photos, noisy images, and biological data. Another unsupervised DL-SR model is the deep image prior [26], which builds on the assumption that the structure of the network itself can be viewed as prior information; the parameter initialization then serves as the specific prior encoded in the network structure. In fact, this method suffers from over-fitting if training runs beyond a small number of epochs. To our knowledge, the study of unsupervised DL-SR algorithms has hardly received enough attention, and there is still considerable room for improvement.

GAN-related methods. Generative Adversarial Networks (GANs) [41] commonly appear in image reconstruction tasks, such as [3,19,42,43], and are widely used for more realistic generation. The most important GAN-SR method is SRGAN [3], which aims to generate photo-realistic 4× upsampled images. SRGAN combines the content loss (MSE loss), perceptual loss [43], and adversarial loss in its final loss function. It can obtain photo-realistic images, although its performance on the PSNR and SSIM indices is relatively poor. In fact, our experiments also support their controversial discovery: an image with higher PSNR does not necessarily deliver a better perceptual impression. Besides, in [19], FAN (face alignment) is introduced into a well-designed GAN model to yield better facial-landmark SR images. Their experiments demonstrate significant improvements both in quantity and quality. Due to the restriction of facial image size, they use 16 × 16 inputs to produce 64 × 64 output images. However, the FAN model is trained on a facial dataset, so it is only suitable for the facial image SR problem.
Inspired by the progress in GAN-based SR, we combine SRGAN and Super-FAN in our architecture. We also make refined modifications to address the unsupervised training issue.

Methodology

In this section, all details of the proposed UMGSR are presented in three parts: the dataset generation process, the proposed architecture, and the total loss. For training a DL-SR model under unsupervised conditions, building the training data solely from the LR image is the primary challenge of our work. Moreover, we propose a novel architecture to learn the mapping between the generated LR and HR images. We also introduce a new multi-gram loss to obtain more spatial texture details.

The Generation of Training Dataset

How to generate LR-HR image pairs from one LR input I_in is the fundamental task for our unsupervised SR model. Indeed, our work continues the line of unsupervised SR learning in [11,20,44,45]. To generate satisfactory results, we randomly downscale I_in within a specific limited range, exploiting the low visual entropy inside one image. Therefore, we obtain hundreds of I_HR images of different sizes and perform further operations based on these HR images. Most supervised SR methods learn from datasets involving various image contents; the training data act as a pool of small patches. There are some limitations to this setting: (1) the pixel-wise loss leads to over-smooth details; (2) supervised learning depends on specific image pairs and performs poorly when applied to significantly different images, such as old photos, noisy photos, and compressed phone photos; (3) no information about the test image is involved in the training stage, although it is crucial for SR generation. In other words, supervised SR models access a collection of external references without the internal details of the test image. Figure 2 shows the mentioned drawbacks of supervised methods. It can be inferred from the comparison that the handrails in Glasner's SR image [20] look better than their counterpart from VDSR [5]. There are several similar repetitive handrails in the image, and details from different parts or across various scales can be shared owing to their similarity. Training with these internal patches yields better generations than training with external images. Normally, the visual entropy of one image is smaller than that of a set of different images [46]. Moreover, as mentioned in [11,46], lower visual entropy between images leads to better generation. Based on this consideration, learning with one image will produce an equal or better qualitative result than learning with diverse LR-HR image pairs. In our work, we continue this line of research by training with internal information, as well as incorporating more features. From Figure 1, we can see that our unsupervised method achieves results similar to the state-of-the-art SR method in common conditions; for non-ideal images, it performs better. Normally, the objective of the SR task is to generate I_SR images from I_LR inputs, with the information of I_HR acting as the supervision during training. However, in some specific conditions there are no or few I_HR images available for training, so unsupervised learning is a sensible choice. In this circumstance, how to build the HR-LR image pairs from a single image is a fundamental challenge. In our work, we build the dataset from the LR image by a downsampling operation and a data enhancement strategy. This maximized use of internal information contributes to a better quality of I_SR. A minimal sketch of this pair-generation scheme is given below; the concrete multi-scale steps are detailed in the next part.
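To make the pair-generation scheme concrete, the following Python sketch builds (HR, LR) pairs from a single parent image. The numbers follow the settings reported later in the Experiments section (scale range 0.5-1.0, 4× SR factor, 1008 pairs); the function itself is our illustration under those assumptions, not the authors' released code, and uses classic Pillow constants.

```python
import random
from PIL import Image

def make_pairs(parent, n=1008, sr_factor=4, scale_range=(0.5, 1.0)):
    """Build (I_HR,i, I_LR,i) training pairs from a single parent image:
    random down-scales give the HR parents, a further fixed-factor
    down-scale gives the LR children, and flips/rotations augment each pair."""
    ops = [None, Image.FLIP_LEFT_RIGHT, Image.FLIP_TOP_BOTTOM, Image.ROTATE_90]
    pairs = []
    for _ in range(n):
        s = random.uniform(*scale_range)
        w, h = max(1, int(parent.width * s)), max(1, int(parent.height * s))
        hr = parent.resize((w, h), Image.BICUBIC)
        lr = hr.resize((max(1, w // sr_factor), max(1, h // sr_factor)), Image.BICUBIC)
        op = random.choice(ops)
        if op is not None:  # mirror or rotate both images identically
            hr, lr = hr.transpose(op), lr.transpose(op)
        pairs.append((hr, lr))
    return pairs

# Usage: pairs = make_pairs(Image.open("input.png").convert("RGB"))
```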
Based on the generated training dataset, the loss function is defined later in the section "Pixel, Perceptual, and Gram Losses".

Figure 2. The comparison of supervised and unsupervised SR learning under a "non-ideal" downscaling-kernel condition. The unsupervised DL-SR method (ZSSR) first estimates the PSF and learns internal information with a small CNN. The supervised method is EDSR, one of the best available, trained on many image pairs. The comparison shows that the unsupervised method surpasses the supervised method in the repetitive details, which indicates the validity of internal recurrence for SR generation.

To obtain a comprehensive multi-scale dataset, we implement the data augmentation strategy on the input image, which is further down-scaled within a certain range. The process is as follows. Firstly, the input image I acts as the parent of the I_HR images. To use more spatial structure information, we introduce a down-scaling step with several different ratios to produce scaled HR images I_HR,i (i = 1, 2, ..., n). Secondly, we further downscale these I_HR,i with a fixed factor to get their corresponding LR images I_LR,i (i = 1, 2, ..., n). Lastly, all these image pairs are augmented by rotations and mirror reflections in both vertical and horizontal directions. The final dataset contains image pairs with different shapes and contents. More information about pixel-alignment changes comes from the variety of image scales. In summary, all training pairs contain a similar content architecture. Hence, the more pixel-level variation across images of different sizes is involved, the better the result that will be yielded.

Unsupervised Multi-Gram SR Network

Based on ResSR, our model incorporates a two-step global learning architecture inspired by [19]. Specific changes are implemented for unsupervised SR purposes. The architectures of our UMGSR, ResSR, and Super-FAN are shown in Figure 3. There is limited research on unsupervised DL-SR. To our knowledge, ZSSR [11] achieves significant success along the accuracy-pursuing route. They introduced a smaller, simpler image-specific CNN model, exploiting the fact that I_HR,i and I_LR,i pairs derived from the same parent image show less diversity than any supervised training image pairs, and reported that a simple CNN was sufficient to learn the SR mapping. At the same time, the development of supervised methods with ever better PSNR indicates an obvious affinity between network complexity and SR accuracy. For example, EDSR [1] reports that its performance is significantly improved by extending the model size. Therefore, we propose a more complex unsupervised model, UMGSR, shown in Figure 3c.

The total architecture of UMGSR. Generally speaking, the SR network can be divided into several blocks according to the different image scales during training. Taking 4× as an example, there are three different inner sizes: the original input, the 2× up-scaling, and the 4× up-scaling. For simplicity, we define these intermediate blocks as L_s1, L_s2, and L_s4. Several blocks are stacked to learn the scale-specific information in the corresponding stage. ResSR leverages 16 residual blocks as L_s1 for hierarchical convolution computation. The final part contains a 2× scaling block L_s2 and a final 4× scaling block L_s4. In general, the total architecture of ResSR can be denoted as 16-1-1 (i.e., L_s1-L_s2-L_s4).
From the comparison in Figure 3a-c, the architectures of the three methods are 16-1-1, 12-4-2, and 12-4-2, respectively. The first part of the network contains one or two layers to extract features from the original RGB image. To this end, former methods mostly use one convolutional layer. By contrast, we use two convolutional layers to extract more spatial information, as in DBPN [2]. The first layer leverages a 3 × 3 kernel to generate input features for the residual blocks. It is worth pointing out that there are more channels in the first layer for abundant features. To serve as the source of the global residual, a convolutional layer with a 1 × 1 kernel is applied to resize the feature maps to match the output of the branch. For the middle feature-extracting part, the total numbers of residual blocks in all three models are similar; the main difference is the number of scaled feature layers. In fact, as pointed out in Super-FAN, using only a single block at higher resolutions is insufficient for generating sharp details. Based on Super-FAN, we build a similar residual architecture for better generation. In detail, the middle process is separated into two subsections, and each subsection focuses on learning information at a specific 2× scale. Inheriting the features from the first part, layers in the first subsection extract features at the input size. Because more information about the input is involved here, more layers (12 layers) are employed in the first subsection, which aims at extracting more image details and producing sharper results. In contrast to the first subsection, the second one contains three residual blocks for the further 2× scale generation.

Global residual learning. Another important change is a step-by-step global residual learning structure. Inspired by ResNet, VDSR [5] first introduced global residual learning in SR, succeeding in stably training a network with more than 20 CNN layers. Typically, global residual learning transmits information from the input or a low-level layer to a fixed high-level layer, which helps solve the problem of non-convergence. Most subsequent DL-SR models introduce a global learning strategy in their architectures to build deep and complicated SR networks. As shown in Figure 3a, the information from the layer just before the local residual learning and from the last output layer of the local residual learning is combined in the global residual frame. However, a single scaling block for SR image generation is not enough for large scale factors. Therefore, in UMGSR, we arrange global residual learning in each section: two functional residual sections with two global residual learning frames. In fact, the first global residual connection ensures stable training, and the closely adjacent second section can leverage similar information from the input image.

Local residual block architecture. Similar to SRGAN, all local parts are residual blocks, which have been shown to achieve better feature learning. During the training stage, we also adopt the setting of EDSR [1], abandoning all batch normalization layers. In general, the local residual block contains two 3 × 3 convolutional layers, each followed by a ReLU activation layer. The results of ResSR and EDSR elucidate the superior learning ability of this setting. A compact sketch of this architecture follows.
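The following PyTorch sketch reflects our reading of Figure 3c: 12 residual blocks at the input scale and 3 after the first 2× step, each stage wrapped in its own global residual connection, with sub-pixel (pixel-shuffle) up-scaling. Channel widths and the exact placement of the 1 × 1 skip convolution are assumptions where the text is not explicit.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Local block: two 3x3 convolutions, each followed by ReLU, with a skip
    connection and no batch normalization (as in EDSR)."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return x + self.body(x)

class UMGSRNet(nn.Module):
    """Two-step 4x network: 12 blocks at the input scale and 3 blocks after the
    first 2x step, each stage closed by its own global residual connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)   # first feature extractor
        self.skip = nn.Conv2d(channels, channels, 1)       # 1x1 source of the global residual
        self.stage1 = nn.Sequential(*[ResidualBlock(channels) for _ in range(12)])
        self.up1 = nn.Sequential(nn.Conv2d(channels, channels * 4, 3, padding=1),
                                 nn.PixelShuffle(2))       # first 2x sub-pixel step
        self.stage2 = nn.Sequential(*[ResidualBlock(channels) for _ in range(3)])
        self.up2 = nn.Sequential(nn.Conv2d(channels, channels * 4, 3, padding=1),
                                 nn.PixelShuffle(2))       # second 2x sub-pixel step
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)
    def forward(self, x):
        feat = self.head(x)
        out = self.stage1(feat) + self.skip(feat)          # first global residual
        out = self.up1(out)
        out = self.stage2(out) + out                       # second global residual
        out = self.up2(out)
        return self.tail(out)
```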
Pixel, Perceptual, and Gram Losses

In the realm of SR, most DL-SR methods train models with the pixel-wise MSE loss, because there is a direct relationship between the MSE loss and the standard PSNR index, which commonly measures final performance. In [43], a novel perceptual loss is proposed to learn texture details. The new loss calculates the Euclidean distance between two specially chosen layers of a pre-trained VGG19 [47] network. In SRGAN [3], the perceptual loss was first introduced to SR, and it shows great power in the generation of photo-realistic details. Another loss for feature learning is the gram loss [13], which is widely used in the realm of style transfer. The gram loss acts as a global evaluation loss, measuring style consistency. To extract more information about spatial structure, we use a multi-gram loss in this paper. Ultimately, the loss function of UMGSR combines the MSE loss, the perceptual loss, and the multi-gram loss. More details are given in the following.

Pixel-level loss. The pixel-level loss is used to recover high-frequency information in I_SR,i under the supervision of I_HR,i. Traditionally, the l1 or l2 norm loss is widely used in DL-SR models, and they can produce results with satisfactory accuracy. In our UMGSR, the MSE loss is also introduced as the principal pixel-level loss for high accuracy. It is defined as

$$Loss_{MSE} = \frac{1}{s^2 W H} \sum_{x=1}^{sW} \sum_{y=1}^{sH} \left( I^{HR}_{x,y} - G(I^{LR})_{x,y} \right)^2,$$

where W and H are the shape factors of the input, s is the scale factor, and G(I^LR) is the generated SR image. The MSE loss aims to find the solution with the least pixel-level distance among all possible solutions. When measuring accuracy, models achieve the best PSNR and SSIM without using any other loss. However, the resulting I_SR suffers from over-smoothing, which leads to an unrealistic visual impression; a detailed illustration is given in the experimental part. To deal with this problem, we further employ the perceptual loss and the multi-gram loss.

Perceptual loss. To obtain more visually satisfying details, we apply the perceptual loss [43] as in SRGAN [3], which minimizes the Euclidean distance between the corresponding HR and SR activations of a pre-trained VGG19 [47] layer. It aims at visually better results, at the cost of some PSNR. To facilitate understanding, we illustrate the architecture of VGG19 in Figure 4. In SRGAN, only one specified layer of VGG19 is involved in the perceptual loss, i.e., VGG_5,4 (the fourth convolution before the fifth pooling layer). Different layers of the network represent different levels of features: the earlier part learns fine-grained features, and the later part learns larger-coverage information. As a result, we argue that a single layer for the perceptual loss is not enough. To fix this weakness, we propose a modified perceptual loss that mixes perceptual losses from several different layers of VGG19. In our experiments, we use the combination of VGG_2,2, VGG_2,3, VGG_3,4, and VGG_5,4 with different trade-off weights, i.e.,

$$Loss_{perceptual} = \alpha_1 Loss_{VGG_{2,2}} + \alpha_2 Loss_{VGG_{2,3}} + \alpha_3 Loss_{VGG_{3,4}} + \alpha_4 Loss_{VGG_{5,4}}.$$

In fact, this new loss helps us abstract feature information at different feature sizes. Although it is shown in [7] that a perceptual loss on a high-level layer promotes better texture details, we maintain that the training of a DL-SR network is a multi-scale learning process, and that involving more information can potentially lead to better results. In our experiments, this mixed perceptual loss generates smoother visual transitions in the high-frequency details.
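A sketch of the mixed multi-layer perceptual loss, using torchvision's pre-trained VGG19. The mapping of the VGG_i,j names onto indices of vgg19.features, and the illustrative trade-off weights, are our assumptions and should be checked against Figure 4.

```python
import torch
import torchvision

# Assumed indices of relu2_2, relu3_4, and relu5_4 in torchvision's vgg19.features.
VGG_LAYERS = {"vgg2_2": 8, "vgg3_4": 17, "vgg5_4": 35}

vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)  # the loss network stays frozen

def vgg_features(x):
    """Run x through VGG19 once, collecting the activations listed in VGG_LAYERS."""
    feats, out = {}, x
    for idx, layer in enumerate(vgg):
        out = layer(out)
        for name, i in VGG_LAYERS.items():
            if idx == i:
                feats[name] = out
    return feats

def perceptual_loss(sr, hr, weights=None):
    """Mixed perceptual loss: weighted MSE between VGG activations of SR and HR."""
    weights = weights or {"vgg2_2": 0.3, "vgg3_4": 0.7}  # illustrative trade-offs
    fs, fh = vgg_features(sr), vgg_features(hr)
    return sum(w * torch.mean((fs[k] - fh[k]) ** 2) for k, w in weights.items())
```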
Multi-gram loss. In style transfer, the gram matrix measures the correlations among all feature channels within a chosen layer; it supplies global information about the differences between all image features. The gram loss was first introduced to DL in [13], to train a network with the gram loss as a style loss and the MSE loss as a content loss between two images. In SR, I_HR,i and I_SR,i share a similar spatial architecture and similar features, and more spatially invariant information can be extracted from the feature correlations at different sizes. Following [22], which first proposed the multi-gram loss over a Gaussian pyramid of a specific layer, we introduce the multi-gram loss into UMGSR to generate better visual details. Our redesign of the multi-gram loss for the SR purpose is as follows (normalization constants omitted):

$$G^{r,s}_{i,j} = \sum_{k} F^{r,s}_{i,k} F^{r,s}_{j,k},$$
$$E_{r,s} = \sum_{i,j} \left( G^{r,s}_{i,j}(I_{SR}) - G^{r,s}_{i,j}(I_{HR}) \right)^2,$$
$$Loss_{gram} = \sum_{r} \sum_{s} v_r w_s E_{r,s}.$$

In detail, the first function calculates the gram matrix in a specific layer. The indices i, j, r, s refer to different feature maps: i and j index feature maps in the r-th layer and the s-th scale octave of the Gaussian pyramid, and F^{r,s}_{i,k} is the k-th entry of the vectorized i-th feature map. The second function measures the gram loss between the source image and its counterpart. The last function sums over the specially chosen layers from which we extract the gram loss; the values of v and w are chosen from 0 or 1, to keep or abandon the gram loss of a certain layer and scale, respectively. The multi-gram loss captures the overall global texture of the image, while the perceptual loss focuses on local features; each serves as a complement to the other. The experiments show their positive effect on the details of the final SR output. In general, the final loss of UMGSR is constituted by summing all three losses with specific trade-off factors:

$$Loss_{total} = \alpha\, Loss_{MSE} + \beta\, Loss_{perceptual} + \gamma\, Loss_{gram}.$$

Experiments

In this part, we conduct comparison and ablation experiments to evaluate the proposed UMGSR. All of our models are trained on an NVIDIA TITAN XP GPU with a 4× scale factor. There are three parts, as follows.

Setting Details

Because just one image acts as the input of UMGSR, we choose all input images I_in from three different benchmark datasets (the Set14 dataset [48], the DIV2K dataset [49], and the PIRM dataset [15]) to conduct a fair comparison with other supervised and unsupervised methods. Images whose content reflects various complicated real-world conditions are selected as the realistic test cases.

Training setting details. As mentioned in the methodology part, we first apply the data augmentation strategy to form the training dataset from I_in. To obtain I_HR,i (i = 1, 2, ..., n), we randomly scale I_in in the range of 0.5 to 1, followed by rotations of I_HR,i in both horizontal and vertical directions. In addition, we do not apply random cropping, so that more information of I_in is kept. The initial learning rate is set to 0.001 and is halved each time half of the remaining epochs have elapsed. We use Adam (β1 = 0.9, β2 = 0.999) to optimize the objective. The patch size is 30 × 30, and the corresponding HR size is 120 × 120. The I_LR,i (i = 1, 2, ..., n) images are smaller, since they are 4×-8× down-scaled from the I_in images. We set the total number of training epochs to 4000; a sketch of this optimizer setup appears at the end of this subsection.

Ablation setting. In the following, we demonstrate the influence of the proposed changes in UMGSR by ablation analysis. To this end, we first train our model only with the MSE loss. Secondly, we use both the MSE loss and the perceptual loss; here, we also compare a single perceptual loss against the mixed one to evaluate its influence. Finally, we investigate the performance with the total loss, combining the MSE, perceptual, and multi-gram losses. Except for the loss function, all other settings are kept consistent.
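One way to set up the optimizer described in the training settings above; the milestone construction encodes our reading of "halved when half of the remaining epochs have elapsed", which is an interpretation rather than the authors' exact schedule.

```python
import torch

def make_optimizer(model, total_epochs=4000, lr=1e-3):
    """Adam with the reported betas and a step schedule that halves the learning
    rate at epochs T/2, 3T/4, 7T/8, ... for T total epochs (our interpretation)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.999))
    milestones, elapsed, remaining = [], 0, total_epochs
    while remaining > 1:
        elapsed += remaining // 2
        milestones.append(elapsed)
        remaining -= remaining // 2
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=milestones, gamma=0.5)
    return opt, sched

# Usage: opt, sched = make_optimizer(UMGSRNet()); call sched.step() once per epoch.
```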
We compare in parallel the generations of UMGSR (with different loss functions and structures), EDSR (https://github.com/thstkdgus35/EDSR-PyTorch), SRGAN (https://github.com/tensorlayer/srgan), and ZSSR (https://github.com/assafshocher/ZSSR). All generations are obtained with the pre-trained models from the linked repositories. All results are compared in PSNR (Y channel), which measures pixel accuracy, and with another whole-distribution index: the spectral image. Moreover, we further present a detailed comparison of the same patch from all generations.

Structure setting. UMGSR with 15 residual blocks is shown in Figure 3. In detail, the first 12 blocks are used to extract the first 2× features from the input. The remaining three residual blocks inherit information from the previous 2× scaled blocks and achieve the 4× up-scaling. All filter sizes equal 30 × 30, and all residual blocks include 64 channels for feature learning, in contrast to 256 channels in the deconvolutional part. We train the model with the 1008 HR-LR image pairs generated from one image.

Ablation Experiments

Training when β and γ are equal to zero. As in most DL-SR methods, we use the MSE loss as the basic loss function. In this setting, our model is similar to ResSR except for a single difference in the overall architecture. To show the effect of the new structure, we compare the two models with only this structural difference. The final results of the two methods are shown in Figure 5. From the results, we can see that our two-step network produces pictures with a more natural feeling than ResSR. In addition, the spectral comparison in Figure 5 shows that the two-step network generates more accurate features: there is less blurred information in the red rectangular area when the two-step strategy is used.

Training when γ equals zero. In this part, we introduce the perceptual loss into the loss function. Specifically, layers VGG_2,2 and VGG_4,3 of VGG19 are used in the final loss function by fixing α1 = 0.3 and α3 = 0.7 in the mixed perceptual loss. Here, to clearly distinguish the effect of the perceptual loss, we display the comparison between training with only the perceptual loss and with the combination of MSE and perceptual losses in Figure 6. From the detailed contrast, we can tell that with the perceptual loss alone, many features in local blocks are missing. In our opinion, this phenomenon arises in the upsampling stage, where the input must be enlarged by bicubic interpolation to the required input size of the VGG network, i.e., 224 × 224, whereas the I_SR and I_HR images in UMGSR are 120 × 120. As a result, a lot of unfitting information appears in the up-scaled images, and this local mismatch further results in poor generations.

Training with all loss settings. In this part, we use the loss incorporating the MSE loss, the perceptual loss, and the new multi-gram loss. With the multi-gram loss, the network learns feature maps in both global and local aspects. Because the multi-gram loss measures spatial style differences, it leads to visually better results both in details and in shapes. Regarding the hyperparameters, α = 1 and β = γ = 2 × 10^-6; this setting was proved useful by SRGAN. In general, the final loss function is

$$Loss_{total} = Loss_{MSE} + 2 \times 10^{-6}\, Loss_{VGG_{5,4}} + 2 \times 10^{-6}\, Loss_{gram}.$$

The multi-gram loss is somewhat similar to the perceptual loss: both compute losses from inner layers of a pre-trained VGG network, with the final SR image and its corresponding HR image as the inputs. For the multi-gram loss, VGG_2,1 and VGG_3,2 are chosen as the specified loss layers. All chosen layers are down-scaled to five pyramid sizes for spatial adaptation, so the chosen layers must be large enough. Five pyramid-like sub-layer structures are then used to calculate the gram losses, as mentioned in Section 3.3. Similar to the perceptual loss, extra noise appears in the SR results if the model is trained only with the multi-gram loss. A sketch of the pyramid-based multi-gram computation follows.
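A sketch of the pyramid-based multi-gram loss on one pair of feature maps, with the paper's trade-off factors noted for the final objective. Average pooling stands in for the Gaussian pyramid, and the `keep` mask plays the role of the 0/1 selectors v and w; both simplifications are our assumptions.

```python
import torch
import torch.nn.functional as F

def gram(feat):
    """Gram matrix of a (B, C, H, W) feature map, normalized by its size."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def multi_gram_loss(feat_sr, feat_hr, octaves=5, keep=(1, 1, 1, 1, 1)):
    """Sum of gram losses over a 5-level pyramid of one chosen VGG layer
    (e.g., VGG_2,1 or VGG_3,2). Average pooling approximates the Gaussian
    pyramid; `keep` selects which octaves contribute."""
    loss = feat_sr.new_zeros(())
    for s in range(octaves):
        if keep[s]:
            loss = loss + torch.mean((gram(feat_sr) - gram(feat_hr)) ** 2)
        if min(feat_sr.shape[-2:]) > 1:  # stop once the map cannot shrink further
            feat_sr = F.avg_pool2d(feat_sr, 2)
            feat_hr = F.avg_pool2d(feat_hr, 2)
    return loss

# Final objective with the reported trade-offs (alpha = 1, beta = gamma = 2e-6):
# loss_total = mse_loss + 2e-6 * perceptual_term + 2e-6 * multi_gram_term
```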
The final PSNR values of the images are summarized in Table 1, and the visual comparison is shown in Figure 7. With the introduction of the multi-gram loss, more pleasant features appear in the generations, which can be clearly observed in Figure 1. Furthermore, the MSE curve in Figure 8 shows the advantage of the final loss (the combination of MSE, perceptual, and multi-gram losses).

Discussion

In this paper, we compare the proposed UMGSR with other state-of-the-art supervised and unsupervised methods using both the traditional PSNR value and the power-spectrum image contrast. Given the unsupervised setting of UMGSR, more analysis is needed to evaluate its performance properly. On the other hand, the latest research in [50] suggests that there is a trade-off between distortion and perception. Our research pays much attention to visually satisfactory generation, which hurts the PSNR to some extent. Hence, traditional accuracy measurements, such as MSE, PSNR, and SSIM [51], cannot properly capture the advantage of our method. We exhibit the SR results of five different methods, EDSR, ZSSR, SRGAN, UMGSR (MSE), and UMGSR (total loss), together with the HR images in Figure 9. The PSNR scores are shown in Table 1. In detail, image 1 is from DIV2K [49] and acts as a training image of EDSR. According to the PSNR values, EDSR achieves the best result. On the other hand, from Figure 7, we can infer that UMGSR produces SR images with more carefully carved details, leading to a better visual impression than EDSR. This conclusion is in keeping with the viewpoint of SRGAN: a higher PSNR does not guarantee a better perceptual result. The phenomenon is fairly obvious in the comparison between UMGSR with the MSE loss and with the total loss. In unsupervised SR learning, the PSNR of ZSSR is much higher than ours, while its SR images show worse visual details. To highlight the differences among these methods, we compare the SR images by their 3D power spectra [52] in Figure 9. The spectrum distribution clearly shows the distribution of the whole image, and it distinctly shows that our method is much better than ZSSR and EDSR, which generate obvious faults. We attribute this to the mixed loss, which gives our model a better texture generation ability.

Figure 7. HR, EDSR, ZSSR, UMGSR with MSE loss, and UMGSR with total loss. A smooth spectral edge reflects richer color details, and a sharp fault means the lack of some color range. Even though an abundant power spectrum does not imply accuracy, it does indicate more vivid details in the image. As a result, our model can generate more dramatic features than accuracy-pursuing models (EDSR, ZSSR).

To better evaluate these models, we show the generations of the same chosen patch in Figure 7. These results show that traditional accuracy-pursuing SR methods generate rough details but better shape lines, while UMGSR (total loss) achieves satisfactory performance in image details, even better than the supervised SRGAN. This is also verified in the 3D power-spectrum image, where our result is quite similar to the HR. In general, high-frequency information (like shape lines) is better handled by accuracy-driven methods, such as EDSR. Meanwhile, SR images generated by these methods hardly provide a pleasant visual impression; their overall appearance is like drawn or cartoon images. For example, the Roma desert scene (the second test image, 3rd and 4th rows in Figure 7) generated by EDSR shows sharper edges but an unnatural effect. Models pursuing visual quality (like SRGAN and UMGSR) generate more photo-realistic features, accompanied by pixel-level inaccuracies. For example, SRGAN introduces rough details in local parts that are far from the ground truth, especially in large flat regions. In our opinion, this is a common weakness of GAN-related SR methods, and our two-step learning partly overcomes it. Accordingly, the SR images of UMGSR show better shapes than SRGAN along with a better visual impression than EDSR.
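For reproducibility, a small NumPy sketch of the two quantitative views used in this discussion: PSNR and a radially averaged power spectrum (a simple 1D stand-in for the 3D power-spectrum surfaces compared in Figure 9).

```python
import numpy as np

def power_spectrum(img):
    """Radially averaged log power spectrum of a grayscale image in [0, 1]."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.log1p(np.abs(f) ** 2)
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2).astype(int)
    # average power over rings of equal spatial frequency
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    return radial[: min(h, w) // 2]

def psnr(a, b):
    """Peak signal-to-noise ratio for two images scaled to [0, 1]."""
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(1.0 / mse)
```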
Conclusions and Future Work

In this paper, we propose a new unsupervised SR method, UMGSR, for scenarios where no supervised HR image is available. Compared with former supervised and unsupervised SR methods, UMGSR mainly introduces a novel architecture and a new multi-gram loss. With these modifications, UMGSR can address the SR problem with a single input under various conditions. Experimental results show that UMGSR generates better texture details than other unsupervised methods. In future work, we will pay more attention to combining our model with GANs on supervised SR problems.

Author Contributions: Project administration, funding acquisition, guidance for the research, and revision of the paper, Y.S.; writing (original draft preparation), data curation, software, methodology, writing (review and editing), and supervision, B.L.; supervision, writing (review and editing), and funding acquisition, B.W.; guidance for the research, conceptualization, software, validation, and supervision, Z.Q.; visualization, supervision, J.L.

Conflicts of Interest: The authors declare no conflict of interest.
Targeting of >1.5 Mb of Human DNA into the Mouse X Chromosome Reveals Presence of cis-Acting Regulators of Epigenetic Silencing

Regulatory sequences can influence the expression of flanking genes over long distances, and X chromosome inactivation is a classic example of cis-acting epigenetic gene regulation. Knock-ins directed to the Mus musculus Hprt locus offer a unique opportunity to analyze the spread of silencing into different human DNA sequences in the identical genomic environment. X chromosome inactivation of four knock-in constructs, including bacterial artificial chromosome (BAC) integrations of over 195 kb, was demonstrated by both the lack of expression from the inactive X chromosome in females with nonrandom X chromosome inactivation and promoter DNA methylation of the human transgene in females. We further utilized promoter DNA methylation to assess the inactivation status of 74 human reporter constructs comprising >1.5 Mb of DNA. Of the 47 genes examined, only the PHB gene showed female DNA hypomethylation approaching the level seen in males, and escape from X chromosome inactivation was verified by demonstration of expression from the inactive X chromosome. Integration of PHB resulted in lower DNA methylation of the flanking HPRT promoter in females, suggesting the action of a dominant cis-acting escape element. Female-specific DNA hypermethylation of CpG islands not associated with promoters implies a widespread imposition of DNA methylation during X chromosome inactivation; yet transgenes demonstrated differential capacities to accumulate DNA methylation when integrated into the identical location on the inactive X chromosome, suggesting additional cis-acting sequence effects. As only one of the human transgenes analyzed escaped X chromosome inactivation, we conclude that elements permitting ongoing expression from the inactive X are rare in the human genome.

X chromosome inactivation (XCI) transcriptionally silences an X chromosome early in mammalian development, thereby serving as a means of dosage compensation between the sexes (reviewed in Migeon 2011). XCI is a classic paradigm of epigenetic silencing, but while many of the molecular players in the inactivation process have been elucidated (Wutz 2011), less is known about the cis-acting DNA elements involved in the spread of silencing. A substantial number of X-linked genes continue to be expressed from the inactive X (Xi) in humans, and the clustered nature of these escapees in humans suggests that escape from XCI may be regulated in domains (Carrel and Willard 2005). Intriguingly, the number of escapees in mouse is considerably more limited, with only 4% of genes escaping XCI (20 transcripts) in mice compared to 15% (94 transcripts) of analyzed human genes (Carrel and Willard 2005; Lopes et al. 2010; Yang et al. 2010; Splinter et al. 2011). The ongoing escape from inactivation of the escapee Kdm5c when integrated at four different locations on the mouse X chromosome, while its flanking genes maintain their inactive state, provides strong evidence for intrinsic regulatory sequences, which we refer to as escape elements, that allow genes to escape from XCI (Li and Carrel 2008).
Gartler and Riggs (1983) proposed the existence of waystations or booster elements that promote the spread of inactivation, based on the limited inactivation of autosomal regions of X;autosome translocations, and Lyon proposed that long interspersed elements (LINEs), which are enriched on the X chromosome, could be such waystations (Waterston et al. 2002; Ross et al. 2005; Lyon 2006). Computational studies comparing the genomic neighborhoods of genes with different inactivation status have supported that inactivated and escape genes have different genomic environments with respect to the content of repetitive sequences (Bailey et al. 2000; Carrel et al. 2006; Nguyen et al. 2011). Several mouse escapees also escape from XCI in humans, and the difference in inactivation of flanking genes has been attributed to the presence or loss of the CTCF boundary element (Filippova et al. 2005; Goto and Kimura 2009). A direct comparison of the inactivation status of sequences integrated into the same genomic location would permit a direct analysis of the impact of cis-acting sequences on XCI and potentially identify sequences containing escape elements, waystations, and/or boundary elements. Most transgenes on the X chromosome are subject to XCI, with the only transgenes shown to consistently escape XCI being a chicken transferrin transgene and the mouse Kdm5c bacterial artificial chromosome (BAC) transgene (Goldman et al. 1987, 1998; Li and Carrel 2008), although a human collagen and an NF-κB-dependent EGFP reporter gene may also at least partially escape from XCI (Wu et al. 1992; Magness et al. 2004). Lack of knowledge of the integration site for the chicken transferrin transgene confounds the identification of cis-acting regulatory elements. Targeted single-copy integrations into a single location would provide a consistent resource to examine the spread of XCI into sequences not normally X linked, and Hprt has been described as a permissive locus with relatively minimal effect on transgene expression (Bronson et al. 1996; Cvetkovic et al. 2000). To assess whether a gene is subject to XCI, many studies have examined allelic expression in females with nonrandom inactivation (e.g., Carrel and Willard 2005; Yang et al. 2010); however, an indirect means of assessing XCI is to study DNA methylation, which can be readily examined on banked DNA samples. CpG island promoters of X-linked genes that are subject to XCI are typically DNA hypermethylated on the Xi, while CpG island promoters are unmethylated on the active X (Xa), as on the autosomes (Wu et al. 1992; Chong et al. 2002; Matarazzo et al. 2002; Cotton et al. 2009; Yasukochi et al. 2010). Genes and transgenes that escape XCI demonstrate low levels of promoter DNA methylation on both the Xa and the Xi (Goodfellow et al. 1988; Goldman et al. 1998), resulting in overall low DNA methylation levels in both males and females, allowing DNA methylation to be used to assess the XCI status of genes (Cotton et al. 2009; Yasukochi et al. 2010). Integration of transgenes at a single locus on the X chromosome allows for detailed examination of cis-acting DNA elements, and we herein report the analysis of 74 human autosomal and X-linked transgenes knocked into the mouse Hprt locus that were generated from the Pleiades Promoter Project (Yang et al. 2009; Portales-Casamar et al. 2010; J.-F. Schmouth and E. M.
Simpson, unpublished data), including eight human autosomal BACs of >195 kb integrated using the HuGX method [high-throughput human genes on the X chromosome (Schmouth et al. 2012)]. Overall there was a significant difference in DNA methylation between males and females, and most human promoters integrated into the X chromosome showed DNA hypermethylation in females, suggesting they were subject to XCI. Silencing from the Xi was verified by expression analysis of four BAC-derived transgenes present on the Xi in females that had nonrandom XCI. We identified one transgene that escaped from XCI and demonstrated that different constructs had differential capacity for DNA methylation when on the Xi, suggesting the presence of additional cis-acting epigenetic modulators.

Pleiades Promoter Project constructs

The Pleiades Promoter Project (Yang et al. 2009; Portales-Casamar et al. 2010) was an international collaborative effort to develop various human promoters driving specific expression patterns in the mouse brain, eye, and spinal cord. Most of the promoters originated from human autosomal genes, with only two X-linked promoters, DCX and MAOA, being assessed. All Pleiades strains were made using homologous recombination at the Hprt^b-m3 deletion locus on the mouse X chromosome; integration of the Pleiades constructs generated a chimeric HPRT/Hprt locus containing the human HPRT promoter that rescued the deletion (Bronson et al. 1996). MiniPromoters (MiniPs) were ≤4 kb in size and were composed of different combinations of small putative regulatory elements (http://pleiades.org/). In contrast, MaxiPromoters (MaxiPs) were human BAC-derived constructs that contained from 100 to 195 kb of human genomic DNA, with a reporter inserted at the start codon of the gene of interest (J.-F. Schmouth and E. M. Simpson, unpublished data). For the MiniP constructs, the reporter lacZ or EGFP (or EGFP/cre) is 200 bp or 50 bp downstream of the MiniP, respectively. Thirty-seven of 57 target genes contained a promoter CpG island, which was generally truncated in MiniPs compared to the endogenous islands (Supporting Information, Table S1). Approval for the generation and breeding of mice carrying the Pleiades constructs was obtained from the University of British Columbia Committee on Animal Care.

Generation of mouse strains

The floxed Xist strain 129-Xist^tm2Jae [stock no. 029172-UNC (Csankovszki et al. 1999)] was obtained heterozygous from the Mutant Mouse Regional Resource Center, and the cre-deleter strain FVB/N-Tg(ACTB-cre)2Mrt [stock no. 003376 (Lewandoski et al. 1997)] was obtained heterozygous from The Jackson Laboratory (JAX). Both strains were maintained by backcrossing to strain 129S1/SvImJ [stock no. 002448 (Simpson et al. 1997)] obtained from JAX. The floxed Xist strain was crossed to the cre-deleter strain at backcross generations JAX-plus N2 and N7, respectively, to generate females carrying the Xist deletion (129-X^Xist1lox/X). Females with the Xist deletion were then crossed to males with the Pleiades construct integrated at the Hprt locus (B6-X^MaxiP/Y) to generate 129B6F1-X^Xist1lox/X^MaxiP females. This Xist knockout has been shown to render the X chromosome carrying it unable to inactivate (Gribnau et al. 2005), thus resulting in the MaxiP knock-in X chromosome, with an intact Xist, becoming the Xi. Complete nonrandom XCI was verified by examining the relative expression levels of the B6 and 129 alleles at a single-nucleotide polymorphism of the Fln locus (data not shown).
Expression analysis

Expression of constructs was assessed by staining of lacZ with X-gal as previously described (Portales-Casamar et al. 2010). lacZ staining was performed on a limited number of ear notch samples (Table S1 and Table S2) and on the brains of female mice carrying the Xist deletion. The images of lacZ staining of the mouse brain sections (Figure 1) were adjusted for brightness and contrast using Photoshop, but all images for the same construct (test and control samples) received the same adjustments. For analysis of transcription, 2 µg of RNA extracted from tissues was converted to cDNA with standard reverse transcription conditions, using M-MLV (Invitrogen) at 42° for 2 hr followed by a 5-min incubation at 95°. Quantitative PCR (qPCR) was used to determine the relative transcription levels of PHB, HPRT, and the intergenic region between PHB and HPRT compared to Pgk1 in mice carrying the Ple133 construct (NGFR BAC), using a StepOnePlus Real-Time PCR System (Applied Biosystems, Darmstadt, Germany), Maxima Hot Start Taq (Fermentas), and EvaGreen dye (Biotium). Conditions for qPCR were as follows: 95° for 5 min; followed by 40 cycles of 95° for 15 sec, 60° for 30 sec, and 72° for 1 min; and a melt curve stage of 95° for 15 sec, 60° for 1 min, and an increase of 0.3° until 95°. Serial dilutions of genomic DNA from an NGFR female (without the Xist deletion) were used as the standards to which each sample cDNA was compared, to generate relative quantities of the PHB, HPRT, and Pgk1 transcripts and of the intergenic transcription between PHB and HPRT. Expression levels were normalized to the Pgk1 expression level, and quantifications were done in triplicate, with any outlier excluded from the analysis. Primer sequences and distances from the assay to the transcription start site are found in Table S3.

DNA methylation analysis

Using the EZ DNA Methylation-Gold Kit or the EZ-96 DNA Methylation-Gold Kit (Zymo Research), 500 ng of DNA obtained from the lysed ear notches and a limited number of liver and brain samples were bisulfite converted, following the manufacturer's instructions. Internal bisulfite conversion controls were included in the pyrosequencing assays to monitor complete conversion of DNA. Each 25-µl pyrosequencing PCR was performed with 1× PCR buffer (QIAGEN, Valencia, CA), 0.2 mM dNTPs, 0.625 unit Hot Start Taq DNA polymerase (QIAGEN), 0.25 µM forward primer, 0.25 µM reverse primer, and 12-35 ng bisulfite-converted DNA. Assays for CCKBR, ICMT, NOV, and NR2E1 were performed with 0.5 µM forward and reverse primers. Conditions for PCR were 95° for 15 min; 50 cycles of 94° for 30 sec, annealing temperature for 30 sec (see Table S3), and 72° for 1 min; and finally 72° for 10 min. One forward or reverse primer was biotinylated, depending on which strand contained the target region to be sequenced, to subsequently isolate the strand of interest for pyrosequencing. Template preparation for pyrosequencing was done according to the manufacturer's protocol, using 10-15 µl of PCR products. CDT tips were used to dispense the nucleotides for pyrosequencing, using the PyroMark MD machine (QIAGEN). Variability in pyrosequencing results within a sample was observed for some DNA, which we attributed to degradation of ear notch DNA that was stored at 4° for up to 3 years. All promoter assays were replicated at least twice and the average is presented. If the standard deviation of a sample for a particular assay was large enough to be considered an outlier using the modified Z-score method (see below), the data point was not included in the analyses. The HPRT, Phf6, and lacZ assays were replicated on sufficient samples that we were confident of their reliability (average standard deviations of 5%, 3%, and 5%, respectively), and therefore for these three assays not all samples were replicated. Each human promoter assay was tested in at least one mouse sample without the target transgene to ensure the specificity of the human primers.
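The standard-curve quantification described in the expression analysis above can be made concrete with a short Python sketch. The Ct numbers and dilution series below are invented purely for illustration; only the procedure (fit a curve per target from the serial dilutions, interpolate each sample, normalize to Pgk1) follows the text.

```python
import numpy as np

def standard_curve(log_dilutions, cts):
    """Fit Ct = slope * log10(quantity) + intercept from serial-dilution standards."""
    slope, intercept = np.polyfit(log_dilutions, cts, 1)
    return slope, intercept

def relative_quantity(ct, slope, intercept):
    """Interpolate a sample's relative quantity from its Ct via the standard curve."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical values for illustration only.
dil = np.log10([1.0, 0.2, 0.04, 0.008])          # serial dilutions of genomic DNA
s_phb, i_phb = standard_curve(dil, [24.1, 26.5, 28.8, 31.2])
s_pgk, i_pgk = standard_curve(dil, [20.3, 22.6, 25.0, 27.4])
phb = relative_quantity(27.0, s_phb, i_phb)      # a sample's PHB Ct
pgk = relative_quantity(22.0, s_pgk, i_pgk)      # the same sample's Pgk1 Ct
print("PHB level normalized to Pgk1:", phb / pgk)
```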
The University of California, Santa Cruz (UCSC) definition and designation of a CpG island (GC content of at least 50%, length >200 bp, observed CpG/expected CpG ratio >0.6) were used (Gardiner-Garden and Frommer 1987). The promoter of PHB was classified as a CpG island with intermediate CpG density, which was defined as having a GC content >50%, an observed CpG/expected CpG >0.48, a length of at least 200 bp, and no overlap with the UCSC CpG islands (Weber et al. 2007); the PHB promoter CpG island was located using the CpGIE program (Wang and Leung 2004). Primers were designed using PSQ Assay Design software (QIAGEN). At least three CpGs were analyzed for each CpG island examined, and the distance of the analyzed CpGs from the transcription start sites can be found in Table S3. Promoter identification of PITX2 (Ple158) was based on the ENCODE chromatin states track downloaded from the UCSC genome browser (Ernst et al. 2011). RepeatMasker (http://www.repeatmasker.org, Institute for Systems Biology) and Galaxy (Giardine et al. 2005; Blankenberg et al. 2010; Goecks et al. 2010) were used to determine the base pair composition of LINEs and Alu elements in the MaxiP constructs.

Statistical analysis

Statistical analyses were performed using GraphPad Prism 5.02. An α-value of 0.05 was used for testing significance. A Mann-Whitney t-test was used to test for significant differences in DNA methylation levels between male and female mice. Mouse strains with modified Z-scores >3.5 in absolute value were considered outliers. Spearman's correlation was performed to examine the relationship between DNA methylation levels of neighboring genes.
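A minimal sketch of two computations named in the Methods above: the UCSC-style CpG island criteria and the modified (median/MAD-based) Z-score with the 3.5 cutoff. The 0.6745 scaling constant is the standard choice for this statistic; the helper functions themselves are illustrative.

```python
import statistics

def is_cpg_island(seq):
    """UCSC criteria used above: GC content >= 50%, length > 200 bp,
    observed/expected CpG ratio > 0.6 (Gardiner-Garden and Frommer 1987)."""
    seq = seq.upper()
    n = len(seq)
    if n <= 200:
        return False
    gc = seq.count("G") + seq.count("C")
    obs = seq.count("CG")
    exp = seq.count("C") * seq.count("G") / n
    return gc / n >= 0.5 and exp > 0 and obs / exp > 0.6

def modified_z_scores(values):
    """Median/MAD-based Z-scores; |score| > 3.5 flags an outlier,
    matching the cutoff in the statistical analysis above."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return [0.6745 * (v - med) / mad if mad else 0.0 for v in values]
```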
DNA methylation reflects XCI of Pleiades constructs

The Pleiades Promoter Project generated reporter constructs with human promoters and targeted them to the Hprt locus on the mouse X chromosome by homologous recombination (Yang et al. 2009; Portales-Casamar et al. 2010; J.-F. Schmouth and E. M. Simpson, unpublished data). Integration of the Pleiades promoter constructs created a chimeric HPRT/Hprt locus that consisted of the human HPRT promoter and exon 1 and mouse Hprt exons 2-9 (Figure 1A). While MiniP constructs contained a human promoter of ≤4 kb in size that drives a reporter (lacZ, EGFP, or EGFP/cre), MaxiP constructs were derived from human BACs of up to 195 kb with the reporter (lacZ or EGFP) inserted at the translation start codon of the gene of interest on the human BAC. To predict whether the constructs were subject to the cis-regulation of XCI now that they were X linked, we generated female mice carrying a deletion at the Xist gene [Xist^1lox (Csankovszki et al. 1999)] on the X chromosome without the knock-in, thereby causing the Pleiades knock-in to always be on the Xi. The MaxiP constructs AMOTL1 (Ple5), MAOA (Ple127), NR2E1 (Ple142), and NR2F2 (Ple143) were not expressed in the brains of females heterozygous for the Xist deletion, but were expressed in various parts of the brain in females without the Xist deletion (Figure 1B), indicating that these MaxiP constructs are expressed only when present on the Xa and are thus subject to XCI. DNA methylation analysis on DNA from ear notch samples of hemizygous male and heterozygous female mice transgenic for AMOTL1 and NR2E1 showed that CpG island promoters on the BACs were significantly DNA hypermethylated in females compared to males of the same strain (P < 0.05; Figure 1C), in agreement with the XCI statuses assessed by lacZ expression in the females with nonrandom XCI. DNA methylation was also examined in brain and/or liver samples for a few constructs, including the NR2E1 BAC, and levels of DNA methylation similar to those in ear notches were observed (Table S2). We conclude that DNA methylation at CpG island promoters is a reliable predictor of XCI status for transgenes at Hprt. DNA methylation was therefore used to determine the XCI status of the remaining MaxiP constructs (NOV, NGFR, PITX2, and LCT). PITX2 (CpG island 46 designation on UCSC) showed less female DNA methylation than the other MaxiP constructs, which could reflect either the presence or the absence of cis-acting regulators of XCI or a tendency to be preferentially located on the Xa. To examine the latter possibility we looked at DNA methylation of the flanking Phf6 and HPRT promoters.

DNA methylation of flanking genes reflects both skewing of XCI and differential capacity for DNA methylation on the Xi

The Pleiades construct and the human HPRT promoter are located on the same chromosome; therefore, if substantial skewing of XCI were present, their DNA methylation levels would be correlated, reflecting the proportion of cells in which they are both on the Xi. In contrast, Phf6 DNA methylation should not be affected by skewing, since it is present on both X chromosomes. The HPRT promoter CpG island was truncated in the chimeric gene, but the chimeric gene complemented the Hprt deletion and provided resistance to HAT selection. Both the HPRT and the Phf6 promoters demonstrated significant DNA hypermethylation in females compared to males (HPRT, female average 38%, male average 5%, P < 0.0001; Phf6, female average 34%, male average 5%, P < 0.0001), suggesting that both neighboring genes were generally subject to XCI. Compared to Phf6 DNA methylation, HPRT showed higher variability in promoter DNA methylation levels between female mice (standard deviations: 10% for HPRT, 4% for Phf6), consistent with variability in levels of skewing of XCI in the samples analyzed. A correlation between the DNA methylation levels at the human promoter and at HPRT, but not with Phf6, was observed (Figure 2), supporting the presence of skewing of XCI in the analyzed ear notch samples. Intriguingly, different MaxiP constructs showed different slopes in the correlation of their DNA methylation level with HPRT (Figure 2B), suggesting that Pleiades promoters have different capacities for DNA methylation when located at the same site on the Xi. To confirm that different constructs had different levels of DNA methylation on the Xi, we analyzed the promoter and HPRT DNA methylation levels in females homozygous for the knock-in and in females heterozygous for the Xist deletion, who carry the knock-in solely on the Xi.
The AMOTL1, NOV, and NR2E1 MaxiP constructs showed similar levels of HPRT DNA methylation on the Xi (70%) but slightly different levels of promoter DNA methylation on the Xi (Figure 2B). DNA methylation levels at PITX2 and NGFR were strikingly different from those at the other MaxiP constructs. PITX2 (CpG island 46) showed a much lower range of DNA methylation when compared to the DNA methylation of AMOTL1, NOV, and NR2E1, and DNA hypermethylation of HPRT indicates that the low level of female DNA methylation at PITX2 is not attributable to skewing of XCI but to its intrinsic resistance to accumulating DNA methylation (Figure 2B). In contrast, the NGFR MaxiP construct showed a lower HPRT DNA methylation range (13-33%) compared to the other MaxiP constructs, suggesting that the capacity of HPRT to accumulate DNA methylation is altered in this construct. We also designed a DNA methylation assay 720 bp downstream of the start codon in the lacZ reporter, which showed similar DNA methylation levels on the Xi for all constructs except the NGFR BAC (Figure S1). The NGFR BAC showed lower levels of HPRT and lacZ DNA methylation on the Xi than expected (HPRT average 41%, outlier; lacZ average 56%), suggesting the region is subject to substantial influence from the genomic context. Therefore, PITX2 showed the largest decrease in capacity to accumulate promoter DNA methylation, and the NGFR BAC showed an impact on HPRT DNA methylation. To understand the cis-modulatory effects of the integrated DNA, we explored the PITX2 and NGFR BACs in more detail.

Figure 1. BAC-derived MaxiP constructs of 120-195 kb are subject to XCI and show DNA hypermethylation of CpG island promoters in females compared to males. (A) Experimental system in which human promoters driving a reporter (Pleiades constructs) were integrated at the Hprt locus on the mouse X chromosome, using homologous recombination. A chimeric HPRT/Hprt locus was generated, consisting of the human HPRT promoter and first exon and the mouse counterpart for the rest of the gene. The majority of the female mice examined were heterozygous for the human knock-in, so the wild-type (WT) mouse locus is shown below the knock-in chromosome. The sizes of the Pleiades construct and the internal exons are not shown to scale. (B) Generation of mice carrying the Pleiades knock-in on the Xi. Top, the breeding scheme crossing Xist^1lox females with males carrying the Pleiades knock-in, to generate females carrying the Xist deletion and the knock-in on the Xi (Xi), and females with wild-type Xist and the knock-in, which could be on the Xa or Xi (Xa/Xi). Bottom, the lacZ staining in the brains of females with AMOTL1, NR2E1, MAOA, and NR2F2. Regions with lacZ staining in the brain sections of mice carrying the NR2E1 transgene are indicated by the white arrowheads. Images labeled with the same tEMS number were obtained from the same mouse. (C) DNA hypermethylation of the MaxiP constructs predicts XCI status. Each construct is denoted with a Pleiades (Ple) number, along with the human gene from which the construct originates. Constructs with lacZ and EGFP as the reporter are colored in blue and green, respectively. DNA methylation of PITX2 was examined at CpG island 46 (UCSC).
The DNA methylation shown for the LCT BAC (Ple126) is the promoter DNA methylation at the MCM6 gene present on the same MaxiP, not at the promoter of the LCT gene itself. Significance was tested using a Mann-Whitney U-test. n.t., not tested due to the limited sample numbers. Circles, DNA methylation of the individual sample; bar in the center of the error bars, average DNA methylation for the strain; error bars, ±1 standard deviation between mice for the strain; shaded regions, 2 standard deviations from the average DNA methylation level.

Although the first exon does not contain a CpG island, it still showed significantly higher DNA methylation in females than in males (P = 0.0084; Figure 3). In fact, all the locations tested in PITX2 generally showed DNA hypermethylation in females compared to males, including the CpG island at the alternative promoter and the intergenic CpG islands. Although our DNA methylation assays are located in the gene body of PITX2, chromatin modifications associated with promoters were found to overlap the assays in intron 2 and CpG islands 46 and 196 (Figure 3) (Ernst et al. 2011), suggesting PITX2 has additional internal promoters. Intergenic CpG islands 59 and 29 show no or very weak chromatin modifications associated with promoters or enhancers, yet both CpG islands showed female-specific DNA hypermethylation (Figure 3). Similarly, analysis of an intergenic and an intragenic CpG island on the NR2E1 BAC demonstrated female-specific DNA hypermethylation (data not shown). Interestingly, lacZ showed a clear difference in DNA methylation levels between males and females (Figure 3), in agreement with the DNA methylation status of multiple sites in PITX2. Thus, while CpG islands 18 and 46 showed lower female DNA methylation (average 14%), because other locations in the gene consistently showed DNA hypermethylation in females at levels consistent with XCI, we conclude that PITX2 is likely subject to XCI based on DNA methylation. Consistent with published data (Straussman et al. 2009), our assessment of male and female blood and lymphoblast lines suggests that in humans the promoter CpG island 196 is always unmethylated, while other sites show variable DNA methylation that is not sex specific (data not shown), with the exception of CpG island 29, which appears to be DNA methylated in all tissues except sperm.

A truncated gene on the NGFR BAC construct partially escapes from XCI

A distinguishing characteristic of the NGFR construct compared with the other MaxiP constructs was the presence of a truncated gene at the end of the BAC that is adjacent to the HPRT/Hprt locus (Figure 4A). The PHB gene is truncated within the 3′-UTR, 200 bp from the end of the gene, and we hypothesized that PHB escaped from XCI and that run-on transcription from PHB through the HPRT/Hprt locus positioned 2.5 kb downstream could be the cause of the reduced HPRT DNA methylation on the Xi. We therefore examined the transcription levels of PHB and the intergenic region between PHB and HPRT/Hprt in males and in females with and without the Xist deletion (Figure 4B).
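Before the expression results, a minimal sketch of one common way to express such qPCR measurements "normalized to Pgk1" and relative to males: a 2^-dCt model, which assumes roughly 100% primer efficiency. The study does not state its exact quantification model, and the Ct values below are hypothetical.

def rel_expression(ct_target, ct_reference):
    """Target expression normalized to a reference gene (2^-dCt model)."""
    return 2.0 ** -(ct_target - ct_reference)

# Hypothetical cycle-threshold values.
male_phb = rel_expression(ct_target=26.0, ct_reference=20.0)   # PHB from the Xa
xi_phb = rel_expression(ct_target=28.1, ct_reference=20.3)     # PHB from the Xi only

print(f"PHB from the Xi as % of the male (Xa) level: {100 * xi_phb / male_phb:.0f}%")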
By qPCR, we showed that PHB was not a highly expressed gene relative to Pgk1, but it was expressed from the Xi in females heterozygous for the Xist deletion at levels up to 30% of the level of expression observed in males (Figure 4B), while females with random XCI showed a level of PHB expression close to 60% of that in males. Variability was observed in PHB expression levels from the Xi between females, perhaps reflecting the variable escape from XCI previously described for X-linked genes (Carrel and Willard 2005). However, while the expression level 1.4 kb downstream of the truncated PHB gene in the intergenic region was essentially the same as at the 3′-UTR of PHB (Figure 4B), this transcription had ceased by 250 bp upstream of HPRT/Hprt, indicating that there is no substantial run-on transcription through the HPRT/Hprt locus. In addition, analysis of HPRT expression showed that HPRT/Hprt remained inactivated on the Xi despite its lower level of DNA methylation and proximity to a gene escaping from XCI. In agreement with the PHB expression analysis, the promoter of PHB has an island of intermediate CpG density (GC% = 52.9, observed/expected CpG = 0.57, length = 1823 bp) that showed relatively low DNA methylation in females with the Xist deletion, but the PHB DNA methylation level on the Xi was still distinct from the level of DNA methylation on the Xa in males (Figure 4C). Overall, it appears that PHB partially escapes from XCI; however, run-on transcription through HPRT/Hprt is not the cause of the altered HPRT DNA methylation capacity on the Xi.

MiniP constructs are generally subject to XCI

Since our MaxiP results agreed with previous reports that DNA methylation is an accurate marker of XCI status (Goldman 1998; Weber et al. 2007), we analyzed promoter DNA methylation of the MiniP constructs to predict their XCI statuses. Heterozygous females overall showed significantly higher DNA methylation levels at promoter CpG islands compared to males (female average, 45%; male average, 12%; P < 0.0001). To determine whether there were MiniP promoters that might escape XCI, we analyzed the DNA methylation levels of the constructs separately. DNA methylation levels were analyzed at 46 island-containing MiniP constructs, which originated from 23 human genes. For MiniP constructs that were generated from the same gene and thus shared the same core promoter sequence, the same CpGs were examined for DNA methylation levels. Almost all MiniP constructs showed promoter DNA hypermethylation in females compared to males, with female and male averages of 44% and 4%, respectively, with the outliers removed from the analysis (Figure 5A). Of the Pleiades constructs that were also examined for expression in the ear notch samples, excluding the outliers, all showed female-specific DNA hypermethylation independent of whether the transgenes displayed expression (Figure S2). For three constructs we observed DNA methylation levels more than 2 standard deviations below the female average in a single female, although still more than 2 standard deviations above the male average. These constructs might thus represent genes with variable inactivation between mice; however, identification of variable escapees is confounded by skewing of XCI, which could result in high standard deviations in DNA methylation levels among females of the same strain. The low DNA methylation level in single females for the three constructs was likely attributable to skewing of XCI, since HPRT DNA methylation levels were also lower (8-27%) in these females.
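As a point of reference for the CpG-density figures quoted above for the PHB promoter, a minimal sketch of how the two standard CpG-island metrics (GC content and observed/expected CpG ratio, in the style of the Gardiner-Garden and Frommer definition) are computed from a sequence. The sequence below is a stand-in, not the actual PHB promoter.

def cpg_metrics(seq):
    seq = seq.upper()
    length = len(seq)
    c, g = seq.count("C"), seq.count("G")
    cpg = seq.count("CG")  # CpG dinucleotide occurrences
    gc_percent = 100.0 * (c + g) / length
    # Observed CpGs divided by the count expected from C and G frequencies.
    obs_exp = (cpg * length) / (c * g) if c and g else 0.0
    return gc_percent, obs_exp, length

gc, oe, n = cpg_metrics("ACGCGTGGCCGATCGAGCTACGGGCATGCGCAT")
print(f"GC% = {gc:.1f}, observed/expected CpG = {oe:.2f}, length = {n} bp")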
As noted above, for a transgene to qualify as a potential escapee, we therefore required consistently low promoter DNA methylation in multiple heterozygous female mice. Since our MiniP constructs generally showed elevated average DNA methylation in females compared to males, we concluded that none of the MiniP constructs appeared to escape XCI. Promoter DNA hypermethylation was observed in males for the MiniP constructs derived from the genes CARTPT, GPX3, ICMT, OXT, and POGZ, but did not appear to correlate with transgene silencing (Table S1 and Figure S2). It is unknown what DNA sequences in these elements generate exceptions to the DNA methylation patterns observed for the majority of the MiniPs, but interestingly, for one of these (CARTPT) the endogenous island shows DNA hypermethylation at the endogenous promoter. In general, we analyzed fewer mice per construct for the MiniPs, but overall the MiniP constructs showed higher levels of DNA methylation than the MaxiP constructs (averages of 45% and 33%, respectively; P < 0.0001), perhaps reflecting a closer association of the MiniPs with X-linked cis-acting elements or a protective influence of sequences within the large BAC constructs. DNA methylation levels at HPRT and Phf6 were not significantly different between MiniPs and MaxiPs (Figure 5, B and C).

Figure 5 (legend fragments): Circles, DNA methylation of the individual sample; bar in the center of the error bars, average DNA methylation for the strain; error bars, ±1 standard deviation between mice for the strain. Significance was tested using a Mann-Whitney U-test. n.t., not tested due to limited male samples. Modified Z-scores greater than 3.5 in absolute value were marked as outliers. (B and C) Phf6 promoter (B) and HPRT promoter (C) both showed a significant difference in DNA methylation levels between males and females. A Mann-Whitney U-test was used to test for significance. Boxplot whiskers are 5th-95th percentiles. Circles, the average DNA methylation levels of each strain.

lacZ reporter consistently reflects DNA methylation pattern of CpG island promoters

lacZ DNA methylation resembled the DNA methylation pattern of the promoter region in PITX2, leading us to test the utility of lacZ DNA methylation for predicting XCI status. Similar to CpG island promoters, female mice showed significantly higher lacZ DNA methylation than males (P < 0.0001), even though males did have substantial DNA methylation (male and female average DNA methylation levels of 26% and 49%, respectively). Mice with an autosomal lacZ showed no significant difference in DNA methylation levels between males and females (Figure S3), indicating that the difference in the DNA methylation levels of the X-linked lacZ between the sexes is likely a consequence of the epigenetic regulation of XCI. The lower level of male (Xa) DNA methylation for X-linked lacZ may reflect the previously reported permissive nature of the Hprt integration sites (Bronson et al. 1996; Cvetkovic et al. 2000). Although lacZ showed overall higher DNA methylation than the CpG island promoters (male P < 0.0001; female P = 0.0029), lacZ DNA methylation showed a significant correlation with DNA methylation of the promoter island in females (Figure 6A). Since constructs with and without CpG islands in the promoter both showed a significant difference between female and male lacZ DNA methylation levels (P < 0.0001 and P = 0.0008, respectively), we used lacZ DNA methylation as a surrogate for promoter DNA methylation and screened additional Pleiades constructs for which there was no assay for promoter DNA methylation (Figure 6B). Consistent with the lack of lacZ expression in XXist1lox/XMAOA and XXist1lox/XNR2F2 mice (Figure 1B), MAOA and NR2F2 showed female-specific lacZ DNA hypermethylation (Figure 6B), further supporting the use of this locus as a surrogate to determine XCI status.
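A minimal sketch of the sex comparison used throughout this section: per-mouse percent methylation in heterozygous females versus males, compared with a Mann-Whitney test. The values are illustrative, not the study's data.

from scipy.stats import mannwhitneyu

female = [38.0, 44.0, 47.0, 51.0, 42.0]  # hypothetical % methylation per mouse
male = [3.0, 5.0, 4.0, 7.0]

stat, p = mannwhitneyu(female, male, alternative="two-sided")
print(f"U = {stat:.1f}, P = {p:.4f}")  # small P suggests female-specific hypermethylation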
However, compared to promoter DNA methylation (Figures 1C and 5A), males more often showed DNA hypermethylation of the lacZ reporter (Figure 6B and Figure S4).

Figure 4 (legend fragment): Internal exons are not shown to scale. (B) Expression of PHB, the intergenic region between PHB and HPRT/Hprt, and HPRT/Hprt (exon 1), normalized to Pgk1. DNA from a mouse targeted at Hprt with BACs for MKI67 (Ple131) and NR2E1 (Ple142) served as negative controls (−), since they lack the PHB gene. The x-axis indicates whether PHB was present only on the Xi or on the Xa in a given mouse, or on either X chromosome (Xa Xi), as in the case of females with random XCI. Error bars indicate ±1 standard deviation between two qPCR runs. (C) DNA methylation of the PHB promoter in ear notches of mice carrying the NGFR MaxiP on the Xi, either the Xa or the Xi, or the Xa. Error bars indicate ±1 standard deviation between mice for the strain.

Using the criteria of nonoverlapping standard deviations of DNA methylation between the sexes and a male average DNA methylation level <2 standard deviations of the female average of all strains, we excluded seven strains, including two (DCX Ple53 and VIP Ple250) for which the single male analyzed showed higher DNA methylation than the female average. Thus, we predict an additional 11 constructs to be subject to XCI based on lacZ DNA methylation.

Discussion

Arguably the most dramatic example of cis-regulation in the mammalian genome is the silencing of one X chromosome in females. However, the cis-acting elements involved in spreading heterochromatin along the 155-Mb chromosome from the initiating elements in the X inactivation center remain unknown. Having 74 different human transgenes integrated into the mouse X chromosome presented us with an opportunity to assess cis-regulation of 1.5 Mb of DNA at an identical genomic location. Analysis of female mice heterozygous for an Xist deletion causing nonrandom inactivation of the knock-in-bearing X chromosome provided clear evidence for XCI of four of the knock-ins (Figure 1). As a more rapid approach to assess the XCI status of multiple transgenes, we used DNA methylation as a surrogate measure of inactivation, since promoter DNA hypermethylation in females relative to males can be attributed to DNA methylation of the Xi and thus reflects inactivation of the gene (Yasukochi et al. 2010; Cotton et al. 2011). We demonstrated that, in addition to such DNA hypermethylation for the genes subject to XCI, the escaping PHB gene in our system exhibited low promoter DNA methylation in both sexes, validating the use of DNA methylation to detect genes subject to, and escaping from, XCI. As the determination of whether a gene is subject to XCI can be confounded by skewing of XCI, and there can be variability between females in whether a gene escapes XCI (Carrel and Willard 2005), we included an assessment of HPRT DNA methylation to detect samples with skewed XCI, and we required consistently high promoter DNA methylation in multiple females to call a gene subject to XCI.
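A minimal sketch of decision rules of the kind just described for calling a construct subject to XCI from ear-notch methylation: flag outlier mice with a modified Z-score (|Z| > 3.5, as in the supplementary legends), then require female-specific hypermethylation with non-overlapping male and female standard deviations. The thresholds and data are illustrative; the study's exact pipeline may differ.

import numpy as np

def drop_outliers(values, cutoff=3.5):
    """Remove values whose modified Z-score exceeds the cutoff."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    if mad == 0:
        return values
    modified_z = 0.6745 * (values - med) / mad
    return values[np.abs(modified_z) <= cutoff]

def subject_to_xci(female_vals, male_vals):
    f, m = drop_outliers(female_vals), drop_outliers(male_vals)
    # Non-overlapping +/-1 SD intervals between the sexes...
    no_overlap = (m.mean() + m.std()) < (f.mean() - f.std())
    # ...and a male average well below the female average.
    male_low = m.mean() < f.mean() - 2 * f.std()
    return bool(no_overlap and male_low)

print(subject_to_xci([40, 45, 48, 52], [4, 6, 5]))  # True for this toy strain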
Overall, 92% of the constructs analyzed showed DNA hypermethylation of the human promoters in female mice compared to males (Figures 1C and 5A), indicative of XCI of the knock-in gene. Our goal in determining the XCI status of the 74 knock-in constructs was to characterize cis-acting elements involved in the spread of epigenetic silencing. In 1983, Gartler and Riggs proposed waystations as elements enriched on the X chromosome that aid in spreading the silencing signal along the chromosome, based on the limited spread of XCI into autosomes that is seen for X;autosome translocations (Gartler and Riggs 1983). Because MiniP constructs are small transgenes, it is not surprising that all were shown to be subject to XCI, since they are in close proximity to X-linked DNA and putative waystations. Indeed, the majority of previous studies on X-linked transgenes have reported silencing of the examined transgenes on the Xi. However, the 187-kb chicken transferrin transgene is one of the few exceptions that consistently escaped from XCI (Goldman et al. 1987, 1998). As the MaxiP constructs were of a similar size (120-195 kb), we anticipated that the MaxiPs originating from autosomes would have a high probability of lacking waystations and escaping from XCI. However, our results demonstrated that mouse XCI is consistently capable of inactivating foreign transgenes of up to 195 kb. Thus, escape from XCI of the chicken transgene may reflect integration into a waystation-poor region relative to Hprt. In addition, however, there is now evidence for a different type of cis-regulatory element. Recently, four different integrations of a Kdm5c BAC recapitulated both escape from XCI for Kdm5c and silencing for the flanking Tspyl2 and Iqsec2 genes, strongly supporting the existence of a cis-acting element on the BAC that controlled escape from XCI (Li and Carrel 2008). Therefore, we hypothesize that there are both waystations and escape elements regulating the spread of XCI. Escape elements are presumably outside the promoter, as none of the 46 MiniPs examined in this study, nor the majority of previously examined transgenes, escape from XCI. We determined that the PHB gene escapes from XCI, and as waystations are reduced in abundance on autosomes and NGFR is subject to XCI while being farther from the mouse X-linked DNA than PHB (see Table S2), we conclude that the PHB region likely carries an escape element enabling it to escape from XCI. The observed reduction of HPRT DNA methylation to 41% on the Xi when adjacent to PHB, from an average of 70% DNA methylation on the Xi for the other MaxiP constructs, suggests that the dominant escape element also influenced the HPRT locus. The PHB gene is truncated in the 3′-UTR of the gene; however, we demonstrated that transcription ceased before the HPRT promoter, and since the promoter, splice junctions, and coding sequences are intact, we do not believe that this truncation per se affects either the propensity for PHB to escape from XCI or the loss of DNA methylation at HPRT.

Figure 6 lacZ DNA methylation can be used as a surrogate for promoter DNA methylation. (A) Spearman's correlation between lacZ and promoter DNA methylation levels in females. Circles, DNA methylation levels from an individual mouse. (B) lacZ DNA methylation levels of the Pleiades constructs that do not have a DNA methylation assay in the promoter region due to difficulty in assay design, assay failure, or absence of a CpG island. All constructs shown here have lacZ as the reporter on the X chromosome. Circles, DNA methylation levels from an individual mouse; bar in the center of the error bars, average DNA methylation for the strain; error bars, ±1 standard deviation between mice for the strain; shaded regions are the 2 standard deviations from the female average DNA methylation level with the outlier strains removed. Outliers are marked with asterisks (*). Female outlier: VIP (Ple250).
Interestingly, the reduced DNA methylation at HPRT was still sufficient to maintain XCI, while the 15% DNA methylation of PHB was insufficient for silencing, although there was not full expression from the Xi relative to the Xa. Given that we observed only one such escape element in 47 genes and 1.5 Mb of DNA, our data support the existence of relatively rare dominant escape elements. As escape from XCI is frequent in X;autosome translocations [30% (reviewed in Yang et al. 2011)], it is likely that, as previously proposed, such escape more generally reflects a lack of waystations rather than the presence of escape elements. We therefore decided to examine the large MaxiP constructs to determine whether there was evidence for particular elements that might be functioning as waystations. Waystations have been proposed to be repetitive elements, and we calculated the base coverage in the MaxiPs for several repeat elements previously positively or negatively correlated with genes escaping XCI: LINE-1, LINE-2, and Alu (Bailey et al. 2000; Carrel et al. 2006). Only Ple142 (NR2E1) and Ple126 (LCT) appeared to possess an environment resembling the genomic context of escapees based on LINE-1, LINE-2, and Alu base coverage on the BAC (Figure S5). However, promoter DNA hypermethylation of NR2E1 and LCT in females suggests that these genes were subject to XCI (Figure 1C). Although only eight BACs were examined, our analysis of LINE and Alu content suggests that these three repetitive elements are insufficient to determine whether a gene is subject to XCI, and that additional repeats, or a combination of other factors, may be required to provide a better prediction of XCI status. In addition to our search for cis-acting regulatory elements, our analysis of the Pleiades human knock-in constructs at the X-linked Hprt docking site revealed several other insights into the relationship of DNA methylation with XCI. First, we demonstrated that constructs have an intrinsic differential capacity for DNA methylation. Through analyzing MaxiP DNA methylation in female mice with complete nonrandom XCI due to an Xist deletion, we showed that different MaxiP constructs could accumulate DNA methylation at their promoters to different extents on the Xi (Figure 2B). However, the HPRT promoter and the lacZ reporter, which are shared among the MaxiP constructs, generally exhibited similar levels of DNA methylation (Figure S1), suggesting that the capacity to accumulate DNA methylation is a characteristic of the DNA sequence. Intriguingly, there may be differences between hemizygous and heterozygous or between homozygous and heterozygous states, as the observed promoter DNA methylation levels in females with the Xist deletion tended to be lower than the level of DNA methylation expected on the Xi under the assumption that DNA methylation on the Xa is equivalent between males and females. Second, the differential female:male DNA methylation observed on the X is found beyond CpG-island promoters. Xa-specific DNA methylation has been reported in gene bodies (Hellman and Chess 2007).
Our results demonstrated that the influence of XCI on DNA methylation of transgenes applies not only to promoters, but also to gene-body and intergenic CpG islands, since all analyzed locations on the PITX2 BAC showed female-specific DNA hypermethylation that was not observed on the endogenous human chromosome (Figure 3), including regions such as CpG island 29, for which there is no evidence of promoter activity (Ernst et al. 2011) or overlap with conserved transcription factor binding sites (TRANSFAC Biobase, http://www.gene-regulation.com/pub/databases.html). Therefore, it is possible that the default state of CpG islands on the X chromosome is to acquire DNA methylation on the Xi, and this is independent of whether the transgenes were expressed (Figure S2), consistent with the majority of CpG islands being DNA hypermethylated on the Xi (Sharp et al. 2011), and with tissue-specific genes such as the human X-linked androgen receptor showing female-specific DNA hypermethylation independent of expression (Allen et al. 1992). The recognition of CpG islands for DNA methylation on the Xi could explain the DNA hypermethylation in females compared to males on the X chromosome for the lacZ reporter (Figure 6), which is essentially a ~3000-bp CpG island. Promoter-less artificial CpG islands inserted into the 3′-UTR of an autosomal and an X-linked gene have been shown to recruit the unmethylated-CpG-binding protein Cfp1 and the promoter histone mark H3K4me3 even in the absence of RNA polymerase II binding (Thomson et al. 2010), although the X-linked locus has some DNA methylation, presumably due to XCI. The ability of CpG-rich sequences to acquire characteristics of promoters further supports using lacZ DNA methylation as a surrogate for promoter DNA methylation to predict whether transgenes are subject to XCI. However, lacZ DNA methylation is not as robust a predictor of XCI status as promoter DNA methylation, since a higher frequency of males with DNA hypermethylation was observed. A third insight was the unexpected observation that the differential female:male DNA methylation may reflect not only gain of DNA methylation on the Xi, but also protection from DNA methylation on the Xa. In general, promoter CpG islands are unmethylated on the autosomes; however, 4% are reported to show DNA methylation, often with variability between tissues (Shen et al. 2007). Four of the 35 autosomal CpG islands analyzed (CARTPT, OXT, THY1, and PITX2: CpG island 29) showed an average of >20% DNA methylation in male and female cell lines and/or blood samples (data not shown; Straussman et al. 2009). However, when THY1 and the PITX2 BAC were present on the X chromosome, they became unmethylated on the Xa; this loss of DNA methylation on the Xa compared to the autosomal locus was also observed for a non-CpG island site (exon 1 of PITX2) and the lacZ reporter. In the knock-in mice, CARTPT continued to show DNA methylation; however, OXT dropped from 60% DNA methylation on the autosome to only 20% DNA methylation in males, again showing decreased DNA methylation in males. In general, we observed a dominant regulation of promoter DNA methylation by XCI, with female-specific gain of DNA hypermethylation and, in several cases, a male-specific loss of DNA methylation. Overall, our analysis of the Pleiades human promoter constructs integrated into the mouse Hprt locus identified 1 of 47 genes in >1.5 Mb of human DNA that escaped inactivation.
We propose that there is a dominant cis-acting escape element near the PHB gene that allows it to escape from XCI in an otherwise inactivated region of the X chromosome. This element exerts an influence on the DNA methylation of the flanking HPRT locus, but does not lower DNA methylation to a level that allows expression from the Xi. That eight autosomal BACs ranging in size from 120 to 195 kb each contained a gene subject to XCI when integrated at the Hprt site suggests that waystations are likely able to act over a distance of at least 100 kb in the absence of dominantly acting escape elements. Further analyses of BAC integrations at the Hprt locus will be useful to identify the nature of these escape elements, as well as the boundaries that prevent their influence on adjacent genes.

Figure S2 Promoter DNA methylation of the Pleiades constructs is reflective of XCI and independent of expression. Females generally showed DNA hypermethylation compared to males regardless of the expression status of the transgene. Expression status (expressed and non-expressed) was based on lacZ staining in the ear notches of mice. Circles, DNA methylation of the individual sample; bar in the center of the error bars, average DNA methylation for the strain; error bars, ±1 standard deviation between mice for the strain. Outliers in DNA methylation are marked with asterisks (*). Female outlier: POGZ (Ple167). Male outliers: ICMT (Ple123), OXT (Ple152, Ple153), POGZ (Ple167, Ple170). Modified Z-scores greater than 3.5 in absolute value were marked as outliers.

Figure S3 DNA methylation of the autosomal lacZ reporter was not significantly different between the sexes. Three categories of mice with an autosomal lacZ reporter at the ROSA26 locus (Gt(ROSA)26Sor tm1Sor/J; Soriano 1999) were assessed. Category 1: lacZ is activated by EGFP/cre driven by different X-linked MiniPs. Category 2: lacZ had been previously activated by cre. Category 3: lacZ expression did not require activation by cre (Friedrich and Soriano 1991). Each circle represents the level of DNA methylation in an individual mouse. In all categories the autosomal lacZ was driven by the same promoter. Bar, average; error bars, ±1 standard deviation between mice for the strain. Significance was tested using a Mann-Whitney U-test; n.t., not tested.

Figure S4 The lacZ reporter in constructs that were analysed for promoter DNA methylation generally showed DNA hypermethylation in females compared to males. Circles, DNA methylation of the individual sample; bar in the center of the error bars, average DNA methylation for the strain; error bars, ±1 standard deviation between mice for the strain.

Figure S5 The MaxiPs, while all subject to XCI, showed no correlation between XCI status and repeat content. Repeat content for LINE-1 (A), Alu (B), and LINE-2 (C) did not correlate strongly with the XCI statuses of the MaxiPs. Genes that are subject to XCI are predicted to be enriched in LINE-1 and LINE-2 and depleted in Alu, while genes that escape from XCI are predicted to be depleted in LINE-1 and LINE-2 and enriched in Alu. The median levels of repeat content in 100-kb windows surrounding genes subject to XCI (red line) and genes escaping from XCI (green line) were estimated from .

Expression SNP Primers biotinylated at the 5′ end are indicated with an asterisk (*). Position of analysed CpGs is relative to the sequencing primer. Distance of pyrosequencing assays from the transcription start site (TSS) is the distance of the closest CpG to the TSS.
Ta, annealing temperature for PCR; Alt-TSS, alternative transcription start site; SNP, single-nucleotide polymorphism.
Use of Mobile Phones in Classrooms and Digitalisation of Educational Centres in Barcelona

In the wake of the COVID-19 pandemic, multiple educational contexts experienced a sudden and accelerated digital transformation. However, this is not a new phenomenon. For years, public and private initiatives have been designed and tested in Spain. In this regard, the role and use of cell phones in the classroom has been a key and, at the same time, controversial aspect. In Barcelona (Catalonia), for example, recent educational policies have promoted the pedagogical use of cell phones. Within this framework, this article analyses whether these initiatives to promote the use of mobile phones are effectively transferred and implemented in the classroom. Using qualitative research based on co-design, case studies and content analysis, we examined the reality of three educational centres in Barcelona. In these three contexts, field observations, interviews with management teams and ICT coordinators, and discussion groups with teachers were conducted. The information generated was grouped into five main categories of analysis. As a result, it was observed that the mobile phone has been losing prominence in the classroom. Schools tend to prohibit the use of cell phones and prefer computers, giving priority to the control of technological tools in order to use the Internet safely. Mobile phones, in this sense, are only used at certain times when there is a pedagogical objective, although there is still a need for more pedagogical and digital training for teachers.

Introduction

The evolution of mobile phones around the world has been explosive since the first call was made from the first mobile phone in 1973 [1]. Today, recent studies on mobile technology show that its use continues to intensify [2][3][4] and that this phenomenon will keep growing. The main uses of these mobile devices now go far beyond making phone calls. This evolution has meant that mobile phones can perform a huge number of simultaneous functions, boosted by the momentum and reach of the Internet, as well as the development of countless applications and the use of social networks. Thus, their main functions are now focused on socialising, conducting business, searching for information, shopping online, paying in stores and playing games, among many others, and of course, learning [5].

While this is not a new phenomenon, the truth is that with the COVID-19 pandemic the use of mobile devices has only grown, especially among the younger population. According to the recent report by Common Sense [3], the use of mobile phones by children aged 12 to 18 has increased by 17% since the pandemic began, and much more among adolescents (13 to 18 years). In the case of Spain, the most recent data state that Internet use is practically universal (99.7%) among people aged 16 to 24 and that 68.7% [4] of children aged 10 to 15 have a mobile phone. These data are very relevant and have intensified the debate around the uses that children and young people make of this device, and the attitudes of families and the education system in general regarding its active use as a medium or tool for learning in schools [6,7].
Unlike other digital technologies, such as personal computers or laptops, which have been introduced and promoted as useful tools for learning and for the personal and professional development of students in the near future, mobile phones today pose a challenge when it comes to integrating them into the classroom [5,8]. While their ubiquity, their socialising function and their role in the development of digital skills are recognised, there is a clear fear that smartphones, due to their individualised and difficult-to-control usage, generate social inequalities and distractions that undermine the efforts of teachers [6,9].

UNESCO, for its part, advocates the appropriate use of mobiles in the classroom rather than banning them, arguing that in order to maximise the potential of information and communication technologies in education, we cannot ignore the mobile phone, a personal device that virtually all students have at hand, and proposes "to continue to harness mobile devices to support teachers and, by extension, improve learning opportunities for students around the world" [10] (p. 66).

In Spain, the lack of consensus on the issue at hand is also observed in the different political stances of the autonomous communities. Mellado-Moreno [11] refers to the existence of three different discourses. While the communities of Madrid, Castilla-La Mancha and Galicia have opted for prohibition, other autonomous communities have softened their positions, such as the Valencian Community and Aragon. Catalonia, on the other hand, through the mòbils.edu plan [12], is committed to promoting the use of mobile devices as a strategic educational tool for curriculum development, competence work, inclusive education, tutorial action and the management of coexistence and human relations to promote educational success [11].

Thus, the disparate pronouncements of the different autonomous communities contribute to increasing confusion within the education sector about how to deal with the fact that young people already routinely use this technology outside of school, beyond what they do in the classroom [13].

In this context, the project "Author" came about, whose main objective was to identify and analyse the discourses, practices and positions of educational administrations, teachers, young people, families and companies within the sector on the use of mobile phones in compulsory secondary schools in Spain. Based on a classification established prior to the implementation of the project, which placed the autonomous communities according to their political positioning on the use of mobile phones in the classroom (prohibition, promotion and indeterminacy), ten case studies [14] were conducted in compulsory secondary schools in four autonomous communities in Spain: four cases in Catalonia, which has promotion policies; two in the Valencian Community, which has indeterminate policies; and four in Madrid (2) and Castilla-La Mancha (2), which have prohibition policies. The purpose of the research has been to detect the forms of appropriation of, or reaction to, the official discourses, and to analyse the educational practices and dynamics that are promoted for the use of mobile phones in the classroom.
In the case of Catalonia, the fieldwork was carried out in three schools in the province of Barcelona and one in the province of Girona. This article presents the results of the three cases developed in secondary schools in the province of Barcelona (two public and one state-subsidised) that, in the first instance, were positioned as centres in favour of the use of mobile phones in the classroom and that had an explicit commitment to include mobile technology to promote learning processes and access to knowledge.

It is worth mentioning that the three schools analysed, in addition to what is established in their School Education Projects (SEP), have Rules of Organisation and Functioning of the Centre (NOFC), a set of rules regulating aspects related to: (1) the organisational structure and functioning of the school; (2) the participation of the school community in the life of the school; and (3) coexistence (the rights and duties of students, families and teachers). The SEP is a document that includes the school's identity features, pedagogical principles, organisational principles and linguistic project. It specifies the values, objectives and priorities for action of the school, the curriculum, and the cross-cutting treatment, in the areas, subjects or modules, of education in values and other teachings. In public schools, the SEP is drawn up by the teaching staff at the initiative of the head teacher; in state-subsidised schools, the head of the school approves the SEP, having listened to the school council (Departament d'Educació, Generalitat de Catalunya, shorturl.at/acS38, accessed on 1 October 2022).

In this sense, the main research questions addressed in this article are: (1) Is there any promotion of the pedagogical or educational use of mobile phones in the classrooms of the schools analysed in the province of Barcelona? (2) Is there congruence between the schools' policies and regulations on the use of mobile phones (Discourses) and the practices carried out by teachers (Practices)?

Thus, in this article we present, after the analysis and interpretation of the research evidence, some answers to the questions raised from the reality within the schools themselves. All of this allows us to establish an overview of what happens at the different levels of concretion of the norms and the curriculum. It also allows us to reveal the different realities that are being shaped in educational practice from the analysis and interpretation of what is explicit in the SEP and in the NOFC, and to discover what is really happening with regard to practices, contradictions, adaptations, successes, failures, fears and the potential of the use of mobile phones in secondary education in Catalonia as part of the digital transformation of education in the Spanish state.
Materials and Methods

The methodological framework addresses the need to respond comprehensively to the research questions outlined previously. This implies adopting a methodological perspective that allows us to understand and account for the transformations and implications of the digital society in the realities of schools. Thus, this article is the result of qualitative research based on the development of three case studies (descriptive-interpretative) carried out in secondary schools in Barcelona, in which, according to [14], a contemporary phenomenon (the "case") is investigated in depth and within its real-world context, especially when the boundaries between the phenomenon and the context may not be clearly evident.

The criteria taken into account in the selection of the cases were: (1) public and state-subsidised compulsory secondary schools; (2) schools with an initial position in favour of promoting the use of mobile phones in education; (3) schools in which the educational use of mobile devices is carried out; and (4) schools willing to participate in the study on a voluntary basis. Table 1 shows the profile of the participating schools.

In order to analyse the educational realities in depth, the research techniques were designed, and the instruments presented below were applied (Table 2). The process of designing the research instruments was based on collaborative work among the project participants. The starting point was the general research objectives and the specific objectives of each phase of its development. From there, the initial dimensions of analysis were defined and agreed upon by all members of the team, integrating the various contexts of implementation of policies and regulations (meso/institutional and micro/classroom). Subsequently, indicators were designed for each dimension to account for all the aspects to be investigated in the case studies, and these were specified in a matrix of dimensions and base indicators used to elaborate the relevant items for each research instrument (Figure 1). The design of the instruments contemplated the integration of various sources of information, which allowed us to include the voices of the main educational agents in the case studies (Table 2) in order to subsequently carry out a triangulation of both sources of information and instruments and techniques for collecting information. In this sense, the items of each instrument were designed and adapted for each of the agents or sources of information: management team, teachers and students.
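As a purely illustrative aid, the dimensions-to-indicators-to-items matrix described above can be thought of as a nested mapping; the dimension and indicator names below are hypothetical placeholders, not the project's actual matrix.

# Hypothetical fragment of a dimensions -> indicators -> items matrix.
matrix = {
    "centre regulations (meso/institutional)": {
        "explicit mobile phone policy": [
            "Does the NOFC regulate mobile phone use?",   # document review
            "How are breaches of the policy handled?",    # interview item
        ],
    },
    "classroom practice (micro/classroom)": {
        "pedagogical use of mobiles": [
            "In which subjects are mobiles used, and for what tasks?",  # focus group
        ],
    },
}

# Items can then be filtered and adapted per informant
# (management team, teachers, students).
for dimension, indicators in matrix.items():
    for indicator, items in indicators.items():
        print(f"{dimension} | {indicator} | {len(items)} item(s)")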
The data analysis was conducted by means of content analysis, understood as the set of techniques for analysing communications that seeks to obtain indicators (quantitative or not), through systematic and objective procedures for describing the content of messages, allowing the inference of knowledge about the conditions of production/reception (social context) of those messages [15].

The units of analysis coded in the general analysis matrix (Table 3) were used as a starting point; subsequently, emerging units of analysis arose from the in vivo coding, allowing us to identify and integrate the voices of the participants in the analysis. This in turn enriched the process and highlighted the way in which the intentions and actions of the people in the settings where the educational action was taking place were involved with the use of mobiles [16].

A key consideration for the analysis was the need to structure it in such a way that it would shed light on the research questions posed; for this reason, the analyses were grouped into: (a) Discourses, that is, what is clearly stated or made explicit in the policies and regulations of the centres; and (b) Practices, that is, what is really happening in educational practice, for each of the sources of information consulted in the study.

Results

In accordance with the above, the results of this research are presented below. The first part (vision of the management teams of all the schools) includes the discourses and practices of the schools in relation to the regulations, protocols and management of the use of mobile phones. The second part (vision of the teaching staff of all the schools) includes the practices, the construction of materials and the appropriate use of the mobile phone in classroom contexts.

Regulations and Policies of the Centre vs. Functioning

With regard to what is included and expressed in the regulations on the use of mobile phones in schools, it can be seen that all of them clearly state that they are in favour of the integration of digital technologies in educational processes and in the functioning of the school, and even present themselves as innovative schools that ensure the development of students' skills in accordance with the advances of the information society. However, not all of the schools have updated these rules, and one of them even argues that, due to the pandemic and all the issues that have had to be managed urgently because of it, the updating of the school's policies in general, not only on the use of mobile phones, has been deferred:

"Standards and policies have not been updated due to the pandemic. It is not currently considered a priority item in the management of the facility" (C1).

In the School Educational Project (SEP), despite explicitly stating that the school is in favour of the use of digital technologies, there is no specific mention of mobile phones.
In the specific operating regulations of each school (NOFC), it is clearly defined that the use of mobile phones in the classroom is not allowed without the consent of the teaching staff, and that students are penalised in different ways when they misuse them in any of the school contexts (classrooms, playgrounds, corridors or facilities). Only one of the three schools states that, in addition to keeping the regulations up to date, these are periodically and participatively reviewed and updated by the school's teaching staff, students and families; furthermore, this school explicitly states that it is in favour of the pedagogical use of mobile phones and any other digital technological device that contributes to learning:

"Proper and regulated use is encouraged in all spaces, and we have seen that incidents decrease dramatically and there is no need to penalise or remove mobiles [...] Measures are taken when there is evidence of misuse and with serious implications for students" (C3).

This school has a clear and transparently disseminated policy for the entire educational community on the use of mobile phones at all times and in all contexts. It also encourages the use of mobile phones with families to manage the attendance of students or the participation of families in other school activities.

Creation of Materials and Protocols for the Use of Mobile Phones

In relation to the creation of training materials and actions focused on the use and pedagogical integration of digital technologies, especially mobiles, as well as protocols that contribute to the good use of mobiles in the school and among the educational community, this is clearly a much less developed and systematised issue for most schools:

"There are no policies for the transfer of this knowledge, material or training. There is training only in Moodle and a talk by the Mossos (Catalan police force) to families... Teachers in isolation share external resources that can help raise awareness" (C1).

"Training only on the platform of the centre: Google Suite and some of the Mossos (Police) to families" (C2).

"We have designed and adapted a colour protocol for the appropriate use of mobile phones according to the different contextual situations, and we have posters and infographics in the corridors and in the different contexts of the centre [...] We do training in the centre, for families, for pupils, and we have a digital welcome plan" (C3).

In general, the creation of didactic materials for the educational use of digital technologies in the classroom, or for the development of the digital competence of teachers and students, is sporadic, and when it occurs, it consists of isolated experiences of teachers who develop these initiatives because they detect an educational need that they consider relevant.
Freedom, Democracy and Co-Responsibility in Education

The results of the analysis show very little freedom, in general, when it comes to promoting the democratic use of mobiles for educational purposes in schools. Their uses are very specific, and control and punishment predominate in most of them. This translates into the preference of schools for Chromebook or laptop computers in order to ensure strict control, even for the most inquisitive and innovative teachers, over the use of the Internet and the programs or applications that can be used with a truly educational and motivational purpose. The adoption of the Chromebook or laptop option is tied to an awareness of the responsibility of families in the use of these devices:

"As the family cannot control the use of leisure time on the mobile phone, they pass on to the school the responsibility to prohibit it, to act in a disciplinary way [...] Families say that they do not know what to do about the risks, such as bullying, and all that their children do with their mobile phones" (C1).

Educational co-responsibility is not at all clear, since sometimes it is the families who transfer all the responsibility to the centre. Sometimes it is the students who delegate responsibility to the family and to the schools, and only some teachers and students recognise that it should be a shared responsibility.

Uses of Mobile Phones: Contexts and Risks

The management teams state that the use of mobile phones in schools is very sporadic, and they are only used for educational purposes when the teachers know how to do so. When they do not know how to use them, the possibility of investigating their educational and didactic potential is ruled out, so there is a great lack of knowledge of their potential in the teaching and learning processes, with some exceptions on the part of the teaching staff:

"Students take pictures, make videos, upload everything to the Internet (to Youtube or playing games or making TikToks) in secret, in the corridors, the toilets, and in the playground" (C1).

"It is used in all contexts of the centre with colour regulation for reflective, positive, healthy and learning use" (C3).

"Exceptional use of mobiles is made according to the pedagogical needs of the teaching staff" (C2).

In short, fear and a lack of knowledge predominate, especially in how to manage pupils' misuse and the risks involved. Along the same lines, school management teams state that the use of mobiles is very low, with a few exceptions, but that they are still far from exploiting their full educational potential:

"It is rarely used, only at specific times and by some teachers. Teachers are afraid and are largely unaware of the potential of mobile phones for education and learning" (C1).

"It is only used very sporadically by some teachers" (C2).

"Teachers recognise the potential of ICT and mobile phones in general. Their speed, immediacy, accessibility; they are the door to everything and everyone. They have a very high potential at the level of sensors, and so they are suitable for the areas of science, technology and physical education. They are also apt for detecting body parameters, location, and making calculations..." (C3).
In addition, the mobile phone triggers conflicts that cause disruptive behaviour by students in schools. Even when the mobile phone is not used as a mediator of the teaching and learning processes, it can trigger risks of various kinds among students, particularly for adolescents who find it difficult to manage themselves adequately, both within the school with teachers and peers and outside the school with their families:

"Children and adolescents are not prepared to manage a tool such as a mobile phone. They are not capable of managing this device in matters related to the violation of privacy" (C1).

"There are some uses and situations of mobile phone use that are not good. The compulsive use of social networks, addiction to games [...] but we recognise that it is not the problem of the mobile, but of the young person who has that addiction, something that is really worrying" (C3).

"Young people think that if they don't have a mobile phone, their parents are marginalising them" (C1).

One of the key factors why mobile phones are possibly not being used in teaching and learning contexts in a generalised way is precisely the fear that each and every one of the educational agents has expressed; a distrust that prevents them from imagining a world of potential and possibilities. These suspicions show a lack of adequate and systematic training and display certain deficits associated with the limited digital competence of teachers as education professionals in the 21st century.

Management of the Centre and Examples of Proper Use of Mobile Phones

The results indicate that, without a doubt, the educational centres are concerned about the subject of mobiles and young people. They recognise that it is a technology that permeates the daily lives of all people, but especially those of young people:

"The centre is faced with the need to consider how to manage the use of mobile phones, as the families say that they cannot. Rather than banning them, which makes no sense, we have to teach the young people in their use and accompany them" (C1).

"We cannot ban it completely. It is clear that the mobile phone accompanies us in our daily lives. But let's see how we use it in a way that we do it well, both for school camps and for academic activities" (C2).

In general, teachers believe that the use of mobile phones by students in schools must be managed in some way. However, they do not find a way to do this adequately, and therefore, in the face of fear, management is oriented towards control and penalisation. Although some schools state that they need adequate training and knowledge to improve their relationship, as a school, with the use of mobile phones, they also recognise the need to organise training with their teaching staff and students, and also with and for the families themselves.
Even though schools in general are reluctant to use mobile phones in the classroom, we have mentioned that there is evidence that some teachers in various subject areas are exploring the educational uses of mobile phones and some applications that are really beginning to enhance student learning:

"In the humanistic itinerary we made a practical trip with Maps to visit the spaces of the Civil War in Barcelona. Every 2-3 students prepared a SPAR Route in which they had to take a picture of the place and explain to their classmates what this space was. Thus, visiting and explaining historical places in which each place was geolocated, having an itinerary with Maps, uploading a photo, making a summary, sharing with the tutor of the subject, and giving credit to the people with whom you had collaborated, is an example and an indisputable guarantee of what is a good didactic use of the mobile" (C1).

"We created WhatsApp groups for the management and coordination of the centre, as well as an app for attendance control and communication with families, among other functions. We also implemented a Clickedu-type management platform for the centre" (C3).

Finally, it should be pointed out that there is the professional development of the teachers themselves, which is positively valued by the school management, and it is considered that they should always be at the forefront in the supervision of the students.

Regulations and Policies of the Centre/Functioning

The teaching staff of the participating schools stated that they are aware of the school's regulations on the use of digital technologies in a broad sense, and of mobile phones in particular. However, some teachers say that there are contradictions between what is recommended at the level of the Generalitat de Catalunya, the practices of the centre, and what actually happens in each of the classrooms in each specific area of knowledge. They consider that teachers should adapt the regulations at all times depending on the group of pupils, their characteristics and their particularities:

"We know the rules of the centre, and we agree, but it is very difficult to carry them out or to apply them. We try by all means. I think that all these measures that every teacher takes in our groups are for a reason. I think that the use of mobile phones and computers is good, it's a tool, but they misuse the computer and the mobile phone, and that's why we take these measures. We all follow the internal rules of the school, but when the mobile phone is something personal and the rules say that you can't touch it, then you can't do anything with their computer or mobile phone" (C1).
"There is a contradiction between the regulations established by the Department of Education and the school's regulations and what can actually be done in the classroom with mobiles and technology.It is a very restrictive vision in which the department itself organises training for the educational use of some social networks or mobile phones and then blocks it.Therefore, teachers do not have "so much freedom" to be creative or innovative with ICT or mobile phones.They give the option of being able to activate or deactivate the permission to use "minijuegos.com"but not TikTok, for example, which can be used for educational purposes, and from which teachers learn to reflect critically on its use with their students" (C3).Some teachers are even more restrictive than the school to avoid disruptive behaviour, and others, occasionally or sporadically, are the ones who explore the practice of using mobile phones for the benefit of their students' learning. Creation of Materials and Protocols for the Use of Mobile Phones The creation of didactic materials for the educational use of mobile phones or the drafting of protocols for action at the school regarding the proper use of mobile phones is practically non-existent.Only in one of the schools (C3) are materials and protocols for the use of mobile phones created and disseminated within the school (classrooms, playground, corridors) and among families through information sessions at the beginning of the school year: "They know the school rules.In each classroom there is an explanatory sign with four colours: (1) red, which indicates that you cannot use the mobile phone because the teacher is explaining or does not give permission at that moment and you cannot use it; (2) yellow, which means that you can use the mobile phone if the teacher gives permission; (3) blue, which means that you can only use it to look for information in classrooms, laboratories and workshops with the teacher's permission; and (4) green when participating in activities organised outside the centre, as long as it does not interfere with teaching activities" (C3). In practically all of the schools, teachers say that to a greater lesser extent they need training, not only in relation to the use of mobile phones in education but also, in a broad sense, in digital teaching skills: training that allows them to make use of the diversity of digital processes for managing their own teaching.Some centres conduct specific training on the use of their educational platforms, such as Moodle, but it is not a continuous or systematic training: "I would say that each one of us, individually, has been able to be trained, but I don't think we received training in ICT" (C1). "In July we attended a rather boring training on the uses of mobile devices in the classroom.I think that what we need is competence training.What we received was a classic training, which I find very incoherent and very impractical.In the school we have a regulation that states which social networks we must limit, but we attended a training session in which we were encouraged to use social networks.It was a waste of time.We are acquiring digital competence little by little.We share our experiences with each other, which I consider very positive" (C3). 
The teaching staff also say that on sporadic occasions, and at the initiative of a teacher, they organise small training sessions, but this is not an institutionalised practice. The same happens with the creation of educational materials related to the use of digital technologies or the good use of mobile phones. Very few teachers create and share resources with their colleagues.

Freedom, Democracy and Educational Co-Responsibility

There is much fear of student addiction to and misuse of mobile phones. Educational centres attempt to control the situation and manage or negotiate the conflict by trying to take away the students' mobile phones. In that sense, each teacher adjusts the rules depending on the type of group and its behaviour:

"I think that the mobile phone regulations, properly adjusted and understood, are very much in line with our way of doing things. We give teachers a lot of freedom to use this device if they think it is convenient, or if an activity requires it, but it is true that this situation has been perverted a bit. This freedom is sometimes misunderstood, and I think that we have crossed a limit and that the use of the device is being misused a little. When you enter the classroom, this device should not be present. It should only be present when the teacher requires it. That way we would avoid sanctions, discomfort and annoyance, both among teachers and students, because sanctions always lead to discomfort, both for the teacher who has applied them and for the students who receive them" (C3).

In general, teachers consider that students do not have the capacity or the responsibility to make good use of mobile phones, and they consider that families do not know how to do so either and pass the responsibility on to the schools.

Uses of Mobile Phones: Contexts and Risks

It is confirmed that the mobile phone is rarely used for educational purposes and only at very specific times, both in the classroom and on school outings, but always under the supervision of teachers. In the playground, mobile phones are prohibited in some schools and closely supervised in others. Even though personal use within the school premises is forbidden in all the schools, it is observed that students always find a way to use the device in secret from teachers and not for educational purposes:

"They use it before entering the school, before starting classes, some in the corridors on the sly. In the playground, it is allowed but with school supervision, and in the classroom, they always have it at hand, but it should only be used when indicated by the teachers to carry out a class activity. At a pedagogical level, I have seen very interesting activities and dynamics that have been done with the mobile phone. I personally use it in physical education classes" (C3).

In the cases in which it is used, either in the classroom or on outings, it is confirmed that there are several disciplines in which the potential of the device is explored, such as science, physical education, language and the reception classroom; however, the great lack of knowledge is reiterated, as is the need for training on the potential of the mobile phone in education.
We can also say that the teachers of all the schools participating in the study agree on the enormous risks associated with the misuse of mobile phones and on the need for training of teachers, students and families in their good use, not only educationally but also ethically:

"Students spend many hours on the screens, and this can affect their health. The hours they use screens at school, plus the hours they use them at home or in their daily lives, are many. They have an obsession with looking at a screen to feel safe. There is a very high dependence on mobile phones, and so they always need to have them in their hands" (C1).

"We have been working all these months on developing a regulation on cyberbullying. We are concerned about the obsession with showing off on the networks, and exhibitionism" (C2).

"If you leave them alone, they're on social media, WhatsApp, video games, looking at their Instagram account, or personal stuff. They don't make good use of it. In the hallways they sneak a peek. In the playground they isolate themselves quite a bit by playing video games. Even when using it in class for educational purposes, they tend to use it when it's not their turn or for other things. They can't live without their cell phone. Every time you have tried to take it away from them, there are some students who get very angry and don't want to give it up" (C3).

It is evident that there is much fear of all the risks related to the use of mobile phones. Most teachers have had among their students cases of addiction, excessive use, cyberbullying and identity theft, among many other conflicts that they do not know how to deal with.

Harnessing the Potential of Mobile Phones for Learning and Good Mobile Use

Linked to the previous aspect, and given the low use of mobile phones in the classroom, it is difficult to speak of a real use of their potential in the teaching and learning processes. Some teachers recognise that increasing the use of mobile phones in class could be positive and necessary, while others prefer to follow traditional teaching methodologies. Some consider that it is not necessary to use the mobile phone in class proposals and activities because laptops or Chromebooks are already being used, while others argue that the ideal would be to use all available devices correctly:

"I think it would be ideal to use the mobile phone and the computer and all the tools that make our work easier and with which we can do research, but the problem is that the students don't make good use of the mobile phone or the computer because they are not responsible. Ideally, everyone should have a computer and a mobile phone, and everyone should use them for what they are supposed to use them for. But they arrive with iPods and headphones, and they don't listen to you. Many times, no matter how much you want to innovate, using platforms or mobile phones, you reach a point where you realise that the only way to see if they have learned or not is to go back to the old ways, and then you wonder what's the point of investing in new technologies since they, deep down, know more than me. It is a generation that was born with these technologies, and we are learning as we go along" (C1).
Although the mobile phone is used very occasionally for educational purposes, there are some very interesting and beneficial pedagogical experiences for the development of skills among students and teachers:

"The mobile phone is very necessary in the reception classroom as it allows them to use the simultaneous translator if they need it. It is a tool that allows them to reinforce their autonomy inside and outside the school, so I think it is very important that they learn to use the mobile phone" (C1).

"We use the mobile phone in physical education to record and self-evaluate. In social sciences we use it for virtual reality practices using Google Cardboard and Oculus. We also use apps to measure air pollutants, and Arduino Science Journal to measure decibels and assess noise pollution" (C3).

"The mobile phone is very useful in the case of students with special educational needs. It is a tool which, if they know how to use it, helps them to take a step forward in accessing information. It allows them to translate, or listen, if they cannot read" (C1).

These experiences of using the mobile phone, although very positive, remain isolated, and there are no established spaces in which to share and democratise them.

Discussion

The discussion has been organised around the most relevant issues arising from the analysis. Here a dialogue is established between the positions of the management teams and teachers and the arguments derived from the theoretical framework of the project.

The Management Team and Mobile Phones

The management teams of the three schools are reluctant to use mobile phones in the classroom. This explains to a large extent why the regulations and policies of the schools focus on prohibiting the use of mobile phones. There is a great lack of knowledge of the potential of these devices as mediators of the teaching and learning processes.

The main fear factor observed is a lack of knowledge of how to manage the risks of inappropriate use by young people, who likewise receive no training or guidance. This leads to fears of potential conflicts among students [17]. The management also perceives pressure from families, who seem to shift the responsibility for this issue to the school, also due to a lack of knowledge of how to deal with it with their children [18].
Teachers and Mobile Phones

Teachers, in general, also claim to have certain fears and insecurities when it comes to facing and managing the uncontrolled use of mobile phones by students. They repeatedly state that with each passing day the risks are higher, more frequent and have greater reach, and that the speed at which the damage spreads is ever faster. All of this combines with a lack of techno-pedagogical training to take advantage of the educational possibilities of mobile phones [19]. Faced with this situation, in most cases, teachers opt for excessive surveillance and control of everything students do with their devices at all times. Still, there are teachers who, in an isolated and singular way, dare to explore the potential and possibilities of mobile phones in the classroom, conducting powerful learning experiences even in schools with the most restrictive regulations: practices that, according to research evidence, work well, increase motivation for learning and for the content, and foster collaborative work, creativity and coexistence.

The considerable potential of mobile technologies to improve and facilitate learning is widely recognised, among others, by UNESCO (2017); however, it is not something obvious and easy to achieve, and very few teachers carry out pedagogical practices with mobile phones that enhance the learning experiences of students. Within the recommendations of UNESCO [10], there is an explicit understanding that educational interventions in the use of mobiles should be integrated into carefully planned projects that go far beyond the technology itself. Therefore, the appropriate use of mobiles in the classroom and the development of appropriate policies for it should take into account timely pedagogical training (development of digital teacher competence), both for teachers in training and for active teachers throughout their professional development, as well as for students and families, in response to the challenges of each particular educational context. We agree with UNESCO [10] that the appropriate use of mobiles in the classroom complements and enriches formal and non-formal education, insofar as it contributes to making learning more accessible, equitable, personalised and flexible for learners around the world.

Fombona and Rodil state that: "Mobile devices have a reduced use as a teaching and learning tool in classrooms, although most teachers and students would like to use it more often. Possibly there are still some difficulties, fears and methodological ignorance regarding its use, to be able to implement it as a common working tool in the classroom. Its growth and evolution have been so rapid that perhaps we have not yet had enough time to take advantage of its potential" [6] (p. 32).

Laptops vs. Mobiles
The evidence of this research has allowed us to observe that the mobile phone has gradually been losing prominence in the classroom. In this sense, it is noted that both teachers and students tend to prefer the use of Chromebooks and laptops in order to prioritise the control of resources and safe Internet browsing. Mobile phones, therefore, represent a pedagogical and instrumental challenge, more than an opportunity, when integrating them into the classroom [5,8]. Thus, while it is impossible to eliminate the use of mobile phones, since they are part of the daily life of students and teachers, there is a clear need to generate protocols in the educational community that aim to reduce their potential as a distracting element and favour their sensible and constructive use.

The Market vs. the Educational Curriculum

The market stimulates the use of technological resources that modify and/or transform the implementation of the educational curriculum [6]. For example, some centres do not use mobile phones but use similar tools, such as educational platforms or Virtual Learning Environments, on portable devices. Centres 1 and 3 mainly use Chromebooks and laptops, with mobiles used only very sporadically for pedagogical purposes. Centre 2 mainly uses laptops, again with mobiles used only sporadically in the classroom. Moodle is the preferred platform for subject management in all the centres, but its use is combined with Google Suite, especially for centre and teacher management, as well as for communication among the teaching staff. Thus, the market offers different resources that schools have acquired without the necessary understanding of all the implications that this type of decision entails.

Fear and Privacy Risks

In our fieldwork, we have observed that there is a clear lack of knowledge on the part of the educational community about the uses that students make of mobile phones outside of schools. The truth is that families hand over a mobile device at increasingly younger ages [3]; however, neither they nor the children and adolescents have the knowledge and skills that allow them to make safe use of the device and understand the risks to which their privacy is exposed. In view of this, it is essential that families, students and the school community as a whole agree on a sensible, responsible and reasonable use of mobile phones both inside and outside school.

Figure 1. Research instruments and coding matrix design process.
Table 2. Research instruments and techniques by centre.
Table 3. Final general code matrix for data analysis structure.
The Use of Social Media in E-Learning: A Metasynthesis

The adoption of social media in e-learning signals the end of distance education as we know it in higher education. However, it appears to have very little impact on the way in which open and distance learning (ODL) institutions are functioning. Earlier research suggests that a significant part of the explanation for the slow uptake of social media in e-learning lies outside of conventional factors attributed to distance learning reforms. This research used the conceptual framework for online collaborative learning (OCL) in higher education. Social media such as blogs, wikis, Skype or Google Hangout, and Facebook, and even mobile apps such as WhatsApp, could facilitate deep learning and the creation of knowledge in e-learning at higher educational institutions. This metasynthesis is an interpretative integration of peer-reviewed qualitative research findings on social media in e-learning. It includes a synthesis of the data, research methods, and theories used to investigate social media in e-learning. Seven themes emerged from the data, which have been recrafted into a framework for social media in e-learning as the final product. The proposed framework could be useful to instructional designers and academics who are interested in using modern learning theories and want to adopt social media in e-learning in higher education as a deep learning strategy.

This research presents a conceptual framework designed to explain the adoption of social media into e-learning by using online collaborative learning (OCL) in higher education. Social media in e-learning signals the end of distance education in higher education. However, it appears to have very little impact on how ODL institutions are functioning.
Earlier research suggests that a significant part of the explanation for the slow uptake of social media in e-learning is the constant change in social media, which leaves academics permanently lagging behind, and which sometimes leaves them, and also the students, unable to grasp the affordances of the social media landscape in education (Carpenter & Krutka, 2015). Conditions such as the lack of capacity in higher educational institutions, the large number of underqualified educators with poor technological skills, and political resistance to a shift towards a more technologically enhanced learning paradigm might also be cited as reasons.

Close to two decades of experience in using e-learning tools, an abundance of available social media options, and the analysis of literature such as Mehlenbacher et al. (2005), Petersen (2007), Park (2011), and Hadjerrouit (2010) led to a realization of the importance of the usability of social media in open, distance and e-learning (ODeL). This research investigates the use of social media in e-learning. The continued use of social media such as wikis, blogs, discussions, Facebook, and Twitter as tools in education, to name but a few, has led to a proliferation of technologies that have not necessarily been designed for teaching and learning (Harasim, 2012). If these tools, commonly used in our day-to-day lives, are not easy to use in education, or if they lack important functionalities to enable learning, there would be no benefit in using them. It is, therefore, desirable to identify the usability, limitations, and strategies for using social media in e-learning to provide a framework for the effective use of applications and technological tools in higher education.

E-learning and social media are widely described in higher education, but the tools and their specific uses for learning in higher education are not clear from the literature. Social media must be used for specific purposes in e-learning to effectively facilitate social learning, collaboration, and interaction among students and between students and lecturers, in order to enhance deep learning in a safe environment. Social media is most often used as something nice to have in e-learning, without acknowledging or considering the specific purpose and educational theory behind its usage (Bates, 2015). In addition, there is a need to explore the most suitable social media for specific learning purposes according to constructivism and the online collaborative learning theory as described by Harasim (2012). The proliferation of studies dealing with social media in e-learning lacks direction for the implementation of the findings; therefore, it was necessary to synthesize the knowledge generated in this area to draw conclusions on the use of social media in e-learning in higher education.

Background and Theoretical Framework of the Study

Online collaborative learning theory focuses on educational applications that facilitate idea generation, idea organization, and intellectual convergence through the internet (Harasim, 2012). The OCL theory was proposed by Harasim and is composed of three intellectual phases: idea generating (IG), idea organizing (IO), and intellectual convergence (IC). Idea generating is the first phase, during which the collaborating group is characterized by differing ideas and activities resulting from brainstorming, verbalizing, and generating information, which lead to the sharing of information and subsequently to positions on a problem of interest (Harasim, 2012).
Secondly, idea organizing mainly focuses on the process of conceptual change, intellectual progress, and a shift towards convergence of ideas in order to cluster them according to their strengths and relationships, or the lack thereof (Harasim, 2012). Intellectual convergence is the third and final phase of the OCL theory and is, in the simplest terms, knowledge construction (Harasim, 2012). The OCL theory manifests in scientific knowledge or hypotheses and in social application, resulting in knowledge building. Figure 1 presents a graphic view of the OCL theory (adapted from Harasim, 2012, p. 94).

In order to facilitate idea generation, idea organization, intellectual convergence and knowledge building in e-learning, social media and technologies are needed to facilitate collaborative interactivity. The purpose of this research was, therefore, to investigate the usability of social media as educational tools in e-learning, in order to identify key aspects and strategies and to develop a framework which could inform higher education facilitators on the usability of social media in e-learning.

There is a growing body of knowledge on using social media in e-learning. The results of research on the use of social media in e-learning in higher education are diverse and pose a problem to academics who need to select social media that would be suitable for deep learning and maximum student support. Furthermore, academics are overwhelmed by the wide range of social media for teaching and learning. A theoretical framework could guide decisions in selecting the available social media for maximum impact on e-learning in ODL and other higher educational institutions. This metasynthesis synthesizes primary qualitative peer-reviewed research studies on the use of social media in e-learning in higher education from 2000 to 2015, as a metasynthesis focuses on synthesizing qualitative research.

Research Method

A qualitative, interpretive metasynthesis of data from primary qualitative research studies worldwide was used for the study; the primary studies employed various qualitative methods such as phenomenology, case studies, ethnography, and grounded theory approaches. According to Figure 2, a metasynthesis research study involves three analytic components: meta-data analysis, meta-theory analysis, and meta-method analysis. A metasynthesis is derived from the results of these analytic components, as displayed in Figure 2. The research process in a metasynthesis is thus composed of four distinct components: the analytic components of meta-data analysis, meta-method, and meta-theory, and the synthesis component in the form of the metasynthesis itself, as described below in this paper. In this paper, we refer to the metasynthesis as an investigation of the results and processes of previous (primary) research. In effect, a metasynthesis is research of research. It entails the analysis and scrutiny of the theory, research methods and data analysis of research on social media and e-learning, and culminates in a synthesis that generates new knowledge.

Our review has a limitation related to the construction of the sample of articles. We encountered a problem in searching for primary qualitative research: poorly written abstracts across all research aspects, but mostly with regard to methodological issues.
This created "false negatives" in our initial sample of qualitative articles in the sense that a large number of publications were not excluded when the abstracts were read for the first time, as displayed in Figure 3. Firstly, the meta-method analysis entails a thoughtful examination of how the research methodological approach is used to gather and interpret the data. Secondly, the meta-data analysis involves reinterpretations of the actual findings from the primary qualitative studies. Thirdly, the meta-theory analysis consists of an examination of the theories that lead the topics, frameworks, and research questions of primary researchers (Sandelowski, Trimble, Woodard, & Barroso, 2006;Thorne et al., 2002). From the literature searches, we identified 195 articles to be screened for the study from our literature searches. One hundred and sixty-five articles were excluded on the grounds of research methodologies as not qualitative research. At first, the majority of data sources from the 195 articles seemed to be qualitative research, but, on closer scrutiny, it was determined that they were mixed method research projects. Figure 3 is a summary of the screening and selection process. The abstracts of 30 articles were read to determine if the research they reported was about social media in e-learning. This review process rejected 19 articles because they did not have a qualitative research focus on e-learning and social media in higher education. The 11 articles screened by using the criteria in Table 1 did not comply with the research rigour guideline which we as researchers set at 80% for inclusion into the study. The last stage of the selection of articles involved a full scan of six articles that met the criteria for qualitative research methodological rigour as set out in Table 1 and Figure 3. Figure 3 displays the process of data source selection on the grounds of inclusion and exclusion criteria. Secondly, we used Table 1 to Primary qualitative research Research findings Research methods Theoretical frameworks Metasynthesis Meta-data-analysis Meta-method Meta-theory further screen the qualitative article to ensure that the most applicable qualitative research on e-learning and the use of social media was included in the metasynthesis. Inclusion and Exclusion Criteria Inclusion criteria were set to obtain qualitative articles on the usability of social media in e-learning in higher education institutions from 2000 to 2015. We included articles with a qualitative research strategy; qualitative research sampling in higher education; e-learning and social media in teaching and learning; and articles in which data were collected from learners, academics, or the online activities of the groups. Lastly, the research articles were screened for research rigour as displayed in Table 1 and an article must have received 80% or higher to be included into the study. The body of qualitative work on online learning has grown over the last two decades; thus, so has the emphasis on a metasynthesis. Articles that received less than 79% for research rigour were excluded from the study. Articles that did not deal with higher education, e-learning, and literature reviews were excluded since they were not the focus of this research. 
Most of the weaknesses in the published articles that were excluded after applying the screening criteria in Table 1 were issues around the combined use of quantitative and qualitative methodologies (mixed methods research), as a metasynthesis is focused on qualitative peer-reviewed articles only. Weaknesses in the qualitative articles such as sampling, focus, and mixed methods designs contributed to the small sample in this metasynthesis. The rigorous inclusion, exclusion and screening processes ensured that the best peer-reviewed qualitative research articles on e-learning and social media in higher education were selected for the metasynthesis, as depicted in Table 1.

Table 1. Screening Criteria (adapted from Paterson et al., 2001). Note: Total marks = 32; articles must have received 80% for inclusion in the study.

Table 2. List of Articles.

Trustworthiness in the Qualitative Metasynthesis

In order to ensure the trustworthiness of the metasynthesis, it was described in detail as it unfolded so as to leave an audit trail. The structure of a metasynthesis as detailed by Paterson, Thorne, Canam, and Jillings (2001) was followed. Trustworthiness was further ensured by documenting the process of screening the selected research articles for inclusion or exclusion, and by reviewing each article at least three times. An additional researcher reviewed our analysis of each article and gave input on the codes, quotations, and themes identified in the meta-study. We consulted and shared reviews with peers and with a colleague specializing in technology-enhanced learning regarding the data analysis processes and the identification of codes and themes.

During the meta-data analysis, we read through each article and noted possible themes as we progressed, using a highlighter on hard copies of the articles to get an idea of the phenomenon of e-learning and the use of social media to enhance the learning process. We read through each article at least three times and loaded the articles into the AtlasTI computer programme to assist with data organization and analysis. The codes identified during the reading of the articles, together with the codes from the OCL framework, were loaded into AtlasTI. The codes were continuously updated and added to the data analysis to include both inductive and deductive data and codes, to ensure a complete picture of social media in e-learning.

Data Analysis

As indicated earlier, a metasynthesis comprises distinct phases, namely the meta-method analysis, the meta-data analysis and the meta-theory analysis, which together constitute the systematic analysis of the qualitative research body of knowledge on social media and e-learning. The meta-method analysis is summarized in Table 3.

The Meta-Method Analysis

The purpose of the meta-method analysis was to determine how the interpretation and implementation of qualitative research methods have impacted the research findings and the emergent theory on the use of social media in e-learning in higher education. Table 3 displays the epistemological soundness of the research methods, demographic data, and general research aspects of the studies such as sample, social media, country, and research characteristics.
Each of the articles was analyzed in order to determine how the authors methodologically presented aspects such as the aim and purpose of the study, the research questions, trends in social media use, the research design, data collection and data analysis, and the trustworthiness of the study. Table 3 indicates that the research methods were mainly basic qualitative research using interviews. Four of the studies used traditional data collection methods, such as interviews and focus groups, to research e-learning and social media in the digital era; these studies did not make use of digital qualitative research methods such as online discussions, digital audio data collection, or visual digital media for their data collection. The sample criterion indicated that staff and students from higher education institutions were selected in the studies. The social networks criterion gives information about the specific social media used in the original research; the most frequently used were blogs, wikis and Facebook. The country criterion simply reflects the places where the research was conducted, and indicated that qualitative research on social media in e-learning was dominated by the USA and the UK.

The Meta-Data Analysis

Seven distinct themes emerged from the data. The concepts of the theoretical framework for the study, such as idea generation, idea organization, and knowledge building, were clearly deduced from the data. Themes such as social learning, deep learning, student support, and safe environment also emerged from the data, as displayed in the meta-data analysis. Reference is made to the various data sources by numbering the articles from two to seven; for example, (5:116) indicates data source number five and place or reference 116 within that source. The main purpose of the meta-data analysis was to extend knowledge about the use of social media in e-learning in higher education from the theoretical perspective of online collaborative learning (OCL) (Harasim, 2012).

Themes from the Online Collaborative Learning Theory by Harasim (2012)

The online collaborative learning (OCL) theory, according to Figure 1, provides a theoretical framework to help design and inform activities in e-learning. The OCL theory advances from idea generation and idea organization to the intellectual convergence stage. E-learning activities are linked with conceptual processes to encourage deep learning in technology-enhanced learning environments. The three stages of the OCL were evident in the metasynthesis and are described as themes one to three in this study.

Theme One: Idea Generation

This theme was about blogging and wikis and how students use them in e-learning. Blogging, as part of education, is a type of online journal that enables the teacher and the students to post comments on course content. Blogs can be used as learning and communication tools. In this study it was evident that students did not use them as learning tools, but rather as private communication tools. It is evident that students must be made aware of the main aim of the blogs: there may be an administrative blog for lecturers to post learning material; a whole-class blog for comments on the lecturer's posts on course content; and, lastly, individual student blogs for reflection on the learning process.

"When you write a study blog it's very personal and you mainly write it for yourself, and any coursemates who might look in.
The blog was for me and not for anybody else. I could also demonstrate to my tutor that I was alive and working." These students' blogs were personal; they did not expect or seek comments, and rarely read or responded to any comments that they received. (5:48)

Wikis are tools that provide shared spaces where teachers and students are able to post and build content in order to create a collaborative piece of information. In education, the purpose of wikis and blogs is very much the same, but wikis are focused on the co-construction of knowledge by a group of students to promote constructivism in education. The participants in this meta-study reflect as follows on wikis in education. One student noted: "Initially when the task was presented to us, I was hesitant to contribute, as I did not fully understand what was required." Other students waited to see the wiki contributions from multiple students and hence claimed: "when few students initially contributed, [it was] very difficult to engage" (3:4).

Theme Two: Idea Organization

This theme includes aspects of how students make sense of knowledge and how problem-solving is employed to assist sense-making in learning. One of the responsibilities of education is to draw attention to noticeable aspects and the ways in which meaning is constructed digitally in text, as well as how we fulfil our intentions in the world. The concept of a community of practice (CoP) or community of inquiry (CoI) in social media and education, where people work together through interactions to create discourse by means of constructivism, consists of mainly three components: teaching presence, cognitive presence, and social presence. This is described as follows:

In the CoP in this research study, students continually negotiated meaning, worked towards their individual and collective goals, shared their application of learning in their practice, and supported each other through the rigors of doctoral study. (4:28)

Problem-solving is an important part of many disciplines, and the development of problem-solving skills requires a sound knowledge base with various concepts and the ability to interconnect these concepts. Within a CoP, the community is built and the practice is supported through the sharing of knowledge relevant to the shared domain of interest, but also through the sharing of self in personal and professional interactions.

Working towards a common goal or finding collective solutions to problems in this CoP included such activities as explaining wrong answers (knowledge sharing), providing motivation (support), and explaining where to find resources (problem-solving). (4:21)

Theme Three: Knowledge Building

In this theme, students reach an agreement to disagree or reach consensus. Knowledge construction and collaboration in e-learning can be described as a process in which students are much more actively involved in a joint enterprise with the teacher and peers in creating knowledge. The participants in the various studies describe knowledge building in terms of constructivism and the use of the wiki in education as follows:

The wiki activities involved a group of students contributing requirements to the group wiki, discussing the requirements, identifying conflicts and ambiguities within the requirements, and resolving the conflicts through discussions from the perspectives of different stakeholders, to produce an unambiguous requirements specification.
The wiki activities were designed to be self-managed by the students and required minimal or no intervention by the tutor, thereby avoiding any significant increase in the tutors' workload. (5:5)

Students expressed in interviews and reflective accounts that wiki-based collaboration had facilitated their learning and that they became aware of the various issues and challenges of team-working in virtual teams in real-world software engineering projects. (5:17)

Emergent Themes from the Meta-Data Analysis

Four new themes (themes four to seven) emerged from the qualitative data on social media and e-learning, namely social learning, deep learning, student support, and learning environment, as displayed in Figure 4. The OCL framework themes, together with the four emerging themes below, completed the framework for using social media in e-learning in this metasynthesis.

Theme Four: Social Learning

This theme concerned social media and constructivism. Social media are simply digital technologies that allow us to create and share knowledge and material with others via the internet. Social media is all about participation, collaboration, interactivity, community-building, sharing, networking, creativity, distribution, and flexibility. Constructivism is rooted in the works of Piaget (1896-1980) and Vygotsky, in which the learning process is informed by cultural influences (Harasim, 2012), and is described as follows: the notion of social learning can be traced back to the theory of social constructivism in the 1960s. The basic principle is that students learn most effectively by engaging in carefully selected [activities].

Learners supported one another in their learning and noted that they perceived their learning experience was enhanced by their interactions. Additionally, students did not appear to mix social and educational participation and seemed to need support in managing the expanded amount of information available to them. In order to manage their time and participation, learners devised strategies and "workarounds" to complete assigned activities and course commitments. (7:6)

Theme Five: Deep Learning

Theme five was about building trust, meaning in learning, and cognitive deep learning. Trust is about effective teamwork, in which all team members contribute equally and behave appropriately, and differences of opinion in the team can be sorted out in a supportive environment. Deep learning is described in terms of trust, interaction with other students, and reflection, as follows:

It is the willingness to trust, interact, and share with others that develops a sense of belonging to the community. In this instance, one of the main functions of the CoP is the interactions amongst members who are geographically disbursed [sic]. (4:37)

Reflection is the process of stepping back from an experience to ponder, carefully and persistently, its meaning to the self through the development of inferences; learning is the creation of meaning from past or current events that serves as a guide for future behaviour. One of the goals of reflective learning is to encourage professionals to recognise the routine, implicit skills in their practice, which tend to be delivered without conscious deliberation or a deeper questioning of the wider situation or context within which the practitioner is operating. (5:21)
Theme Six: Student Support

This theme was about the community of practice, a support structure where people come together to share knowledge and create a discourse through interaction, as educators become more student-centred in their approach to teaching and learning. Those who use social media in their teaching move from covering the content to helping students master learning. This study indicates student support via social media as follows:

This research demonstrates that it is possible to use Facebook as a student-developed CoP to facilitate collaboration and community-building among students in support of their learning. Group members appeared to be more comfortable asking questions and seeking clarification within the informal Facebook community than in more formal or official virtual spaces in their online programme where faculty members were present. (4:47)

It is the interweaving of the community and the practice within the domain which is the foundational aspect of the CoP. The domain for this example includes being both a practitioner of educational technology and a student in the online doctoral programme in educational technology. (4:5)

Theme Seven: Learning Environment

This theme concerned the safety and security issues that arise when using social media in academia and the related problems worth mentioning. Most universities do not have control over the terms and conditions of social media. When agreeing to the terms and conditions of social media, lecturers and students take the responsibility on themselves to comply and act within legal boundaries, and should monitor the sites and the visibility of their work. The participants in this study seem to be mindful of safety issues when using social media in education and reflect as follows:

Only in five of our twenty cases did the institutions report explicit attempts to safeguard a social software initiative. The safeguarding focused on reminding the students of the institution's existing computing code of conduct, asking the students to formulate policies, or simply informing the students about the risks. Interestingly, we did not encounter any initiative which created specific safeguards to protect students from outside harm, although the threats are well-known. We found that social software initiatives are largely initiated and carried out by individual educators with little guidance and support from their institutions. However, to mitigate these risks, institution-level support and interventions will help to manage the threats and to initiate a discourse which engages students and educators to formulate sound and practical solutions and guidelines. (6:3)

Meta-Theory Analysis

The meta-theory aspect of this research involved a detailed analysis of the research work on social media and of research into relevant e-learning theories. The major paradigms underlying the theoretical frameworks investigated included social learning theory, social interactivity theory, constructionism and social constructivism, and online collaborative learning theory (Harasim, 2012). Collaboration and social constructivism were the main theoretical frameworks guiding the use of social media in e-learning in higher education, pointing towards a more integrative (collaborative), co-constructivist and peer-supportive approach to learning in the digital age.
Discussion and Insights from the Data

Each of the three analytic phases, namely the meta-method, meta-theory and meta-data analysis, provided a unique angle or vision from which social media and e-learning were deconstructed and interpreted. In the following part of the paper, we consider the metasynthesis as it shaped the larger context of the project on social media and e-learning in ODL. It is not possible to predict the extent to which new knowledge or new theory can be synthesized until the products of the meta-data analysis, meta-method and meta-theory are individually and collectively interpreted. The larger intent of the metasynthesis is not to raise questions about highlighted issues, but to build a framework and to provide good practice guidelines. The appeal of metasynthesis lies in our hunger for more truth, more accurate and real explanations, and practice guidelines to make sense of our everyday practices and, in this case, the use of social media in e-learning.

Insights from the Meta-Method Analysis

The meta-method analysis offered a strategy to reflect on the role of research methodology and how it shaped the findings of individual studies. From the meta-method analysis, we begin to identify the nuances of the various qualitative approaches in e-learning. To an extent, the methods reflected academic discipline choices, and in this metasynthesis the qualitative research on social media and online learning pointed to an exploratory level of research. The included studies used explorative qualitative research methods, and one case study indicates that academic interrogation in e-learning is not yet at the theory development level and that most research is just scratching the surface. The data analysis methods in social media and e-learning qualitative research were also at the line-by-line analysis level and not yet at the synthesis level. The studies included in this metasynthesis mostly used wikis, Facebook, and blogs as the social media to enhance e-learning.

Insights from the Meta-Theory Analysis

The meta-theory analysis created the context in which the implications of a range of theoretical approaches impacted the body of knowledge. Each primary qualitative article was studied individually and comprehensively for demographic and theoretical context, in order to understand the different ways in which researchers obtained their findings on social media in e-learning over the last decade. Research on e-learning and social media use is currently still at a social learning theoretical level, without the use of educational pedagogies and conceptual frameworks. An alarming fact was that most of the studies in this synthesis referred to social constructivism but did not interrogate the use of educational theories in e-learning beyond a sentence or two in the literature review. In the context of these theoretically poor underpinnings in the qualitative research, we did not interpret any particular conflict between the studies, but we did arrive at a comprehensive theory for the use of social media in e-learning which could assist users of e-learning with a framework for decision-making as to the nature of the social media and their fitness for purpose in the various modules or courses.

Synthesizing Insights

Seven distinct themes emerged from the data, including idea generation, idea organization and knowledge building, as described in the online collaborative learning (OCL) model displayed in Figure 1.
Emerging from this metasynthesis are the themes of social learning, deep learning, student support, and safe environment depicted in Figure 4. The metasynthesis indicates that for e-learning with social media to be successful, all teaching and learning efforts must be anchored in student support, since student support is the foundation of any learning and especially of e-learning. The metasynthesis further indicates that for effective and deep learning to take place in higher education, students must be guided in blogging and in using wikis for the co-construction of knowledge. Students must be able to generate ideas, because these ideas are required for engaging in group discussion, brainstorming, and articulating views related to knowledge issues in their respective disciplines (see Figure 4).

Secondly, the ideas must be organized and refined by the students through the processes of problem-solving and making sense of academic content. Students start organizing, analyzing and filtering ideas through agreement or disagreement with others in the group. Input from the facilitator might be needed as a form of moderation and analysis to cluster ideas into meaningful units for knowledge building. Analytical skills are needed in this process.

Thirdly, knowledge creation, displayed in Figure 4, must be facilitated through social learning strategies such as real-life examples, collaboration, and constructivist pedagogy. Students must be supported by interaction, guidance, and clear learning outcomes. Intellectual processes must take place, such as discussion and analysis of information through a process of convergence and synthesis of concepts to create knowledge. Intellectual convergence is also characterized by agreements and disagreements, with the final product being a piece of knowledge created by a group of people, students or colleagues. In this process the group members move towards consensus in knowledge creation. For intellectual convergence and consensus on academic knowledge creation, the students or group members need a safe and secure environment to work in. Social media tools are important in the learning process: central to collaborative learning and knowledge building is the need for a shared space for discourse and interaction. Therefore, higher education institutions that are moving towards adopting social media to facilitate knowledge creation must ensure that the participants can do so in a safe and supportive environment (see Figure 4).

Lastly, and perhaps most importantly, there is the student support factor, which emerged from the data coupled with communities of practice or communities of inquiry in e-learning. A well-known feature of ODL is the distance between the student and the lecturer, the institution, and the other students. With e-learning and the use of social media, this loneliness could be transformed into communities of practice (CoP) or communities of inquiry (CoI). Social commitment, interactions, and friendships form the glue of all communities of practice and motivate active and regular member participation. Social and academic discourse in e-learning could be a mechanism for participation and knowledge building, which is part of constructivism and important for deep learning.
For the framework for using social media in e-learning, as displayed in Figure 4, to be effective, social engagement is required, in which members of the academic module or course demonstrate commitment and active participation in a safe and trusting environment. Among other things, Wenger, McDermott, and Snyder (as cited in Hartnell-Young and Morriss, 2007) suggested that any learning community must interact with other communities of practice in a purposeful way. The traditional isolation of the teacher must change to a more collegial approach to learning and communication, as knowledge sharing is now possible. Interaction with the online community of practice provides teachers and educators with a global perspective, as people from many countries communicate without ever meeting one another in person.

Implications for e-Learning

The metasynthesis indicates that most qualitative researchers in e-learning and the use of social media are still at the entry level of qualitative research, and that interviews, rather than electronic resources, are used to collect data. It is evident that e-learning in higher education is pointing towards a more integrative (collaborative), constructivist and supportive approach to learning in the digital age, as illustrated in Figure 1 (the OCL theory; Harasim, 2012) and Figure 4. The conclusion of this metasynthesis is deemed to be valuable and applicable for planning and managing e-learning using social media in higher education, as illustrated in Figure 4. If educators in higher education and ODL really want deep learning for their students in e-learning environments, it is important to plan interactions and strategies for using social media in e-learning by using the framework in Figure 4. The framework is based on the work of Harasim (2012) concerning online collaborative learning and is aimed at ensuring that students are supported, safe, and connected in their learning journey. Inasmuch as social media has significantly reduced the gap between learners, their peers, and their teachers, we should not forget that there are still important things that students miss in e-learning.

The use of wikis and blogs in e-learning could facilitate social learning, peer review and the co-creation of knowledge. Idea organization could be facilitated through the community of practice by using discussion forums, blogs or wikis for students to refine ideas, by means of the OCL theory as described by Harasim (2012) and the emerging themes from this metasynthesis. There are positive signs that these social networking tools will enable learning environments that are more personal, participatory and collaborative. For deep learning to occur, we have to consider careful planning in e-learning (McLoughlin & Lee, 2007). Intellectual convergence or knowledge construction happens when students form collaborative online groups to facilitate communities of practice in which they can refine ideas. As shown by the concepts that emerged during the meta-data analysis and illustrated in Figures 1 and 4, social media facilitate communication among learners and between learners and the teacher by forming social links among the participants. Once social networks such as CoPs are formed, knowledge is shared, which leads to learning and the creation of new knowledge through collaborative activities. If students could use social media within the CoPs and feel safe and supported to build knowledge, the social learning could facilitate deep learning.
Conclusion

The findings showed that research on the use of social media still lacks important empirical data. The proposed framework could be useful to instructional designers who are interested in using modern learning theories and who want to adopt social media in e-learning in higher education as a deep learning strategy. In conclusion, through this metasynthesis a conceptual framework was developed for using social media such as blogs and wikis for idea generation and problem-solving through discussions, and Skype or Google Hangout, Facebook, and even mobile apps such as WhatsApp to organize and co-create knowledge. This can only happen in a safe and supportive e-learning environment where deep learning is facilitated through building trust in one another and in the learning process within a CoP. In a CoP, online students can refer to, talk about, discuss and validate academic issues and co-create knowledge constructively. Students in e-learning can develop into individuals who are technologically skilled for the digital age and who find meaning in learning through e-learning with technologies and social media.
Effects of Prebiotic and Synbiotic Supplementation on Glycaemia and Lipid Profile in Type 2 Diabetes: A Meta-Analysis of Randomized Controlled Trials

Purpose: Type 2 Diabetes Mellitus (T2DM), as a chronic disease, is on the rise in parallel with other non-communicable diseases. Several studies have shown that probiotics and prebiotics might exert beneficial effects in chronic diseases including diabetes. Because of controversial results from different trials, the present study aims to assess the effects of prebiotic/synbiotic consumption on metabolic parameters in patients with type 2 diabetes. Methods: A systematic literature search was performed for randomized controlled trials published in PubMed/Medline, SciVerse Scopus, Google Scholar, SID and Magiran up to March 2018. Of a total of 255 studies found in the initial literature search, ten randomized controlled trials were included in the meta-analysis. The pooled mean net changes were calculated for fasting blood glucose [FBG], hemoglobin A1c [HbA1c] and lipid markers (total cholesterol [TC], triglyceride [TG], low-density lipoprotein cholesterol [LDL-C], high-density lipoprotein cholesterol [HDL-C]). The meta-analysis was conducted using RevMan software (v5.3). Results: The pooled estimate indicated a significant difference in the mean change in FBG, HbA1c and HDL in the treatment group in comparison with the control group. Subgroup analysis by intervention showed a significant difference in TG, LDL and HDL (synbiotic group) and in TG, TC, FBG, HDL and HbA1c (prebiotic group) compared with placebo. In another subgroup analysis, high quality studies showed significant reductions in TG, TC, FBG and HbA1c in the intervention group compared with the placebo group. Conclusion: In summary, diets supplemented with either prebiotics or synbiotics can result in improvements in lipid metabolism and glucose homeostasis in type 2 diabetic patients.

Introduction

Type 2 Diabetes Mellitus (T2DM), as a chronic disease, is on the rise in parallel with other non-communicable diseases, not only in adults but also in children and adolescents worldwide. 1 190 million people were diabetic in 2008 and, according to estimates, this number will reach 366 million in 2030. 2 Both host genetics and environmental factors are clearly associated with the onset of T2DM. 3 Epidemiological studies have revealed that there is a positive relation between high blood glucose levels (glycemia), lipid abnormalities and cardiovascular diseases. 4 Beyond the generally acknowledged idea that genetic factors play an important part in diabetes susceptibility, growing evidence has shown that some variables, such as chemicals and diet, can affect diabetes development. Increasing evidence indicates that the gut microbiota is strongly associated with type 2 diabetes development. 5 As compared to non-diabetic subjects, diabetic subjects show a decrease in butyrate-producing bacteria such as Roseburia intestinalis and increases in Lactobacillus gasseri and some Clostridium microorganisms. Moreover, increased expression of microbiota genes involved in oxidative stress and inflammation has been observed in diabetic patients. 6 Probiotics, prebiotics and synbiotics may alter the gut microbiota and stabilize microbial communities. Probiotics are defined as live microorganisms that can exert health effects on the host when administered adequately; they were first described by Metchnikoff in 1908. 3,7 Probiotics have a pivotal role in the host's general health. 3
These products can be used as anti-diabetic agents, since various studies have shown their possible ability to improve glucose homeostasis and delay the progression of diabetes in animal models. [8][9][10][11] A prebiotic is a non-digestible food component that selectively stimulates the activity or growth of a limited number of probiotic bacteria in the colon, especially, but not exclusively, lactobacilli and bifidobacteria. 12 Manipulation of the gut microbiota through prebiotic consumption can exert metabolic health benefits in high-risk individuals. 13 A synbiotic is a combination of probiotics and prebiotics which promotes the host's metabolic health by selective growth stimulation and activation of healthy microorganisms. A synbiotic is more than a simple mixture of probiotics and prebiotics: the synergistic effect of the two components makes it a more effective supplement than either a probiotic or a prebiotic alone. 14 Several studies suggest positive effects of synbiotics on the blood lipid profile, 4,15,16 while some other studies have failed to prove the positive effects of probiotics, as a part of synbiotics, on cholesterol. 17,18 Furthermore, it has been observed that synbiotics might improve fasting blood glucose (FBG), insulin levels, and the homeostasis model assessment of insulin resistance (HOMA-IR). 15 RCTs evaluating the effects of prebiotics alone or in combination with probiotics have yielded controversial results. Therefore, there is a need for a study to provide a comprehensive conclusion on the effects of prebiotic/synbiotic supplementation in diabetic patients. The present study aims to evaluate whether prebiotic/synbiotic consumption can beneficially affect metabolic parameters, including glycemic status and lipid profile, in patients with type 2 diabetes in comparison with placebo.

Materials and Methods

The current meta-analysis was undertaken in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement for systematic reviews and interventional research. 19

Data Sources and Search Strategies

A systematic search was conducted in the following electronic databases: PubMed/Medline®, SciVerse Scopus®, Google Scholar, SID® and Magiran®, in order to identify randomized controlled trials (RCTs) of the effects of synbiotic and prebiotic supplementation on lipid profile and glycaemia in patients with DM. These databases were searched up to March 2018. The applied keywords included: (prebiotic OR synbiotic OR symbiotic OR fructooligosaccharide OR fructo-oligosaccharide OR galactooligosaccharide OR galacto-oligosaccharide OR inulin OR lactulose OR FOS OR GOS OR oligofructose) AND (cholesterol OR "plasma lipids" OR triglycerides OR TG OR HDL-c OR LDL-c OR "serum lipids" OR FBS OR FBG OR "fasting blood glucose" OR HbA1c). The search strategy was implemented according to each database's conventions using Boolean operators (OR and AND), parentheses and quotation marks. Quotation marks were used to search for exact terms or expressions; parentheses were used to represent a group of search words, or to combine two categories of search words, so as to capture all probable combinations of terms.
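As an illustration only (this is not the authors' code, and the term lists below are abbreviated from the full strategy), a Boolean query of the kind described above can be assembled and inspected programmatically before being pasted into a database interface:

```python
# Minimal sketch: assemble a Boolean search string of the form
# (intervention terms) AND (outcome terms), as described in the text.
intervention_terms = [
    "prebiotic", "synbiotic", "symbiotic", "fructooligosaccharide",
    "galactooligosaccharide", "inulin", "lactulose", "FOS", "GOS",
]
outcome_terms = [
    "cholesterol", '"plasma lipids"', "triglycerides", "TG",
    "HDL-c", "LDL-c", '"fasting blood glucose"', "FBG", "HbA1c",
]

def or_group(terms):
    """Join a list of terms with OR and wrap it in parentheses."""
    return "(" + " OR ".join(terms) + ")"

# AND combines the two OR-groups, mirroring the strategy in the text.
query = or_group(intervention_terms) + " AND " + or_group(outcome_terms)
print(query)
```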
Study Selection

Studies had to meet the following inclusion criteria to enter this meta-analysis: a controlled clinical trial in humans that included a synbiotic or prebiotic intervention, in the form of either a supplement or an enriched food, and evaluated at least one of the following outcomes: TG, TC, LDL-c, HDL-c, FBG and HbA1c. In addition, only human RCTs published in English or Persian were used in the meta-analysis, whereas animal/molecular, observational, preclinical and duplicate studies, commentaries, case reports or series, conference proceedings, editorials, and book chapters/reviews were excluded.

Data Extraction and Quality Assessment

Data were extracted from qualified papers by two independent authors (F.R and S.M) using predefined protocols and cross-checked. Any divergence of opinion was resolved by consulting a third reviewer (S.J). The following data were extracted from the selected articles: year of publication, region (country), sample size, age, sex, follow-up duration, study design, type of supplement consumed (prebiotic, synbiotic or placebo), dose of synbiotic or prebiotic consumed, method of synbiotic/prebiotic delivery, clinical condition, and mean changes in metabolic indices. All the above-mentioned data were arranged in a Microsoft Office Excel® 2013 document (Microsoft Corporation, Washington, USA). The Jadad scale was used to assess the methodological quality of the included clinical trials. Jadad scores range from 0 (very low) to 5 (very high) based on three distinct domains: randomization, double blinding, and follow-up. This scale assigns 1 point for mentioning randomization in the text, 1 point for mentioning blinding in the text, 1 point for a proper description of the fate of all subjects, 1 point if the randomization method was appropriate (−1 if inappropriate), and 1 point if the double blinding was appropriate (−1 if inappropriate). 20

Quantitative Data Synthesis

The meta-analysis was conducted using Review Manager software (Version 5.3; Oxford, England). Changes in metabolic factors from baseline to the final time point of each RCT were calculated as mean differences (MD) with 95% confidence intervals (CIs). All values were collated in mg/dL or mmol/L. Mean net changes and standard deviations in metabolic indices including TC, TG, LDL-c, HDL-c, HbA1c and FBG were calculated for all studies. The conversion factors for cholesterol (comprising HDL-c, LDL-c and TC), TG and FBG were 1 mmol/L = 38.66 mg/dL, 1 mmol/L = 88.57 mg/dL and 1 mmol/L = 18 mg/dL, respectively. To assess the degree of inconsistency across studies due to heterogeneity, the I² statistic was used, and either fixed- or random-effects models were applied according to the findings. An I² value larger than 50% reflects moderate to high heterogeneity. To clarify the influence of study characteristics, pre-specified subgroup analyses were conducted based on the Cochrane handbook. We assessed publication bias by visual inspection of funnel plots. An asymmetric funnel plot can be indicative of publication bias. Moreover, Egger's weighted regression test and Begg's rank correlation test were used to examine possible bias. A P-value of less than 0.05 was considered statistically significant.

Results

Study Selection

A flow chart of the literature search and selection is presented in Figure 1. In our initial search, 255 potentially relevant articles were identified. Of these, 8 were excluded because they were review articles.
Fifteen were excluded because they were not available in either English or Persian. Moreover, 146 studies were excluded after screening of titles and abstracts due to irrelevance, and 56 potentially eligible articles were left for full-text assessment. Of these 56 studies, 46 were excluded because they were preclinical studies, lacked characterization of the subjects, reported data inadequately, provided insufficient data on placebo groups, or used outcome measures other than lipid and glycemic indices. Finally, a total of 10 RCTs were included in the present meta-analysis (Figure 1). Table 1 shows the characteristics of the included studies. These studies were all RCTs published up to March 2018. A total of 506 participants (251 subjects in the intervention group and 255 subjects in the control group) were reanalyzed in this study. The age of participants in the trials varied from 20 to 70 years. Duration of intervention varied from 4 to 12 weeks. Four studies 4,15,16,21 used a synbiotic and six studies 22-27 used a prebiotic as the intervention. Based on several previous meta-analyses, which classified studies with a Jadad score of more than 3 as high quality, [28][29][30] seven studies were classified as high quality 16,21-25,27 and the remaining three 4,15,26 as low quality. The present systematic review and meta-analysis summarizes data from 10 RCTs including a total of 506 participants. Our findings support the idea that prebiotic supplementation may improve some blood lipid factors and glycemic control in type 2 diabetic patients. In general, the findings are consistent with the results of most individual studies; of the 10 included studies, 8 reported some beneficial effects of prebiotics/synbiotics on glycaemia and lipid profile. 4,15,16,[22][23][24]27,31 In recent years, a considerable number of studies have been conducted with a focus on the probable beneficial effects of prebiotics or synbiotics on the metabolic profile in different target groups. There are few systematic reviews investigating the effects of synbiotic and/or prebiotic supplements on metabolic parameters in diabetic and/or overweight subjects, and the lack of subgroup analyses is considered a limitation of those reviews. 32,33 Therefore, our study is the first comprehensive meta-analysis evaluating whether synbiotic/prebiotic supplementation has favorable effects on metabolic indices in diabetic patients based on both intervention and study quality analyses.

The Effects of Intervention on Blood Glucose and Lipid Concentrations

Since different units were used for the applied indices in the included trials, they were transformed to a single unit (mg/dL) for TG, TC, LDL-c, HDL-c and FBG. As there was significant heterogeneity among studies for the mean change of most indicators (except for HDL-c), the random-effects model was used for pooling data (Figure 2).

Publication Bias

The funnel plot test was conducted to evaluate potential publication bias in the present meta-analysis. We assessed publication bias by examining funnel plots of the effects of prebiotics/synbiotics on HDL and LDL. Symmetrical funnel plots suggested that there is no publication bias (Figure 3). The absence of publication bias was confirmed by Egger's linear regression for LDL (intercept: 1.5; standard error: 5.29; 95% CI: −11.4, 14.4; t = 0.28, df = 6; two-tailed p = 0.78).
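For readers who want to reproduce this kind of check, a minimal sketch of Egger's regression asymmetry test is given below (standard scientific-Python tools; the function name and input arrays are illustrative, and the numbers quoted above come from the original analysis, not from this sketch):

```python
import numpy as np
from scipy import stats

def eggers_test(effects, ses):
    """Egger's regression asymmetry test.

    Regress the standard normal deviate (effect / SE) on precision
    (1 / SE); an intercept significantly different from zero suggests
    funnel-plot asymmetry (possible publication bias).
    """
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    snd = effects / ses          # standard normal deviate
    precision = 1.0 / ses
    res = stats.linregress(precision, snd)
    # res.intercept is Egger's bias estimate; compute its standard
    # error from the regression residuals to get a two-tailed p-value.
    n = len(effects)
    resid = snd - (res.slope * precision + res.intercept)
    s2 = np.sum(resid**2) / (n - 2)
    sxx = np.sum((precision - precision.mean())**2)
    se_int = np.sqrt(s2 * (1.0 / n + precision.mean()**2 / sxx))
    t = res.intercept / se_int
    p = 2 * stats.t.sf(abs(t), df=n - 2)
    return res.intercept, se_int, p
```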
Additionally, publication bias was not apparent by Begg's rank correlation test (Kendall's tau with continuity correction: 0.03; z = 0.12; two-tailed p = 0.9).

Subgroup Analysis

As there was significant heterogeneity among studies, we decided to explore the source of heterogeneity by subgroup analysis. Thus, we performed the analyses based on intervention (prebiotic or synbiotic) and study quality (high quality or low quality studies) (Table 2). The heterogeneity decreased significantly after subgroup analysis, especially for the study quality subgroup. In the subgroup analysis based on intervention, the prebiotic and synbiotic groups showed no significant heterogeneity across the trials with regard to TG/cholesterol and TG/LDL/HDL, respectively (Table 2). On the other hand, subgroup analysis by study quality showed the greatest reduction in heterogeneity: except for LDL and HbA1c, there was no significant heterogeneity across the trials for the other factors (Table 2). Vulevic et al reported that a galactooligosaccharide mixture could reduce markers of metabolic syndrome and modulate immune function in overweight adults. 34 A pilot study demonstrated that prebiotic consumption might beneficially affect insulin levels, with no significant effects on plasma lipids, in patients with non-alcoholic steatohepatitis. 35 Eslamparast and her colleagues reported that synbiotic supplementation can help in the management of metabolic syndrome and insulin resistance. 36 Two other studies also suggested protective effects of prebiotics in patients with prediabetes. 13,37 Published meta-analyses in this area are limited in number. A recent meta-analysis was conducted on the effects of prebiotics on glycaemia, insulin concentrations and lipid parameters in overweight and obese adults, and the results showed positive effects of prebiotics and synbiotics on dyslipidemia and insulin resistance. 32 Another systematic review evaluated the metabolic benefits of prebiotics in human subjects; the results indicated that prebiotic consumption is associated with improved self-reported feelings of satiety along with reduced postprandial glucose and insulin concentrations. 38 To the best of our knowledge, the present study is the first to systematically evaluate the effects of prebiotic consumption on glycaemia and lipid profile in T2DM patients. In the present study, significant heterogeneity was found among individual studies for the target indicators (except for HDL-c). Two subgroup analyses were conducted, based on intervention type (prebiotic or synbiotic) and study quality (high quality vs. low quality studies). After the subgroup analysis by intervention, the prebiotic subgroup showed no significant heterogeneity in TG and TC. The heterogeneity in TG, LDL and HDL was removed after the subgroup analysis based on the synbiotic intervention. In any case, the quality of studies was shown to be the most important source of heterogeneity. In high quality studies, the intervention group showed significant reductions in TG, TC, FBG and HbA1c, while in low quality studies, the intervention group had only significant improvements in HDL-c. There are controversial results on the efficacy of prebiotics/synbiotics in improving the lipid profile and glycemic indices. Increased enteroendocrine cell activity, improved glucose homeostasis and modulation of the gut microbiota by intake of prebiotics, especially FOS, have been shown in prior studies. 39,40
On the other hand, some studies could not find these favorable effects of prebiotics; they showed no significant effects on glycemic and lipid indices, especially lipid profiles, in diabetic participants. 41,42 These controversial findings, and of course the significant heterogeneity reported for our included studies, might be a result of different probiotic strains and prebiotic types, administration dosages, clinical characteristics of participants, durations of intervention, or lack of appropriate controls or placebo. 43 Our study supports the idea that prebiotic/synbiotic consumption contributes to positive effects on blood lipid fractions; several mechanisms have been proposed to explain this relationship. Inulin-type fructans reduce the de novo synthesis of fatty acids in the liver, thus resulting in decreased levels of serum or liver TG. 44 The bacterial fermentation of non-digestible oligosaccharides (NDOs) in the GI tract leads to the formation of short-chain fatty acids (SCFAs), including propionate, butyrate and acetate, in different ratios depending on the substrate type. 45 3-Hydroxy-3-methylglutaryl-CoA (HMG-CoA) reductase is a key enzyme in cholesterol synthesis; by inhibiting its activity, propionate might play a role in serum cholesterol reduction. 46 Probiotics can also reduce intestinal cholesterol absorption, accompanied by its increased fecal excretion. 43 In this meta-analysis, prebiotics/synbiotics showed promising effects on glucose homeostasis. Studies have explained the underlying mechanisms: soluble fibers can delay gastric emptying, retard the entry of glucose into the bloodstream, and decrease the postprandial rise of serum glucose. In addition, soluble fibers modify the secretion of GLP-1, a gut hormone engaged in glucose metabolism; they also lead to SCFA production and therefore may affect serum glucose and insulin levels. 27 On the whole, probiotics and prebiotics are safe products. However, high doses of prebiotics increase the risk of bloating, flatulence and GI discomfort, which might vary widely from person to person depending on the type of food. 47 Our study has some basic limitations. Using the Q statistic and I², the included studies showed significant heterogeneity. Subgroup analyses were conducted to detect the source of heterogeneity; however, such heterogeneity still remained in most subgroups, except for quality of studies. One limitation of the meta-analysis is that some of the included studies are not independent: seven of the ten studies are from the same country (Iran). They are different publications, but the data seem to originate from the same groups of subjects. Another limitation of the present meta-analysis is the fact that no trials with T1DM patients were included. Therefore, the findings and their interpretations are limited to T2DM patients. Clinical heterogeneity between studies can lead to statistical heterogeneity in their results. In addition, this meta-analysis indicated possible publication bias for LDL but not for HDL. This may be because we included studies that were conducted in the same population (country and geographical region). Publication bias has been reported in several large meta-analyses published in major medical journals; significant and positive results are more likely to be published, and this is the main reason for such reported bias. Our meta-analysis included some methodologically low quality studies, which is another key source of bias.
Since smaller studies need larger treatment effects to be published, they are more prone to such biases. In the subgroup analyses conducted based on study quality, stronger beneficial effects were found in the treatment group in comparison with the control group. Based on this finding, we can conclude that either heterogeneity or a true treatment effect could underlie the apparent publication bias.

Conclusion

In conclusion, our meta-analysis found that diets supplemented with either prebiotics or synbiotics can result in improvements in lipid metabolism and glucose homeostasis in patients with T2DM. Even though the overall analysis did not show significant changes for TC and LDL-c, subgroup analyses revealed more noticeable changes in these markers. Considering the limitations of the individual trials, prebiotics/synbiotics cannot be prescribed as an alternative medicine for T2DM, but these patients might benefit from these components as a complement to medication and lifestyle modifications. More research with larger sample sizes is suggested to determine the effective and safe dose, duration and best combinations of probiotics and prebiotics to reach the maximum positive effect.

Ethical Issues

Not applicable.

Conflict of Interest

No potential conflict of interest relevant to this article was reported.
Quasi-cluster algebras from non-orientable surfaces

With any (not necessarily orientable) unpunctured marked surface (S, M) we associate a commutative algebra, called a quasi-cluster algebra, equipped with a distinguished set of generators, called quasi-cluster variables, in bijection with the set of arcs and one-sided simple closed curves in (S, M). Quasi-cluster variables are naturally gathered into possibly overlapping sets of fixed cardinality, called quasi-clusters, corresponding to maximal non-intersecting families of arcs and one-sided simple closed curves in (S, M). If the surface S is orientable, then the quasi-cluster algebra is the cluster algebra associated with the marked surface (S, M) in the sense of Fomin, Shapiro and Thurston. We classify quasi-cluster algebras with finitely many quasi-cluster variables and prove that for these quasi-cluster algebras, quasi-cluster monomials form a linear basis. Finally, we attach to (S, M) a family of discrete integrable systems satisfied by the quasi-cluster variables associated to arcs in the quasi-cluster algebra, and we prove that solutions of these systems can be expressed in terms of cluster variables of type A.

Introduction

Cluster algebras were initially introduced by Fomin and Zelevinsky in order to study total positivity and dual canonical bases in algebraic groups [FZ02]. Since then, cluster structures have appeared in various areas of mathematics like Lie theory, combinatorics, representation theory, mathematical physics and Teichmüller theory. The deepest connections between cluster structures and Teichmüller theory are found in the work of Fock and Goncharov [FG06]. This latter work led Fomin, Shapiro and Thurston to introduce a particular class of cluster algebras, called cluster algebras from surfaces [FST08]. Such a cluster algebra A(S,M) is associated to a so-called marked surface (S, M), that is, a 2-dimensional oriented Riemann surface S with a set M of marked points. These cluster algebras carry a rich combinatorial structure which has been studied in detail, see for instance [MSW09, CCS06, BZ10]. Moreover, it turns out that these combinatorial structures actually reflect geometric properties of the surfaces at the level of the corresponding decorated Teichmüller space in the following sense: cluster variables in A(S,M) correspond to λ-lengths of arcs in (S, M), and relations between these cluster variables correspond to geometric relations between the corresponding λ-lengths, see [FT08] or [GSV10, Section 6.2]. Therefore, the framework of cluster algebras provides a combinatorial framework for studying the Teichmüller theory associated to the marked surface (S, M). A key ingredient in the construction of A(S,M) by Fomin, Shapiro and Thurston is the orientability of the surface S. If it is not orientable, then it is in fact not possible to define an exchange matrix, and thus an initial seed, for the expected cluster algebra. However, relations between λ-lengths of arcs in (S, M) can still be described. Using this approach, we associate to any 2-dimensional Riemann marked surface (S, M), orientable or not, and without punctures, a commutative algebra A(S,M). This algebra is endowed with a distinguished set of generators, called quasi-cluster variables, gathered into possibly overlapping sets of fixed cardinality, called quasi-clusters, defined by a recursive process called quasi-mutation.
In this context, the set of quasi-cluster variables is in bijection with the set of arcs and one-sided simple closed curves in (S, M). The quasi-clusters correspond to maximal collections of arcs and simple one-sided closed curves without intersections, referred to as quasi-triangulations, and the notion of quasi-mutation generalises the classical notion of flip (sometimes called a Whitehead move) of a triangulation. As in the orientable case, the algebra A(S,M) imitates the relations between the λ-lengths of the corresponding curves on the decorated Teichmüller space. And if the surface (S, M) is orientable, then the quasi-cluster algebra A(S,M) coincides with the usual cluster algebra associated to the choice of any orientation of (S, M). We initiate a systematic study of these algebras in the spirit of the study of cluster algebras arising from surfaces. In order to enrich the structure of the quasi-cluster algebra, we first establish numerous identities between the λ-lengths of curves in any marked surface. In particular, Theorem 3.3 proves analogues of the so-called "skein relations" for arbitrary curves in a not necessarily orientable marked surface, see also [MW] for an alternative approach in the orientable case. We prove that if (S, M) is non-orientable, then the structure of A(S,M) can be partially studied through the classical cluster algebra associated with the double cover of (S, M). However, not all of the structure of A(S,M) is encoded in this double cover, and A(S,M) provides a new combinatorial setup. We prove in Theorem 6.2 that the quasi-cluster algebras with finitely many quasi-cluster variables are those associated either with a disc or with a Möbius strip with marked points on the boundary. In this case, we prove a non-orientable analogue of a classical result of Caldero and Keller [CK08] (see also [MSW]) stating that the set of monomials in quasi-cluster variables all belonging to the same quasi-cluster forms a linear basis in a quasi-cluster algebra of finite type (Theorem 6.5). Finally, with any unpunctured marked surface (S, M), we associate in a uniform way a family of discrete integrable systems satisfied by the quasi-cluster variables corresponding to arcs in (S, M). This construction does not depend on the orientability of the surface and allows one to realise quasi-cluster variables in any quasi-cluster algebra A(S,M) associated to a marked surface (S, M) as analogues of cluster variables of type A.

1. Preliminaries

1.1. Bordered surfaces with marked points. In [FST08], Fomin, Shapiro and Thurston defined the notion of a bordered surface with marked points (S, M), where S is a 2-dimensional Riemann surface with boundary. Implicitly in their definition, the surface S is orientable. We extend the definition to include non-orientable surfaces as well. Recall that a closed (i.e. without boundary or punctures) non-orientable surface is homeomorphic to a connected sum of k projective planes RP². The number k is called the non-orientable genus of the surface, or simply the genus when no confusion arises. A classical result states that the connected sum of a closed non-orientable surface of genus k with a closed orientable surface of genus g is homeomorphic to a closed non-orientable surface of genus 2g + k, see [Mas77]. The Euler characteristic of a non-orientable surface S of genus k is given by χ(S) = 2 − k. Let S be a 2-dimensional manifold with boundary ∂S.
Fix a non-empty set M of marked points in the closure of S, so that there is at least one marked point on each connected component of ∂S. Marked points in the interior of S are called punctures. Up to homeomorphism, (S, M) is defined by the following data:
• the orientability of the manifold S;
• the genus g of the manifold;
• the number n of boundary components;
• the integer partition (b₁, . . . , bₙ) giving the number of marked points on each boundary component;
• the number p of punctures.
In the rest of this article, we will only deal with unpunctured surfaces, namely p = 0. We also want to exclude trivial cases where (S, M) does not admit any triangulation by a non-empty set of arcs with endpoints in M; consequently we do not allow (S, M) to be an unpunctured monogon, digon or triangle.

1.2. Quasi-arcs. In non-orientable surfaces, the closed curves are classified into two disjoint sets that will play an important role in this article.

Definition 1.1. A closed curve on S is said to be two-sided if it admits a regular neighborhood which is orientable. Otherwise it is said to be one-sided.

Any one-sided curve reverses the local orientation. Hence a surface contains a one-sided curve if and only if the surface is non-orientable. In the orientable case, we do not need to worry about such curves. An arc is a simple two-sided curve in (S, M) joining two marked points. We denote by A(S, M) the set of arcs in (S, M). A quasi-arc in (S, M) is either an arc or a simple one-sided closed curve in the interior of S. We denote by A⊗(S, M) the set of quasi-arcs in (S, M). Note that if (S, M) is orientable, then A⊗(S, M) = A(S, M). We denote by B(S, M) the set of connected components of ∂S \ M, which we call boundary segments. To draw non-orientable surfaces, we use the identification of RP² as the quotient of the unit sphere S² ⊂ R³ by the antipodal map. When cutting the sphere along the equator, we see that the projective plane is homeomorphic to a closed disc with opposite points on the boundary identified, which is called a crosscap. Hence a closed non-orientable surface of genus k is identified with a sphere where k open discs have been removed and the opposite points of each boundary component identified. A crosscap is represented as a circle with a cross inside, see Figure 1.

1.3. Decorated Teichmüller space. The classical definitions of Teichmüller spaces and decorated Teichmüller spaces can easily be extended to include non-orientable surfaces as well, see [Pen87] for a complete exposition. Fix a hyperbolic structure on S, namely an element in T(S, M). For any curve c joining two punctures, there is a unique geodesic in its homotopy class. We call this element the geodesic representative of c, and by a slight abuse of notation, we will also denote it by c. Likewise, every closed curve can be represented by a unique geodesic representative on the surface. Recall that any element of the fundamental group π₁(S) gives rise to the homotopy class of a closed curve, and hence to a geodesic representative. Given a decorated hyperbolic structure on (S, M), we recall the definition of Penner's λ-lengths of a decorated ideal arc, and extend it to closed curves.
• Let a be a decorated ideal arc in A(S, M) or in B(S, M). The λ-length of a is defined as λ_σ(a) = exp(l(a)/2), where l(a) is the signed hyperbolic distance along a between the two horocycles at either end of a.
• Let b be a two-sided closed curve.
The λ-length of b is defined as λ_σ(b) = 2 cosh(l(b)/2), where l(b) is the hyperbolic length of the geodesic representative of b.
• Let d be a one-sided closed curve. The λ-length of d is defined as λ_σ(d) = 2 sinh(l(d)/2), where l(d) is the hyperbolic length of the geodesic representative of d.
Remark that this definition does not require the arcs or curves to be simple. In fact we can extend this definition to a finite union of arcs and closed curves.

Definition 1.5. A multigeodesic α is a multiset based on the set {a₁, . . . , aₙ}, where each aᵢ is an ideal arc or any closed curve; each element of the set has a multiplicity mᵢ. The λ-length of such a multigeodesic is given by:

λ_σ(α) = ∏ᵢ λ_σ(aᵢ)^{mᵢ}.

For a given multigeodesic α, one can view the λ-length as a positive function on the decorated Teichmüller space T(S, M) in the following sense. Let σ ∈ T(S, M). The holonomy map ρ_σ of the underlying hyperbolic structure σ ∈ T(S, M) defines a homeomorphism from T(S, M) to a connected component of the moduli space

Hom(π₁(S), G)/G,

where G is the group PGL(2, R) of isometries of the hyperbolic plane. The set Hom(π₁(S), G) is the set of morphisms ρ : π₁(S) → G, and the G-action is by conjugation. Hence any decorated hyperbolic structure σ ∈ T(S, M) gives rise to a conjugacy class of representations [ρ_σ] : π₁(S) → G. For any element of PGL(2, R), the absolute value of the trace is well-defined. Let b be a closed curve (one- or two-sided) corresponding to an element b ∈ π₁(S, M). Let σ ∈ T(S, M) be a hyperbolic structure and ρ_σ be a representative of the conjugacy class [ρ_σ]. The trace is invariant under conjugation, and hence the value of |tr(ρ_σ(b))| is well-defined and does not depend on the choice of ρ_σ. Moreover, ρ_σ(b) is hyperbolic, and a classical result in hyperbolic geometry states that λ_σ(b) = |tr(ρ_σ(b))|.

2. Quasi-cluster complexes associated with non-orientable surfaces

Let (S, M) be a bordered marked surface without punctures, orientable or not. Two elements in A⊗(S, M) are called compatible if they are distinct and do not intersect each other.

Definition 2.1. A quasi-triangulation of (S, M) is a maximal collection of compatible elements in A⊗(S, M). A quasi-triangulation is called a triangulation if it consists only of elements in A(S, M).

Proposition 2.2. Let T ∈ T⊗(S, M) be a quasi-triangulation. Then T cuts S into a finite union of triangles and annuli with one marked point. The number of annuli is the number of one-sided curves in the quasi-triangulation T.

Proof. Cut the surface S open along all arcs and curves of T. This splits the surface into a finite union of connected components. Let K be one of these components. Then K is bordered by at least one boundary component which has at least one marked point. As T is a maximal set of arcs and curves, K does not have any interior quasi-arc. First, we notice that K cannot be non-orientable. Indeed, non-orientability would imply that there exists a one-sided simple closed curve in K, which would be a non-trivial interior quasi-arc. Assume that K has only one boundary component ∂K. Let m be the number of marked points on ∂K. If m = 1 then the boundary arc is trivial, which is excluded. If m = 2 then the two boundary arcs are homotopic, which is excluded. And if m ≥ 4, then K would admit non-trivial interior arcs as diagonals of the m-gon. Hence, we infer that m = 3 and K is a triangle. Now suppose that K has two boundary components. If both components had marked points, then a curve joining the marked points of the two boundary components would be a non-trivial interior arc of K.
Hence, necessarily one of the boundary components is unmarked. Moreover, if the marked boundary component has more than one marked point, then the non-trivial curve joining one marked point to itself going around the unmarked boundary component is not homotopic to a boundary segment of K, and hence is a non-trivial interior quasi-arc. Finally, K cannot have more than two boundary components, as an arc from the marked point to itself going around one unmarked boundary component but not the other would be a non-trivial interior arc. So K is either a triangle or an annulus with one marked point, which proves the first part of the proposition. For the second part of the proposition, we simply notice that an unmarked boundary component can only be obtained by cutting along a simple closed curve. Hence, the number of annuli is exactly the number of one-sided curves of the quasi-triangulation.

Quasi-mutations.

Definition 2.3. An anti-self-folded triangle is any triangle of a quasi-triangulation with two edges identified by an orientation-reversing isometry.

Proposition 2.4. Let (S, M) be an unpunctured marked surface and let T be a quasi-triangulation of (S, M). Then for any t ∈ T, there exists a unique t′ ∈ A⊗(S, M) such that t′ ≠ t and such that µ_t(T) = T \ {t} ⊔ {t′} is a quasi-triangulation of (S, M).

Proof. If t is an arc separating two different triangles, then this is standard: the two triangles define a quadrilateral with t as a diagonal, and t′ is the unique other diagonal. If t is an arc which is an edge of a single triangle ∆, then ∆ is either a self-folded triangle or an anti-self-folded triangle. As we have excluded punctured surfaces, ∆ is necessarily an anti-self-folded triangle. Denote the third side of ∆ by c. Then c is an arc bounding a Möbius strip N, and t is the only non-trivial arc in N. There is a unique non-trivial simple closed curve t′ in N, corresponding to the core of the Möbius strip. The curve t′ and the arc t intersect once, and hence t′ is the desired element of A⊗(S, M). Similarly, if t is a one-sided simple closed curve, then t lies inside a Möbius strip N bounded by an arc c, and t′ is the only non-trivial arc inside N. If t is an arc separating a triangle from an annulus, then we are in the situation given in Figure 2. The mutation is exactly a quasi-flip in the sense of Penner (see [Pen04]), which gives the uniqueness of the arc t′. Finally, if t is an arc separating two annuli, then necessarily S is a once-punctured Klein bottle, which we have excluded by our hypotheses on (S, M).

Definition 2.5. With the notation of Proposition 2.4, the quasi-triangulation µ_t(T) is called the quasi-mutation of T in the direction t, and the element t′ in A⊗(S, M) is called the quasi-flip of t with respect to T. If both t and t′ are arcs, then µ_t is called a mutation and t′ is called the flip of t with respect to T.

Example 2.6. Figure 3 depicts examples of two quasi-mutations in the Möbius strip M₂ with two marked points. The quasi-mutation µ_c is a mutation whereas the quasi-mutation µ_b is not.

Proposition 2.7. Let (S, M) be a marked surface without punctures. Then the number of elements in a quasi-triangulation does not depend on the choice of the quasi-triangulation and is called the rank of the surface (S, M).

Proof.
For triangulations, this is easily done by a consideration of the Euler characteristic of the surface (see for instance [FG07]): the number of interior arcs in a triangulation of a non-orientable surface of genus k with p punctures and n boundary components carrying respectively b₁, . . . , bₙ marked points is given by

3k + 3n + 3p + b₁ + · · · + bₙ − 6.

For a quasi-triangulation containing one or more one-sided simple closed curves, we can associate to each one-sided curve the unique arc given by Proposition 2.4. Therefore, to each quasi-triangulation corresponds a triangulation which has the same number of elements.

Figure 4. Quasi-mutation at a non-mutable arc in a triangulation.

Example 2.8. For any n ≥ 1, we denote by Mₙ the Möbius strip with n marked points on the boundary. It is a non-orientable surface of rank n.

Corollary 2.9. Let (S, M) be an unpunctured marked surface and let T be a triangulation of (S, M). Then for any t ∈ T, the quasi-mutation of T in the direction t is a mutation if and only if t is not the internal arc of an anti-self-folded triangle in T. In this case, we say that t is mutable with respect to T.

Proof. If t is not the internal arc of an anti-self-folded triangle in T, then removing t from T delimits a quadrilateral Q in which t is a diagonal and t′ is the other diagonal. In particular, t′ is an arc and µ_t is a mutation. Conversely, if t is the internal arc of an anti-self-folded triangle, then there is an arc x in T such that, locally around t, the triangulation looks like the situation depicted in Figure 4. Thus the quasi-flip t′ of t is an element in A⊗(S, M) \ A(S, M), and µ_t is not a mutation.

Definition 2.11. The dual graph of ∆⊗(S, M) is denoted by E⊗(S, M) and is called the quasi-exchange graph of (S, M). Its vertices are the quasi-triangulations of (S, M) and its edges correspond to quasi-mutations. The dual graph of ∆(S, M) is denoted by E(S, M) and is called the exchange graph of (S, M). Its vertices are the triangulations of (S, M) and its edges correspond to mutations.

Proposition 2.12. E⊗(S, M) is a connected n-regular graph.

Proof. According to Proposition 2.4, every element in a quasi-triangulation can be quasi-mutated, and quasi-mutations in distinct directions give rise to distinct quasi-triangulations. It thus follows that E⊗(S, M) is n-regular. Proving that E⊗(S, M) is connected is equivalent to proving that any two quasi-triangulations are connected by a sequence of quasi-mutations. It is well known that two triangulations of (S, M) are related by a sequence of mutations. Now it is enough to observe that each quasi-triangulation T which is not a triangulation can be related to a triangulation by a sequence of quasi-mutations, one at each one-sided curve in T. Therefore, any two quasi-triangulations are related by a sequence of quasi-mutations, which proves the proposition.

The quasi-cluster complex ∆⊗(M₃) is obtained from the cluster complex by adding the unique one-sided curve in M₃ as a vertex of the complex and by connecting the six "external" vertices of the cluster complex to this unique one-sided curve. Therefore, the quasi-exchange graph E⊗(M₃) is a polytope with 22 vertices whose faces are three squares, six pentagons and four hexagons.

Remark 2.15. Note that if (S, M) is not orientable, then E(S, M) is not regular, as can be seen for instance in Figure 5.
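Before moving on, here is a quick sanity check of the arc count used in the proof of Proposition 2.7 (the displayed formula above is our reconstruction from the stated Euler characteristic argument); it recovers the rank of the Möbius strip Mₙ from Example 2.8:

\[
M_n:\quad k = 1,\ \text{one boundary component with } b_1 = n \text{ marked points},\ p = 0
\;\Longrightarrow\;
3\cdot 1 + 3\cdot 1 + 3\cdot 0 + n - 6 = n .
\]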
3. Relations between quasi-arcs

3.1. Hyperbolic geometry in the upper half-plane. We use throughout this paper the upper half-plane model of the hyperbolic plane, endowed with the Riemannian metric

ds² = (dx² + dy²) / y².

Geodesics in H² are given either by circles perpendicular to the real axis or by lines parallel to the imaginary axis. The points of the boundary ∂H² are elements of R ∪ {∞}. The group PGL(2, R) can be defined as the quotient of the group of two-by-two matrices with determinant plus or minus one by the subgroup {±I}. Note that the sign of the determinant is still well-defined on the quotient. It acts on H² by Möbius and anti-Möbius transformations. The group of isometries of the hyperbolic plane is naturally identified with PGL(2, R). An element with determinant one corresponds to an orientation-preserving isometry, and an element with determinant minus one corresponds to an orientation-reversing isometry. A horocycle in the upper half-plane is a Euclidean circle tangent to the real axis, or a horizontal line parallel to the real axis. Hence a horocycle U is defined by its center u ∈ R ∪ {∞} and its diameter h ∈ R_{>0} (for a horocycle centered at ∞, its diameter is the height of the horizontal line), and is denoted U = (u, h). A decorated geodesic is a geodesic joining two points u and v of R ∪ {∞}, together with horocycles U and V centered at u and v; it is denoted by (U, V). For horocycles U = (u, h) and V = (v, k) with distinct centers u, v ∈ R, one can express the λ-length of the decorated geodesic as

λ(U, V) = |u − v| / √(hk).

As shown by Penner [Pen87], we have λ(U, V) = exp(δ/2), where δ is the signed hyperbolic distance between the two horocycles along the geodesic.

3.2. Decorated Teichmüller space. The main purpose of λ-lengths is to provide coordinates on the decorated Teichmüller space of an orientable surface, see [Pen87]. We extend this result to include non-orientable surfaces as well, using quasi-arcs and quasi-triangulations. First we have to settle the case of a Möbius strip with one marked point on the boundary in the following proposition.

Proposition 3.1. For any c, d ∈ R_{>0}, there exists a unique isometry class of triples of horocycles (U, V, W) such that:
(1) λ(U, V) = c;
(2) there exists an orientation-reversing isometry D such that D(U) = W, D(W) = V and |tr(D)| = d.

Proof. Let (U, V, W) be a triple of horocycles. If there is an isometry φ such that φ(U) = W and φ(W) = V, then we have λ(U, W) = λ(W, V). For any a ∈ R_{>0}, there exists a unique isometry class of triples of horocycles (U, V, W) such that λ(U, V) = c and λ(U, W) = λ(W, V) = a. Now let D be the unique orientation-reversing isometry such that D(U) = W and D(W) = V. Up to conjugacy and rescaling, we can assume that D is represented by a matrix with determinant −1. If we denote the three horocycles by U = (u, h), V = (v, k) and W = (w, l), then the conditions D(U) = W and D(W) = V give explicit relations between these parameters, from which a is determined by c and |tr(D)|. We conclude that for any c, d > 0, there exists a unique isometry class of triples of horocycles with λ-lengths (c, c/d, c/d). This isometry class satisfies the property that an orientation-reversing isometry sending one of the sides of length c/d to the other one has trace of absolute value d.

Theorem 3.2. For any quasi-triangulation T ∈ T⊗(S, M), the natural mapping

Λ_T : T(S, M) → R_{>0}^{T ⊔ B(S,M)}, σ ↦ (λ_σ(t))_{t ∈ T ⊔ B(S,M)},

is a homeomorphism.

Proof. For an orientable surface with boundary, this is the classical result of Penner on coordinates for the decorated Teichmüller space [Pen04]. If (S, M) is a non-orientable surface and T is a triangulation (without one-sided closed curves), then this theorem is a straightforward generalisation of Penner's result. We give here the argument that differs, and we refer to [Pen87] for the sake of completeness. Recall that the idea of the original proof is to produce an inverse for the map Λ_T.
So suppose there is a positive real number assigned to each arc in a triangulation T. From the triangulation of the surface S, we get a triangulation of the universal cover S̃. From this, we get a corresponding triangulation of the hyperbolic plane H², constructed by induction on the set of triangles. This gives a homeomorphism φ : S̃ → H², which is the developing map for the hyperbolic structure. The holonomy map ρ : π₁(S) → PGL(2, R) defined by the developing map φ sends one-sided curves to orientation-reversing isometries. These isometries are elements of PGL(2, R) that are not in PSL(2, R). The group PGL(2, R) acts transitively on triples of horocycles, whereas PSL(2, R) acts transitively only on positively oriented triples of horocycles. So we can get anti-Möbius transformations, in addition to Möbius transformations, between two identified triangles in H². This is the only slight difference with the orientable case, and it does not change the other arguments of Penner's proof. The only thing left to show is that the theorem still holds for quasi-triangulations containing one-sided simple closed curves. For any one-sided closed curve in a quasi-triangulation, we have a unique corresponding arc that bounds a Möbius strip. Suppose there is only one such curve d, and cut the surface along the corresponding arc c. We get a subsurface S′ with c as a boundary arc, and the surface S is obtained by gluing a Möbius strip along c. The quasi-triangulation of S restricted to S′ is a triangulation, and we can apply the preceding arguments to construct the unique hyperbolic structure on S′ defined by the λ-lengths. Then we use Proposition 3.1 to show that the λ-length of the one-sided curve d, together with the λ-length of c, uniquely defines a hyperbolic structure on the Möbius strip. There is no restriction when gluing this Möbius strip back to the surface S′. Hence we have defined a unique hyperbolic structure on the whole surface S.

Figure 8. Resolving an intersection of a multigeodesic.

3.3. Intersections. The following theorem generalises the well-known Ptolemy relations for arcs to the case of arbitrary curves in (S, M). An interesting consequence is that, by "resolving intersections" recursively, it allows one to write the λ-length of a multigeodesic with intersections as a linear combination of λ-lengths of multigeodesics consisting of pairwise compatible simple curves. This will be crucial in the proof of Theorem 6.5. Note that in the orientable case, a similar result will appear in [MW].

Theorem 3.3. Let α be a multigeodesic with an intersection point p. Then we can write

λ(α) = ε₁ λ(β) + ε₂ λ(γ),

where β and γ are the two multigeodesics obtained by resolving the intersection at p, and ε₁, ε₂ ∈ {−1, 1} are functions depending only on the topological types of α, β and γ (see Figure 8).

This theorem is a generalisation of both the Ptolemy relation between simple arcs and the trace identities for matrices in SL(2, C). The generalisations are probably well known to specialists; however, there are several cases for which there seems to be no reference in the literature. Moreover, in order to keep things self-contained, we give a complete proof even for the classical situations.

Proof. First notice that the resolution of an intersection at a point p only modifies the elements of the multigeodesic crossing at p. Hence, we only have to show the relation for multigeodesics with only one or two elements, and the general result will follow by induction.
For the rest of the proof, let σ ∈ T(S) be a decorated hyperbolic structure. We will omit the subscript and write λ_σ(a) = λ(a).

3.3.1. Two distinct arcs. Let α = {a, b}, where a and b are two different arcs with endpoints a₀, a₁, b₀, b₁ (not necessarily all distinct), intersecting at some point p ∈ S. Choose a lift p̃ of p in S̃ = H² and denote by ã and b̃ the two unique decorated ideal geodesics lifting a and b that pass through p̃. This defines four different endpoints, which we denote ã₀, ã₁, b̃₀, b̃₁. These four points are necessarily distinct (even if they are lifts of the same point of the surface), so this gives rise to a quadrilateral with sides c̃ = (ã₀, b̃₀), f̃ = (b̃₀, ã₁), ẽ = (ã₁, b̃₁) and d̃ = (b̃₁, ã₀). The diagonals of this quadrilateral are ã and b̃. The Ptolemy relation in H² gives

λ(ã)λ(b̃) = λ(c̃)λ(ẽ) + λ(d̃)λ(f̃).

When returning to the surface, the arc c is the projection of c̃. It is homotopic to the arc starting at a₀, following a until it reaches p, and then following b until it reaches b₀. The same applies to the arcs d, e and f. By definition of the λ-lengths of the arcs a, b, c, d, e, f, we have

λ(α) = λ(a)λ(b) = λ(c)λ(e) + λ(d)λ(f).

The resolution of the intersection gives β = {c, e} and γ = {d, f}, so λ(α) = λ(β) + λ(γ).

3.3.2. Two closed curves. Let α = {a, b}, with a and b two distinct geodesic closed curves intersecting at some point p ∈ S. We denote by a and b the corresponding elements of the fundamental group π₁(S, p) of the surface based at p, up to a choice of orientation of the curves. The holonomy map of the hyperbolic structure sends a and b to elements A and B of PGL(2, R). We take matrix representatives in GL(2, R) such that |det(A)| = |det(B)| = 1 and tr(A), tr(B) > 0. The following formula holds for all such matrices:

tr(A) tr(B) = tr(AB) + det(B) tr(AB⁻¹).

The λ-length of α is given by λ(α) = |tr(A)||tr(B)|. The matrices AB and AB⁻¹ correspond to the holonomy of the curves a ∗ b and a ∗ b⁻¹. These curves are exactly the ones given by the resolution of the intersection at p, so we have λ(β) = |tr(AB)| and λ(γ) = |tr(AB⁻¹)|. So it is clear that there exist ε₁ and ε₂ in {−1, 1} such that

λ(α) = ε₁ λ(β) + ε₂ λ(γ).

The only thing to show is that the elements ε₁ and ε₂ do not depend on the choice of the decorated hyperbolic structure σ. To do that, we use a continuity argument. The functions tr(AB) and tr(AB⁻¹) are continuous on the decorated Teichmüller space. For any given hyperbolic structure σ and any element c ∈ π₁(S), we have tr(ρ(c)) ≠ 0, where ρ is the holonomy representation. Indeed, if c is a two-sided curve, then ρ(c) is a hyperbolic or parabolic isometry, and hence we have |tr(ρ(c))| ≥ 2. If c is a one-sided curve, then ρ(c) is a glide reflection. A glide reflection with zero trace corresponds to a plain reflection, which is an involution. This would contradict the faithfulness of the holonomy representation. As T(S, M) is connected, the signs of tr(AB) and tr(AB⁻¹) are constant, and hence ε₁ and ε₂ only depend on the topological types of α, β and γ.

3.3.3. One non-simple closed curve. Let α = {c}, with c a non-simple closed curve having a self-intersection at the point p ∈ S. We can see c as an element of the fundamental group based at p. The curve c can then be written as a ∗ b, where a and b are the two loops obtained by splitting the curve at p. This corresponds to one of the resolutions, so set β = {a, b}. The other resolution of the intersection is the curve γ = a ∗ b⁻¹, which has at least one self-intersection fewer than c. A simple permutation of the terms of the preceding case gives

λ(α) = ε₁ λ(β) + ε₂ λ(γ), for some ε₁, ε₂ ∈ {−1, 1}.
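For concreteness, the trace identity used in the two preceding cases can be checked on explicit matrices (an illustrative computation added here, not part of the original proof):

\[
A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}, \quad
B = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \quad
AB = \begin{pmatrix} 2 & 3 \\ 1 & 2 \end{pmatrix}, \quad
AB^{-1} = \begin{pmatrix} 2 & -1 \\ 1 & 0 \end{pmatrix},
\]

so that tr(A) tr(B) = 3 · 2 = 6 = 4 + 1 · 2 = tr(AB) + det(B) tr(AB⁻¹), as predicted. The identity itself follows from the Cayley–Hamilton relation B + det(B) B⁻¹ = tr(B) I for any 2 × 2 matrix B with det(B) = ±1.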
3.3.4. One arc and one curve. Let α = {a, b}, with a an arc and b a closed curve intersecting each other at p ∈ S. The curve b corresponds to an element b ∈ π₁(S). We lift everything to the universal cover H².
• First assume that b is a two-sided curve. Up to conjugacy and rescaling, the isometry ρ(b) is given by the diagonal matrix

B = diag(e^{ℓ/2}, e^{−ℓ/2}), acting on H² by z ↦ e^ℓ z,

where ℓ > 0 is the translation length of b. The axis of such an isometry is the vertical axis x = 0, and the direction of translation is the positive y-direction. Let p̃ be a lift of p on the axis x = 0, and let ã be the unique decorated geodesic that is a lift of a passing through p̃. We denote by U = (u, h) and V = (v, k) the horocycles defining this decorated geodesic. The geodesic crosses the vertical axis x = 0, hence u and v are distinct from 0 and ∞, and without loss of generality we can choose u < 0 and v > 0, so that we have

λ(a) = (v − u) / √(hk).

The images of the horocycles U and V under the isometry B = ρ(b) are given by

B(U) = (e^ℓ u, e^ℓ h), B(V) = (e^ℓ v, e^ℓ k).

It is easy to check that the λ-length of the decorated geodesic (B(U), B(V)) is still λ(a). Let e and f be the decorated geodesics corresponding to (U, B(V)) and (B(U), V) respectively. These geodesics correspond to arcs e and f on S, and we have

λ(e) = (e^ℓ v − u) / (e^{ℓ/2} √(hk)), λ(f) = (v − e^ℓ u) / (e^{ℓ/2} √(hk)).

These arcs correspond to the resolution of the intersection at p, hence we can set β = e and γ = f. Since λ(b) = |tr(B)| = e^{ℓ/2} + e^{−ℓ/2}, we then have the following relation:

λ(a)λ(b) = λ(e) + λ(f).

• Now assume that b is a one-sided curve. Up to conjugacy and rescaling, the isometry ρ(b) is given by the matrix

B = diag(e^{ℓ/2}, −e^{−ℓ/2}), acting on H² by z ↦ −e^ℓ z̄.

Again, let p̃ be a lift of p on the axis x = 0, and let ã be the unique decorated geodesic that is a lift of a passing through p̃. We denote by U = (u, h) and V = (v, k) the horocycles defining this decorated geodesic, with u < 0 and v > 0, so that λ(a) = (v − u)/√(hk). The images of the horocycles U and V under the isometry B = ρ(b) are given by

B(U) = (−e^ℓ u, e^ℓ h), B(V) = (−e^ℓ v, e^ℓ k).

Let e and f be the decorated geodesics corresponding to (U, B(V)) and (B(U), V) respectively. These geodesics correspond to arcs e and f on S, and we have

λ(e) = |u + e^ℓ v| / (e^{ℓ/2} √(hk)), λ(f) = |v + e^ℓ u| / (e^{ℓ/2} √(hk)).

Since λ(b) = |tr(B)| = e^{ℓ/2} − e^{−ℓ/2}, we finally obtain the relation

λ(a)λ(b) = ε₁ λ(e) + ε₂ λ(f), for suitable ε₁, ε₂ ∈ {−1, 1}.

3.3.5. One non-simple arc. Let α = a, with a a non-simple arc with a self-intersection at a point p ∈ S. Then we can define a closed curve b, based at the point p, which corresponds to the loop created by a. Let b be the corresponding element of π₁(S, p).
• If b is two-sided, then up to conjugacy we have B = ρ(b) = diag(e^{ℓ/2}, e^{−ℓ/2}). Let ã be the decorated geodesic corresponding to a lift of the arc, and let U = (u, h) and V = (v, k) be two horocycles such that ã = (U, B(V)); choose v > u. In this setting, the decorated geodesic ã₋ corresponding to (B⁻¹(U), V) necessarily intersects the geodesic ã at a point p₋, and similarly the geodesic ã₊ intersects ã at p₊. This implies that the geodesic ã does not cross the vertical axis x = 0, and hence u and v have the same sign. Without loss of generality, we may assume that u, v > 0. Define c̃₋ to be the decorated geodesic (U, V) and c̃₊ to be the decorated geodesic (B(U), B(V)). Clearly, these two geodesics are lifts of the same arc c in S. Define also d̃ to be the decorated geodesic (V, B(U)). So we have

λ(a) = (e^ℓ v − u) / (e^{ℓ/2} √(hk)), λ(c) = (v − u) / √(hk), λ(d) = |v − e^ℓ u| / (e^{ℓ/2} √(hk)).

The resolutions at the point p are given by the multigeodesics β = b ⊔ c and γ = d. Calculations similar to those of the preceding case show that

λ(α) = ε₁ λ(β) + ε₂ λ(γ), for suitable ε₁, ε₂ ∈ {−1, 1}.

• If b is one-sided, then up to conjugacy we have B = ρ(b) = diag(e^{ℓ/2}, −e^{−ℓ/2}). Let ã be the decorated geodesic corresponding to a lift of the arc, and let U = (u, h) and V = (v, k) be two horocycles such that ã = (U, B(V)); choose v > u. In this setting, the decorated geodesic ã₋ corresponding to (B⁻¹(U), V) necessarily intersects the geodesic ã at a point p₋, and similarly the geodesic ã₊ intersects ã at p₊. This implies that the geodesic ã crosses the vertical axis x = 0, and hence u and B(v) have different signs.
3.3.5. One non-simple arc. Let α = {a}, with a a non-simple arc with a self-intersection at a point p ∈ S. Then we can define a closed curve b based at the point p which corresponds to the loop created by a. Let b be the corresponding element of π_1(S, p).

• If b is two-sided, then up to conjugacy we have

B = ρ(b) = (t, 0; 0, 1/t) for some t > 1.

Let a be the decorated geodesic corresponding to a lift of the arc and let U = (u, h) and V = (v, k) be two horocycles such that a = (U, B(V)), and choose v > u. In this setting we have necessarily that the decorated geodesic a_− corresponding to (B^{-1}(U), V) intersects the geodesic a at a point p_− and, similarly, the geodesic a_+ intersects a at p_+. This implies that the geodesic a does not cross the vertical axis x = 0, and hence u and v are of the same sign. Without loss of generality, we may assume that u, v > 0. Define c_− to be the decorated geodesic (U, V) and c_+ to be the decorated geodesic (B(U), B(V)). Clearly, these two geodesics are lifts of the same arc c in S. Define also d to be the decorated geodesic (V, B(U)). So we have:

λ(c_−) = λ(c_+) = (v − u) / √(hk) and λ(d) = |v − t^2 u| / (t√(hk)).

The resolutions at the point p are given by the multigeodesic β = b ⊔ c and by γ = d. Calculations similar to the preceding case show that

λ(α) = λ(β) + λ(γ).

• If b is one-sided, then up to conjugacy we have

B = ρ(b) = (t, 0; 0, −1/t) for some t > 1.

Let a be the decorated geodesic corresponding to a lift of the arc and let U = (u, h) and V = (v, k) be two horocycles such that a = (U, B(V)), and choose v > u. In this setting we have necessarily that the decorated geodesic a_− corresponding to (B^{-1}(U), V) intersects the geodesic a at a point p_− and, similarly, the geodesic a_+ intersects a at p_+. This implies that the geodesic a crosses the vertical axis x = 0, and hence u and B(v) are of different signs. As B is orientation reversing, we have that v and B(v) are of different signs, and hence without loss of generality we may assume that v > u > 0. Define c_− to be the decorated geodesic (U, V) and c_+ to be the decorated geodesic (B(U), B(V)). Clearly, these two geodesics are lifts of the same arc c in S. Define also d to be the decorated geodesic (V, B(U)). So we have

λ(c_−) = λ(c_+) = (v − u) / √(hk) and λ(d) = |v + t^2 u| / (t√(hk)).

Again, the resolutions at the point p are given by the multigeodesic β = {b, c} and by γ = d. Calculations similar to the preceding case show that

λ(α) = λ(β) + λ(γ).

Remark 3.5. The coefficients ε_1 and ε_2 are always +1, except in the case where the crossing involves only closed curves and no arcs. In this case, the coefficients cannot both be negative at the same time, because the left-hand side of the identity is necessarily positive. The computation of the coefficients for a given situation can be done by taking one example of a hyperbolic structure and computing the traces. By a continuity and connectedness argument, the value for one example will be the value for the whole Teichmüller space. For example, if α = {a, b} with a and b two simple closed two-sided curves that intersect only once, then ε_1 = ε_2 = +1. This is proved using the fact that the commutator a * b * a^{-1} * b^{-1} bounds a one-holed torus embedded in S. Explicit examples of hyperbolic structures on a one-holed torus are classical, and using one particular hyperbolic structure we see that the signs are all positive.

The case of a multigeodesic consisting of a unique one-sided curve with multiplicity more than one has to be treated separately. Indeed, any two curves homotopic to a one-sided curve will have at least one intersection point. Recall that in the orientable case, two homotopic two-sided curves can always be made disjoint.

Proposition 3.6. Let α = {d, d}, where d is a one-sided closed curve corresponding to an element d ∈ π_1(S). We have λ(α) = λ(e) − 2, where e is the two-sided closed curve corresponding to the element d^2 ∈ π_1(S).

Proof. Let σ ∈ T(S, M) and let D = ρ_σ(d). Up to conjugacy and rescaling, the matrix D is given by

D = (t, 0; 0, −1/t) for some t > 0, so that det(D) = −1.

The λ-length of α is given by λ(α) = λ(d)^2 = tr(D)^2. On the other hand, the λ-length of e is given by λ(e) = |tr(ρ(d^2))| = |tr(D^2)|, and hence

λ(e) = tr(D)^2 − 2 det(D) = tr(D)^2 + 2 = λ(α) + 2.

Remark 3.7. Slightly abusing notations, we can restate Proposition 3.6 by saying that for any one-sided closed curve d in (S, M),

λ(d)^2 = λ(d^2) − 2,

see Figure 9. This identity will be of particular use in the proof of Theorem 6.5.
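Proposition 3.6 reduces to the trace computation tr(D^2) = tr(D)^2 − 2 det(D). A quick symbolic check (ours, not from the paper), with D a generic determinant −1 matrix:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', nonzero=True)
# A generic matrix with det(D) = -1 (holonomy of a one-sided curve).
D = sp.Matrix([[x, y], [z, (y * z - 1) / x]])
assert sp.simplify(D.det() + 1) == 0

# tr(D^2) = tr(D)^2 - 2 det(D) = tr(D)^2 + 2, i.e.
# lambda(d^2) = lambda(d)^2 + 2, equivalently lambda({d, d}) = lambda(d^2) - 2.
print(sp.simplify((D * D).trace() - D.trace()**2 - 2))  # prints 0
```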
Quasi-cluster algebras associated with non-orientable surfaces

In this section (S, M) is an unpunctured marked surface of rank n ≥ 1 with b ≥ 1 boundary segments and F is a field of rational functions in n + b indeterminates. To any boundary segment b in B(S, M) we associate a variable x_b ∈ F such that {x_b | b ∈ B(S, M)} is algebraically independent in F, and we set

ZP = Z[x_b^{±1} | b ∈ B(S, M)],

which is referred to as the ground ring.

4.1. Quasi-seeds and their mutations.

Definition 4.1. A quasi-seed associated with (S, M) in F is a pair Σ = (T, x) such that:
(1) T is a quasi-triangulation of (S, M);
(2) x = {x_t | t ∈ T} is a free generating set of the field F over ZP.
The set {x_t | t ∈ T} is called the quasi-cluster of the quasi-seed Σ. A quasi-seed is called a seed if the corresponding quasi-triangulation is a triangulation, and in this case the quasi-cluster is called a cluster.

Definition 4.2. Given t ∈ T, we define the quasi-mutation of Σ in the direction t as the pair µ_t(Σ) = (µ_t(T), µ_t(x)), where µ_t(x) is obtained from x by replacing x_t with the element x_{t′} ∈ F determined by the following relations, t′ denoting the quasi-flip of t with respect to T.
(1) If t is an arc separating two different triangles with sides (a, b, t) and (c, d, t), then the relation is simply given by the Ptolemy relation for arcs, that is,

x_t x_{t′} = x_a x_c + x_b x_d.

(2) If t is an arc in an anti-self-folded triangle with sides (t, t, a), as in the figure below,
(3) If t is a one-sided curve in an annulus with boundary a, as in the figure below [figure: the quasi-flip µ_t replaces the one-sided curve t by the arc t′ in the annulus with boundary a], then the relation is
(4) If t is an arc separating a triangle with sides (a, b, t) and an annulus with boundary t and one-sided curve d, as in the figure below,

Note that the quasi-mutation of a quasi-seed is again a quasi-seed. Two quasi-seeds Σ = (T, x) and Σ′ = (T′, x′) associated with (S, M) in F are called quasi-mutation-equivalent if Σ′ can be obtained from Σ by a finite number of quasi-mutations. This defines an equivalence relation on the set of quasi-seeds associated with (S, M) in F, whose equivalence classes are called quasi-mutation classes.

Since (S, M) has rank n, every quasi-triangulation T in (S, M) has n elements. We can thus fix a labelling t_1, . . . , t_n of the elements of T. A quasi-seed Σ equipped with such a labelling is called a labelled quasi-seed. For any 1 ≤ i ≤ n, we define the mutation in the direction i of the labelled quasi-seed Σ as µ_i(Σ) = µ_{t_i}(Σ) = (T′, x′), equipped with the labelling T′ = {t′_1, . . . , t′_n} where t′_k = t_k if k ≠ i and t′_i is the quasi-flip of t_i with respect to T. Note that mutations of labelled quasi-seeds are involutive in the sense that µ_i(µ_i(Σ)) = Σ for any 1 ≤ i ≤ n (see the sketch after this subsection).

4.2. Quasi-cluster algebras. Let T_n denote the n-regular tree. At each vertex in T_n, we label by {1, . . . , n} the n adjacent edges.

Definition 4.3. A quasi-cluster pattern associated with (S, M) in F is an assignment X : v ↦ Σ_v, where v runs over the vertices of T_n, such that each Σ_v is a labelled quasi-seed associated with (S, M) in F and where two adjacent quasi-seeds in T_n are related by a single mutation in the sense that Σ_{v′} = µ_i(Σ_v) whenever v and v′ are joined by an edge labelled i.

Definition 4.4. Let X : v ↦ Σ_v be a quasi-cluster pattern associated with (S, M) in F. The quasi-cluster algebra associated with X is the ZP-subalgebra A(X) of F generated by the union of all the quasi-clusters of quasi-seeds appearing in the quasi-cluster pattern, that is,

A(X) = ZP[⋃_v x_v],

where v runs over the vertices in T_n. The elements in the union of all the quasi-clusters of quasi-seeds appearing in the quasi-cluster pattern are called the quasi-cluster variables of the quasi-cluster algebra A(X).

Note that each labelled quasi-seed Σ associated with (S, M) in F entirely determines a quasi-cluster pattern X (up to a relabelling of the vertices in T_n), so that the quasi-cluster algebra A(X) is entirely determined by Σ and is denoted by A_Σ. Note also that different choices of labelling of a quasi-seed Σ associated with (S, M) in F give rise to canonically isomorphic quasi-cluster algebras, so that we can associate a quasi-cluster algebra A_Σ to any quasi-seed Σ associated with (S, M) in F. Finally, note that if Σ = (T, x) and Σ′ = (T′, x′) are two quasi-seeds associated with (S, M) in F, then the quasi-triangulations T′ and T are quasi-mutation-equivalent, so that there exists a seed Σ′′ = (T′, x′′) in the quasi-cluster pattern defined by Σ, and the canonical automorphism of F sending x′′ to x′ induces an isomorphism of the quasi-cluster algebras A_Σ and A_Σ′. Thus, up to a canonical ring isomorphism, the quasi-cluster algebra A_Σ only depends on the surface (S, M) and is denoted by A_(S,M).
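The involutivity noted above is immediate from the exchange relation of case (1). The following sketch is ours, not part of the paper; the labels a, b, c, d, t match the quadrilateral of case (1) up to relabelling, and we verify symbolically that mutating twice in the same direction returns the original variable.

```python
import sympy as sp

x_a, x_b, x_c, x_d, x_t = sp.symbols('x_a x_b x_c x_d x_t', positive=True)

def mutate(xt):
    # Ptolemy exchange relation of case (1): x_t * x_t' = x_a*x_c + x_b*x_d.
    return (x_a * x_c + x_b * x_d) / xt

x_t_prime = mutate(x_t)
# Quasi-mutation in a fixed direction is an involution: mu_i(mu_i(Sigma)) = Sigma.
assert sp.simplify(mutate(x_t_prime) - x_t) == 0
print(x_t_prime)  # (x_a*x_c + x_b*x_d)/x_t
```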
Note that the quasi-cluster of any quasi-seed Σ = (T, x) in A (S,M) is a free generating set of F over ZP so that each quasi-cluster variable x in A (S,M) can be expressed as a rational function with coefficients in ZP in the quasi-cluster x. This rational expression is called the Σ-expansion of x in A (S,M) . It follows from the definition of the quasi-cluster algebra A (S,M) that each quasi-cluster variable x in A (S,M) is associated with a quasi-arc in A (S,M) . If Σ = (T, x) is a quasi-seed in A (S,M) , we saw in Theorem 3.2 that the λ-lengths of arcs in T can be viewed as algebraically independent variables. Therefore, there is an isomorphism of Z-algebras : Lemma 4.6. Let (S, M) be an unpunctured marked surface, let Σ = (T, x) be a quasi-seed in A (S,M) and let x be a quasi-cluster variable in A (S,M) corresponding to a quasi-arc v in A ⊗ (S, M). Then the Σ-expansion of x is given by φ T (λ(v)). Proof. Let Σ ′ = (T ′ , x ′ ) be a quasi-seed in A (S,M) which is quasi-mutation-equivalent to Σ. We prove by induction on the minimal number d(Σ, Σ ′ ) of quasi-mutations to reach Σ ′ from Σ ′ that the result holds for any quasi-cluster variable in Σ ′ . If Σ = Σ ′ , then the result clearly holds. Otherwise, we can write . Therefore, the result holds for any quasi-cluster variable in Σ ′′ by induction hypothesis. Let denote by v ′ the quasi-flip of v with respect to the quasi-triangulation T ′′ . The quasi-mutation rules precisely imitate the relations for the λ-lengths of the corresponding arcs. This is clear for the first three cases considered in Definition 4.2 and for the fourth case, it follows from the resolution of the two intersections of the corresponding arcs and from the identity given in Proposition 3.6. As x v x v ′ = M 1 + M 2 where M 1 and M 2 are monomials in the variables corresponding to the quasiarcs in T ′′ , applying φ T to the corresponding relation for λ(v)λ(v ′ ) and using the induction . Note that using Lemma 4.6, we will usually identify quasi-cluster variables with λ-lengths of the corresponding quasi-arcs. Example 4.7. In Figure 10, we exhibit the quasi-variables in the quasi-cluster algebra A M2 expressed in a particular quasi-cluster which does not correspond to a triangulation. For simplicity, for any quasi-arc v in M 2 , we designated the quasi-cluster variable x v by v. 4.3. Orientable vs non-orientable. If (S, M) is orientable, Fomin, Shapiro and Thurston associated to (S, M) a cluster algebra in [FST08,FT08]. When the ground ring of the cluster algebra is the group ring of the free abelian group generated by variables associated to the boundary segments of (S, M), we say that this cluster algebra has coefficients associated with the boundary segments. The following proposition follows directly from the definitions and from the geometric interpretation of the cluster algebras from surfaces provided in [FT08] : Figure 10. The quasi-cluster complex of the Möbius strip with two marked points and the corresponding quasi-cluster variables, expressed in the quasicluster (c a , d). Quasi-cluster algebras and double covers We saw in Section 4.3 that if (S, M) is orientable, then the quasi-cluster algebra coincides with the cluster algebra associated with (S, M). The aim of this section is to prove that when (S, M) is non-orientable, part of the quasi-cluster algebra structure on A (S,M) can be found in the cluster algebra associated with the (orientable) double cover of (S, M). 
Nevertheless, as we shall see, mutations in the double cover do not allow to realise quasi-cluster variables corresponding to one-sided curves. Throughout this section, (S, M) will always denote a non-orientable unpunctured marked surface of rank n ≥ 1. Lifts of triangulations and double mutations. We recall that each non-orientable marked surface (S, M) admits a minimal orientable cover, its double cover (S, M), endowed with a free action of Z 2 = {1, τ } such that (S, M)/Z 2 ≃ (S, M). Each element a in A(S, M) (resp. in B(S, M)) admits exactly two lifts a and τ a in A(S, M) (resp. in B(S, M)). Remark 5.1. Note that a quasi-triangulation of (S, M) which is not a triangulation does not lift to a triangulation of (S, M). Indeed, a one-sided curve in (S, M) lifts to a non-contractible closed curve in (S, M) so that it is not an arc and thus it is not part of a triangulation of (S, M). Lemma 5.2. Let Σ = (T, x) be a seed associated with (S, M) in F and let t ∈ T be a mutable arc with respect to T . Then Proof. Since the arc is mutable with respect to t, there exist a, b, c, d ∈ T ⊔ B(S, M) distinct from t such that in (S, M) we have the following situation : and in the cluster x, all the variables are preserved except x t which is replaced by Let Σ = (T , x) be the lift of Σ. Then, in (S, M), we have the following two distinct quadrilaterals with where all the edges boundaries of the quadrilaterals are distinct from t and τ t : Therefore, the triangulations in the seeds µ t • µ τ t (Σ) and µ τ t • µ t (Σ) are given by : and the corresponding clusters are obtained from x by replacing respectively x t and x τ t ′ by is the lift of the seed µ t (Σ), which proves the lemma. Remark 5.3. Note that Lemma 5.2 does not hold if t is not mutable with respect to T . For instance, if we consider the Möbius strip M 1 with one marked point and the following triangulation T : Then t is not mutable with respect to T and the quasi-mutation gives the following quasitriangulation. In the double cover, which is the annulus C 1,1 with one marked point on each boundary component, the lift of T is the following triangulation : The sequence of mutations µ τ t • µ t gives the following triangulation of the double cover : Whereas the sequence µ t • µ τ t gives the following triangulation of the double cover : Therefore the mutations µ t and µ τ t do not commute and moreover, their products do not give rise to lifts of quasi-triangulations of the Möbius strip M 1 . Quotient map. Given a seed Σ = (T, x) associated with (S, M), we denote by Σ the seed (T , x) corresponding to a lift of T in (S, M). The group Z 2 acts naturally on the ambient field F of A (S,M) by τ x t = x τ t for any t ∈ T ⊔ B(S, M) and we consider the Z 2 -invariant ring epimorphism : π : Therefore, we get and thus π(x t ) = x t , which proves the lemma. Conversely, let T and T ′ be two distinct Z 2 -invariant triangulations of (S, M) lifting respectively the triangulations T and T ′ in (S, M). Assume that we can write T = T 0 ⊔ {a, τ a} and T ′ = T 0 ⊔ {b, τ b}. Note that a and τ a are not the internal arcs of an anti-self-folded otherwise we would necessarily have {a, τ a} = {b, τ b} and T = T ′ . Thus, it follows from Corollary 2.9 that a is mutable with respect to T and thus, it follows from Lemma 5.2 that T ′ = µ a (T ) so that T ′ = µ a (T ) and T ′ and T are joined by an edge in E(S, M). Let Σ = (T, x) be a seed associated with (S, M) in F . 
We fix an arbitrary orientation of the double cover (S, M) and we denote by B the matrix associated with the lift T of the triangulation T in (S, M). We recall that the entries of this matrix are indexed by the lifts of arcs of T and, for any two arcs v, w in T, the entry corresponding to the lifts v and w is defined as the difference

b_{vw} = n_T(v, w) − n_T(w, v),

where, for any arcs a and b, the number n_T(a, b) is given by the number of triangles in T bordered by a and b in such a way that the oriented angle formed by a and b in this triangle is positive, see [FST08, Section 4]. For any arc v ∈ T, we set

b^+_{tv} = max(b_{tv}, 0) and b^−_{tv} = max(−b_{tv}, 0).

Note that even if the matrix B depends on the choice of the orientation of (S, M), the pair (b^+_{tv}, b^−_{tv}) is independent of this choice.

Proposition 5.6. Let T be a triangulation of (S, M), let t ∈ T be mutable with respect to T and let t′ denote its flip with respect to T. Then

x_t x_{t′} = ∏_v x_v^{b^+_{tv}} + ∏_v x_v^{b^−_{tv}},

the products being taken over the arcs and boundary segments of the lift T.

Proof. This is a direct consequence of Lemmas 5.2 and 5.4 and of the definition of the exchange relations for a cluster algebra of geometric type.
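The numbers n_T(·,·) above can be computed mechanically from the list of triangles of the lifted triangulation. The following sketch is ours, not part of the paper: it assumes triangles are encoded as triples of arc labels listed in the positive cyclic order induced by the chosen orientation (our encoding, not the paper's data structure), and returns the skew-symmetric entries b_{vw} = n_T(v, w) − n_T(w, v).

```python
from collections import defaultdict

def b_matrix(triangles):
    """Signed adjacency matrix of a triangulation.
    triangles: list of triples (a, b, c) of arc labels, each listed in the
    positive cyclic order induced by the chosen orientation."""
    n = defaultdict(int)  # n[(a, b)] = number of triangles where b follows a
    for tri in triangles:
        for i in range(3):
            n[(tri[i], tri[(i + 1) % 3])] += 1
    arcs = sorted({a for tri in triangles for a in tri})
    return {(v, w): n[(v, w)] - n[(w, v)] for v in arcs for w in arcs}

# Example: a square with sides 1..4 triangulated by the diagonal 5.
B = b_matrix([(1, 2, 5), (3, 4, 5)])
print(B[(1, 2)], B[(5, 1)])  # entries b_{12} and b_{51}
```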
Finite type classification

Cluster algebras of finite type were defined in [FZ03] as cluster algebras with finitely many cluster variables and are classified by Dynkin diagrams. For cluster algebras coming from surfaces, the cluster algebras of finite type are those associated either with a disc with at least four marked points on the boundary (which correspond to Dynkin type A) or those associated with a disc with at least four marked points on the boundary and with one puncture (which correspond to Dynkin type D). In this section, we provide a similar classification for quasi-cluster algebras.

Theorem 6.2. The quasi-cluster algebra A_(S,M) has finitely many quasi-cluster variables if and only if (S, M) is one of the following surfaces:
(1) a disc with at least four marked points on the boundary,
(2) a Möbius strip with at least one marked point on the boundary.

Proof. If S is orientable, then the classification of finite type is classical. So assume that S is non-orientable. If S has two or more boundary components, then the boundary twist along one of the boundary components, which is the homeomorphism that sends the boundary to itself after a 2π rotation, generates an infinite cyclic subgroup of the mapping class group. The orbit of a simple arc joining this boundary component to another one is infinite, and hence we have an infinite number of quasi-arcs in S. If S is of non-orientable genus at least two, then S contains a one-holed Klein bottle K. It is known that there exists an infinite number of one-sided simple closed curves in K, all in the orbit of a single element under the action of the Dehn twist along the unique non-trivial two-sided simple closed curve in K. Hence we get an infinite number of quasi-arcs in S.

Remark 6.3. For cluster algebras of finite type, it is known that the number of cluster variables is given by the number of almost positive roots of the corresponding Dynkin diagram, see [FZ03]. For the Möbius strip M_n with n ≥ 1 marked points on the boundary, the number of quasi-arcs, and thus of quasi-cluster variables in A_{M_n}, is one more than the number of arcs, the unique one-sided simple closed curve being the only quasi-arc which is not an arc.

6.1. Linear bases in quasi-cluster algebras of finite type. Throughout this section (S, M) denotes a non-orientable unpunctured marked surface of rank n ≥ 1.

Definition 6.4. Let A_(S,M) be a quasi-cluster algebra. A quasi-cluster monomial (resp. a cluster monomial) in A_(S,M) is a monomial in quasi-cluster variables all belonging to the same quasi-cluster (resp. cluster).

We denote by W⊗(S, M) the set of weighted quasi-triangulations:

W⊗(S, M) = {(T, n) | T a quasi-triangulation of (S, M) and n : T → Z_{≥0}},

and the set of weighted triangulations by

W(S, M) = {(T, n) | T a triangulation of (S, M) and n : T → Z_{≥0}},

and for any α = (T, n) ∈ W⊗(S, M), we set

x_α = ∏_{t ∈ T} x_t^{n(t)}.

We freely identify a weighted quasi-triangulation (T, n) with the multigeodesic in which each quasi-arc t ∈ T occurs with multiplicity n(t). Thus, the set of quasi-cluster monomials in A_(S,M) is {x_α | α ∈ W⊗(S, M)}.

The quasi-cluster algebra A_(S,M) is naturally endowed with a structure of module over its ground ring ZP, and a ZP-linear basis in A_(S,M) is a free generating set of A_(S,M) for this structure. In [CK08], Caldero and Keller proved that the set of cluster monomials forms a Z-linear basis in any coefficient-free cluster algebra of finite type (in the sense of [FZ03]). Here we generalise this result to quasi-cluster algebras of finite type in the above sense. Similar methods will appear for cluster algebras associated to arbitrary orientable surfaces in [MSW].

Theorem 6.5. Let (S, M) be of finite type. Then the quasi-cluster monomials form a ZP-linear basis of the quasi-cluster algebra A_(S,M).

Proof. As a ZP-module, the quasi-cluster algebra A_(S,M) is generated by elements of the form m = x_α, where α runs over the set of multigeodesics consisting of quasi-arcs. Thus, in order to prove that quasi-cluster monomials form a generating set over the ground ring ZP, we only have to prove that each such monomial can be written as a ZP-linear combination of quasi-cluster monomials. Let thus α be a multigeodesic consisting of quasi-arcs. Resolving successively the intersections in α, we can write x_α as a Z-linear combination of x_γ, where each γ is a multigeodesic consisting of pairwise compatible simple geodesics. Let γ be one of these multigeodesics. We denote by β the subset of γ consisting of boundary segments and we set x_γ = x_β x_{γ′}. If γ′ ∈ W⊗(S, M), we are done. Otherwise, we are necessarily in the non-orientable case and it follows from Theorem 6.2 that (S, M) = M_n for some n ≥ 1. We denote by d the unique one-sided simple closed curve in M_n. We thus know that x_{γ′} is either of the form x_{γ′′} x_d (x_{d^2})^l or of the form x_{γ′′} (x_{d^2})^l for some l ≥ 0 and some γ′′ ∈ W⊗(S, M) compatible with d (or equivalently with d^2). Now, it follows from Proposition 3.6 that (x_{d^2})^l is a polynomial in x_d with coefficients in Z, so that x_{γ′} is a Z-linear combination of elements of the form x_{γ′′} x_d^l where l ≥ 0 and γ′′ is compatible with d. In other words, x_γ is a ZP-linear combination of elements of the form x_{γ′} with γ′ ∈ W⊗(S, M).

We now need to prove that quasi-cluster monomials are linearly independent over the ground ring ZP. If (S, M) is orientable, then A_(S,M) is a cluster algebra of type A and the result is well-known, see for instance [CK08, MSW]. We thus focus on the case where (S, M) is non-orientable, so that (S, M) = M_n for some n ≥ 1. The double cover (S, M) is therefore the annulus C_{n,n} with n marked points on each boundary component, which we endow with an arbitrary orientation. We choose a fundamental domain for the Z_2-action in C_{n,n} and for any t ∈ A(M_n) ⊔ B(M_n) we denote by t the lift of t in this fundamental domain. And for any multigeodesic α = {t_1, . . . , t_m} in M_n, we write α = {t_1, . . . , t_m} for the corresponding multigeodesic in C_{n,n}. The λ-lengths being preserved by the Z_2 action on C_{n,n}, we can naturally identify A_{M_n} with a subalgebra of A_{C_{n,n}} via the ring homomorphism ι sending the cluster variable x_t ∈ A_{M_n} to the cluster variable x_t ∈ A_{C_{n,n}}. The one-sided curve d in M_n has a unique lift in C_{n,n}, which we denote by d*. According to Proposition 3.6, the corresponding λ-lengths are related by λ(d*) = λ(d)^2 + 2, and we denote by x_{d*} the element in the cluster algebra A_{C_{n,n}} corresponding to the image of x_d^2 + 2 under ι.
We have We denote respectively by M 0 and M 1 the ZP-modules which these two sets span in A (S,M) . We first prove that M 0 is linearly independent over ZP. For this, it is enough to prove that its image ι(M 0 ) under ι is linearly independent over the ground ring of A Cn,n . We have is a subset of the generic basis of A Cn,n , see [Dup08,MSW] and therefore it is linearly independent over the ground ring and so is M 0 . Assume now that there is some vanishing ZP-linear combination and thus each a l,α is zero since M 0 is linearly independent over ZP. Therefore, M 1 is also linearly independent over ZP. We now claim that M 0 ∩ M 1 = {0}. Indeed, assume that there are ZP-linear combinations such that with a l,α , b k,β ∈ ZP. Then, if we square this identity, the left-hand side is a ZP-linear combination of products of the form where α, α ′ ∈ W(M n ) are compatible with d and where l, l ′ ≥ 0. Using Theorem 3.3, the product x α x α ′ can be written as a ZP-linear combination of x α ′′ (x d 2 ) l ′′ where α ′′ ∈ W(M n ) is compatible with d 2 and thus with d and where l ′′ ≥ 0. Therefore, the square of the left-hand side is a ZP-linear combination of elements of the form x α x 2l d with α ∈ W(M n ) compatible with d and l > 0. Similarly, the square of the right-hand side is a ZP-linear combination of elements of the form x β x 2k d with β ∈ W(M n ) compatible with d and k ≥ 0. In particular, the square of each side of (1) is a ZP-linear combination of elements of M 0 , which is known to be linearly independent over ZP. Therefore, the coefficients of each x β x 2k d with k = 0 in the square of the right-hand side has to be zero and thus b 0,β = 0 for any β occurring in the right-hand side of (1). Therefore, we can divide both sides of (1) by the smallest power of x d arising on one of the two sides and by induction, it follows that a l,α = b k,β = 0 for any k, l ≥ 0 and α, β ∈ W(M n ). This finishes the proof of the theorem. Figure 11. Actions of Σ 0 and Σ 1 for compatible orientations. Integrable systems associated with unpunctured surfaces The aim of this section is to prove that with any unpunctured marked surface, we can naturally associate a family of discrete integrable systems satisfied by λ-lengths of curves in (S, M). In the case where the variables corresponding to boundary segments are specialised to 1, these integrable systems provide SL 2 -tilings of the plane, also called friezes in the literature see for instance [ARS10]. 7.1. AR-quivers for homotopy classes of curves. Let (S, M) be an unpunctured marked surface which is not necessarily oriented. We denote by C(S, M) the set of curves in (S, M) whose both endpoints are in M considered up to isotopy with respect to M. We define a onesided geodesic as a curve in (S, M) joining two marked points on the same boundary component such that its concatenation with a boundary component joining its two endpoints reverses the orientation of the surface. We fix two boundary components ∂ and ∂ ′ of (S, M) which are not necessarily distinct. Let H denote the homotopy class of oriented curves in (S, M) whose endpoints lie respectively on ∂ and ∂ ′ (but not necessarily on M). Finally, denote by C H the set of elements in C(S, M) ⊔ B(S, M) such that a representative of the isotopy class belongs to H. The boundary components ∂ and ∂ ′ are one-dimensional so that they can both be oriented. 
If (S, M) is oriented the boundary components ∂ and ∂ ′ are canonically oriented and we fix orientations ω and ω ′ respectively of ∂ and ∂ ′ which are induced by the orientation of (S, M). If (S, M) is not orientable, we fix arbitrary orientations such that ω = ω ′ if ∂ = ∂ ′ . We say that the orientations ω and ω ′ of ∂ and ∂ ′ are compatible with respect to H if H does not contain any one-sided geodesics. We say that ω and ω ′ are incompatible with respect to H otherwise and in this latter case, H consists only of one-sided geodesics. Note that if (S, M) is oriented then the orientations of ω and ω ′ are always compatible with respect to H. Let a ∈ C H , that is a continuous map a : [0, 1]−→ S such that a(0) ∈ ∂ ∩M and a(1) ∈ ∂ ′ ∩M or a(0) ∈ ∂ ′ ∩ M and a(1) ∈ ∂ ∩ M. We define Σ 0 a as the element of C H obtained by concatenating the boundary segment joining a(0) to the next marked point along the orientation of the boundary, with the curve a. If ω and ω ′ are compatible (or incompatible, respectively), we define Σ 1 a to be the curve obtained by concatenating a with the boundary segment joining a(1) to the next marked point (or the previous marked point, respectively) along the boundary, see Figures 11 and 12. We define Σ −1 0 a and Σ −1 1 a via the obvious inverse operations. Figure 12. Actions of Σ 0 and Σ 1 for non-compatible orientations. Definition 7.1. The AR-quiver Γ H is the oriented graph whose vertices are the elements of C H and whose arrows are given by Definition 7.3. We set τ a = Σ −1 0 Σ −1 1 a = Σ −1 1 Σ −1 0 a and τ −1 a = Σ 0 Σ 1 a = Σ 1 Σ 0 a and the map τ is called the AR-translation. Remark 7.4. The terminology AR-quiver stands for Auslander-Reiten quiver since when (S, M) is an orientable marked surface, the oriented graphs we just constructed describe connected components of the Auslander-Reiten quivers of the generalised cluster categories associated to the surface (S, M), see [CCS06,BZ10]. In this case, the AR-translation defined above acts on Γ H as the Auslander-Reiten translation functor on the corresponding connected component of the Auslander-Reiten quiver of the generalised cluster category. We now prove that the AR-translation endows the AR-quiver of a homotopy class of curves with the structure of a stable translation quiver. We recall that a pair (Γ, τ ) is called a stable translation quiver if τ is a bijection from the set Γ 0 of vertices in Γ to itself and if for any a ∈ Γ 0 it induces a bijection where Γ 1 denotes the set of arrows in Γ. For generalities on translation quivers we refer the reader to [Rie80]. We denote by ZA ∞ ∞ the quiver whose vertices are labelled by Z×Z and with arrows (i, j)−→ (i+ 1, j) and (i, j)−→ (i, j + 1) for any i, j ∈ Z. It is a translation quiver for the translation given by τ (i, j) = (i − 1, j − 1), with i, j ∈ Z. Proposition 7.5. Let (S, M) be an unpunctured marked surface and H denote the homotopy class of curves in (S, M) whose endpoints lie on boundary components of (S, M). Then Γ H is a stable translation quiver isomorphic to a quotient of ZA ∞ by a finite group of automorphisms. Proof. Assume first that H does not contain any one-sided geodesic. Then, Γ H is isomorphic as a translation quiver to a certain Γ H ′ where H ′ is a homotopy class of curve in an orientable marked surface. Therefore, it follows from [BZ10] that Γ H is isomorphic to a quotient of ZA ∞ ∞ by some automorphism group. The only new case to treat is when H contains a one-sided geodesic. 
In this case, we simply observe that the natural action of the free abelian group generated by Σ_0 and Σ_1 is free and transitive on C_H, so that Γ_H ≃ ZA∞∞.

7.2. A system of equations satisfied by λ-lengths. According to Proposition 7.5, we can naturally label the vertices in Γ_H by couples (i, j) with i, j ∈ Z, with the convention that g·(i, j) and (i, j) label the same vertex for any g in the automorphism group considered in Proposition 7.5. Moreover, Σ_0 and Σ_1 act as

Σ_0(i, j) = (i + 1, j) and Σ_1(i, j) = (i, j + ε)

for any i, j ∈ Z, where ε = 1 if ω and ω′ are compatible with respect to H and ε = −1 otherwise, see Figure 14. If |M ∩ ∂| = p and |M ∩ ∂′| = q, we can label the marked points on ∂ by Z/pZ and the marked points on ∂′ by Z/qZ in such a way that the curve corresponding to the couple (i, j) joins the marked point i (modulo pZ) in ∂ to the marked point j (modulo qZ) in ∂′. For any pair (i, j) ∈ Z × Z, we denote by λ^H_(i,j) the λ-length of the curve in C_H represented by the couple (i, j). We also denote by λ^∂_{i,i+1} the λ-length of the boundary segment of ∂ joining the marked point labelled by i (modulo pZ) to the marked point labelled by i + 1 (modulo pZ), and similarly for λ^∂′_{j,j+1}. We adopt the convention that λ^∂_{i,i} = λ^∂′_{j,j} = 1 for any i, j ∈ Z. With these notations, it follows from the resolutions given in Theorem 3.3 that for any i, j ∈ Z, the λ-lengths of arcs in C_H satisfy the following system of equations:

(2) λ^H_(i,j) λ^H_(i+1,j+ε) = λ^H_(i+1,j) λ^H_(i,j+ε) + λ^∂_{i,i+1} λ^∂′_{j,j+ε},

or equivalently

λ^H_(i+1,j+ε) = (λ^H_(i+1,j) λ^H_(i,j+ε) + λ^∂_{i,i+1} λ^∂′_{j,j+ε}) / λ^H_(i,j).

If we are in the case where ∂ = ∂′ and curves in H are homotopic to the boundary ∂, then these λ-lengths are moreover subject to the boundary conditions λ^H_(i,i+1) = λ^∂_{i,i+1} and λ^H_(i,i) = 1.

Remark 7.6. In the "coefficient-free" setting, that is, when the λ-lengths λ^∂_{i,i+1} and λ^∂′_{j,j+1} of boundary segments are specialised to 1, equation (2) becomes

λ^H_(i,j) λ^H_(i+1,j+ε) = λ^H_(i+1,j) λ^H_(i,j+ε) + 1.
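The coefficient-free recurrence of Remark 7.6 is exactly the SL_2-tiling (frieze) rule: every adjacent 2x2 minor of the resulting array equals 1. The following sketch is ours, not part of the paper; it propagates the recurrence from a first row and column of 1s, which is one admissible choice of initial data, not the paper's.

```python
def sl2_tiling(rows, cols):
    """Fill a grid using lam[i][j]*lam[i+1][j+1] = lam[i+1][j]*lam[i][j+1] + 1,
    seeded with the first row and first column equal to 1."""
    lam = [[1.0] * cols for _ in range(rows)]
    for i in range(1, rows):
        for j in range(1, cols):
            lam[i][j] = (lam[i - 1][j] * lam[i][j - 1] + 1) / lam[i - 1][j - 1]
    return lam

grid = sl2_tiling(5, 5)
# Every adjacent 2x2 minor equals 1: the defining SL2-tiling property.
for i in range(4):
    for j in range(4):
        det = grid[i][j] * grid[i + 1][j + 1] - grid[i + 1][j] * grid[i][j + 1]
        assert abs(det - 1) < 1e-9
for row in grid:
    print([round(x, 3) for x in row])
```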
7.3. Integration and partial triangulations. In this section we prove that the solutions of the systems (2) are given by cluster variables in cluster algebras of type A equipped with an alternating orientation. This allows us to express the λ-lengths of curves in C_H in terms of λ-lengths of a partial triangulation of (S, M) consisting of arcs in C_H. Let k ≥ 1 and m ≥ k − 1 be integers. We denote by Π_m the disc with m marked points on the boundary. Marked points are labelled cyclically by Z/mZ. Arcs in Π_m are parametrised by pairs {i, j} with i, j ∈ Z/mZ such that i ≠ j and i ≠ j ± 1. For such a pair {i, j}, we denote by x_{i,j} the corresponding cluster variable in A_{Π_m}. For any i ∈ Z/mZ we denote by x_{i,i+1} the coefficient in A_{Π_m} corresponding to the boundary segment {i, i + 1}. We consider the "zig-zag" triangulation of Π_m given by arcs of the form {−i, i + 2} and {−i, i + 1} for i ∈ Z/mZ (see Figure 13 below). According to the Laurent phenomenon [FZ02] and to the positivity conjecture for cluster algebras of type A [ST09], the variable x_{0,k} can be written as a subtraction-free Laurent polynomial in the coefficients and in the arcs of the zig-zag triangulation. More precisely, for any k ≥ 2, there exists a unique

X_k ∈ Z_{≥0}[x_{0,−1}, . . . , x_{2−(k−1),2−k}, x_{2,3}, . . . , x_{k−1,k}][x^{±1}_{0,2}, . . . , x^{±1}_{2−k,k}, x^{±1}_{−1,2}, . . . , x^{±1}_{2−(k−1),k}]

such that x_{0,k} = X_k.

Theorem 7.7. Let (S, M) be an unpunctured marked surface and let H be a homotopy class of curves joining the boundary components ∂ and ∂′. Then, for any i, j ∈ Z and any k ≥ 2 we have:

For any homotopy class H as above, we denote by A_H the subalgebra of A_(S,M) generated by the cluster variables x_v, where v runs over the arcs in C_H. It follows from Theorem 7.7 that each algebra A_H is either a cluster algebra of type A or an infinite analogue of a cluster algebra of type A. Note that the algebra A_H is independent of the choice of the orientations of the curves in H.

Proposition 7.8. The canonical mapping

ZP ⊗ Z[x_δ | δ a one-sided simple closed curve in (S, M)] ⊗ (⊗_H A_H) −→ A_(S,M)

is surjective, where H runs over the possible homotopy classes of curves joining two marked points in (S, M) and where tensor products are taken over the integers.

Proof. We first observe that the above mapping is a well-defined ring homomorphism, since each term in the tensor product on the left-hand side is a sub-Z-algebra of A_(S,M). Let x ∈ A_(S,M). By definition, we can write x as a sum of terms of the form b x_δ x_η, where b ∈ ZP, where δ is a multigeodesic consisting of one-sided simple closed curves and η is a multigeodesic consisting of arcs in (S, M). Let H_1, . . . , H_k be distinct homotopy classes of curves in (S, M) such that η = η_1 ⊔ · · · ⊔ η_k, where each geodesic in the multigeodesic η_i belongs to H_i. Therefore, b x_δ x_η is the image of b ⊗ x_δ ⊗ x_{η_1} ⊗ · · · ⊗ x_{η_k} under the canonical mapping, so that the mapping is surjective.

Remark 7.9. Note that unless (S, M) is a disc, the epimorphism given in Proposition 7.8 is not an isomorphism. Understanding the kernel of this map amounts to understanding the relations between λ-lengths of curves belonging to distinct homotopy classes. In the case of an annulus with all the boundaries specialised to 1, this situation was studied from a representation-theoretical point of view in [AD11]. In this case, this amounts to comparing the cluster characters associated to objects belonging to distinct connected components of the Auslander-Reiten quiver of a cluster category.
Effect of Progressive Weight Loss on Lactate Metabolism: a Randomized Controlled Trial Objective Lactate is an intermediate of glucose metabolism that has been implicated in the pathogenesis of insulin resistance. This study evaluated the relationship between glucose kinetics and plasma lactate concentration ([LAC]) before and after manipulating insulin sensitivity by progressive weight loss. Methods Forty people with obesity (BMI=37.9±4.3 kg/m2) were randomized to weight maintenance (n=14) or weight loss (n=19). Subjects were studied before and after 6 months of weight maintenance and before and after 5%, 11% and 16% weight loss. A hyperinsulinemic-euglycemic clamp procedure in conjunction with [6,6-2H2]glucose tracer infusion was used to assess glucose kinetics. Results At baseline, fasting [LAC] correlated positively with endogenous glucose production rate (r=0.532, p=0.001) and negatively with insulin sensitivity, assessed as the insulin-stimulated glucose disposal (r=−0.361, p=0.04). Progressive (5% through 16%) weight loss caused a progressive decrease in fasting [LAC], and the decrease in fasting [LAC] after 5% weight loss was correlated with the decrease in endogenous glucose production (r=0.654, p=0.002) and the increase in insulin sensitivity (r=−0.595, p=0.007). Conclusion This study demonstrates the inter-relationships among weight loss, hepatic and muscle glucose kinetics, insulin sensitivity, and [LAC], and suggests that [LAC] can serve as an additional biomarker of glucose-related insulin resistance. INTRODUCTION The use of plasma glucose as fuel involves its transport into cells where it is metabolized to pyruvate in the cytosol before entry into the mitochondria for complete oxidation in the tricarboxylic acid cycle. Pyruvate that does not enter the mitochondria is converted to lactic acid, which rapidly dissociates to lactate and hydrogen. Therefore, lactate is a product of incomplete glucose metabolism. During resting conditions, plasma lactate concentration ([LAC]) increases when flux through glycolysis exceeds the rate of mitochondrial oxidation. Accordingly, an increase in [LAC] could be an indicator of impaired glucose metabolism. In fact, fasting [LAC] is higher in people with obesity (1, 2) and type 2 diabetes (3,4,5) than in healthy lean people. An increase in [LAC] can also influence glucose metabolism by providing a gluconeogenic precursor to the liver and by disrupting muscle insulin signaling and insulin-mediated muscle glucose uptake (6,7,8,9). Therefore, an increase in [LAC] can be both a biomarker and a cause of impaired glucose metabolism. Weight loss improves multi-organ insulin sensitivity and insulin-mediated glucose metabolism in people with obesity (10). The therapeutic effect of weight loss on glucose metabolism suggests weight loss should decrease fasting [LAC], which could contribute to the beneficial effect of weight loss on metabolic function. However, the effect of weight loss on lactate metabolism is not clear because of conflicting results among studies. Weight loss has been shown to decrease fasting [LAC] in people with obesity and metabolic syndrome (2), but not in metabolically healthy people with obesity or people with obesity and type 2 diabetes (11,12). The reason(s) for the inconsistency among studies is not clear, but could be related to differences in sample size and health status of the participants. 
The purpose of this study was to evaluate the relationship between glucose kinetics and [LAC], and determine whether weight loss-induced changes in insulin sensitivity and endogenous glucose production are associated with changes in lactate metabolism. To this end, we evaluated [LAC] and glucose kinetics during basal conditions and during glucose and insulin infusion before and after progressive (5%, 11% and 16%) weight loss in people with obesity. Subjects Forty men and women (BMI=37.9 ± 4.3 kg/m 2 , 44 ± 12 years old) were enrolled in this study. The assessment of lactate metabolism was made while subjects participated in a study that involved evaluating insulin sensitivity during progressive weight loss, conducted from January 2011 until October 2015 (10). No subject had diabetes or other serious illnesses, were taking medications known to interfere with insulin action or lactate metabolism, or consumed excessive alcohol (>21 drinks/week for men and >14 drinks/week for women). Written informed consent was obtained from all subjects before their participation in this study, which was approved by the Institutional Review Board of Washington University School of Medicine in St. Louis, MO. Study protocol Subjects were randomly assigned to weight maintenance (n=14 [5 withdrew after being informed of their randomization and 1 dropped out], 11 women and 3 men) or diet-induced weight loss (n=19 [1 dropped out], 16 women and 3 men) using a computerized randomization list provided by the statistician of the study (Figure 1). The characteristics of each group have been previously reported (10). Subjects in the weight loss group attended weekly individual behavior education sessions and dietary counseling sessions, and were prescribed a low-calorie diet (50-55% of energy as carbohydrate, 30% of energy as fat, and 15-20% of energy as protein) of self-prepared foods to achieve 5% weight loss. After 5% weight loss was achieved, solid and liquid meal replacements were provided as needed to achieve the 10% and 15% weight loss targets. All subjects in the weight loss group were studied before and after 5% loss; 9 subjects continued to lose weight and were studied again after ~11% and ~16% weight loss. After subjects achieved each weight loss target, a weight maintenance diet was prescribed to maintain a stable body weight (<2% change) for at least 3 weeks before repeat testing was performed to avoid the potential effect of energy imbalance on our outcome measures. Subjects in the weight loss group were studied before and after a median (quartiles) of 3.5 (2.9, 4.6), 6.8 (6.0, 8.6) and 10.4 (9.6, 10.4) months for 5%, 11% and 16% weight loss, respectively. Subjects randomized to weight maintenance were studied at baseline and after 6 months. Body fat mass and fat-free mass were determined by using dual-energy X-ray absorptiometry (13). A hyperinsulinemic-euglycemic clamp procedure, in conjunction with [6,6-2 H 2 ]glucose tracer infusion, was used to assess glucose kinetics during basal and insulin-stimulated conditions (10). The rate of insulin infusion (50 mU/m 2 body surface area/min) was designed to achieve typical postprandial plasma insulin concentrations (14). Blood samples were obtained from an indwelling radial arterial catheter. Sample analyses and calculation of glucose kinetics [LAC] was measured on a Beckman DxC600 autoanalyzer, using reagents also from Beckman (Brea, CA) (15). Plasma glucose tracer-to-tracee ratio (TTR) was determined by using gas chromatography-mass spectroscopy (16). 
Glucose rate of appearance (R_a) in plasma during basal conditions provides an index of hepatic glucose production rate, and was calculated by dividing the glucose tracer infusion rate by the average plasma glucose TTR during the last 30 min of the basal period. During the clamp procedure, glucose rate of disappearance (R_d) from plasma provides an index of insulin-stimulated muscle glucose uptake, and was calculated as the sum of endogenous glucose R_a and the rate of infused (exogenous) glucose. Insulin sensitivity was determined as the relative increase in glucose R_d during insulin infusion; these definitions are illustrated in the sketch below.
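The following minimal sketch is ours, not part of the paper; it restates the calculations just described as code. All numeric values are illustrative placeholders, not study data.

```python
def basal_glucose_ra(tracer_infusion_rate, mean_basal_ttr):
    """Basal endogenous glucose rate of appearance (Ra): tracer infusion
    rate divided by the mean plasma glucose tracer-to-tracee ratio (TTR)
    over the last 30 min of the basal period."""
    return tracer_infusion_rate / mean_basal_ttr

def clamp_glucose_rd(endogenous_ra, exogenous_glucose_rate):
    """Glucose rate of disappearance (Rd) during the clamp: endogenous Ra
    plus the rate of infused (exogenous) glucose."""
    return endogenous_ra + exogenous_glucose_rate

def insulin_sensitivity(basal_rd, clamp_rd):
    """Insulin sensitivity as the relative increase in glucose Rd."""
    return (clamp_rd - basal_rd) / basal_rd

# Illustrative values only (units, e.g., umol/kg/min).
ra_basal = basal_glucose_ra(tracer_infusion_rate=0.22, mean_basal_ttr=0.02)
rd_clamp = clamp_glucose_rd(endogenous_ra=2.0, exogenous_glucose_rate=40.0)
# At basal steady state, Rd equals Ra.
print(ra_basal, rd_clamp, insulin_sensitivity(basal_rd=ra_basal, clamp_rd=rd_clamp))
```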
Statistical Analysis

A two-way repeated measures ANOVA with group (weight loss versus maintenance) and time (pre- versus post-intervention) as factors was used to evaluate the effects of 5% weight loss on [LAC], and significant interactions were followed by Tukey's post hoc procedure. A one-way repeated measures ANOVA was used to assess the effect of progressive weight loss on [LAC]. Effects of time were followed by simple contrasts to assess differences from baseline and trend analysis to assess the linear, quadratic, and cubic components of the overall time-related change. Pearson's correlation was used to evaluate the relationship between [LAC] and glucose kinetics. Results are shown as means ± SD. A P-value of 0.05 or less was considered statistically significant. The sample size was based on the primary outcome of the original study (change in insulin sensitivity during weight loss), as reported previously (10). Statistical analyses were performed by using SPSS (version 24, IBM, Armonk, NY).

Inter-relationships among [LAC], glucose production rate, and insulin sensitivity

At baseline, there was a three-fold range in [LAC], from 0.6 mmol/L to 1.9 mmol/L. However, fasting [LAC] correlated positively with glucose R_a (Figure 2A) and negatively with skeletal muscle insulin sensitivity, assessed as the relative increase in glucose R_d during insulin infusion (Figure 2B). Insulin infusion had a variable effect on [LAC], which ranged from a 76% decrease to a 100% increase; the insulin-induced change in [LAC] was positively correlated with the insulin-stimulated increase in glucose R_d (Figure 2C). The relationship between fasting [LAC] and glucose R_d during the clamp, or the absolute increase in glucose R_d during insulin infusion, was not statistically significant (r=−0.217, p=0.225 and r=−0.266, p=0.134, respectively).

Effects of weight loss on lactate and glucose metabolism

Compared with subjects randomized to weight maintenance, 5% weight loss caused a decrease in fasting [LAC] (Figure 3A) and an increase in the relative change in [LAC] induced during insulin infusion (Figure 3B). The relative decrease in fasting [LAC] after 5% weight loss correlated with both the relative decrease in basal glucose R_a (Figure 3C) and the relative increase in insulin sensitivity (assessed as a relative increase in glucose R_d during a hyperinsulinemic-euglycemic clamp procedure) (Figure 3D). Progressive 5% to 16% weight loss caused a progressive decline in fasting [LAC] (Figure 4A) and a progressive increase in the relative change in [LAC] induced by insulin infusion (Figure 4B), in conjunction with a progressive increase in muscle insulin sensitivity reported previously (10).

DISCUSSION

In this study, we evaluated the relationship between [LAC] and glucose kinetics and the effect of weight loss on lactate metabolism in people with obesity. We found fasting [LAC] was positively correlated with the rate of endogenous glucose production, but negatively correlated with insulin sensitivity, assessed as the relative increase in glucose R_d during the clamp procedure. To further investigate the relationship between [LAC] and insulin action, we assessed [LAC] in a subset of participants after weight loss-induced manipulation of insulin sensitivity. Progressive weight loss (5% through 16%) caused a progressive decrease in fasting [LAC], and the change in fasting [LAC] after 5% weight loss was directly correlated with a relative decrease in basal glucose R_a and a relative increase in insulin sensitivity. These data support the inter-relationship between hepatic and skeletal muscle glucose metabolism and [LAC], and suggest fasting [LAC] is a biomarker of glucose-related insulin resistance.

Our finding that fasting [LAC] is associated with insulin resistance is consistent with data from previous studies that have shown fasting [LAC] is higher in people with obesity and those with type 2 diabetes than in people who are lean and healthy (4,5,17,18,19). The design of our study is not able to determine the precise mechanism(s) responsible for the observed correlation between fasting [LAC] and insulin sensitivity and the progressive decline in fasting [LAC] with progressive weight loss. There is no storage depot for lactate, so circulating lactate represents a balance between production and plasma clearance. During postabsorptive, resting conditions, skeletal muscle (20) and adipose tissue (1,21) are major sources of whole-body lactate production, whereas the splanchnic bed (liver and gut) is likely an important source of postprandial lactate production (22). Circulating lactate can be excreted by the kidneys, taken up by specific tissues and converted to glucose (primarily by liver and kidneys) (9,23), or oxidized for fuel (primarily by heart and skeletal muscle) (24,25). Presumably, during basal conditions, the weight loss-induced increase in insulin sensitivity and decrease in endogenous glucose production caused a decrease in the delivery of glucose to skeletal muscle and adipose tissue and increased the proportion of muscle and adipose tissue glucose uptake that was completely oxidized or converted to glycogen, thereby decreasing whole-body lactate production and fasting [LAC].

In contrast with the relationship between [LAC] and insulin sensitivity during postabsorptive conditions, [LAC] in response to either glucose ingestion or glucose infusion is lower in people who have insulin-resistant glucose metabolism than in those who are insulin sensitive (18,26,27,28). The precise mechanism explaining this observation is not known, but it is likely that the impairment in tissue glucose uptake associated with insulin resistance limits the availability of glucose for metabolism to lactate (1,5,29,30,31), whereas the increase in tissue glucose uptake associated with insulin sensitivity increases glycolytic flux and lactate production. In our subjects, the relative change in [LAC] during the hyperinsulinemic-euglycemic clamp procedure progressively increased with progressive weight loss and the accompanying increase in insulin sensitivity, demonstrating that the impairment in postprandial lactate production associated with obesity is corrected by weight loss.
However, we are not able to determine which tissues were responsible for the weight loss-induced increase in lactate production during the clamp procedure, which could involve any of the insulin-sensitive tissues that produce lactate, such as skeletal muscle, adipose tissue, liver, intestine, and kidney. It is possible that the relationship we observed between [LAC] and insulin resistance represents an adverse effect of lactate on insulin action. Data from studies conducted in rodent models demonstrate that hyperlactemia affects cellular insulin signaling and impairs insulin-stimulated glucose uptake in skeletal muscle (6,7,8). However, these findings have not been confirmed in human subjects; experimental sodium lactate infusion, in conjunction with sodium hydroxide infusion to prevent lactic acidosis, did not have adverse effects on glucose disposal or insulin sensitivity in healthy lean people (32,33). The reason for the discrepancy between studies in rodents and people is not clear, but it is possible the changes in plasma pH and [LAC] achieved during lactate infusion in people were not adequate to affect insulin action. Additional studies are needed to determine whether [LAC] is involved in the pathogenesis of insulin resistance in people with obesity.

This study demonstrates the inter-relationships among weight loss, hepatic and muscle glucose kinetics, insulin sensitivity, and [LAC] in people with obesity, and suggests fasting [LAC] can serve as an additional biomarker of glucose-related insulin resistance. Additional studies are needed to identify the precise mechanisms responsible for the relationship between [LAC] and insulin action and to clarify the importance of circulating lactate in the pathogenesis of insulin resistance in people.

What is already known about this subject?

• Lactate is an intermediate of glucose metabolism that has been implicated in the pathogenesis of insulin resistance and type 2 diabetes.

• People with obesity and diabetes have higher fasting plasma lactate concentration ([LAC]) compared with people who are lean and healthy.

What does your study add?

• This study demonstrates that fasting [LAC] is positively associated with the rate of endogenous glucose production, and negatively associated with insulin sensitivity, assessed as the relative increase in glucose disposal during a hyperinsulinemic-euglycemic clamp procedure.

• Progressive (5% through 16%) weight loss caused a progressive decrease in fasting [LAC], whereas the relative decrease in fasting [LAC] in response to 5% weight loss was directly correlated with the relative decrease in endogenous glucose production and the relative increase in insulin-stimulated glucose disposal.

• Fasting [LAC] is a potential biomarker of insulin resistance with respect to glucose metabolism.
Introgression of regulatory alleles and a missense coding mutation drive plumage pattern diversity in the rock pigeon

Birds and other vertebrates display stunning variation in pigmentation patterning, yet the genes controlling this diversity remain largely unknown. Rock pigeons (Columba livia) are fundamentally one of four color pattern phenotypes, in decreasing order of melanism: T-check, checker, bar (ancestral), or barless. Using whole-genome scans, we identified NDP as a candidate gene for this variation. Allele-specific expression differences in NDP indicate cis-regulatory divergence between ancestral and melanistic alleles. Sequence comparisons suggest that derived alleles originated in the speckled pigeon (Columba guinea), providing a striking example of introgression. In contrast, barless rock pigeons have an increased incidence of vision defects and, like human families with hereditary blindness, carry start-codon mutations in NDP. In summary, we find that both coding and regulatory variation in the same gene drives wing pattern diversity, and post-domestication introgression supplied potentially advantageous melanistic alleles to feral populations of this ubiquitous urban bird.

Introduction

Vertebrates have evolved a vast array of epidermal colors and color patterns, often in response to natural, sexual, and artificial selection. Numerous studies have identified key genes that determine variation in the types of pigments that are produced by melanocytes (e.g., Hubbard et al., 2010; Manceau et al., 2010; Roulin and Ducrest, 2013; Domyan et al., 2014; Rosenblum et al., 2014). In contrast, considerably less is known about the genetic mechanisms that determine pigment patterning throughout the entire epidermis and within individual epidermal appendages (e.g., feathers, scales, and hairs) (Kelsh, 2004; Protas and Patel, 2008; Kelsh et al., 2009; Lin et al., 2009; Kaelin et al., 2012; Lin et al., 2013; Eom et al., 2015; Poelstra et al., 2015; Mallarino et al., 2016). In birds, color patterns are strikingly diverse among different populations and species, and these traits have profound impacts on mate-choice, crypsis, and communication (Hill and McGraw, 2006). The domestic rock pigeon (Columba livia) displays enormous phenotypic diversity among over 350 breeds, including a wide variety of plumage pigmentation patterns that also vary within breeds (Domyan and Shapiro, 2017). Some of these pattern phenotypes are found in feral and wild populations as well (Johnston and Janiga, 1995). A large number of genetic loci contribute to pattern variation in rock pigeons, including genes that contribute in an additive fashion and others that epistatically mask the effects of other loci (Van Hoosen Jones, 1922; Hollander, 1937; Sell, 2012; Domyan et al., 2014). Despite the genetic complexity of the full spectrum of plumage pattern diversity in pigeons, classical genetic experiments demonstrate that major wing shield pigmentation phenotypes are determined by an allelic series at a single locus (C, for 'checker' pattern) that produces four phenotypes: T-check (C T allele, also called T-pattern), checker (C), bar (+), and barless (c), in decreasing order of dominance and melanism (Figure 1A) (Bonhote and Smalley, 1911; Hollander, 1938a, 1983b; Levi, 1986; Sell, 2012).
Bar is the ancestral phenotype (Darwin, 1859, 1868), yet checker and T-check can occur at higher frequencies than bar in urban feral populations, suggesting a fitness advantage in areas of dense human habitation (Goodwin, 1952; Obukhova and Kreslavskii, 1984; Johnston and Janiga, 1995; Čanády and Mošanský, 2013). Color pattern variation is associated with several important life history traits in feral pigeon populations. For example, checker and T-check birds have higher frequencies of successful fledging from the nest, longer (up to year-round) breeding seasons, and can sequester more toxic heavy metals in plumage pigments through chelation (Petersen and Williamson, 1949; Lofts et al., 1966; Murton et al., 1973; Janiga, 1991; Chatelain et al., 2014). Relative to bar, checker and T-check birds also have reduced fat storage and, perhaps as a consequence, lower overwinter adult survival rates in harsh rural environments (Petersen and Williamson, 1949a; Jacquin et al., 2012). Female pigeons prefer checker mates to bars, so sexual selection probably influences the frequencies of wing pigmentation patterns in feral populations as well (Burley, 1977, 1981; Johnston and Johnson, 1989). In contrast, barless, the recessive and least melanistic phenotype, is rarely observed in feral pigeons (Johnston and Janiga, 1995). In domestic populations, barless birds have a higher frequency of vision defects, sometimes referred to as 'foggy' vision (Hollander and Miller, 1981; Hollander, 1983b; Mangile, 1987), which could negatively impact fitness in the wild. In this study, we investigate the molecular basis and evolutionary history underlying wing pattern diversity in pigeons. We discover both coding and regulatory variation at a single candidate gene, and a polymorphism linked with pattern variation within and between species that likely resulted from interspecies hybridization.

eLife digest

The rock pigeon is a familiar sight in urban settings all over the world. The pigeon was domesticated thousands of years ago and is still raised by hobbyists; there are now more than 350 breeds. These breeds show spectacular variation in anatomy, feather color and behavior. Color patterns are important for birds in species recognition, mate choice and camouflage. Pigeon fanciers have long observed that color patterns can be linked to health problems, such as lighter birds suffering more often from poor vision. In addition, pigeons with certain pigment patterns are more likely to survive and reproduce in urban habitats. But despite centuries of pigeon-breeding and the abundance of rock pigeons in urban spaces, how pigeons generate such different feather color patterns is still largely a mystery. Vickrey et al. sequenced the genomes of pigeons with different patterns and found that a gene called NDP played an important role in wing pigmentation. In birds with darker patterns (called checker and T-check) the gene NDP was expressed at higher levels in their feathers, but the gene itself was not altered. The lightest colored birds (barless patterned), however, had a mutation in the NDP gene itself that led to less pigmentation. The NDP mutation found in barless pigeons is the same as one that is sometimes found in the human version of NDP, where it is linked to hereditary blindness. Vickrey et al. also discovered that the darker patterns most likely arose from breeding of the rock pigeon with a different species, the African speckled pigeon, something pigeon fanciers have suspected for some time. The findings could help to parse out the different functions of the NDP gene in both pigeons and humans.
Mutations in the NDP gene in humans typically cause a range of neurological problems in addition to loss of sight, but in barless pigeons, the mutation appears to cause only vision defects. These findings suggest that a specific part of the gene is particularly important for vision in birds and humans, and shed light on the surprisingly complex evolutionary history of the rock pigeon. Figure 1. A single genomic region is associated with rock pigeon (C. livia) wing pigmentation pattern. (A) Four classical wing pattern pigmentation phenotypes, shown in decreasing order of genetic dominance and melanism (left to right): T-check, checker, bar, and barless. Photos courtesy of the Genetics Science Learning Center (http://learn.genetics.utah.edu/content/pigeons). (B) Whole-genome pFst comparisons between the genomes of bar (n = 17) and checker (n = 24) pigeons. Dashed red line marks the genome-wide significance threshold (9.72e-10). (C) Detail of pFst peak shows region of high differentiation on Scaffold 68. Five genes within the region are shown in red. Blue shading marks the location of the smallest shared haplotype common to all checker and T-check birds. Haplotype homozygosity in the candidate region extends further for checker and T-check birds (blue trace) than for bar birds (gray), a signature of positive selection for the derived alleles. Extended haplotype homozygosity (EHH) was measured from focal position 1,751,072 following the method of Sabeti et al. (2007). DOI: https://doi.org/10.7554/eLife.34803.003 The following figure supplements are available for figure 1: and a polymorphism linked with pattern variation within and between species that likely resulted from interspecies hybridization. Results and discussion A genomic region on Scaffold 68 is associated with wing pattern phenotype To identify the genomic region containing the major wing pigmentation pattern locus, we used a probabilistic measure of allele frequency differentiation (pFst; Domyan et al., 2016) to compare the resequenced genomes of bar pigeons to genomes of pigeons with either checker or T-check patterns ( Figure 1A). Checker and T-check birds were grouped together because these two patterns are sometimes difficult to distinguish, even for experienced hobbyists. Checker birds are typically less pigmented than T-check birds, but genetic modifiers of pattern phenotypes can minimize this difference (see Figure 1-figure supplement 1 for examples of variation). A two-step wholegenome scan (see Materials and methods; Figure 1B and C, Figure 1-figure supplement 2) identified a single~103 kb significantly differentiated region on Scaffold 68 that was shared by all checker and T-check birds (position 1,702,691-1,805,600 of the Cliv_1.0 pigeon genome assembly, ; p=1.11e-16, genome-wide significance threshold = 9.72e-10). The minimal shared region was defined by haplotype breakpoints in a homozygous checker and a homozygous bar bird, and is highly differentiated from the same region in bar (63.28% mean sequence similarity at informative sites). This region is hereafter referred to as the minimal checker haplotype. As expected for the well-characterized allelic series at the C locus, we also found that a broadly overlapping region of Scaffold 68 was highly differentiated between the genomes of bar and barless birds (p=3.11e-15, genome-wide significance threshold = 9.71e-10; Figure 1-figure supplement 2). Together, these whole-genome comparisons identified a single genomic region corresponding to the wing-pattern C locus. 
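To make the logic of the scan concrete: pFst itself is part of VCFLIB and accounts for genotype likelihoods, but the underlying idea of scoring per-site allele-frequency differentiation between two groups can be shown with a minimal sketch. Everything below (positions, genotype lists) is hypothetical, and the simple frequency contrast stands in for the actual probabilistic test.

```python
# Illustrative only: a naive per-site allele-frequency contrast between two
# groups of genotypes, standing in for the probabilistic pFst test in VCFLIB.
def allele_freq(genotypes):
    """Alternate-allele frequency from diploid genotypes coded 0/1/2."""
    return sum(genotypes) / (2 * len(genotypes))

def differentiation_scan(sites):
    """sites: list of (position, bar_genotypes, checker_genotypes)."""
    scores = []
    for pos, bar_gts, checker_gts in sites:
        delta = abs(allele_freq(bar_gts) - allele_freq(checker_gts))
        scores.append((pos, delta))
    return scores

# Hypothetical toy input: one strongly differentiated site, one background site.
sites = [
    (1_751_072, [0, 0, 0, 1], [2, 2, 1, 2]),  # candidate-region-like site
    (500_000,   [1, 0, 1, 1], [1, 1, 0, 1]),  # undifferentiated site
]
for pos, score in differentiation_scan(sites):
    print(pos, round(score, 3))
```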
A copy number variant is associated with variation in melanistic wing patterns

To identify genetic variants associated with the derived checker and T-check phenotypes, we first compared annotated protein-coding genes throughout the genome. We found a single, predicted, fixed change in EFHC2 (Y572C, Figure 1-figure supplement 3) in checker and T-check birds relative to bar birds (VAAST; Yandell et al., 2011). However, this same amino acid substitution is also found in Columba rupestris, a closely related species to C. livia that has a bar wing pattern. Thus, the Y572C substitution is not likely to be causative for the checker or T-check pattern, nor is it likely to have a strong impact on protein function (MutPred2 score 0.468, no recognized affected domain; PolyPhen-2 score 0.036, benign; Adzhubei et al., 2010; Pejaver et al., 2017).

Next, we examined sequence coverage across the checker haplotype and discovered a copy number variable (CNV) region (approximate breakpoints at Scaffold 68 positions 1,790,000 and 1,805,600). Based on normalized read-depths of resequenced birds, we determined that the CNV region has one, two, or four copies per chromosome. Bar birds (n = 12) in our resequencing panel always had a total of two copies in the CNV region (one on each chromosome), but most checker (n = 5 of 7) and T-check (n = 2 of 2) genomes examined had additional copies of the CNV (Figure 2A). Using a PCR assay to amplify across the breakpoints in birds with more than one copy per chromosome, we determined that additional copies result from tandem repeats. We found no evidence that the checker haplotype contains an inversion based on mapping of paired-end reads at the CNV breakpoints (WHAM; Kronenberg et al., 2015). In addition, we were able to amplify unique PCR products that span the outer CNV breakpoints (data not shown), suggesting that there are no inversions within the CNV region. Consistent with the dominant inheritance pattern of the phenotype, all checker and T-check birds had at least one copy of the checker haplotype. However, the fact that some checker birds had only one copy of the CNV region on each chromosome demonstrates that a copy number increase is not necessary to produce melanistic phenotypes. Pedigree analysis of a laboratory cross also confirmed perfect co-segregation of the checker haplotype and phenotype (Figure 1-figure supplement 4, Supplementary file 1). Thus, a checker haplotype on at least one chromosome appears to be necessary for the dominant melanistic phenotypes, but additional copies of the CNV region are not.

Figure 2. A copy number variant (CNV) in the candidate region is associated with T-check and checker phenotypes. (A) Normalized read depths from resequenced birds are plotted in the candidate region between EFHC2 and NDP on Scaffold 68. Thickened portions of gene models represent exons and thin portions are introns. Representative individual read depth traces are shown for the following: black for bar C. livia, grey for checker C. livia individuals without additional copies of the CNV, blue for checker C. livia individuals with additional copies of the CNV region, red for T-check C. livia. (B) CNV quantification for 94 birds (20 bar, 56 checker, and 18 T-check). Checker and T-check phenotypes (as reported by breeders) were associated with increased copy numbers (p=2.1e-05). (C) CNV and phenotype quantification for an additional 84 birds, including 26 feral pigeons. Increased copy number was associated with an increase in dark area on the wing shield (r^2 = 0.46, linear regression). Points are colored by reported phenotype and origin: bar, black; checker, blue; T-check, red; domestic breeds, filled circle points; ferals, cross points. DOI: https://doi.org/10.7554/eLife.34803.008 Source data 1. Taqman copy number assay results represented in Figure 2B. DOI: https://doi.org/10.7554/eLife.34803.010 Source data 2. Taqman copy number assay and phenotype quantification results represented in Figure 2C.

In a larger sample of pigeons, we found a significant association between copy number and phenotype (TaqMan assay; pairwise Wilcoxon test, p=2.1e-05). Checker (n = 40 of 55) and T-check (n = 15 of 18) patterns are usually associated with expansion of the CNV, but pigeons with the bar pattern (n = 20) never had more than two copies in total (one copy on each chromosome; Figure 2B). Although additional copies of the CNV only occurred in checker and T-check birds, we did not observe a consistent number of copies associated with either phenotype. This could be due to a variety of factors, including modifiers that darken genotypically checker birds to closely resemble T-check (Van Hoosen Jones, 1922; Sell, 2012) and environmental factors such as temperature-dependent darkening of the wing shield during feather development (Podhradsky, 1968). Due to the potential ambiguity in categorical phenotyping, we next measured the percent of pigmented area on the wing shield and tested for associations between copy number and the percentage of pigmented wing-shield area. We phenotyped and genotyped an additional 63 birds from diverse domestic breeds as well as 26 feral birds, and found that estimated copy number in the variable region was correlated with the amount of dark pigment on the wing shield (nonlinear least squares regression, followed by r^2 calculation; r^2 = 0.46) (Figure 2C). This correlation was a better fit to the regression when ferals were excluded (r^2 = 0.68, Figure 2-figure supplement 1), possibly because numerous pigmentation modifiers (e.g., sooty and dirty) are segregating in feral populations (Hollander, 1938a; Johnston and Janiga, 1995). Together, our analyses show that the minimal checker haplotype is associated with increased pigmentation on the wing shield plumage, resulting in qualitative variation between bar and checker (including T-check) phenotypes. Furthermore, copy number variation is found only in checker haplotypes, and higher numbers of copies are associated with quantitative pigmentation increases in checker and T-check birds only.

NDP is differentially expressed in feather buds of different wing pattern phenotypes

The CNV that is associated with wing pattern variation resides between two genes, EFHC2 and NDP. EFHC2 is a component of motile cilia, and mouse mutants have juvenile myoclonic epilepsy (Linck et al., 2014). In humans, allelic variation in EFHC2 is also associated with differential fear responses and social cognition (Weiss et al., 2007; Blaya et al., 2009; Startin et al., 2015; but see Zinn et al., 2008). However, EFHC2 has not been implicated in pigmentation phenotypes in any organism. NDP encodes a secreted ligand that activates WNT signaling by binding to its only known receptor FZD4 and its co-receptor LRP5 (Smallwood et al., 2007; Hendrickx and Leyns, 2008; Deng et al., 2013; Ke et al., 2013).
Notably, NDP is one of many differentially expressed genes in the feathers of closely related crow subspecies that differ, in part, by the intensity of plumage pigmentation (Poelstra et al., 2015). Furthermore, FZD4 is a known melanocyte stem cell marker (Yamada et al., 2010). Thus, based on expression variation in different crow plumage phenotypes, and the expression of its receptor in pigment cell precursors, NDP is a strong candidate for pigment variation in pigeons. NDP is a short-range signal (Niehrs, 2004), so we suspect that this ligand is secreted by melanocytes themselves or by cells in close proximity to them.

The CNV in the intergenic space between EFHC2 and NDP in the candidate region, coupled with the lack of candidate coding variants between bar and checker haplotypes, led us to hypothesize that the CNV region might contain regulatory variation that could alter expression of one or both neighboring genes. To test this possibility, we performed qRT-PCR on RNA harvested from regenerating wing shield feathers of bar, checker, and T-check birds. EFHC2 was not differentially expressed between bar and either checker or T-check patterned feathers (p=0.19, pairwise Wilcoxon test, p-value adjustment method: fdr), although expression levels differed slightly between the checker and T-check patterned feathers (p=0.046, Figure 3A). Expression levels of other genes adjacent to the minimal checker haplotype region also did not vary by phenotype (Figure 3-figure supplement 1). In contrast, expression of NDP was significantly increased in checker feathers, and even higher in T-check feathers, relative to bar feathers (Figure 3A) (bar-checker comparison, p=1.9e-05; bar-T-check, p=1.0e-08; checker-T-check, p=0.0071; pairwise Wilcoxon test, all comparisons were significant at a false discovery rate of 0.05). Moreover, when qRT-PCR expression data for checker and T-check feathers were grouped by copy number instead of categorical phenotype, the number of CNV copies was positively associated with NDP expression level (Figure 3-figure supplement 2). Thus, expression of NDP is positively associated with both increased melanism (categorical pigment pattern phenotype) and CNV genotype.

Figure 3. Expression differences in NDP, but not EFHC2, indicate cis-regulatory differences associated with pigmentation phenotypes. (A) qRT-PCR assays demonstrate higher expression of NDP in regenerating feathers of checker and T-check birds than in bar birds. Expression levels of EFHC2 are indistinguishable between bar and melanistic phenotypes (p=0.19), although checker and T-check differed from each other (p=0.046). (B) Allele-specific expression assay in regenerating feathers from heterozygous bar/checker birds for NDP and EFHC2. Copies of the CNV region on the checker chromosome were quantified using a custom Taqman assay. Boxes span the first to third quartiles, bars extend to minimum and maximum observed values, black line indicates median. Expression of EFHC2 alleles were not significantly different, and checker alleles of NDP showed higher expression than the bar allele; p=0.0028 for two-sample t-test between 1 vs. 4 copies, p=1.84e-06 for glm regression. DOI: https://doi.org/10.7554/eLife.34803.012 Source data 1. qRT-PCR source data represented in Figure 3A. Source data 2. Allele-specific expression assays source data represented in Figure 3B.

The increase in NDP expression could be the outcome of at least two molecular mechanisms.
First, one or more regulatory elements in the CNV region (or elsewhere on the same DNA strand) could increase expression of NDP in cis. Such changes would only affect expression of the allele on the same chromosome (Wittkopp et al., 2004). Second, trans-acting factors encoded within the minimal checker haplotype (e.g., EFHC2 or an unannotated feature) could increase NDP expression, resulting in an upregulation of NDP alleles on both chromosomes. To distinguish between these possibilities, we carried out allele-specific expression assays (Domyan et al., 2014) on the regenerating wing shield feathers of birds that were heterozygous for bar and checker alleles in the candidate region (checker alleles with one, two, or four copies of the CNV). In the common trans-acting cellular environment of heterozygous birds, checker alleles of NDP were more highly expressed than bar alleles, and these differences were further amplified in checker alleles with more copies of the CNV (Figure 3B) (p=0.0028 for two-sample t-test between 1 vs. 4 copies, p=1.84e-06 for generalized linear model regression; ratios of checker:bar expression for 1- and 4-copy checker alleles were significantly different than 1:1, p<0.002 for each comparison). In comparison, transcripts of EFHC2 from checker and bar alleles were not differentially expressed in the heterozygote background (Figure 3B) (p=0.55 for two-sample t-test between 1 vs. 4 copies, p=0.47 for linear regression; ratios of checker:bar expression for 1- and 4-copy checker alleles were not significantly different than 1:1, p>0.3 for each comparison). Checker alleles of NDP were also more highly expressed in feathers from other body regions (tail and dorsum, Figure 3-figure supplement 3), even though the pigment pattern on these regions is generally similar in bar and checker birds (e.g., both phenotypes have a dark band on the tail). Together, our expression studies indicate that a cis-acting regulatory change drives increased expression of NDP in pigeons with more melanistic plumage patterns, but does not alter expression of EFHC2 or other nearby genes. Furthermore, because NDP expression increases with additional copies of the CNV, the regulatory element probably resides within the CNV itself.

To search for known enhancers in the CNV region, we mapped elements from the VISTA (Visel et al., 2007) and REPTILE (He et al., 2017) enhancer datasets to the pigeon genome. We found no hits within the minimal haplotype from the VISTA dataset and 12 hits from the REPTILE dataset (Supplementary file 2). Of these 12, one hit was within the CNV region (Scaffold 68: 1,795,453-1,795,511). However, this lone mouse enhancer (ENSMUSR00000084784, http://uswest.ensembl.org/Mus_musculus/) is not known to regulate EFHC2 or NDP in mice, and is located on a mouse chromosome that is not orthologous to pigeon Scaffold 68. Further functional work will be required to assess whether this or other sequences in the CNV region act as regulatory elements in C. livia.

A missense mutation at the start codon of NDP is associated with barless

In humans, mutations in NDP can result in Norrie disease, a recessively-inherited disorder characterized by a suite of symptoms including vision deficiencies, intellectual and motor impairments, and auditory deficiencies (Norrie, 1927; Warburg, 1961; Holmes, 1971; Chen et al., 1992; Sims et al., 1992).
Protein-coding mutations in NDP, including identical mutations segregating within single-family pedigrees, result in variable phenotypic outcomes, including incomplete penetrance (Meindl et al., 1995; Berger, 1998; Allen et al., 2006). Intriguingly, barless pigeons also have an increased incidence of vision deficiencies and, as in humans with certain mutant alleles of NDP, this phenotype is not completely penetrant (Hollander, 1983b). Thus, based on the known allelism at the C locus, the nomination of regulatory changes at NDP as candidates for the C and C^T alleles, and the vision-related symptoms of Norrie disease, NDP is also a strong candidate for the barless phenotype (c allele). To test this prediction, we used VAAST to scan the resequenced genomes of 9 barless pigeons and found that all were homozygous for a nonsynonymous protein-coding change at the start codon of NDP that was perfectly associated with the barless wing pattern phenotype (Figure 4, Figure 1-figure supplement 2). We detected no other genes with fixed coding changes or regions of significant allele frequency differentiation (pFst) elsewhere in the genome. We genotyped an additional 14 barless birds and found that all were homozygous for the same start-codon mutation.

The barless mutation is predicted to truncate the amino terminus of the NDP protein by 11 amino acids, thereby disrupting the 24-amino acid signal peptide sequence (www.uniprot.org, Q00604 NDP_Human). NDP is still transcribed and detectable by RT-PCR in regenerating barless feathers; therefore, we speculate that the start-codon mutation might alter the normal secretion of the protein into the extracellular matrix (Gierasch, 1989). In humans, coding mutations in NDP are frequently associated with a suite of neurological deficits. In pigeons, however, only wing pigment depletion and vision defects are reported in barless homozygotes. Remarkably, two human families segregating Norrie disease have only vision defects, and like barless pigeons, these individuals have start-codon mutations in NDP (Figure 4) (Isashiki et al., 1995). Therefore, signal peptide mutations might affect a specific subset of developmental processes regulated by NDP, while leaving other (largely neurological) functions intact. NDP is critical for retinal vascular formation (Xu et al., 2004) and hedgehog-dependent retinal progenitor proliferation (McNeill et al., 2013) in mammals, and we speculate that one or both of these processes is affected by the start codon mutations in pigeons as well.

In summary, wing pattern phenotypes in pigeons are associated with the evolution of both regulatory (checker, T-check) and coding (barless) changes in the same gene, and barless pigeons share a partially-penetrant visual deficiency with human patients who have start-codon substitutions. Future work will test whether the barless (and human) start-codon mutations affect extracellular secretion of NDP, and how NDP expression directly or indirectly regulates melanocyte activity. Sharp boundaries define the heavily pigmented areas of checker feathers (Figure 1, Figure 1-figure supplement 1), similar to intra-feather patterns in other species that are mediated by both activity of melanocytes and the topological distribution of their progenitors (Lin et al., 2013; Chen et al., 2015).
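As a purely illustrative aside on the truncation logic described above: when an annotated start codon is lost, translation is often assumed to re-initiate at the next in-frame ATG, shortening the protein's amino terminus. The sketch below demonstrates that reasoning on a hypothetical coding sequence; the sequence is invented for illustration and is not the pigeon NDP CDS.

```python
# Illustrative sketch: estimate N-terminal truncation when the annotated
# start codon is mutated, assuming translation re-initiates at the next
# in-frame ATG. The sequence below is hypothetical, not the real NDP CDS.
def truncation_in_codons(cds):
    """Return how many codons are lost if the first ATG is skipped and
    translation starts at the next in-frame ATG; None if none exists."""
    for codon_index in range(1, len(cds) // 3):
        if cds[codon_index * 3 : codon_index * 3 + 3] == "ATG":
            return codon_index
    return None

# Hypothetical CDS whose next in-frame ATG sits 11 codons downstream,
# mirroring the 11-amino-acid truncation predicted for barless NDP.
cds = "ATG" + "GCT" * 10 + "ATG" + "TGC" * 5 + "TAA"
print(truncation_in_codons(cds))  # -> 11
```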
Considerably more is known about the molecular control of plumage structure and color than pigmentation pattern, based in part on experiments to manipulate gene expression in vivo by viral infection and in explants by protein misexpression (Harris et al., 2002; Yu et al., 2002; Harris et al., 2005; Chen et al., 2015; Boer et al., 2017). We expect the identification of NDP as a patterning gene to open pigment pattern formation to similar molecular approaches.

Signatures of introgression of the checker haplotype

Pigeon fanciers have long hypothesized that the checker pattern in the rock pigeon (Columba livia) resulted from a cross-species hybridization event with the speckled pigeon (C. guinea, Figure 5D), a species with a checker-like wing pattern (G. Hochlan, G. Young, personal communication) (Hollander, 1983b). We estimate that C. livia and C. guinea diverged 4-5 million years ago (MYA): columbid species (pigeons and doves) diverge from each other in mitochondrial cytochrome b nucleotide sequence at 1.96% per MY (Weir and Schluter, 2008), and C. livia and C. guinea differ at this gene by 8.0%. Divergence date estimates for these two species based on nuclear genome sequences range between 3.2 and 6.7 MYA (K.P.J., unpublished results). Despite this divergence time of several MY, inter-species crosses between C. livia and C. guinea can produce fertile hybrids (Whitman, 1919; Irwin et al., 1936; Taibel, 1949; Miller, 1953). Moreover, hybrid F1 and backcross progeny between C. guinea and bar C. livia have checkered wings, much like C. livia with the C allele (Whitman, 1919; Taibel, 1949). Taibel (1949) showed that, although hybrid F1 females were infertile, two more generations of backcrossing hybrid males to C. livia could produce checker offspring of both sexes that were fully fertile. In short, Taibel introgressed the checker trait from C. guinea into C. livia in just three generations.

To evaluate the possibility of an ancient introgression event, we sequenced an individual C. guinea genome to 33X coverage and mapped the reads to the C. livia reference assembly. We calculated four-taxon D-statistics ('ABBA-BABA' test; Durand et al., 2011) to test for deviations from expected sequence similarity between C. guinea and C. livia, using a wood pigeon (C. palumbus) genome as an outgroup (Supplementary file 3). In this case, the null expectation is that the C candidate region will be more similar between conspecific bar and checker C. livia than either will be to the same region in C. guinea. That is, the phylogeny of the candidate region should be congruent with the species phylogeny. However, we found that the D-statistic approaches one in the candidate region (n = 10 each for bar and checker C. livia), indicating that checker C. livia are more similar to C. guinea than to conspecific bar birds in this region (Figure 5A). The mean genome-wide D-statistic was close to zero (0.021), indicating that bar and checker sequences are more similar to each other throughout the genome than either one is to C. guinea. This similarity between C. guinea and checker C. livia in the pattern candidate region was further supported by sequence analysis using HybridCheck (Ward and van Oosterhout, 2016). Outside of the candidate region, checker birds have a high sequence similarity to conspecific bar birds and low similarity to C. guinea (Figure 5B). Within the candidate region, however, this relationship shows a striking reversal, and checker and C. guinea sequences are most similar to each other.
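The logic of the four-taxon test can be shown with a minimal sketch. For biallelic sites ordered (P1, P2, P3, outgroup) = (bar, checker, C. guinea, C. palumbus), ABBA sites support derived-allele sharing between checker and C. guinea, BABA sites between bar and C. guinea, and D = (ABBA - BABA) / (ABBA + BABA). The counts below are hypothetical, and this toy version ignores the genotype-likelihood and windowing details of the actual VCFLIB implementation.

```python
# Toy four-taxon D-statistic (Durand et al., 2011) from site patterns.
# Alleles are coded 0 (ancestral, matching the outgroup) or 1 (derived).
def d_statistic(sites):
    """sites: iterable of (p1, p2, p3, outgroup) allele codes."""
    abba = baba = 0
    for p1, p2, p3, out in sites:
        if (p1, p2, p3, out) == (0, 1, 1, 0):
            abba += 1  # checker shares the derived allele with C. guinea
        elif (p1, p2, p3, out) == (1, 0, 1, 0):
            baba += 1  # bar shares the derived allele with C. guinea
    return (abba - baba) / (abba + baba) if (abba + baba) else 0.0

# Hypothetical input: an excess of ABBA sites pushes D toward one,
# as observed in the candidate region.
sites = [(0, 1, 1, 0)] * 90 + [(1, 0, 1, 0)] * 5 + [(1, 1, 0, 0)] * 20
print(round(d_statistic(sites), 3))  # -> 0.895
```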
In addition, although the genome-wide D-statistic was relatively low, the 95% confidence interval (CI) was greater than zero (0.021 to 0.022), providing further evidence for one or more introgression events from C. guinea into checker and T-check genomes. Unlike in many checker and T-check C. livia, we did not find additional copies of the CNV region in C. guinea. This could indicate that the CNV expanded in C. livia, or that the CNV is present in a subset of C. guinea but has not yet been sampled. Taken together, these patterns of sequence similarity and divergence support the hypothesis that the candidate checker haplotype in rock pigeons originated by introgression from C. guinea.

While post-divergence introgression is an attractive hypothesis to explain the sequence similarity between checker C. livia and C. guinea, another formal possibility is that sequence similarity between these groups is due to incomplete lineage sorting. In an analogous example, light- and dark-pigmentation alleles of tan probably segregated in the ancestor of Drosophila americana and D. novamexicana, and the light allele subsequently became fixed in the latter species (Wittkopp et al., 2009). However, light and dark alleles continue to segregate in D. americana, and the light allele in this species has the same ancestral origin as the one that is fixed in D. novamexicana. Similarly, we wanted to test if the minimal checker haplotype might have been present in the last common ancestor of C. guinea and C. livia, but now segregates only in C. livia. We measured nucleotide differences among different alleles of the minimal haplotype and compared these counts to polymorphism rates expected to accumulate over the 4-5 MY divergence time between C. livia and C. guinea (Figure 5C, purple bar, see Materials and methods). We found that polymorphisms between bar C. livia and C. guinea approached the number expected to accumulate in this region in 4-5 MY (59.90% sequence similarity at segregating sites, SD = 2.6%, 1708 ± 109 mean SNPs, Figure 5C), but so did intraspecific comparisons between bar and checker C. livia (63.28%, SD = 2.3%, 1564 ± 99). In contrast, C. guinea and C. livia checker sequences had significantly fewer differences than would be expected to accumulate between the two species (90.96% sequence similarity, SD = 0.13%, 384 ± 6, p<2.2e-16, t-test). These results support an introgression event from C. guinea to C. livia, rather than a shared allele inherited from a common ancestor prior to divergence. Among 11 checker haplotype sequences, we found remarkably high sequence similarity (99.39%, SD = 0.18%, 26 ± 8 mean differences), corresponding to a haplotype divergence time of 89 ± 27 thousand years (KY), based on mutation rate.

The rock pigeon reference genome contains the checker haplotype, which could bias the discovery of SNPs in our resequenced genomes. We therefore performed de novo assemblies using Illumina shotgun reads from C. guinea and high-coverage bar and checker individuals, then compared nucleotide sequences in regions of the minimal haplotype where all three assemblies overlapped (92,199 of 102,909 bp, or 89.6%). We found similar patterns of divergence between the de novo assemblies and the resequenced genomes that were mapped to the reference, indicating that SNP discovery was not heavily biased by our short-read mapping approach (Figure 5-figure supplement 1).
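The divergence-time figures above follow directly from the coalescent relationship given in Materials and methods, Time = #SNPs / (2 × μ × length of the region). A minimal sketch, plugging in the values reported here (26 mean differences among checker haplotypes, μ = 1.42e-9 per site per year, and the 102,909 bp minimal haplotype), reproduces the ~89 KY estimate:

```python
# Divergence time from pairwise differences: Time = SNPs / (2 * mu * L),
# using the mutation rate and region length given in Materials and methods.
MU = 1.42e-9          # mutations per site per year
REGION_BP = 102_909   # length of the minimal checker haplotype

def divergence_years(num_snps, length_bp=REGION_BP):
    return num_snps / (2 * MU * length_bp)

# 26 mean differences among checker haplotypes -> ~89 thousand years.
print(round(divergence_years(26) / 1000))      # -> 89
# 1564 mean bar-checker differences -> millions of years, close to the
# expectation for the 4-5 MY species divergence.
print(round(divergence_years(1564) / 1e6, 1))  # -> 5.4
```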
Based on pairwise polymorphisms between the checker reference and the de novo checker assembly (11 differences), the haplotype divergence time is 42 KY. This figure is more recent than our estimate based on more individuals, but the key results are that both estimates are roughly two orders of magnitude more recent than the divergence time between species, and the similarity between checker and C. guinea sequences is characteristic of within-species rather than between-species variation.

Lastly, to date the putative introgression event(s), we estimated the age of the minimal checker haplotype based on the pattern of linkage disequilibrium decay (Voight et al., 2006). Using a recombination rate calculated for rock pigeon (Holt et al., 2018), the checker haplotype originated in C. livia between 429 and 857 years ago, assuming one to two generations per year. The corresponding 95% confidence intervals are 267 to 716 years ago assuming one generation per year and 534 to 1,432 years ago assuming two generations per year. Together, these multiple lines of evidence support the hypothesis that the checker haplotype was introduced from C. guinea into C. livia after the domestication of the rock pigeon (~5000 years ago).

Figure 5 source data and figure supplement: Source data 1. Numbers of SNPs between different pairwise combinations of homozygous bar, checker, and C. guinea represented in Figure 5C. DOI: https://doi.org/10.7554/eLife.34803.021 Figure supplement 1. Expected (purple bar) and observed proportion of shared segregating sites out of 1,458 SNPs in the minimal haplotype region for different pairwise comparisons between de novo genome assemblies from short-read resequencing data for bar, checker, and C. guinea. DOI: https://doi.org/10.7554/eLife.34803.020

The four-taxon D-statistic values approach one at the NDP locus (Figure 5A), indicating that checker C. livia is far more closely related to C. guinea than to bar C. livia at this locus. Additionally, the pairwise differences between C. guinea and checker haplotypes are incompatible with incomplete lineage sorting (Figure 5C), assuming a 4-5 MY species divergence time and no subsequent gene flow. The lack of single nucleotide diversity among checker haplotypes, with only 26 ± 8 mean differences and an estimated gene tree divergence of 89 KY, is unusually low relative to the diversity typically observed in large, free-living pigeon populations. The differences between the mutation-based (89 KY) and LD-based (0.4 to 0.9 KY) estimates of the checker haplotype age are an expected consequence of crossbreeding and artificial selection, given that the former is an estimate of the age of the most recent common ancestor in the source population while the latter is a lower bound estimate for the date of introgression. Inconsistencies of this magnitude are unexpected in the absence of introgression. Additionally, the genome-wide D-statistic comparing C. guinea and bar to C. guinea and checker is low but significantly greater than zero, indicating that gene flow from C. guinea to checker has been higher than from C. guinea to bar throughout the genome. Notably, the non-zero D-statistic result holds when the NDP locus is excluded from this calculation. These results are expected if the checker haplotypes were recently introduced into C. livia by pigeon breeders, and interbreeding between checker and bar populations has not been completely random.
Consistent with this expectation, non-random mating is observed in feral populations, and pigeon breeders often impose color pattern selection on their birds (Darwin, 1868; Burley, 1977, 1981; Johnston and Johnson, 1989; National_Pigeon_Association, 2010). Finally, the upper bound of the LD-based age estimate of the checker haplotype of 1,432 years ago indicates that the checker haplotype was introduced into C. livia well after the domestication of rock pigeons. Because the ranges of C. livia and C. guinea overlap in northern Africa (del Hoyo et al., 2017), it is possible that introgression events occurred in free-living populations. However, the more likely explanation is that C. guinea haplotypes were introduced into C. livia by pigeon breeders. Once male hybrids are generated, this can be accomplished in just a few generations (Taibel, 1949). Thus, humans might have intentionally selected this phenotype, which is linked to life history traits that are advantageous in urban environments, and then built ideal urban habitats for these birds to thrive (Jerolmack, 2008).

Introgression and pleiotropy

Adaptive traits can arise through new mutations or standing variation within a species, and a growing number of studies point to adaptive introgressions among vertebrates and other organisms (Hedrick, 2013; Martin and Orgogozo, 2013; Harrison and Larson, 2014; Zhang et al., 2016). In some cases, introgressed loci are associated with adaptive traits in the receiving species, including high-altitude tolerance in Tibetan human populations from Denisovans (Huerta-Sánchez et al., 2014), resistance to anticoagulant pesticides in the house mouse from the Algerian mouse (Song et al., 2011; Liu et al., 2015), and beak morphology among different species of Darwin's finches (Lamichhaney et al., 2015). Among domesticated birds, introgressions are responsible for skin and plumage color traits in chickens and canaries, respectively (Eriksson et al., 2008; Lopes et al., 2016). Alleles under artificial selection in a domesticated species can be advantageous in the wild as well, as in the introgression of dark coat color from domestic dogs to wolves (Anderson et al., 2009) (however, color might actually be a visual marker for an advantageous physiological trait conferred by the same allele; Coulson et al., 2011). In this study, we identified a putative introgression into C. livia from C. guinea that is advantageous both in artificial (selection by breeders) and free-living urban environments (sexual and natural selection). A change in plumage color pattern is an immediately obvious phenotypic consequence of the checker allele, yet other traits are linked to this pigmentation pattern. For example, checker and T-check pigeons have longer breeding seasons, up to year-round in some locations (Lofts et al., 1966; Murton et al., 1973), and C. guinea breeds year-round in most of its native range as well (del Hoyo et al., 2017). Perhaps not coincidentally, NDP is expressed in the gonad tissues of adult C. livia (MacManes et al., 2017) and the reproductive tract of other amniotes (Paxton et al., 2010). Abrogation of expression or function of NDP or its receptor FZD4 is associated with infertility and gonad defects (Luhmann et al., 2005; Kaloglu et al., 2011). Furthermore, checker and T-check birds deposit less fat during normally reproductively quiescent winter months.
In humans, expression levels of FZD4 and the co-receptor LRP5 in adipose tissue respond to varying levels of insulin (Karczewska-Kupczewska et al., 2016), and LRP5 regulates the amount and location of adipose tissue deposition (Loh et al., 2015; Karczewska-Kupczewska et al., 2016). Therefore, based on its reproductive and metabolic roles in pigeons and other amniotes, NDP is a viable candidate not only for color pattern variation, but also for the suite of other traits observed in free-living (feral and wild) checker and T-check pigeons. Indeed, the potential pleiotropic effects of NDP raise the possibility that reproductive output and other physiological advantages are secondary or even primary targets of selection, and melanistic phenotypes are honest genetic signals of a cluster of adaptive traits controlled by a single locus.

Adaptive cis-regulatory change is also an important theme in the evolution of vertebrates and other animals (Shapiro et al., 2004; Miller et al., 2007; Wray, 2007; Carroll, 2008; Chan et al., 2010; Wittkopp and Kalay, 2011; O'Brown et al., 2015; Signor and Nuzhdin, 2018). This theme is especially prominent in studies of color variation in Drosophila, in which regulatory variation impacts both the type and pattern of pigments on the body and wings (Gompel et al., 2005; Prud'homme et al., 2006; Rebeiz et al., 2009). In some cases, the evolution of multiple regulatory elements of the same gene can fine-tune phenotypes, such as mouse coat color and trichome distribution in fruit flies (McGregor et al., 2007; Linnen et al., 2013). In cases of genes that have multiple developmental roles, introgression can result in the simultaneous transfer of multiple advantageous traits (Rieseberg, 2011). The potential role of NDP in both plumage and physiological variation in pigeons could represent a striking example of pleiotropic regulatory effects. Wing pigmentation patterns that resemble checker are present in many wild species within and outside of Columbidae, including Patagioenas maculosa (Spot-winged pigeon), Spilopelia chinensis (Spotted dove), Geopelia cuneata (Diamond dove), Gyps rueppelli (Rüppell's vulture), and Pygiptila stellaris (Spot-winged antshrike). Based on our results in pigeons, NDP and its downstream targets can serve as initial candidate genes to ask whether similar molecular mechanisms generate convergent patterns in other species.

Materials and methods

Ethics statement

Animal husbandry and experimental procedures were performed in accordance with protocols approved by the University of Utah Institutional Animal Care and Use Committee (protocols 10-05007, 13-04012, and 16-03010).

DNA sample collection and extraction

Blood samples were collected in Utah at local pigeon shows, at the homes of local pigeon breeders, from pigeons in the Shapiro lab, and from ferals that had been captured in Salt Lake City, Utah. Photos of each bird were taken upon sample collection for our records and for phenotype verification. Tissue samples of C. rupestris, C. guinea, and C. palumbus were provided by the University of Washington Burke Museum, Louisiana State University Museum of Natural Science, and Tracy Aviary, respectively. Breeders outside of Utah were contacted by email or phone to obtain feather samples. Breeders were sent feather collection packets and instructions, and feather samples were sent back to the University of Utah along with detailed phenotypic information. Breeders were instructed to submit only samples that were unrelated by grandparent.
DNA was then extracted from blood, tissue, and feathers as previously described (Stringham et al., 2012).

Determination of color and pattern phenotype of adult birds

Feather and color phenotypes of birds were designated by their respective breeders. Birds that were raised in our facility at the University of Utah or collected from feral populations were assigned a phenotype using standard references (Levi, 1986; Sell, 2012).

Genomic analyses

BAM files from a panel of previously resequenced birds were combined with BAM files from eight additional barless birds, 23 bar and 23 checker birds (22 feral, 24 domestics), a single C. guinea, and a single C. palumbus. SNVs and small indels were called using the Genome Analysis Toolkit (UnifiedGenotyper and LeftAlignAndTrimVariants functions, default settings; McKenna et al., 2010). Variants were filtered as described previously (Domyan et al., 2016) and the subsequent variant call format (VCF) file was used for pFst and ABBA-BABA analyses as part of the VCFLIB software library (https://github.com/vcflib) and VAAST (Yandell et al., 2011) as described previously. pFst was first performed on whole genomes of 32 bar and 27 checker birds. Some of the checker and bar birds were sequenced to low coverage (~1X), so we were unable to confidently define the boundaries of the shared haplotype. To remedy this issue, we used the core of the haplotype to identify additional bar and checker birds from a set of birds that had already been sequenced to higher coverage. These additional birds were not included in the initial scan because their wing pattern phenotypes were concealed by other color and pattern traits that are epistatic to bar and check phenotypes. For example, the recessive red (e) and spread (S) loci produce a uniform pigment over the entire body, thereby obscuring any bars or checkers (Van Hoosen Jones, 1922; Hollander, 1938a; Sell, 2012; Domyan et al., 2014). Although the major wing pattern is not visible in these birds, the presence or absence of the core checker haplotype allowed us to characterize them as either bar or checker/T-check. We then re-ran pFst using 17 bar and 24 checker/T-check birds with at least 8X mean read depth coverage (Figure 1B) and found a minimal shared checker haplotype of ~100 kb (Scaffold 68 position 1,702,691-1,805,600), as defined by haplotype breakpoints in a homozygous checker and a homozygous bar bird (NCBI BioSamples SAMN01057561 and SAMN01057543, respectively; BioProject PRJNA167554). pFst was also used to compare the genomes of 32 bar and nine barless birds. New sequence data for C. livia are deposited in the NCBI SRA database under BioProject PRJNA428271 with the BioSample accession numbers SAMN08286792-SAMN08286844. New sequence data for C. guinea and C. palumbus are deposited in the NCBI SRA database under accession numbers SRS1416880 and SRS1416881, respectively.

Pedigree of an F2 intercross segregating checker and bar

We genotyped and phenotyped a laboratory intercross that segregates bar and checker patterns in the F2 generation. We generated a pedigree from this family for F2 individuals whose phenotypes we could identify as bar or checker (n = 62). We could not determine bar or checker phenotypes for all individuals because other pigment patterns that epistatically mask bar and checker, namely almond (St locus), spread (S), and recessive red (E), are also segregating in the cross.
F2 individuals were excluded from the analysis if they had one of these masking phenotypes, but F1 parents were retained if they produced F2 offspring with checker or bar phenotypes. We used primers that amplify within the minimal haplotype (AV17 primers, see Supplementary file 1) to genotype all F2 individuals, their F1 parents (n = 26), and the founders (n = 4) by Sanger sequencing for the checker haplotype to assess whether the checker haplotype segregated with wing pattern phenotype.

CNV breakpoint identification and read depth analysis

The approximate breakpoints of the CNV region were identified at Scaffold 68 positions 1,790,000 and 1,805,600 using WHAM in resequenced genomes of homozygous bar or checker birds with greater than 8X coverage (Kronenberg et al., 2015). For 12 bar, seven checker, and two T-check resequenced genomes, Scaffold 68 gdepth files were generated using VCFtools (Danecek et al., 2011). Two subset regions were specified: the first contained the CNV and the second was outside of the CNV and was used for normalization (positions 1,500,000-2,000,000 and 800,000-1,400,000, respectively). Read depth in the CNV was normalized by dividing read depth in this region by the average read depth from the second (non-CNV) region, then multiplying by two to normalize for diploidy.

Taqman assay for copy number variation

Copy number variation was estimated using a custom Taqman Copy Number Assay (assay ID: cnvtaq1_CC1RVED; Applied Biosystems, Foster City, CA) for 93 birds phenotyped by wing pigment pattern category and 89 birds whose pigmentation was quantified by image analysis. After DNA extraction, samples were diluted to 5 ng/mL. Samples were run in quadruplicate according to the manufacturer's protocol.

Quantification of pigment pattern phenotype

At the time of blood sample collection, the right wing shield was photographed (RAW format images from a Nikon D70 or Sony a6000 digital camera). Using Photoshop software (Adobe Systems, San Jose, CA), the wing shield including the bar (on the secondary covert feathers) was isolated from the original RAW file. Images were adjusted to remove shadows and the contrast was set to 100%. The isolated, adjusted wing shield image was then imported into ImageJ (imagej.nih.gov/) in JPEG format. Image depth was set to 8-bit and we then applied the threshold command. Threshold was further adjusted by hand to capture checkering and particles were analyzed using a minimum pixel size of 50. This procedure calculated the area of dark plumage pigmentation on the wing shield. Total shield area was calculated using the Huang threshold setting and analyzing the particles as before (minimum pixel size of 50). The dark area particles were divided by total wing area particles, and then multiplied by 100 to get the percent dark area on the wing shield. Measurements were done in triplicate for each bird, and the mean percentages of dark area for each bird were used to test for associations between copy number and phenotype using a nonlinear least squares regression.

qRT-PCR analysis of gene expression

Two secondary covert wing feathers each from the wing shields of eight bar, seven checker, and eight T-check birds were plucked to stimulate feather regeneration for qRT-PCR experiments. Nine days after plucking, regenerating feather buds were removed, the proximal 5 mm was cut longitudinally, and specimens were stored in RNAlater (Qiagen, Valencia, CA) at 4°C for up to three days.
Next, collar cells were removed, RNA was isolated, and mRNA was reverse-transcribed to cDNA as described previously (Domyan et al., 2014). Intron-spanning primers (see Supplementary file 1) were used to amplify each target using a CFX96 qPCR instrument and iTaq Universal SYBR Green Supermix (Bio-Rad, Hercules, CA). Samples were run in duplicate and normalized to β-actin. The mean value was determined and results are presented as mean ± S.E. for each phenotype. Results were compared using a Wilcoxon Rank Sum test and expression differences were considered statistically significant if p<0.05.

Allele-specific expression assay

SNPs in NDP and EFHC2 were identified as being diagnostic of the bar or checker/T-check haplotypes from resequenced birds. Heterozygous birds were identified by Sanger sequencing in the minimal checker haplotype region (AV17 primers, see Supplementary file 1). Twelve checker and T-check heterozygous birds were then verified by additional Sanger reactions (AV54 for NDP and AV97 for EFHC2, see Supplementary file 1) to be heterozygous for the diagnostic SNPs in NDP and EFHC2. PyroMark Custom assays (Qiagen, Valencia, CA) were designed for each SNP using the manufacturer's software (Supplementary file 1). Pyrosequencing was performed on gDNA derived from blood and cDNA derived from collar cells from 9-day regenerating wing shield feathers using a PyroMark Q24 instrument (Qiagen, Valencia, CA). Additional pyrosequencing was performed for 9 of the 12 original birds from 9-day regenerating dorsal and tail feathers following the same protocol. Signal intensity ratios from the cDNA samples were normalized to the ratios from the corresponding gDNA samples to control for bias in allele amplification. Normalized ratios were analyzed by Wilcoxon Rank Sum tests. We compared the expression ratios of 1-copy checker:bar to 4-copy checker:bar to determine whether additional copies of the CNV were associated with higher checker:bar allele expression. We also compared 1-copy checker:bar expression ratios and 4-copy checker:bar expression ratios to a 1:1 ratio (equal expression of both alleles) using the Wilcoxon Rank Sum test to determine whether the measured checker:bar ratios were significantly different from the null hypothesis of equal expression of bar and checker alleles. The 2-copy checker:bar ratio was not compared in these analyses because there was only one sample. Allele expression ratios were analyzed together for 1, 2, and 4 copies using a glm regression to determine whether CNV copy number was associated with increased checker allele expression. Results were considered significant if p<0.05.

Enhancer sequence search

VISTA (https://enhancer.lbl.gov/) (Visel et al., 2007) and REPTILE (He et al., 2017) enhancer datasets were mapped to the pigeon reference genome using bwa-mem (Li and Durbin, 2009). BAM output files were filtered for high quality orthologous regions and further filtered for alignments within the minimal checker haplotype on Scaffold 68 (Supplementary file 2).

NDP genotyping and alignments

NDP exons were sequenced using primers in Supplementary file 1. Primer pairs were designed using the rock pigeon reference genome (Cliv_1.0). PCR products were purified using a QIAquick PCR purification kit (Qiagen, Valencia, CA) and Sanger sequenced. Sequences from each exon were then edited for quality with Sequencher v.5.1 (GeneCodes, Ann Arbor, MI).
Sequences were translated and aligned with SIXFRAME and CLUSTALW in SDSC Biology Workbench (http://workbench.sdsc.edu). Amino acid sequences outside of Columbidae were downloaded from Ensembl (www.ensembl.org).

D-statistic calculations

Whole genome ABBA-BABA (https://github.com/vcflib) was performed on 10 × 10 combinations of bar and checker birds (Supplementary file 3) in the arrangement: bar, checker, C. guinea, C. palumbus. VCFLIB (https://github.com/vcflib) was used to smooth raw ABBA-BABA results in 1000 kb or 100 kb windows for whole-genome or Scaffold 68 analyses, respectively. For each 10 × 10 combination, we calculated the average D-statistic across the genome. These were then averaged to generate a whole genome average of D = 0.0212, marked as the dotted line in Figure 5A. Confidence intervals were generated via moving blocks bootstrap (Kunsch, 1989). Block sizes are equal to the windows above, with D-statistic values resampled with replacement a number of times equal to the number of windows in a sample. In Figure 5A, three representative ABBA-BABA tests are shown for different combinations of bar and checker birds. The checker and bar birds used in each representative comparison are: ARC-STA, SRS346901 and SRS346887; MAP-ORR, SRS346893 and SRS346881; IRT-STA, SRS346892 and SRS346887, respectively. ARC, MAP, and IRT are homozygous for the checker haplotype. STA and ORR are homozygous for the bar haplotype.

Pairwise SNP comparisons

Phased VCF files for 16 homozygous bar, 11 homozygous checker, and 1 C. guinea were subsetted to the minimal checker haplotype region (positions 1,702,691-1,805,600) with tabix (Li, 2011). The vcf-compare software module (VCFtools; Danecek et al., 2011) was used to run pairwise comparisons between bar, checker, and C. guinea birds (176 bar-checker, 16 bar-guinea, and 11 checker-guinea comparisons) as well as among bar and checker birds (120 bar-bar and 55 checker-checker comparisons). The total number of differences for each group was compared to the number of differences that are expected to accumulate during a 4-5 MY divergence time in a 102,909 bp region (the size of the minimal checker haplotype) with the mutation rate μ = 1.42e-9, using the coalescent equation: Time = #SNPs / (2 × μ × length of the region). The observed pairwise differences and the expected number of differences were evaluated with two-sample t-tests and all groups were considered statistically different from the 4-5 MY expectation (1169.05-1461.31 SNPs). There were 4261 total segregating sites in the minimal haplotype region between all birds used for pairwise comparisons. Means and standard deviations for each group were calculated in R (R_Development_Core_Team, 2008).

SNP comparisons in de novo assemblies of bar, checker, and C. guinea genomes

To ensure that SNP calling was not biased by using a reference that has the checker haplotype, we performed de novo assemblies of one bar (SRS346895), one checker (SRS346878), and one C. guinea (SRS1416880) individual using CLC Genomics Workbench (Qiagen, Valencia, CA). These C. livia individuals were chosen because they had the highest genome-wide mean read depth coverage for each phenotype at 14X (bar) and 15X (checker); the C. guinea sample was sequenced to 33X. Whole-genome assemblies were mapped to the reference genome and variants (single nucleotide variants, structural variants, indels) were called by SMARTIE-SV (https://github.com/zeeev/smartie-sv), which uses the BLASR aligner (Chaisson and Tesler, 2012), using default parameters.
We identified regions where all three new assemblies intersected with the reference assembly. We then counted SNPs across the minimal haplotype where all three assemblies intersected (92,199 of 102,909 bp; 12 intersecting contigs ranging in length from 678 to 21,565 bp, median = 5047.5). Variants identified in the de novo assemblies for checker, bar, or C. guinea individuals were manually filtered to remove variants where the alternate allele was 'N' or a series of 'N' base pairs. Variants spanning multiple base pairs in each individual file were identified and manually split into multiple single nucleotide polymorphisms. Filtered and split tab-delimited variant calls between each de novo assembly and the reference genome were read into R v.3.3.2 (R_Development_Core_Team, 2008). For each variant call file, the start position was extracted. Pairwise comparisons of positions for checker, bar, and C. guinea de novo assemblies were made using the 'setdiff' command to generate lists of variants that were only observed in one individual out of any given pair (checker vs. bar, checker vs. C. guinea, bar vs. C. guinea). These lists of positions were then used to subset the original variant call files and assemble lists of pairwise differences. For example, SNPs that differ between checker and bar would include variants that differ from the reference in checker, but not bar, plus variants that differ from the reference in bar, but not checker. Additionally, the 'intersect' command was used to identify variants in multiple de novo assemblies. For variants that appeared in more than one de novo assembly, alternative alleles for each assembly were compared. In the majority of cases, both de novo assemblies showed the same alternative allele, and thus did not differ from one another. We found 1458 total SNP positions based on comparison of the three de novo assemblies. In the comparison described above and shown in Figure 5C, 362 SNPs were identified in the same region. This higher number of SNPs was driven by the much larger sample size and haplotype diversity among the 16 bar birds.

Transcript amplification of the barless allele of NDP

In order to determine whether the barless allele of NDP is transcribed and persists in collar cells, or is degraded (e.g., by nonsense-mediated decay), we designed a PCR assay to amplify NDP mRNA transcripts. Feathers from four barless, two bar, two checker, and two T-check birds were plucked to stimulate regeneration. We then harvested regenerated feathers after 9 days, extracted RNA from collar cells, and synthesized cDNA as described above. We then generated amplicons from each sample using intron-spanning primers (AV200 primers, see Supplementary file 1). Primers were anchored in the exon containing the barless start-codon mutation and the exon 3' to it, so this assay tested for both the presence of transcripts and consistent splicing among alleles and phenotypes.

Recombination rate estimation

Recombination frequency estimates were generated from a genetic map based on an F2 cross of two divergent C. livia breeds, a Pomeranian Pouter and a Scandaroon (Domyan et al., 2016). Briefly, for genetic map construction, genotyping by sequencing (GBS) data were generated, trimmed, and filtered as described (Domyan et al., 2016), then mapped to the pigeon genome assembly (Holt et al., 2018) using Bowtie2 (Langmead and Salzberg, 2012). Genotypes were called using Stacks (Catchen et al., 2011), and genetic map construction was performed using R/qtl (www.rqtl.org) (Broman et al., 2003).
Pairwise recombination frequencies were calculated for all markers based on GBS genotypes. Within individual scaffolds, markers were filtered to remove loci showing segregation distortion (Chi-square, p<0.01) or probable genotyping error. Specifically, markers were removed if dropping the marker led to an increased LOD score, or if removing a non-terminal marker led to a decrease in length of >10 cM that was not supported by physical distance. Individual genotypes with error LOD scores > 5 (Lincoln and Lander, 1992) were also removed. Pairwise recombination frequencies for markers flanking the candidate region that were retained in the final linkage map were used to estimate the age of the introgression event between C. guinea and C. livia (Scaffold 68, marker positions 1,017,014 and 1,971,666; Supplementary file 4).

Minimal haplotype age estimation

The minimal haplotype age was estimated following Voight et al. (2006). We assume a star-shaped phylogeny, in which all samples with the minimal haplotype are identical to the nearest recombination event, and differ immediately beyond it. Choosing a variant in the center of the minimal haplotype, we calculated EHH, and estimated the age using the largest haplotype with a probability of homozygosity just below 0.25. Note that Pr[homoz] = e^(-2rg/100), where r is the genetic map distance in cM and g is the number of generations since introgression / onset of selection. Therefore,

g = -100 log(Pr[homoz]) / (2r)

The confidence interval around g was estimated assuming

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Animal experimentation: This study was performed in accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All of the animals were handled and housed according to approved University of Utah institutional animal care and use committee (IACUC) protocols 10-05007, 13-04012, and 16-03010.
2018-07-21T05:31:26.715Z
2018-07-17T00:00:00.000
{ "year": 2018, "sha1": "844ff02e2a254329f5f9c62784eb0ad6ec2ddd85", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7554/elife.34803", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "521e2f1c8cd0f7fe96d37432de78a9e0a57a0dd9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
213257974
pes2o/s2orc
v3-fos-license
THE DEVELOPMENTS AND PROBLEMS OF MUSLIMS IN AUSTRALIA

This paper shows that historians have different views about the early arrival of Islam in Australia: some argue that Islam entered Australia in the 9th century CE, while those who believe it arrived in the 10th century CE hold that it was brought by Arab traders. In addition, some attribute its arrival to Muslim Bugis fishermen who traveled by sailboat to collect taripang (a kind of sea cucumber) in the Gulf of Carpentaria in the 17th century CE. The development of Islam in Australia accelerated from 1976 to 1986, when the Muslim community in Australia grew three-fold. The increase in the number of Muslims in Australia has generally been dominated by immigrants from Muslim-majority countries. Religious activity continues to grow, mainly owing to the support and role of Islamic organizations. As for the problems faced by Muslims in Australia, they come from non-Muslim Australian society: the persistence of Muslims in practicing their religion is sometimes regarded as fanaticism and an unwillingness to cooperate. Another problem faced by Muslims relates to a misunderstanding of Islam: most Australian non-Muslims regard Islam as a violent religion, a perception connected with the collapse of the World Trade Center (WTC). This research uses a descriptive-analytic qualitative method that draws on library resources to acquire, manage and analyze data.

Background

History records that the victors of World War I (1914-1918) were European countries. The consequence of this victory was that vast territories in both Asia and Africa were ruled by European countries. World War I was considered the final round of Western conquest of the Islamic states, or Muslim-majority countries. This period also saw the end of the Caliphate as an Islamic government. What is interesting about this event is that, after World War I, there was a massive migration of Muslims. The emigrants were motivated to leave their homes and their countries to seek a better life in various corners of the world. The author considers that this Muslim migration was inspired by the Prophet Muhammad's movement (hijrah) to the city of Yathrib, which was renamed Madinatul Munawwarah, better known as Madinah. 1 Rasulullah SAW decided to move along with the Muslims to Medina because the Muslims in Mecca faced intimidation and a boycott, in both the political and economic spheres, by the Quraish. Like the Muslims at the time of the Prophet Muhammad in Mecca, Muslim-majority countries after World War I were also experiencing political and economic uncertainty, along with an escalation of regional conflicts. Many Muslims from these countries therefore migrated or emigrated to Western countries. As a result, after World War I, Muslim immigrants could easily be found in several countries, such as the United States of America, England, Germany, Norway, Italy, New Zealand, Canada, and Australia. These are known as countries with sizeable Muslim minorities. According to the author's analysis, Muslim immigrants have had a major role in the development of Muslim communities in Western and European countries. In particular, the Australian population is approximately 16,849,496 people. Of that number, about 50% are Christian and the rest adhere to other religions; Islam holds second place, with others such as Anglicanism, Taoism, and Shinto also represented. In this Muslim-minority country, the Muslim community strives to give Islam a strong presence.
The struggle of Australian Muslims to carry out the missionary endeavor of Islam (syi'ar Islam) should be appreciated; thanks to it, Islam has progressed from time to time. On the other hand, the problems faced by Muslims as a religious minority are also unavoidable. The discussion in this paper therefore stresses the development and the problems of Muslims in Australia.

The Early History of Islam in Australia

It seems that historians have different views about the early arrival of Islam in Australia. Some argue that Islam came to Australia in the 9th century CE. Some say that Islam entered Australia in the 10th century CE, brought by Arab traders along the Australian coast. 2 Others state that Islam came to Australia in the 15th century with fishermen from Indonesia, especially from Makassar and Maluku. In addition, some mention that Islam entered Australia with the Muslim Bugis fishermen who traveled by sailboat to collect taripang (a kind of sea cucumber) in the Gulf of Carpentaria in the 17th century CE. Over their span of contact with the Aboriginal people (native Australians), the Bugis fishermen got along with them and influenced their art and culture, including religion. 3 In the writer's view, Islam had entered Australia by the 17th century, and there was already interaction between Islam and the indigenous peoples of Australia (the Aborigines). The presence of Islam some four centuries ago also appears to have had a significant impact on the number of Australians who embraced Islam. Even so, the increase in the size of the Muslim community in Australia has generally been dominated by immigrants. In 1860, Dost Muhammad, a camel trader from Karachi, Pakistan, brought 24 camels for sale in the country. No data explain why Muslim explorers chose Australia as a place to trade camels, but viewed geographically, Australia is dominated by desert. The author considers that the camel was an animal needed in Australia; perhaps this is why Dost Muhammad traded camels while preaching Islam there. 4 In the following period came immigrants from Albania, Yugoslavia, Turkey, Cyprus, Palestine, China, Syria, Saudi Arabia, India, Bangladesh, Malaysia, Indonesia, and elsewhere. Muslim immigrants who came to Australia generally worked as laborers in mining, factories, and plantations, and some worked as traders. 5 The presence of Muslim immigrants in Australia has had an effect on the development of Islam there: they built their own settlements and mosques.

The Development of Islam in Australia

According to Sheppard, Muslims had already lived in Australia for more than half a century, but the Islamic community did not develop in Australia until 1950, that is, after World War I, with the wave of immigration from Middle Eastern countries. In subsequent years, the number of Muslim immigrants increased. These included Turkish Muslims, who came with the support of the government of Turkey. Similarly, Lebanese Muslims fled their country to escape the civil war after 1975. Thus, between 1976 and 1986, the Muslim community in Australia grew three-fold. Lebanese Muslims reportedly reached 80% of the Muslim population in Australia, while the other 20% were Muslims from Arab countries, South Asia, Southeast Asia, and other countries.
6 Before the 1980s, the Muslim population of Australia comprised around 41,470 people from Turkey, 21,080 from Indonesia, 18,500 from Egypt, 5,950 from Syria and 5,370 from Pakistan, with smaller numbers from countries such as Yugoslavia, Malaysia, and Singapore. Based on these data, Muslims from Indonesia ranked second after Muslims from Turkey. It can thus be said that the Muslim community from Indonesia played an important role in spreading Islam in Australia. 7 Based on the census, the number of Muslims in Australia as of 1986 was about 10,523. Nevertheless, the Australian Federation of Islamic Councils claimed that in the 1980s the Muslim community had reached at least 250,000. Set against the population of Australia, this figure amounts to 0.7%, and of course the number continues to increase. Islam is considered a rapidly expanding religion. According to the writer's analysis, the significant growth of Islam stems from its being seen as a religion that can answer the challenges of modern society, such as the role and function of the family in the Islamic perspective, the concept of birr al-walidain, the concept of gender in the Islamic perspective, and others. By 1981, Islam had become the second-largest religion after Christianity. The great progress of Islam in this kangaroo state cannot be separated from the role of Islamic da'wah organizations, Islamic education, and other channels. As a result of this development, waves of religious activity continue to stir in Australia. This is inseparable from the role of AFIC and its supporting and partner organizations, such as the Islamic Council and Muslim Women of Australia, which have greatly influenced the development of Islam. These organizations contribute by providing the facilities and infrastructure needed by the Muslim community, particularly to support the implementation of Islamic teachings. Mosques and educational facilities, for instance, continue to grow over the years: mosques have been built in many corners of the country, and there are now more than 100 of them. In 1997, a mosque at 6 Agnes Street, Burada, was inaugurated. It is a big and grand mosque, and the history of its existence is also interesting, because it is a former church that was bought independently by Muslims for 165,000 dollars. The government of Australia has also contributed to the development of Islam: it has authorized Muslims to practice their faith provided they do not disturb other religions (religious tolerance). Western and European countries are known for upholding human rights, including the right to worship according to one's beliefs. Although Muslims are a minority, they have been given space to proactively and effectively carry out the teachings of Islam, such as the daily prayers, charity and fasting, without constraint; even the Hajj can be undertaken on an Australian visa. In carrying out the teachings of Islam for its adherents in Australia, there is no dichotomy imposed by the government. Muslims are a religious minority in Australia, and they do not make their syi'ar Islam flashy; on the other hand, the mosques in Australia remain crowded with Muslims whenever the azan echoes, especially during Ramadhan. This approach may win the sympathy of Australians.
As a result, many residents of Australia have become interested in Islam. The rise of Islam in Australia is seen not only in the observance and obedience of Muslim communities in carrying out Islamic teachings, but also in the emergence of various Islamic institutions. These institutions organize a wide range of activities with an Islamic nuance, so the Muslim community feels proud and confident even though it lives in a Muslim-minority country. The use of Islamic symbols is also an interesting aspect of Australia's Muslim-minority communities, and many non-Muslim Australians are interested in Islam. Converts change their names to Islamic ones, so that Islamic symbolism attaches to the converts as well. Using an Islamic name influences the behavior of converts, such as turning away from disobedience and back to obedience and piety, keeping good relationships, and practicing silaturrahmi through friendly social interaction.

Problems Faced by Muslims in Australia

Although Muslims have the passion and motivation to perform their religious rituals, a problem they encounter comes from non-Muslim Australian society. The persistence of Muslims in practicing their faith is considered fanatical and unwilling to compromise. The hijab (veil), a symbol for Muslim women, may result in limited access in the public domain, such as the labor market, care, and services. Samina Yasmen cites the case of a young Australian woman who recognized that the hijab or headscarf is an obstacle to getting a job. A veiled Muslim woman can only work as a secretary or seller, and even in such work the hijab is an obstacle because it can frighten customers. Very often, Muslim women who apply for a job are rejected for wearing the hijab. Another problem faced by Muslims in Australia relates to a misunderstanding of Islam. Most non-Muslim Australians regard Islam as a violent religion. This perception is connected with the figure of Osama bin Laden as well as several cases of terrorism, such as the Bali bombing tragedy masterminded by radical groups in October 2002. In this tragedy, 60% of the victims who died were Australians. It was a sorrow the people of Australia do not easily forget. They were likewise alarmed by the collapse of the WTC buildings on September 11. 8 The Bali bombing tragedy provoked a strong reaction from the Australian government and the assumption that Muslims are terrorists; an anti-terror movement emerged. 9 Australian Muslims then faced a highly complicated problem. Newspapers and television made the tragedy their headlines, and internet sites reported that Muslims in Australia were in a "panic". Australian Muslims were truly in danger; they were not free to travel in and out of their houses, because they were subjected to insults, hatred, and the anger of Australians. After the Bali bombings, Australian police carried out searches and examinations of a number of Muslims, especially Indonesian citizens, because they suspected the involvement of Jami'ah Islamiyah (JI). The Sunday Herald Sun, an Australian newspaper, reported attacks against Muslims in Victoria. The attacks were aimed not only at Muslim men but also at women, especially women who wear headscarves. Muslims were even denied access to public transport (buses), while Muslims' cars were vandalized and burnt, and their owners mistreated, humiliated, and subjected to other violent acts.
According to Yasser Soliman, the incidents experienced by Australian Muslims, such as insults, harsh treatment and other acts of violence, are like an iceberg: the persecution and violence reported to the police or covered by the mass media are only a small part, and many events or acts of violence go unpublished. In handling the various cases of violence, no advocacy was found from Islamic associations, whether national or international, even though their involvement in the protection of the minority Muslim population is needed in order to secure the peace and tranquility of Muslim minorities. In this context, Australia has in fact established Muslim community organizations. In 1965, a national organization was founded in Australia, and in 1975 it evolved into a three-level structure consisting of local associations and eight state councils. A new organization, The Australian Federation of Islamic Councils (AFIC), mentioned previously, provides educational, cultural and religious services, although local associations still have autonomy. 10 The function of the Australian Federation of Islamic Councils (AFIC) is to represent the Muslims of Australia, both before agencies in Australia itself and abroad. It also provides halal certificates for livestock. Certainly, these organizations and associations are chaired by Muslims and have received support from countries of the Middle East. 11 It is therefore natural that some Australians feel suspicion and think negatively about the existence of Islam in Australia. The facts also show that the association of Muslim organizations received financial support from Saudi Arabia to build mosques in Australia. Such assistance from Middle Eastern countries has reinforced the tendency of some Australians to alienate, eliminate and erase Islamic identity. The other problems faced by Muslims in Australia are issues related to the family environment. Living in a predominantly non-Muslim and secular country, Australian Muslims are required to actively defend their Islamic identity. In addition, aspects of Australian culture, such as irreligious dress, clubbing with forbidden drink, and associations that lead to disobedience, pose a challenge for Muslim families in educating their children.

Conclusion

Based on the previous descriptions, the authors formulated the following conclusions:

1. Historians have different views about the early arrival of Islam in Australia: some argue that Islam entered Australia in the 9th century CE; those who believe it arrived in the 10th century CE hold that it was brought by Arab traders along the Australian coast; another opinion places it in the 15th century, with fishermen from Indonesia, especially from Makassar and Maluku; and some attribute it to Muslim Bugis fishermen who traveled by sailboat to collect taripang (a kind of sea cucumber) in the Gulf of Carpentaria in the 17th century CE. Over their span of contact with the Aboriginal people (native Australians), the Bugis fishermen got along with them and influenced their arts and culture, including religion.

2. From 1976 to 1986 the Muslim community in Australia grew three-fold. The increase in the number of Muslims in Australia has generally been dominated by immigrants from Muslim-majority countries; those from Lebanon reached 80% of the Muslim population in Australia, and the other 20% were Muslims from Arab countries, South Asia, Southeast Asia, and other countries.
Along with the pace of development of Islam in Australia, religious activities continue to expand, mainly owing to the support and role of Islamic organizations such as AFIC, the Islamic Council, and Muslim Women of Australia. These organizations contribute by providing the facilities and infrastructure needed by Muslims, particularly to support the implementation of Islamic teachings, such as the mosques and educational facilities that continue to grow from year to year.

3. The problems faced by Muslims in Australia come from non-Muslim Australian society: the persistence of Muslims in practicing their religion is sometimes considered fanatical and unwilling to compromise. Another problem faced by Muslims relates to a misunderstanding of Islam: most non-Muslim Australians regard Islam as a violent religion, a perception connected with the figure of Osama bin Laden and the tragedy of the October 2002 Bali bombings.
2020-01-09T09:12:34.717Z
2019-12-30T00:00:00.000
{ "year": 2019, "sha1": "63b9cf8fa94af480d222579d37c402a07b72c3f5", "oa_license": null, "oa_url": "http://journal.uin-alauddin.ac.id/index.php/rihlah/article/download/11858/7786", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5614dc431fb24c983d29d30f57cd9d9cfbc327e5", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Political Science" ] }
119486733
pes2o/s2orc
v3-fos-license
Fisher Matrix Based Predictions for Measuring the z = 3.35 Binned 21-cm Power Spectrum using the Ooty Wide Field Array (OWFA)

We use the Fisher matrix formalism to predict the prospects of measuring the redshifted 21-cm power spectrum in different $k$-bins using observations with the upcoming Ooty Wide Field Array (OWFA), which will operate at $326.5 \, {\rm MHz}$. This corresponds to neutral hydrogen (HI) at $z=3.35$, and a measurement of the 21-cm power spectrum provides a unique method to probe the large-scale structures at this redshift. Our analysis indicates that a $5 \sigma$ detection of the binned power spectrum is possible in the $k$ range $0.05 \leq k \leq 0.3 \, {\rm Mpc}^{-1}$ with $1,000$ hours of observation. We find that the signal-to-noise ratio (${\rm SNR}$) peaks in the $k$ range $0.1-0.2\, {\rm Mpc}^{-1}$, where a $10 \sigma$ detection is possible with $2,000$ hours of observation. Our analysis also indicates that it is not very advantageous to observe much beyond $1,000$ hours in a single field of view, as the ${\rm SNR}$ increases rather slowly beyond this in many of the small $k$-bins. The entire analysis reported here assumes that the foregrounds have been completely removed.

Introduction

The redshifted 21-cm emission from the discrete, unresolved neutral hydrogen (HI) sources in the post-reionization era (z < 6) appears as a faint diffuse background radiation in low frequency observations below 1420 MHz. This provides a useful tool to explore the large scale structure of the universe in the post-reionization era, using the fluctuations in the diffuse background radiation to trace the HI power spectrum (Bharadwaj, Nath & Sethi 2001). In addition to probing the HI power spectrum (Bharadwaj & Pandey 2003; Bharadwaj & Srikant 2004), fluctuations in the diffuse background radiation also probe the bispectrum (Ali et al. 2006; Guha Sarkar & Hazra 2013). In recent years, a considerable amount of work has been done to explore the prospects of detecting the 21-cm HI signal from the post-reionization era (Visbal et al. 2009; Bharadwaj et al. 2009; Wyithe & Loeb 2009; Seo et al. 2010; Mao 2012; Ansari et al. 2012; Bull et al. 2014). In the post-reionization era, the bulk of the HI 21-cm emission originates from the dense pockets of self-shielded HI, which are identified as damped Lyα (DLA) systems in quasar observations. The fluctuations in the HI 21-cm emission, which are in general quantified through the HI power spectrum, are expected to trace the matter power spectrum with a possible bias (Bharadwaj, Nath & Sethi 2001). Wyithe & Loeb (2009) have shown that the complications in the HI power spectrum arising due to the modulation of the ionizing field are less than 1%. Bagla, Khandai & Datta (2010) have used semi-numerical simulations to predict the HI bias. They used three different prescriptions to assign HI mass to the dark matter haloes and found the HI bias to be scale-independent on large scales (k ≤ 1 Mpc^{-1}). Villaescusa-Navarro et al. (2014) have used similar simulations, and their results are also consistent with a scale-independent HI bias on large scales. Recently, Sarkar, Bharadwaj & Anathpindika (2016) have used semi-numerical simulations to model the HI bias and have provided a fitting formula for the HI bias b_HI(k, z) in both k and z across 0.01 ≤ k ≤ 10 Mpc^{-1} and 0 ≤ z ≤ 6.
The measurement of the HI power spectrum holds the possibility of constraining the background cosmological model through the Baryon Acoustic Oscillations (BAO) (Chang et al. 2008). Seo et al. (2010) have studied the possibility of measuring the BAO using the HI power spectrum with a ground-based radio telescope. The measurement of the HI power spectrum can also be used to constrain cosmological parameters independently of the BAO (Bharadwaj et al. 2009; Visbal, Loeb & Wyithe 2009), and to constrain the neutrino mass (Loeb & Wyithe 2008). Villaescusa-Navarro et al. (2015) have used hydrodynamical simulations to study the signatures of massive neutrinos on the HI power spectrum and to put constraints on the neutrino mass. In a recent work, Pal & Guha Sarkar (2016) have studied the prospects of measuring the neutrino mass using the HI 21-cm and Ly-α forest cross-correlation power spectrum. Measurements of the HI power spectrum are sensitive to both the mean neutral hydrogen fraction Ω_HI(z) and the HI bias b_HI. Several measurements have been carried out in past years to measure the value of Ω_HI, both at low and at high redshifts. Measurements of Ω_HI at low redshifts (z ≤ 1) come from HI galaxy surveys (Zwaan et al. 2005; Martin et al. 2010; Delhaize et al. 2013), DLA observations (Rao et al. 2006; Meiring et al. 2011) and HI stacking (Lah et al. 2007; Rhee et al. 2013), while measurements of Ω_HI at high redshifts (1 < z < 6) come from DLA studies (Prochaska & Wolfe 2009; Noterdaeme et al. 2012; Zafar et al. 2013). Measurement of the HI power spectrum can provide astrophysical information about the HI distribution, and efforts have been made to measure Ω_HI at redshifts z < 1. Chang et al. (2010) and Masui et al. (2013) have studied the cross-correlation of the HI intensity with galaxy surveys, while Switzer et al. (2013) have studied the auto-correlation of the HI intensity to measure Ω_HI. Ghosh et al. (2012) have used GMRT observations to place an upper limit on the value of Ω_HI at z = 1.33. Several low frequency radio interferometric arrays (CHIME 1, Bandura et al. 2014; BAOBAB 2, Pober et al. 2013) are planned to measure the BAO using the 21-cm signal from z ≤ 2.55. Shaw et al. (2014) present theoretical estimates of the sensitivity with which CHIME can constrain the line-of-sight and angular scales of the BAO. The Giant Metrewave Radio Telescope (GMRT 3; Swarup et al. 1991) operates at frequencies corresponding to HI in the redshift range 0 ≤ z ≤ 8.5 and is currently being upgraded. The prospects of detecting the HI power spectrum from the post-reionization era with the upgraded GMRT (uGMRT) have been studied in Chatterjee et al. (2016). The proposed future telescopes SKA1-mid and SKA1-low 4 both hold the prospect of measuring the post-reionization HI power spectrum at a high level of precision (Guha Sarkar & Datta 2015; Santos et al. 2015). There is a rich literature on sensitivity estimates for various low frequency radio telescopes. Morales et al. (2005) present a general method to calculate the Epoch of Reionization (EOR) power spectrum sensitivity for any radio-interferometric array. Harkar et al. (2016) have used the Fisher matrix formalism to predict the sensitivity with which it will be possible to constrain reionization and X-ray heating models with the future HERA and SKA phase I. Here, we discuss the prospects of measuring the HI power spectrum at z ∼ 3 using the upgraded Ooty Radio Telescope (ORT).
The ORT consists of a 530 m long and 30 m wide parabolic cylindrical reflector, placed along the north-south direction on a hill having the same slope as the latitude (11°) of the station (Swarup et al. 1971; Sarma et al. 1975). It is possible to track the same part of the sky through a single rotation about the long axis, which is aligned with the Earth's rotation axis. The entire feed system of the ORT has 1056 dipoles, spaced 0.47 m apart, placed along the focal line of the telescope. The cylindrical Ooty Radio Telescope is currently being upgraded (Prasad & Subrahmanya 2011a, b; Marthi & Chengalur 2014; Subrahmanya 2017a) to function as a linear radio-interferometric array, the Ooty Wide Field Array (OWFA). The OWFA works at a nominal frequency of ν_0 = 326.5 MHz, which corresponds to HI radiation from the redshift z = 3.35. The OWFA can operate in two independent interferometric modes, Phase I and Phase II. In this work, we have considered Phase II only. Phase II has 264 antenna elements, where each antenna element consists of 4 dipoles. Each antenna has a rectangular aperture of dimensions 1.92 m × 30 m. The field of view of OWFA Phase II is highly asymmetric, 27.4° × 1.75°. The operating bandwidth for Phase II is 40 MHz. The field of view and the observing bandwidth of OWFA Phase II allow it to survey the universe over a real-space volume of ∼ 0.3 Gpc³. We now report some recent work related to OWFA. Calibration is an important issue for OWFA, and it has been addressed in Marthi & Chengalur (2014). Gehlot & Bagla (2017) have followed the approach of Ali & Bharadwaj (2014) to predict the HI signal expected at OWFA. Marthi (2017) presents a programmable emulator for simulating OWFA observations, for which foreground modelling and predictions are presented in Marthi et al. (2017); simulations of the HI signal expected at OWFA have also been presented. Ali & Bharadwaj (2014) (hereafter, Paper I) have studied the prospects of detecting the 21-cm signal using OWFA; that paper also made detailed foreground predictions for OWFA. In a recent study, Bharadwaj, Sarkar & Ali (2015) (hereafter, Paper II) used the Fisher matrix formalism to predict the hours of observation needed to measure the HI power spectrum. That work showed that the dominant contribution to the OWFA HI signal comes from the k-range 0.02 ≤ k ≤ 0.2 Mpc^{-1}, and that a 5σ detection of the HI power spectrum is possible with ∼ 150 hours of observation using Phase II. Paper II also explored the possibility of measuring the redshift space distortion parameter β, and found that the non-uniform sampling of the k-modes does not make OWFA suitable for measuring β. The predictions for OWFA mentioned above have all assumed that the HI power spectrum is related to the ΛCDM power spectrum through a scale-independent linear HI bias. All of these studies have focused on measuring the amplitude of the HI power spectrum assuming that the shape of the matter power spectrum (Eisenstein & Hu 1998) is precisely known. It is interesting and worthwhile to consider a situation where both the amplitude and the shape of the HI power spectrum are unknown. There are several astrophysical processes which could, in principle, change the shape of the HI power spectrum without affecting the matter power spectrum.
Further, uncertainties in the background cosmological model would also be reflected as changes in the observed HI power spectrum through various effects like redshift space distortion and the Alcock-Paczynski (AP) effect. In this paper, we have considered the possibility of measuring the HI power spectrum using OWFA. For this purpose, we have divided the k-range into several bins and employed the Fisher matrix analysis to make predictions for measuring the HI power spectrum in each of these k-bins. Throughout our analysis, we have used the ΛCDM cosmology with PLANCK+WMAP9 best-fit cosmological parameters (Ade et al. 2014). The paper is structured as follows. In Section 2, we present the theoretical HI model used for calculating the signal and noise covariance; here we also present the Fisher matrix technique employed for estimating the binned HI power spectrum. In Section 3, we use the results from the Fisher matrix analysis to make predictions for measuring the binned 21-cm power spectrum. We end with a summary and conclusions in Section 4.

Visibility covariance & Fisher matrix

OWFA measures visibilities V(U_a, ν_n) at given baselines U_a and frequency channels ν_n. The baseline configuration of OWFA is one-dimensional: it consists of 264 antennas arranged in a linear array along the length of the cylinder. Taking the x-axis along the length of the cylinder, the baselines of OWFA can be written as

U_a = a (d/λ) x̂ , a = 1, 2, ..., 263,

where a denotes the baseline number, d = 1.92 m is the distance between two consecutive antennas, and λ is the wavelength corresponding to the central observing frequency ν_0. OWFA has a high degree of redundancy in its baselines: any given baseline U_a occurs (264 − a) times in the array. This can be used both to calibrate the antenna gains (independently of the sky model) and to estimate the true visibilities (Marthi & Chengalur 2014). In reality, the baselines U_a change as the frequency varies across the observing bandwidth (B). This is an extremely important factor that needs to be taken into account in the actual data analysis. The expected fractional variation in the baseline, ΔU/U, about the central frequency ν_0 over the bandwidth of observation is ΔU/U = B/(2ν_0) ∼ 4.5% for B = 30 MHz. This is not significant enough to consider in our analysis, and we have kept the baselines fixed at the values corresponding to the central frequency ν_0. The actual bandwidth may be somewhat larger than B = 30 MHz. We express the telescope's observing bandwidth as B = N_c Δν_c, where N_c is the number of frequency channels and Δν_c is the channel width. For our analysis, we have used N_c = 300 with Δν_c = 0.1 MHz. The measured visibilities V(U_a, ν_n) can be expressed as the sum of the HI signal S(U_a, ν_n) and the noise N(U_a, ν_n), i.e.,

V(U_a, ν_n) = S(U_a, ν_n) + N(U_a, ν_n),

assuming that foregrounds have been completely removed from the data. For the Fisher matrix analysis, it is convenient to decompose the visibilities V(U_a, ν_n) into delay channels τ_m (Morales 2005) rather than frequency channels ν_n, i.e.,

v(U_a, τ_m) = Σ_n V(U_a, ν_n) e^{2πi τ_m ν_n},

where the delay channel τ_m is defined as τ_m = m/B. The visibilities v(U_a, τ_m) and v(U_b, τ_n) are uncorrelated for m ≠ n (Paper II). It is therefore only necessary to consider the visibility correlations with m = n, for which we define the visibility covariance matrix

C_ab(m) = ⟨ v(U_a, τ_m) v*(U_b, τ_m) ⟩.
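A minimal numpy sketch of this frequency-to-delay decomposition follows; the visibilities are random stand-ins rather than an actual sky-plus-noise model, and the FFT matches the transform above only up to an overall per-channel phase set by the band's starting frequency, which drops out of the covariance:

```python
import numpy as np

# Illustrative sizes from the text: 300 channels of 0.1 MHz, 263 baselines.
Nc, dnu = 300, 0.1          # channel width in MHz, so B = Nc * dnu = 30 MHz
n_base = 263
rng = np.random.default_rng(0)
V = rng.normal(size=(n_base, Nc)) + 1j * rng.normal(size=(n_base, Nc))

# v(U_a, tau_m) = sum_n V(U_a, nu_n) exp(2*pi*i*tau_m*nu_n) with tau_m = m/B;
# Nc * ifft implements this sum up to an overall per-m phase from the band's
# starting frequency, which cancels in the covariance C_ab(m).
v_tau = Nc * np.fft.ifft(V, axis=1)
tau = np.fft.fftfreq(Nc, d=dnu)          # delays tau_m = m/B, in 1/MHz

# Single-realization estimate of C_ab(m) at one delay channel; in practice
# C_ab(m) = <v(U_a, tau_m) v*(U_b, tau_m)> is an ensemble average.
m = 5
C_m = np.outer(v_tau[:, m], np.conj(v_tau[:, m]))
```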
The visibility covariance matrix C_ab(m) can be expressed in terms of the redshifted HI 21-cm brightness temperature power spectrum P_T(k_⊥, k_∥), as given in eq. (5) of Paper II; the first and the second terms of that expression refer to the signal and noise covariances respectively. Here r_ν is the comoving distance between the observer and the region of space from where the HI radiation originated, r'_ν = dr_ν/dν gives the conversion factor from frequency to comoving distance (r_ν = 6.85 Gpc and r'_ν = 11.5 Mpc MHz^{-1} for OWFA), and Ã(U) is the Fourier transform of the OWFA primary beam pattern (eq. (6) of Paper I). The factor 2k_B/λ² gives the conversion from brightness temperature to specific intensity (where k_B is the Boltzmann constant), P_T(k_⊥, k_∥) is the redshifted HI 21-cm brightness temperature power spectrum, and k_⊥ = π(U_a + U_b)/r_ν and k_∥ = 2πτ_m/r'_ν respectively refer to the perpendicular and parallel components of the wavevector k, with k = √(k_⊥² + k_∥²) (Bharadwaj, Sarkar & Ali 2015). The rms noise of the measured visibilities has its contribution from the system noise and has a value σ_N = 6.69 Jy for a 16 s integration time (Table 1 of Paper I); the factor (264 − a)^{-1} in the noise contribution accounts for the redundancy in the baseline distribution of OWFA. For OWFA, the visibilities at any two baselines U_a and U_b are uncorrelated (C_ab(m) = 0) if |a − b| > 1, i.e. the visibility at a particular baseline U_a is only correlated with the visibilities at the same baseline or at the adjacent baselines U_{a±1}. Thus, for a fixed m, C_ab(m) is a symmetric, tridiagonal matrix where the diagonal represents the visibility correlation at the same baseline, whereas the upper and lower diagonals represent the visibility correlation between adjacent baselines. Figure 1 of Paper II shows the signal contribution for the diagonal and off-diagonal terms of C_ab(m). The covariance at adjacent baselines is approximately one fourth of the covariance at the same baseline. Further, the noise contributes only to the diagonal terms and does not figure in the off-diagonal terms. We have used the Fisher matrix (eq. (8) of Paper II)

F_αγ = (1/2) Σ_m Σ_{a,b,c,d} [C^{-1}(m)]_{ab} [C(m)]_{bc,α} [C^{-1}(m)]_{cd} [C(m)]_{da,γ}   (7)

to predict the accuracy with which it will be possible to constrain the values of various parameters using observations with OWFA. The indices α, γ here refer to the different parameters whose values we wish to constrain. The inverse of the Fisher matrix F_αγ provides an estimate of the error covariance (Dodelson 2003) for these parameters. In eq. (7), the indices a, b, c, d are to be summed over all baselines. We have used the covariance expression above (eq. (5) of Paper II) to calculate the data covariance matrix C_ab(m) and also its derivatives [C_ab(m)]_{,α} with respect to the parameters whose values we wish to constrain. A discussion of the parameters considered for the present analysis follows in the next section.
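As an illustration of eq. (7), here is a minimal numpy sketch; the covariances C(m) and their parameter derivatives are random symmetric stand-ins of small illustrative size, not the OWFA signal-plus-noise model:

```python
import numpy as np

# Small illustrative sizes; C(m) and its parameter derivatives dC are random
# symmetric stand-ins, not the actual OWFA signal + noise covariance model.
rng = np.random.default_rng(0)
n_delay, n_base, n_par = 20, 40, 5

A = rng.normal(size=(n_delay, n_base, n_base))
C = A @ A.transpose(0, 2, 1) + n_base * np.eye(n_base)   # positive definite
dC = rng.normal(size=(n_delay, n_par, n_base, n_base))
dC = dC + dC.transpose(0, 1, 3, 2)                       # symmetric derivatives

# F = (1/2) sum_m Tr[C^-1 C_,alpha C^-1 C_,gamma]; the trace expands into the
# baseline indices a, b, c, d of eq. (7).
F = np.zeros((n_par, n_par))
for m in range(n_delay):
    Cinv = np.linalg.inv(C[m])
    W = [Cinv @ dC[m, p] for p in range(n_par)]          # C^-1 C_,alpha
    for p in range(n_par):
        for q in range(n_par):
            F[p, q] += 0.5 * np.trace(W[p] @ W[q])
```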
Modelling and Binning the HI power spectrum

The redshifted HI 21-cm brightness temperature power spectrum P_T(k) is the quantity that will be directly measured by any cosmological 21-cm experiment. It quantifies the fluctuations in the brightness temperature originating from two different sources: (1) the intrinsic fluctuations of the HI density in real comoving space, and (2) the peculiar velocities, which introduce brightness temperature fluctuations through redshift space distortion. For the purpose of this work, we have modelled P_T(k) as

P_T(k_⊥, k_∥) = (1 + βμ²)² P^r_T(k),

where P^r_T(k) is the power spectrum of the HI 21-cm brightness temperature fluctuations in real space and the factor (1 + βμ²)² quantifies the effect of linear redshift space distortion due to peculiar velocities (Kaiser 1987; Bharadwaj, Nath & Sethi 2001; Ali & Bharadwaj 2014). Here β is the redshift space distortion parameter and μ = k_∥/k. OWFA will probe the k_⊥ range 1.9 × 10^{-3} ≤ k_⊥ ≤ 5 × 10^{-1} Mpc^{-1} and the k_∥ range 1.8 × 10^{-2} ≤ k_∥ ≤ 2.73 Mpc^{-1}, thereby covering the k-range 1.82 × 10^{-2} ≤ k ≤ 2.73 Mpc^{-1}. The present work focuses on making predictions for the accuracy with which it will be possible to measure P^r_T(k) using OWFA. For this purpose, we have divided the entire k-range probed by OWFA into 20 equally spaced logarithmic k-bins, and we use k_i and [P^r_T]_i (with 1 ≤ i ≤ 20) to refer respectively to the average k and P^r_T(k) values in each bin. We have used ln([P^r_T]_i) and ln(β) as the parameters for the Fisher matrix analysis (eq. (7)), which gives an estimate of the precision with which it will be possible to measure these parameters.

Results and Discussions

We need a fiducial model for P^r_T(k) and β to carry out the Fisher matrix analysis. We model P^r_T(k) assuming that it traces the underlying matter power spectrum P(k) with a linear bias b_HI, as also assumed by Bharadwaj, Nath & Sethi (2001), Bharadwaj & Sethi (2001) and Wyithe & Loeb (2009):

P^r_T(k) = T̄² x̄_HI² b_HI² P(k),   (9)

where x̄_HI is the mean neutral hydrogen fraction. The characteristic HI brightness temperature T̄ is defined in Bharadwaj & Ali (2005); Y_P there is the helium mass fraction, and the other symbols have their usual meanings. The parameter β in the observed P_T(k) is defined as β = f(Ω)/b_HI, where f(Ω) quantifies the growth rate of the matter density perturbations, whose value is specified by the background ΛCDM cosmological model. Note that the various terms used in eq. (9) correspond to the redshift z = 3.35 from which the HI radiation originated. We have used x̄_HI = 0.02 for our analysis, which corresponds to the neutral gas mass density parameter Ω_gas = 10^{-3}. The value of β can be estimated by sampling the Fourier modes k with a fixed magnitude k that are, however, oriented in different directions with respect to the line of sight. In other words, μ = k_∥/k should uniformly span the entire range −1 ≤ μ ≤ 1. The minimum value of k_∥ probed by OWFA is approximately 10 times larger than the minimum value of k_⊥. The maximum value of k_∥ is also ∼ 4 times larger than the maximum value of k_⊥ (Table II of Paper II). In addition, the sampling width for k_∥ is roughly ∼ 20 times larger than that of k_⊥. These disparities lead to a non-uniform distribution in which the k modes are largely concentrated around μ = 1 (see Figure 3 of Paper II). This anisotropic distribution of the k modes does not make OWFA very suitable for measuring β, and we do not consider this further in our analysis. We have considered two different cases for the error predictions. In the first situation, we consider the conditional errors σ_ic for the measurement of the binned HI power spectrum [P^r_T]_i. The conditional error σ_ic represents the error on the measurement of [P^r_T]_i in a situation where the values of all the other parameters are precisely known. Here, we calculate σ_ic for the i-th bin by assuming that the value of β and the values of [P^r_T]_j in all the other bins are precisely known. We use σ_ic = 1/√(F_ii) to compute the conditional error for the i-th bin. In the second situation, we have considered the marginalized errors σ_im for the measurement of [P^r_T]_i. The marginalized error σ_im gives the error on the measurement of [P^r_T]_i without assuming any prior information about the other parameters. While estimating the error for the i-th bin, we have marginalized over the values of β and of [P^r_T]_j in the other bins. In our previous work (Paper II), we calculated the marginalized error on the measurement of the amplitude of the HI power spectrum with a prior on β in the range 0.329 ≤ β ≤ 0.986. In the present work, we have not imposed any prior on β, and we have marginalized ln([P^r_T]_i) and ln(β) over the entire range −∞ to +∞. We use σ_im = √([F^{-1}]_ii) to calculate the marginalized error for the i-th bin. The conditional and the marginalized errors represent two limiting cases, and the error estimates would lie somewhere between σ_ic and σ_im if we were to impose priors on the value of β or any of the other parameters.
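A small self-contained sketch of these two limiting errors, using a random symmetric positive-definite matrix as a stand-in for the Fisher matrix of eq. (7):

```python
import numpy as np

# A random symmetric positive-definite matrix stands in for the Fisher matrix
# of eq. (7): 20 bin amplitudes ln([P_T^r]_i) plus ln(beta) -> 21 parameters.
rng = np.random.default_rng(1)
A = rng.normal(size=(21, 21))
F = A @ A.T + 21 * np.eye(21)

sigma_c = 1.0 / np.sqrt(np.diag(F))              # conditional: 1/sqrt(F_ii)
sigma_m = np.sqrt(np.diag(np.linalg.inv(F)))     # marginalized: sqrt([F^-1]_ii)

# With logarithmic parameters these are directly *relative* errors, and
# marginalizing over the other parameters can never shrink them:
assert np.all(sigma_m >= sigma_c - 1e-12)
```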
In Paper II, we showed that a 5σ detection of the amplitude of P^r_T is possible with ∼ 150 hours of observation. We therefore need to consider an observing time t > 150 hours for measuring [P^r_T]_i in the different k-bins. Figure 1 shows both the conditional (σ_ic) and marginalized (σ_im) errors for 1,000 hours of observation; these are the errors on the different ln([P^r_T]_i), which are the parameters of the Fisher matrix analysis. As noted above, σ_ic and σ_im represent the two limiting cases for the error estimates, and we expect the error estimates to lie between these two limiting values if we impose a prior on the value of β (Paper II). We find that the values of σ_ic and σ_im agree to within 15%, except in the k-bins lying in the range 0.06 ≤ k ≤ 0.3 Mpc^{-1}, where the difference is ∼ 20 − 35%. This suggests that σ_ic = 1/√(F_ii) and σ_im = √([F^{-1}]_ii) are not significantly different, indicating that the contribution from the off-diagonal terms of F_ij is small. We therefore conclude that the measurements of P^r_T in the different k-bins are by and large uncorrelated. In the subsequent analysis, we have used σ_ic for predicting the errors on the measurements of [P^r_T]_i in the different k-bins. Figure 2 shows the binned HI power spectrum [P^r_T]_i with the 1σ errors Δ[P^r_T]_i = σ_ic × [P^r_T]_i for 1,000 hours of observation. The error Δ[P^r_T]_i on the measurement of P^r_T in a given k-bin is a combination of contributions from the system noise and the cosmic variance. The noise term in the covariance (eq. (5) of Paper II) is suppressed by the factor (264 − a)^{-1} due to the redundancy of the OWFA baselines, so the noise contribution goes up as the baseline number a is increased. The small k-bins, which correspond to small baselines, therefore have a smaller noise contribution than the large k-bins, which correspond to large baselines. Here we have used logarithmic binning, for which the bin width and the number of k-modes in a bin increase with k. The cosmic variance in a given k-bin goes down with the number of k-modes in that bin; we therefore expect the cosmic variance to be maximum in the smallest k-bin and to decrease with increasing k. On the whole, the errors in the smaller k-bins are dominated by the cosmic variance whereas in the larger k-bins the errors are dominated by the system noise. We can see from Figure 1 that σ_ic = Δ[P^r_T]_i/[P^r_T]_i, the relative error on the binned power spectrum, is minimum in the range k ∼ 0.1 − 0.2 Mpc^{-1}. The cosmic variance dominates the relative error at smaller values of k (< 0.1 Mpc^{-1}) whereas the system noise dominates at larger k values (> 0.2 Mpc^{-1}).
We also see that the relative error is below 0.2 in the range 0.05 ≤ k ≤ 0.3 Mpc^{-1}, where our results predict a 5σ detection of the binned power spectrum (Figure 2). We have so far considered the errors on the measurement of [P^r_T]_i for a fixed observing time (1,000 hours). We shall now consider how the errors σ_ic vary with the observing time t. The time dependence of the visibility covariance C_ab(m) enters through the rms noise of the measured visibilities σ_N, which scales as σ_N ∝ 1/√t. We expect the visibility covariance C_ab(m) to vary inversely with t, i.e. C_ab(m) ∝ 1/t, for small observing times where the noise contribution is considerably larger than the signal, and we expect C_ab(m) to approach a constant value, independent of the observing time, for large values of t. The derivatives [C_ab(m)]_{,α} which appear in the Fisher matrix (eq. (7)) are independent of t. It then follows that the Fisher matrix F_αγ scales as F_αγ ∝ t² for small observing times and has a constant value for large t. We therefore expect the relative errors σ_ic to vary as σ_ic ∝ 1/t for small observing times, and to become independent of t for large observing times where the error is dominated by the cosmic variance. Figure 3 shows a contour plot of the signal-to-noise ratio (SNR) as a function of the Fourier mode k and the observing time t. We see that a statistically significant measurement (≥ 3σ) of the binned power spectrum is only possible for observing times greater than 200 hours. A 3σ detection of [P^r_T]_i is possible in the k-range 0.04 ≤ k ≤ 0.2 Mpc^{-1} with 200 − 300 hours of observation. Detection at a significance of 5σ is not possible with t ≤ 500 hours of observation; we find that a 5σ detection of [P^r_T]_i is possible for 0.05 ≤ k ≤ 0.3 Mpc^{-1} with 1,000 hours of observing time. Note that the SNR peaks in the range k ∼ 0.1 − 0.2 Mpc^{-1}. In this k range the SNR continues to increase with t over the entire t range shown here, and a 10σ detection is possible with 2,000 hours of observation. At k < 0.1 Mpc^{-1}, the SNR stops increasing with t beyond a certain point: the SNR becomes dominated by the cosmic variance as t is increased, and the SNR contour becomes parallel to the t axis. We see that, irrespective of the observing time, a 5σ detection is not possible for k < 0.036 Mpc^{-1} if only one pointing is considered. For k > 0.2 Mpc^{-1} the error is system-noise dominated, and the SNR continues to increase with increasing t; however, a 5σ detection is not possible for k > 0.5 Mpc^{-1} within 2,000 hours of observation. As mentioned earlier, we expect SNR ∝ t for small observing times, when the error is system-noise dominated, and we expect the SNR to saturate at a fixed value for large observing times, where the cosmic variance dominates. Figure 4 shows how the SNR changes with observing time t for a few representative k-bins. The small k-bins have a relatively large cosmic variance. We see that the SNR in the smallest k-bin (0.036 Mpc^{-1}) shown in this figure is nearly saturated at a very small observing time (t ∼ 300 hours) and increases very slowly for larger observing times; a 5σ detection in this bin requires ∼ 10,000 hours of observation. The k-bin at 0.33 Mpc^{-1} shows the SNR ∝ t scaling for t ≤ 700 hours, beyond which the increase in SNR is slower. The two larger k-bins shown in the figure show the SNR ∝ t behaviour over the entire t range considered here. However, note that the largest k-bin, with k_i = 1.16 Mpc^{-1}, has a rather low SNR, and a 5σ detection there is only possible with 10,000 hours of observation.
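The two regimes just described can be reproduced with a toy one-parameter Fisher calculation; all numbers below are illustrative rather than the OWFA model. The relative error falls as 1/t while the system noise dominates and then flattens at the cosmic-variance floor:

```python
import numpy as np

# One mode, one amplitude parameter; S and N0 are arbitrary illustrative
# numbers, with the noise variance falling as 1/t (sigma_N ~ 1/sqrt(t)).
S, N0 = 1.0, 50.0
for t in [10.0, 100.0, 1000.0, 10000.0]:
    C = S + N0 / t                 # covariance: signal + time-diluted noise
    dC = S                         # derivative w.r.t. ln(amplitude)
    F = 0.5 * (dC / C) ** 2        # one-parameter version of eq. (7)
    print(t, 1.0 / np.sqrt(F))     # error ~ 1/t early, -> sqrt(2) floor late
```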
As mentioned earlier, we expect the SNR to increase with time, i.e. SNR ∝ t, for small observing times. The increase in the SNR slows down at larger observing times, and for even larger observing times (t ≥ t_CV) the SNR saturates at a fixed value determined by the cosmic variance in the particular k-bin. Here t_CV refers to the observing time beyond which the SNR is determined by the cosmic variance, and we have estimated it for the different k-bins by requiring that the SNR increase more slowly than t^0.002 for t ≥ t_CV. Figure 5 shows t_CV for the different k-bins. For any particular bin, it is not possible to increase the SNR any further by increasing the observing time beyond t_CV. We find that t_CV increases approximately as t_CV ∝ k^0.63 for k ≤ 1.0 Mpc^{-1}. The increase in t_CV is rather slow for k ≥ 1 Mpc^{-1}, and it saturates at t_CV ∼ 15,000 hr beyond k = 2.0 Mpc^{-1}. This behaviour is decided by a combination of several factors, including the OWFA baseline redundancy, the sampling of the 3D Fourier modes and the logarithmic binning. The discussion so far has focused entirely on observations in a single field of view. As already mentioned, the SNR ceases to increase with observing time once t ∼ t_CV. We see that t_CV ∼ 1,000 hr in the smallest k-bin; the SNR in this bin will saturate for t > 1,000 hr, and it is necessary to observe multiple fields of view to increase the SNR any further. The SNR scales as SNR ∝ √N, where N is the number of fields of view. A possible observational strategy for OWFA would be to observe multiple fields of view, with each field observed for a duration of 1,000 − 2,000 hr. The 3σ contour in Figure 3 would correspond to 5.2σ for observations in N = 3 fields of view. We see that a 5σ detection is possible in nearly all the k-bins if 3 fields of view are each observed for 1,500 hr.

Summary and Conclusions

We have considered Phase II of OWFA to study the prospects of measuring the redshifted 21-cm power spectrum in different k-bins. The entire analysis is restricted to observations in a single field of view. We find that a 5σ detection of the binned power spectrum is possible in the k-range 0.05 ≤ k ≤ 0.3 Mpc^{-1} with 1,000 hours of observation. The SNR peaks in the k-range 0.1 − 0.2 Mpc^{-1}, where a 10σ detection is possible in 2,000 hours of observation. Our study reveals that it is not very advantageous to observe much beyond 1,000 hours, as the errors in measuring the power spectrum become cosmic-variance dominated in several of the small k-bins, and the SNR in these bins increases rather slowly with increasing t. As discussed earlier, the variation of the baseline over the observing bandwidth is ∼ 5%. This causes both the diagonal and off-diagonal components of the Fisher matrix to change, by an amount expected to be not more than 5 − 10%. The redshifted 21-cm signal provides a unique way to measure the BAO in the post-reionization era (z ≤ 6), and this is perceived to be a sensitive probe of dark energy. The BAO is a relatively small feature (∼ 10 − 15%) that sits on the HI power spectrum. The five successive peaks of the BAO span the k-range 0.045 ≤ k ≤ 0.3 Mpc^{-1}, which is well within the k-range probed by OWFA. The detection of the BAO requires measuring the HI power spectrum at a significance of 50σ or more.
From Figure 3, we find that such a sensitivity cannot be achieved in the relevant k range within t ∼ 2000 hours of observation. It is also clear that the required sensitivity cannot be achieved by considering observations in a few fields of view. For detecting the BAO it is necessary to consider a different observational strategy covering the entire sky (e.g. Shaw et al. 2013). We plan to address this in future work.
2017-03-07T06:07:24.000Z
2017-03-01T00:00:00.000
{ "year": 2017, "sha1": "0e6ff994f2725335a6b94c2c4758e86502676cf8", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1703.00634", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0e6ff994f2725335a6b94c2c4758e86502676cf8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
46845668
pes2o/s2orc
v3-fos-license
The effect of desflurane and propofol protocols on preconditioning

Material and methods. Ninety patients, aged > 18 years, American Society of Anesthesiologists (ASA) category III, scheduled to undergo primary elective coronary artery bypass grafting (CABG), were included in the study. During maintenance, the patients in group 1 (n = 30) received a propofol infusion (5–6 mg/kg/h) combined with a fentanyl infusion (3–5 mcg/kg/h); the patients in group 2 (n = 30) also received a propofol infusion (5–6 mg/kg/h) combined with a fentanyl infusion (3–5 mcg/kg/h), but they were additionally given 6% desflurane inhalation for 15 min both before cross-clamping of the aorta and after removal of the clamp; the patients in group 3 (n = 30) received a propofol infusion (2–3 mg/kg/h) combined with a fentanyl infusion (3–5 mcg/kg/h) and received continuous 6% desflurane inhalation. Blood samples were drawn in the preoperative period (S1), during cardiopulmonary bypass before cross-clamping of the aorta (S2), after removal of the cross-clamp (S3) and 24 h after the operation (S4).

Introduction

Ischemic preconditioning has been defined as the reduction of high-energy catabolism by producing short periods of ischemia that are accompanied by a decrease in myocardial contractility, arrhythmia and intracellular acidosis. Thus, ischemia-reperfusion-related contractile dysfunction is prevented, which is crucially important in patients with a hypertrophied ventricle. Preconditioning produces short periods of ischemia that help the heart adapt to ischemia-reperfusion compromise. 1,2 As demonstrated by experimental and clinical studies, producing short periods of ischemia using pharmacological agents and perioperative volatile anesthetic drugs has a preconditioning effect on the myocardium. 3 Propofol has been shown to have antioxidant effects, and desflurane and sevoflurane have been shown to be associated with lower troponin I levels, which may indicate their potential use for preconditioning. 4,5 Large amounts of reactive oxygen radicals are created during cardiopulmonary bypass, causing an increase in systemic oxidative stress and lipid peroxidation that alters myocardial function. 6 Tumor necrosis factor alpha (TNF-α), which increases during the creation of oxygen radicals, has been shown to increase following cardiopulmonary bypass. 7 Therefore, TNF-α is thought to play an important role in the inflammatory process that causes cardiac dysfunction. 4 Heart-type fatty acid binding protein (h-FABP) has been shown to be a sensitive marker in the diagnosis of myocardial infarction. Its use in the assessment of preconditioning during cardiac surgical anesthesia has been suggested, since it may be detected in venous blood within a couple of hours after myocardial ischemia or infarction. 8,9 TNF-α has also been suggested to be a useful marker in assessing the effectiveness of the preconditioning method used in cardiac surgery. 10,11 Another advantage of TNF-α is its stimulation of the acute phase reaction, which may allow the cardioprotective effects of preconditioning to be traced during cardiac surgery. In light of the above, we sought to evaluate the effects of different propofol and/or desflurane management protocols on preconditioning during coronary artery surgery, with the assessment based on TNF-α and h-FABP levels.
Patients and methods

The study was approved by our institutional review board (02-2/6, 20.03.2013). All patients were informed about the study protocol and signed procedure-oriented informed consent forms. Patients aged > 18 years, American Society of Anesthesiologists (ASA) category III, scheduled to undergo primary elective coronary artery bypass grafting (CABG), were included in the study. Patients with a left ventricle ejection fraction < 50% and those with unstable angina pectoris, diabetes, renal failure (creatinine ≥ 1.2 mg/dL), or acute or recent (< 2 weeks) myocardial infarction were excluded. Patients with a clear indication for combined valve or aortic surgery and those who had cardiogenic shock or low cardiac output syndrome were also excluded. A total of 90 patients were included in the study.

Study protocol and chemical analysis

The patients were pre-medicated with 5 mg oral diazepam on the night before the operation. All operations were performed by the same surgical team. Standard monitoring was performed with 12-lead electrocardiogram and pulse oximetry. A peripheral venous line was introduced via the right antecubital vein. Invasive arterial monitoring was achieved via the right radial artery. After 5 min of pre-oxygenation with 100% oxygen, anesthesia was induced with 1.5–2.0 mg/kg/min of propofol (Lipuro 1%, Braun, Melsungen, Germany) and 5–10 mcg/kg of fentanyl (Fentanyl, Mercury Pharma, London, UK). Neuromuscular blockade was achieved with 1 mg/kg of intravenous rocuronium (Curon, Mustafa Nevzat, Istanbul, Turkey). Patients were intubated and placed on volume-controlled mechanical ventilation. The respiratory rate was set at 12 breaths per min, positive end-expiratory pressure at 0 mbar, maximum pressure at 30 mbar and tidal volume at 7–10 mL/kg. End-tidal CO2 was measured using a Nihon Kohden Life Scope 14. Then, a central venous catheter was introduced via the right internal jugular vein, and central venous pressure was recorded during and after the operation. Bispectral index (BIS) monitoring was performed in all patients (Aspect Medical Systems BIS VISTA™, Covidien). The patients were randomly allocated into 3 groups to receive 1 of 3 different anesthetic maintenance regimens. Randomization was achieved using computer-based software. During maintenance, the patients in group 1 (n = 30) received a propofol infusion (5–6 mg/kg/h) combined with a fentanyl infusion (3–5 mcg/kg/h). Patients in group 2 (n = 30) also received a propofol infusion (5–6 mg/kg/h) combined with a fentanyl infusion (3–5 mcg/kg/h), but they were additionally given 6% desflurane (Suprane, Baxter, Puerto Rico) inhalation for 15 min both before cross-clamping of the aorta and after removal of the clamp. The patients in group 3 (n = 30) received a propofol infusion (2–3 mg/kg/h) combined with a fentanyl infusion (3–5 mcg/kg/h) plus continuous 6% desflurane inhalation. BIS was kept at 40–50.
Body temperature was monitored using a nasopharyngeal probe, and patients were cooled to 32°C. Blood samples were drawn in the preoperative period (S1), during cardiopulmonary bypass before cross-clamping of the aorta (S2), after removal of the cross-clamp (S3) and 24 h after the operation (S4). The samples were stored at −80°C. TNF-α (USCN Life Science Inc., USA) and h-FABP levels were measured via ELISA. Creatine kinase (CK), CK-MB, troponin I, B-type natriuretic peptide (BNP) and lactate dehydrogenase (LDH) levels were measured from samples drawn in the preoperative period and at the 24th postoperative hour.

Statistical analysis

All analyses were performed using the Statistical Package for the Social Sciences (SPSS) v. 19.0. For related measurements, normally distributed data were compared using repeated measures analysis of variance, and non-normally distributed data were compared using the Friedman test. For independent measurements, normally distributed data were compared using one-way analysis of variance (ANOVA), and non-normally distributed data were compared using the Kruskal–Wallis test. Spearman's correlation analysis was used to test for monotonic relationships among the study variables. A p-value of less than 0.05 was considered statistically significant.

Results

The 3 groups were similar in terms of age and body mass index (p > 0.05). CK, CK-MB, LDH, troponin I and BNP levels showed a significant increase at the 24th postoperative hour compared to their baseline values (p < 0.001). There were no significant differences among the groups either before or after the operation (p > 0.05) (Table 1).

In groups 1, 2 and 3, TNF-α levels did not differ among S1, S2 and S4 (p > 0.05), whereas S3 levels were significantly higher than S1, S2 and S4 levels (p < 0.001). There was a significant difference between S2 and S4 in group 1 (p < 0.05), whereas no such difference was observed in the other groups (Table 2). In almost all groups, TNF-α levels showed a significant increase after removal of the cross-clamp but had decreased by 24 h postoperatively. In addition, S3 TNF-α levels showed a marked increase compared to the other stages in all 3 groups. S3 TNF-α levels did not differ significantly among the 3 groups (p > 0.05). S2 TNF-α levels were significantly lower in group 3 compared to group 1 and group 2 (p < 0.05). Similarly, S4 TNF-α levels were significantly lower in group 3 than those in group 1 and group 2 (p < 0.05) (Table 2). S2 TNF-α levels were significantly lower in group 2 and group 3 (desflurane administered) than in group 1 (desflurane not administered) (p < 0.05). The most profound reduction by the 24th postoperative hour was that seen in group 3 (p < 0.05). In group 3, S3 h-FABP levels were significantly higher than S1, S2 and S4 levels (p < 0.001), whereas no significant difference was found among S1, S2 and S4 h-FABP levels (p > 0.05). In group 1, no significant difference was found between S1 and S2 h-FABP levels, whereas the differences among the other stages were statistically significant (p < 0.001). In group 2, no significant difference was found between S1 and S2 h-FABP levels, whereas the differences among the other stages were statistically significant (p < 0.001) (Table 3).
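The comparisons above were run in SPSS. As an illustration of the same nonparametric workflow (Friedman test for the related stage measurements, Kruskal–Wallis for independent group comparisons, Spearman's correlation), here is a Python sketch with purely synthetic data; the means, spreads and group sizes are assumptions, not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic TNF-alpha values (pg/mL) for one group of 30 patients at the
# four stages S1..S4; S3 is simulated as the post-cross-clamp peak.
s1 = rng.normal(10, 2, 30)
s2 = rng.normal(11, 2, 30)
s3 = rng.normal(18, 3, 30)
s4 = rng.normal(12, 2, 30)

# Related (repeated) measurements, non-normal case: Friedman test
chi2, p_friedman = stats.friedmanchisquare(s1, s2, s3, s4)

# Independent measurements across the 3 groups at one stage: Kruskal-Wallis
g1, g2, g3 = rng.normal(18, 3, 30), rng.normal(17, 3, 30), rng.normal(15, 3, 30)
h, p_kw = stats.kruskal(g1, g2, g3)

# Monotonic association between h-FABP and TNF-alpha: Spearman's rho
hfabp = 0.4 * s3 + rng.normal(0, 1, 30)
rho, p_rho = stats.spearmanr(s3, hfabp)

print(f"Friedman: chi2={chi2:.2f}, p={p_friedman:.4f}")
print(f"Kruskal-Wallis: H={h:.2f}, p={p_kw:.4f}")
print(f"Spearman: rho={rho:.2f}, p={p_rho:.4f}")
```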
In all groups, h-FABP levels were found to be increased after removal of the aortic cross-clamp and decreased by the 24th postoperative hour (p < 0.05). There was a moderate but significant correlation between h-FABP and TNF-α (Spearman's rho = 0.47, p < 0.001). S1 h-FABP levels did not differ significantly among the groups (p > 0.05). S2 h-FABP levels in group 3 were significantly lower compared to group 1 (p < 0.05). S3 h-FABP levels in group 3 were also significantly lower compared to group 1 but did not differ significantly from those in group 2 (p < 0.01 and p > 0.05, respectively). S4 levels in group 3 were significantly lower than those in group 2 (p < 0.001) (Table 3).

Discussion

The myocardium is exposed to artificial ischemia and reperfusion injury during extracorporeal circulation. 12 Myocardial protection against such insults is essential to the success of cardiac surgery. Systemic inflammation plays an important role in the development of reperfusion injury. 13 There is a positive relationship between the degree of systemic inflammation and inflammatory biomarkers. 14 Studies have demonstrated that remote ischemic preconditioning suppresses pro-inflammatory gene transcription in human leukocytes. 15 Landoni et al. reported in their meta-analysis of randomized trials that troponin I levels showed a greater reduction with the modern volatile agents desflurane and sevoflurane in patients undergoing cardiac surgery. 16 However, we found no difference in troponin I levels between the groups receiving or not receiving desflurane. This finding may be attributed to the dosage of propofol or desflurane, or to the use of intravenous anesthesia as the anesthetic approach. In addition, the cardio-protective effect produced by propofol alone may be another reason why troponin I levels did not differ.

Moreover, our results are supported by others suggesting that there was no difference between propofol and sevoflurane with regard to postoperative mortality and myocardial infarction in patients undergoing CABG. These results, as reported previously, are due to the antioxidant effects of propofol and the preconditioning effects of volatile anesthetics. 17 An inverse relationship has been noted between the effectiveness of preconditioning and the amount of reactive oxygen species, whilst propofol is known as a reactive oxygen scavenger. On the other hand, Smul et al. reported in their experimental study on rabbits that propofol inhibits desflurane-related preconditioning. 18 However, no conclusive evidence exists to link this effect to free radicals.

In their prospective study on 120 patients, Huang et al. reported that TNF-α showed a significant increase within 5 min after removal of the aortic cross-clamp in all groups, whilst TNF-α levels were significantly lower after cross-clamping of the aorta in patients receiving propofol and isoflurane compared to other groups. 8 In line with our data, these authors found that an isoflurane and propofol combination was superior to regimens consisting of isoflurane alone or propofol alone.

In our study, we found that TNF-α levels were significantly lower in patients receiving low-dose propofol and continuous desflurane administration than in the other groups after removal of the cross-clamp and by the 24th postoperative hour, when stress and traumatic events (inflammation) reach their maximum. This may be attributable to the cardio-protective effect of volatile agents and their anti-inflammatory properties. 11,17
Moreover, some studies have reported the antioxidant effects of propofol. Such studies demonstrated that TNF-α, a pro-inflammatory cytokine that increases with the production of oxygen radicals, decreases after CPB. 10,11 In light of the above, any increase in TNF-α levels should be considered a negative criterion, since it is associated with decreased tolerance of ischemic damage and with inflammation. We found lower TNF-α levels in the propofol combined with continuous desflurane group compared to the propofol alone group before cross-clamping of the aorta, which may be due to the early cardio-protective effects of desflurane. The significant decrease in TNF-α levels in group 3 in the postoperative period highlights the effectiveness of the preconditioning effect of low-dose propofol and continuous desflurane administration. A few studies support these findings. 19 Sayın et al. reported that propofol inhibits lipid peroxidation. 20 In our study, both the cardio-protective and the antioxidant effects of propofol and desflurane might have been observed. Unlike previous studies, the present study demonstrated that the addition of desflurane to propofol reduces TNF-α levels following cardiopulmonary bypass. Desflurane and propofol may potentiate each other's preconditioning effects.

In the present study, h-FABP levels showed an initial increase after cross-clamping of the aorta but had decreased by the 24th postoperative hour, especially in group 3. The moderate correlation between h-FABP levels and TNF-α levels may be explained by inflammatory and traumatic processes, supporting the view that they may influence each other's release. Some studies have suggested that h-FABP may be a marker of early ischemia. 8,9 Moreover, h-FABP has been shown to peak earlier than CK-MB or cardiac troponin I. In another study, h-FABP was demonstrated to be a marker of long-term mortality following acute coronary syndrome, capable of identifying high-risk patients. 21 In light of the above, the present study demonstrated that low-dose propofol and continuous desflurane administration was more effective than propofol alone or propofol combined with 15 min of desflurane administration when h-FABP levels were considered as the measure of preconditioning. Lower h-FABP levels were observed in the low-dose propofol and continuous desflurane group compared to propofol alone before cross-clamping of the aorta, and more profoundly after removal of the cross-clamp, indicating desflurane's favorable effect on myocardial adaptation to ischemia. Moreover, the lower h-FABP levels observed in the low-dose propofol and continuous desflurane group at the 24th postoperative hour suggest that the longer the duration of desflurane administration, the better prepared the myocardium is against ischemia and reperfusion. Tomai et al. found no difference between 15 min of isoflurane administration before cardiopulmonary bypass and control groups with regard to myocardial function and cardiac enzyme levels. 22
We found that troponin levels in the continuous or intermittent desflurane administration groups and the non-desflurane group were similar. In recent years, there have been no detailed data regarding the combined use of propofol and desflurane or their short-course administration. However, there have been many reports suggesting that these drugs inhibit severe inflammation and reduce TNF-α levels, in addition to their preconditioning effects. Such studies report that ischemic preconditioning inhibits the local myocardial and systemic inflammatory response. 15,23 However, whether the decrease in TNF-α levels occurs due to the preconditioning effects of these drugs or their effects on inflammation is unclear.

Zhang et al. reported that the antioxidant effect of propofol is due to the phenol group it contains, similar to vitamin E. 24 They found that it causes lower neutrophil activation and a lower increase in C5a levels after CABG.

In conclusion, h-FABP and TNF-α levels may be used to assess the effectiveness of ischemic preconditioning practice. On the basis of the measurement of these pro-inflammatory markers, low-dose propofol and continuous desflurane provided more effective preconditioning than propofol alone or short-course desflurane in patients undergoing CABG.

Table 1. Comparison of data from patient groups before and after the operation. *Paired t-test for intra-group comparison; **one-way analysis of variance (ANOVA); the comparison of pre-op and post-op values between groups was not significant.

Table 2. Comparison of the TNF-α (pg/mL) data across stages for all groups. S – stage; G – group; SD – standard deviation; CI – confidence interval; min.–max. – minimum–maximum; a – repeated measures ANOVA; b – Friedman test (nonparametric repeated measures ANOVA); c – one-way analysis of variance (ANOVA); d – Kruskal–Wallis test (nonparametric ANOVA). If the p-value obtained by ANOVA was <0.05, *Dunn's or **Tukey–Kramer multiple comparison (post hoc) tests were used to compare the stages (S1, S2 and S3); post hoc tests were not calculated when the p-value was greater than 0.05.

Table 3. Comparison of the h-FABP (ng/mL) data across stages for all groups
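The table footnotes describe an omnibus ANOVA followed, when significant, by Dunn's or Tukey–Kramer post hoc comparisons; the study ran these in SPSS. As a hedged illustration of that two-step pattern, a short Python sketch on synthetic stage data (values and group sizes are assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic TNF-alpha values (pg/mL) at three stages for one group;
# illustrative only, not the study's data.
s1 = rng.normal(10, 2, 30)
s2 = rng.normal(11, 2, 30)
s3 = rng.normal(18, 3, 30)

# Omnibus one-way ANOVA first ...
f, p = stats.f_oneway(s1, s2, s3)
print(f"ANOVA: F={f:.2f}, p={p:.4f}")

# ... then, only if significant, a Tukey HSD post hoc comparison of all
# pairs (SciPy's tukey_hsd stands in here for the Tukey-Kramer procedure).
if p < 0.05:
    print(stats.tukey_hsd(s1, s2, s3))
```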
The Optimization Strategy of the Existing Urban Green Space Soil Monitoring System in Shanghai, China

High concentrations of potentially toxic elements (PTE) create global environmental stress due to the crucial threat of their impacts on the environment and human health. Therefore, determining the concentration levels of PTE and improving their prediction accuracy through a sampling optimization strategy is necessary for making sustainable environmental decisions. The concentrations of five PTEs (Pb, Cd, Cr, Cu, and Zn) were compared with reference values for Shanghai and China. The prediction of PTE in soil was undertaken using geostatistics and a spatial simulated annealing algorithm. Compared to Shanghai's background values, the mean concentrations of the five PTE are much higher, except for Cd and Cr. However, all measured values exceeded the reference values for China. Pb, Cu, and Zn levels were 1.45, 1.20, and 1.56 times the background values of Shanghai, respectively, and 1.57, 1.66, and 1.91 times the background values of China, respectively. The optimization approach resulted in an increased prediction accuracy (22.4% higher) for non-sampled locations compared to the initial sampling design. The higher concentrations of PTE compared to background values indicate a soil pollution issue in the study area. The optimization approach allows a soil pollution map to be generated without deleting or adding monitoring points. This approach is also crucial for filling the sampling strategy gap.

Introduction

The quality of the urban ecosystem depends on the quality of green space soil. Soil quality refers to the soil's ability to ensure biological productivity, maintain environmental quality, and promote organism health within the limits of ecosystems [1]. Rapid urbanization, industrialization [2] and greenery development affect soil quality in urban areas [3]. Urban areas therefore become sources of various pollutant elements that can accumulate in the soil over extended periods of time [4-6]. Studies on the concentrations of potentially toxic elements (PTE, previously known as heavy metals) in urban soils started in the 1960s and identified major sources of urban soil pollution [4,5]. The origins of PTE in urban soils are natural and anthropogenic. Pedogenesis processes are considered the natural source of PTE in soil [7]. Anthropogenic factors are the crucial sources of PTE in soils and predominantly result from urban development and urbanization [8], the distribution of vehicles and the types of fuels [9], emissions from industries and transportation [10], and smelting, manufacturing, mining, and coal-burning [11]. Owing to these factors, urban soils are enriched with high levels of PTE compared to threshold values [12-14]. Numerous studies on PTE in urban soils have been conducted in cities around the world, including Glasgow [15], London [16], Hong Kong [17], and New Orleans [18].

Study Area

Shanghai is one of the most extensive coastal cities in eastern China and plays a crucial role as the country's main economic, financial, trade, and shipping center, hosting some of the most important industrial centers in China. The city covers about 6340.5 km², of which 6218.65 km² is land and the rest is water, accounting for 0.06% of China's total territory [46]. The soil types mainly include paddy soil, fluvial-aquic soil, and coastal saline soil [46]. The total green space in 2015 was about 3593.5 km² [47].
The city is characterized by a subtropical monsoon climate, with an annual mean temperature of 16 °C and an annual average precipitation of approximately 1200 mm.

Soil Sampling and Chemical Analysis

A total of 460 surface soil (0-20 cm) samples were collected from different green spaces in 2018. The locations were recorded using a global positioning system (GPS) and are displayed in Figure 1. Five random soil samples were collected using a soil corer (2.5 cm diameter) and then pooled into one composite sample. The composite samples were air-dried and cleared of visible plant roots and residues. To ensure complete digestion of the soil samples, the air-dried soils were ground and passed through a 0.15 mm nylon mesh sieve. For each sample, 0.5 g of soil was digested with a concentrated mixture of HNO3, HF, and HClO4 as stated in the EPA 3052 method [48]. Mixed acid digestion makes the soil digestion more complete; therefore, compared with aqua regia digestion, mixed acid digestion yields higher results that are closer to the actual concentrations of PTE in the soil. The five PTE, namely Cu, Zn, Cd, Cr, and Pb, were measured using Inductively Coupled Plasma Mass Spectrometry (ICP-MS, NexION 300X, Spectralab Scientific Inc., Markham, ON L3R 3V6, Canada). The limit of detection (LOD) and limit of quantification (LOQ) for the different metals were determined. The LOD for the analysis of Cr, Cu, Zn, Cd, and Pb were 0.47 mg kg−1, 0.25 mg kg−1, 0.70 mg kg−1, 0.01 mg kg−1, and 0.30 mg kg−1, respectively. The LOQ of the above five PTE was four times the respective LOD. Certified soils (GSS series, China) were used as standard reference materials to verify the accuracy of the method, and the recovery rate of all measured PTE was 95-105%. All glassware and blanks were soaked in HNO3 and rinsed with Milli-Q water to prevent contamination of the testing instrument.

Geostatistical Methods

Geostatistics is an extension tool in GIS that describes spatial variation and carries out spatial interpolations [49].
The semivariance function and kriging interpolation were used to produce the initial interpolation map of green space soil [50]. The semivariogram is equal to one-half of the expected value of the squared differences between values of $Z$ at locations $x_i$ and $x_i + h$ [51]:

$$\gamma(h) = \frac{1}{2m(h)} \sum_{i=1}^{m(h)} \left[ Z(x_i) - Z(x_i + h) \right]^2$$

where $m(h)$ is the number of pairs of observations separated by distance $h$, $Z(x_i)$ is the sample value of the variable $Z$ at location $x_i$, and $Z(x_i + h)$ is the sample value of the variable $Z$ at location $x_i + h$.

Ordinary kriging is one of the most frequently used geostatistical tools to estimate unknown values from the sample data [52]:

$$\hat{z}(x_0) = \sum_{i=1}^{n} y_i \, z(x_i)$$

where $\hat{z}(x_0)$ is the value to be estimated at location $x_0$; $z(x_i)$ is the known value at sampling site $x_i$; $y_i$ are the kriging weights assigned within each local neighborhood; and $n$ is the number of sampling points within the search neighborhood used for the estimation.

The existing monitoring points were visualized and analyzed using exploratory spatial data analysis (ESDA) tools. ESDA was used to assess the degree of spatial association and to examine whether the data are normally distributed [53,54]. The spatial clusters and outliers of the existing data sets were identified using Local Moran's I [55] and the Global Moran's I statistic [56].

Prediction Accuracy Improvement Procedures

Improvements in prediction accuracy can normally be achieved by optimizing sample locations over the geographical area [57]. Optimization usually consists of adding, removing, and moving stations or sampling points [58]. One of the optimization algorithms used to add, remove, and move stations to generate optimized sampling sizes and designs is spatial simulated annealing (SSA) [42]. The SSA algorithm uses the mean kriging variance (MKV) as the objective function to obtain an optimal sample layout. In this case, the initial design was optimized by moving existing spatial points within the given study surface area, using soil Pb data as an example. Sample optimization by SSA also takes the kriging prediction and the fitted variogram models into account [59]. Data were log-transformed before the spatial optimization analysis was undertaken. The detailed optimization and evaluation techniques are explained as follows.

Perturb Initial Sampling Design by SSA and Evaluations

A 100 m × 100 m grid was overlaid on the study green space areas, and initial (pre-optimization) kriging soil Pb predictions and MKV were produced. Then, 50 to 200 existing sample points were randomly perturbed over 10,000 iterations by the SSA algorithm. A new combination is generated, and its MKV value is compared with that of the initial sampling layout. The combination is accepted if the change has improved the MKV. The maximum number of perturbed sampling points was decided based on the improved MKV values. The process continued until the prediction variance became constant or increased. The best-improved MKV combination was chosen, and a kriging prediction map and sampling distribution were generated. Finally, to evaluate the prediction accuracy improvement, cross-validation was performed.

Statistical Analysis Software and Tools

Spatial sampling optimization and descriptive statistics were performed using the R Statistical Software (version 4.0.2) [60,61]. The spatial clusters and outliers of the existing data sets were analyzed using the software GeoDa (version 1.14.0) [62]. ArcGIS (version 10.4) was also used to produce the kriging prediction maps.
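The paper implemented these steps in R, GeoDa and ArcGIS. For illustration, the following from-scratch Python sketch computes the empirical semivariogram from the formula above and solves the ordinary-kriging system for one target location; the spherical model parameters, coordinates and values are synthetic assumptions, not the survey's data:

```python
import numpy as np

def empirical_semivariogram(coords, z, bin_edges):
    """gamma(h) = 1/(2 m(h)) * sum over the m(h) pairs in each lag bin
    of [z_i - z_j]^2, mirroring the semivariogram formula above."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    sq = (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)            # count each pair once
    d, sq = d[iu], sq[iu]
    gam = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        m = (d >= lo) & (d < hi)
        gam.append(sq[m].mean() / 2 if m.any() else np.nan)
    return np.asarray(gam)

def spherical(h, nugget, psill, a):
    """Spherical variogram model; gamma(0) = 0 by convention."""
    h = np.asarray(h, dtype=float)
    s = np.where(h < a, 1.5 * h / a - 0.5 * (h / a) ** 3, 1.0)
    return np.where(h == 0, 0.0, nugget + psill * s)

def ordinary_kriging(coords, z, x0, nugget, psill, a):
    """Solve the ordinary-kriging system for one target location x0;
    returns the prediction sum_i y_i z(x_i) and the kriging variance."""
    n = len(z)
    G = spherical(np.linalg.norm(coords[:, None, :] - coords[None, :, :],
                                 axis=2), nugget, psill, a)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = G
    A[n, n] = 0.0                                # Lagrange-multiplier row/column
    g0 = spherical(np.linalg.norm(coords - x0, axis=1), nugget, psill, a)
    sol = np.linalg.solve(A, np.append(g0, 1.0))
    y, mu = sol[:n], sol[n]                      # weights y_i sum to 1
    return y @ z, y @ g0 + mu                    # prediction, kriging variance

# Illustrative usage with synthetic data (not the survey's measurements)
rng = np.random.default_rng(10)
coords = rng.uniform(0, 10_000, size=(100, 2))   # x, y in meters
z = rng.lognormal(3.3, 0.4, 100)                 # synthetic Pb values
print(empirical_semivariogram(coords, z, np.linspace(0, 5_000, 6)))
print(ordinary_kriging(coords, z, np.array([5_000.0, 5_000.0]),
                       nugget=50.0, psill=300.0, a=3_000.0))
```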
Mean Concentrations and Summary Statistics of Potentially Toxic Elements

The summary statistics and mean concentrations of the five PTE in urban green space soils are given in Table 1. The highest and lowest mean concentrations were found for Zn and Cd, respectively. The soil mean background values for Shanghai [63] and China [64] are used as reference values for comparison with the present study's values. Compared to Shanghai's background values, the mean concentrations of PTE in urban green space soil are much higher, except for Cd and Cr. All measured mean values exceed China's reference values (Table 1). Similarly, the coefficients of variation (CV, %) for Pb, Cu, Zn, and Cd were higher, indicating more significant variation among the urban green space soils (Table 1). The high CV of Pb, Cu, Zn, and Cd suggests that the soil pollution sources in urban green spaces are anthropogenic [65]. In contrast, the CV of Cr is low, which means both natural and anthropogenic factors govern its spatial distribution. The lower CV value of Cr is consistent with many other studies [66-68]. The present study is consistent with previous findings on park and roadside green spaces in Shanghai [69], but inconsistent with results found for road greenbelts, except for Pb [70]. The average values of Zn and Cr were significantly higher than the values reported for the western city of Urumqi in China [71]. The mean concentrations of the majority of the five pollutants considered in our study were lower than those reported about ten years ago for roadside soil, dust, and sediment in eastern cities in China, including Shanghai [46], Guangzhou [72], and Hangzhou [73] (Table 2). Compared to the average concentrations in worldwide studies, the Pb, Cu, and Zn values of our study were much lower than values reported from Spain, Mexico, Turkey, Sweden, and Cuba, but the Cr concentration was much higher than in the studies from Turkey and Sweden (Table 2). In this study, the Cr concentration was 2.7 and 5.2 times higher than the concentrations found in Sweden and Turkey, respectively (Table 2). For all investigated pollutants, the mean concentration values were higher than those observed in the city of Pensacola, USA (Table 2). The differences between this study and other studies could be due to the test methods, the level of urbanization in each city, the management strategies for urban green space soils [8], and the sources of variation of PTE [7], such as emissions from industry and transportation [10], and smelting, manufacturing, mining, and coal-burning [11].

Optimization Strategy and Evaluation of Existing Monitoring Points

Spatial interpolation by kriging rests on the following assumptions [49]: (1) the data have a normal distribution, (2) the data are stationary, and (3) the data fit a variogram and show spatial autocorrelation. Prior to carrying out the optimization strategy and the evaluation of prediction accuracy, these assumptions should be assessed.

Spatial Patterns of Existing Monitoring Points

The spatial patterns and distribution of each PTE are shown in Table 3. All variables revealed a clustered spatial distribution with statistical significance (p < 0.01) and positive spatial autocorrelation in the existing data sets. The most clustered positive spatial autocorrelation pattern was observed for Pb and Cd (Table 3). The Global Moran's I index values confirmed that the spatial points are clustered and non-random.
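The paper computed Global Moran's I in GeoDa. For illustration, here is a from-scratch Python sketch using binary distance-band weights; the threshold, coordinates and values are synthetic assumptions, and the permutation-based significance test that GeoDa additionally provides is omitted:

```python
import numpy as np

def global_morans_i(values, coords, threshold):
    """Global Moran's I with binary distance-band weights:
    w_ij = 1 if 0 < d_ij <= threshold, else 0 (row-standardized)."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = ((d > 0) & (d <= threshold)).astype(float)
    w /= np.maximum(w.sum(axis=1, keepdims=True), 1)  # row-standardize
    z = values - values.mean()
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

# Illustrative usage with synthetic sampling points (not the survey data)
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10_000, size=(200, 2))        # x, y in meters
pb = rng.lognormal(mean=3.3, sigma=0.4, size=200)     # synthetic Pb values
print(f"Moran's I = {global_morans_i(pb, coords, threshold=1500):.3f}")
```

Values of Moran's I near zero indicate a random pattern, while positive values, as reported for all five PTE, indicate spatial clustering.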
Similarly, the kurtosis and skewness values for all pollutants, except Cr, were high, which means the data are not normally distributed (Table 3). The high kurtosis values indicate many outliers in the data sets, the majority of which are clustered at relatively low values; however, kurtosis does not indicate which spatial locations form clusters [80]. Spatial outliers, or local outliers, are neighboring values that are spatially located at a certain distance [81]. Local Indicators of Spatial Association (LISA), known as Anselin's Local Moran's I, were used to visualize and identify the degree of spatial instability and the outliers of the given data set [55]. The univariate Local Moran's I scatter plots of the four PTE in the soil at a 12,905 m threshold distance, divided into four association neighborhood layouts (high-high, HH; low-low, LL; low-high, LH; high-low, HL), are indicated in the supporting data files (Figure S1). Spatial outlier values, comprising the HL and LH categories, and spatial clusters, comprising the HH and LL categories, are also indicated. For example, for the soil Pb data set, 45 features have neighboring features with values above the mean (HH values), and one feature is surrounded by LL values; these are part of a cluster or pattern in the data set (Figure 2). In contrast, 19 data points have nearby features with different values (low-high and high-low), and these features are spatial outliers. Spatial outliers are values that differ from the values recorded in their surrounding locations, while spatial patterns often exhibit spatial continuity and autocorrelation with nearby samples [81]. These spatial outliers influence the spatial structure modeling and the prediction of soil pollutant concentrations in urban green spaces. Therefore, the outliers were excluded and the data were transformed before the optimization strategy was undertaken.

Spatial Structures and Dependency

Theoretical semivariogram models were used for kriging interpolation and for optimizing the existing points. The best-fitting semivariogram models were selected based on root mean square error (RMSE), average standard error (ASE), and root mean square standardized error (RMSSE) values, indicated in Table 4. The best-fitted model is considered to be the one with the smallest RMSE, absolute mean errors near zero, a mean square error (MSE) near zero, and an RMSSE closest to 1 [82].
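The model-selection criteria just listed are computed from cross-validation residuals and the corresponding kriging standard errors. A minimal Python sketch of the three metrics under one common set of definitions (ASE conventions vary between software packages; the arrays here are synthetic, not the survey's cross-validation output):

```python
import numpy as np

def cv_metrics(observed, predicted, kriging_var):
    """RMSE, ASE and RMSSE as used for semivariogram model selection."""
    err = predicted - observed
    se = np.sqrt(kriging_var)                    # kriging standard errors
    rmse = np.sqrt(np.mean(err ** 2))
    ase = np.mean(se)                            # average standard error
    rmsse = np.sqrt(np.mean((err / se) ** 2))    # ideally close to 1
    return rmse, ase, rmsse

rng = np.random.default_rng(9)
obs = rng.lognormal(3.3, 0.4, 300)               # synthetic Pb observations
pred = obs + rng.normal(0, 20, 300)              # synthetic CV predictions
kv = rng.uniform(300, 500, 300)                  # synthetic kriging variances
rmse, ase, rmsse = cv_metrics(obs, pred, kv)
print(f"RMSE={rmse:.2f}, ASE={ase:.2f}, RMSSE={rmsse:.3f}")
```

An RMSSE above 1 signals that the model underestimates prediction variability, which is exactly the diagnostic the paper applies to the initial sampling design below.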
Based on these criteria, the fitted semivariogram models for each soil element are summarized in Table 5. The best-fit spatial model for Pb and Cr was spherical, whereas Zn and Cu were best fitted using the Gaussian model. Cd was fitted with the exponential model. In the semivariograms, the nugget values represent the variability of the measured variables at short distances. The spatial dependence and variation of soil properties can be categorized based on the nugget/sill ratio: if the nugget/sill ratio is less than 25%, between 25% and 75%, or greater than 75%, the variable has strong, moderate, or weak spatial dependence, respectively [83]. All studied elements had moderate-to-strong spatial dependency and fit the assumptions of spatial autocorrelation (Table 5). The nugget/sill ratio also indicates the predominant sources of soil PTE, whether natural or anthropogenic: strong spatial dependence can be attributed to intrinsic factors, and weak spatial dependence to extrinsic factors [83].

Prediction Accuracy Improvement by Optimization Strategy

A kriging interpolation surface of the study green space soil before optimization (hereafter referred to as the initial sampling design) shows a predicted Pb MKV of 131.74 mg kg−1. The kriged concentration of Pb in the initial sampling design displays spatial heterogeneity, with high-prediction hotspots located where the sampling points are highly clustered and low concentrations at the edge segments, since these areas are sparsely sampled or unsampled (Figure 3a). It is also clearly noted that there were many non-sampled green space areas in the initial sampling design, which leads to a high spatial prediction variance (131.74 mg kg−1). In this study, the MKV was used as the objective function in the SSA algorithm to optimize the initial sampling design [84,85]. Each SSA iteration step involves moving only one random sampling point, and the corresponding row and column of the covariance matrix are updated. As Figure 3b shows, after optimization, the soil Pb sampling points were placed with better uniformity over the study area than in the initial sampling design. The MKV values were also calculated after the initial sampling design was perturbed by SSA (10,000 iterations). The initial soil Pb MKV (131.7 mg kg−1) decreased to 128.9 mg kg−1 when 50 random spatial samples were perturbed and to 102.3 mg kg−1 when 200 random spatial samples were perturbed (Table 6). This means the existing soil Pb sampling points captured a 22.4% improvement in the total kriged variance and increased the accuracy for un-sampled green spaces without extra sampling points. To evaluate the prediction accuracy and the improvements over the initial sampling design, we performed a cross-validation comparison based on the prediction RMSE, RMSSE, and ASE (Figure 4). The values of RMSE, RMSSE, and ASE were 20.63, 1.006, and 21.12, respectively, before the initial sampling design was optimized, and 19.22, 0.99, and 19.99, respectively, after the initial sampling design was optimized by SSA. Better prediction accuracy is indicated by smaller RMSE values, RMSE values closer to the ASE, and RMSSE values approximating one (Figure 4). In contrast, the RMSSE value was higher than one for the initial sampling design, which indicates an underestimation of the variability of the soil Pb predictions for green space soil.
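The SSA loop described above (jitter one random point, recompute the MKV, keep the move if the objective improves) was run in R in the paper's workflow. The following is a from-scratch illustrative Python sketch: the variogram parameters, the 500 m jitter, the cooling schedule and the reduced iteration count are all assumptions, and real SSA implementations differ in their perturbation and cooling details:

```python
import numpy as np

def spherical(h, nugget=5.0, psill=120.0, a=3000.0):
    h = np.asarray(h, dtype=float)
    s = np.where(h < a, 1.5 * h / a - 0.5 * (h / a) ** 3, 1.0)
    return np.where(h == 0, 0.0, nugget + psill * s)

def mean_kriging_variance(pts, grid):
    """Mean ordinary-kriging variance over grid nodes for one layout."""
    n = len(pts)
    G = spherical(np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2))
    A = np.ones((n + 1, n + 1)); A[:n, :n] = G; A[n, n] = 0.0
    Ainv = np.linalg.inv(A)
    total = 0.0
    for x0 in grid:
        g0 = spherical(np.linalg.norm(pts - x0, axis=1))
        lam = Ainv @ np.append(g0, 1.0)
        total += lam[:n] @ g0 + lam[n]           # kriging variance at x0
    return total / len(grid)

def ssa_optimize(pts, grid, bounds, iters=500, t0=1.0, cool=0.995, seed=3):
    """Spatial simulated annealing with MKV as the objective function."""
    rng = np.random.default_rng(seed)
    cur_pts, cur_mkv = pts.copy(), mean_kriging_variance(pts, grid)
    t = t0
    for _ in range(iters):
        cand = cur_pts.copy()
        i = rng.integers(len(cand))              # move one random point
        cand[i] = np.clip(cand[i] + rng.normal(0.0, 500.0, size=2), *bounds)
        mkv = mean_kriging_variance(cand, grid)
        # accept improvements outright; accept worse layouts with a
        # probability that shrinks as the "temperature" cools
        if mkv < cur_mkv or rng.random() < np.exp((cur_mkv - mkv) / t):
            cur_pts, cur_mkv = cand, mkv
        t *= cool
    return cur_pts, cur_mkv

rng = np.random.default_rng(4)
pts = rng.uniform(0, 10_000, size=(50, 2))       # synthetic initial design
gx, gy = np.meshgrid(np.linspace(0, 10_000, 15), np.linspace(0, 10_000, 15))
grid = np.column_stack([gx.ravel(), gy.ravel()])
opt_pts, opt_mkv = ssa_optimize(pts, grid, bounds=(0, 10_000))
print(f"MKV before: {mean_kriging_variance(pts, grid):.1f}, after: {opt_mkv:.1f}")
```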
Figure 3a also shows a higher variability of the predicted soil Pb concentration in the study areas compared with the optimized sampling configuration. Many studies have confirmed that initial sampling designs optimized by SSA methods provide prediction results closer to the actual values with the lowest mean variance of spatial prediction [84,86-90].

Conclusions

The current work was carried out to investigate the concentrations of five potentially toxic elements and to identify a method for improving prediction accuracy at non-sampled locations in urban green space soils. The mean concentrations of the five pollutants in urban green areas are much higher than Shanghai's background values, except for Cd and Cr. However, all measured values exceed the mean reference values for China. The concentrations of Pb, Cu, and Zn were 1.45, 1.2, and 1.56 times the background values of Shanghai, respectively, and 1.57, 1.66, and 1.91 times the background values of China, respectively. The higher values, in comparison to the background values, may indicate the presence of soil pollution in the study areas. Similarly, the higher CV means that more significant variation exists among urban green space soils. The second objective was to improve the prediction values at non-sampled locations by optimizing the limited sampling points with the SSA algorithm. As a result, an improvement in prediction accuracy of 22.4% was found for spatial prediction at non-sampled locations. Similarly, the mean-variance values of spatial prediction were lower than those of the initial sampling design. Therefore, this optimization approach ensures good-quality soil pollution predictions without deleting or adding monitoring points.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Gingival health and oral hygiene practices among high school children in Saudi Arabia

ABSTRACT
BACKGROUND: Gingivitis is a site-specific inflammatory condition initiated by dental biofilm accumulation. The accumulation of dental plaque on the gingival margin triggers inflammatory effects that can become chronic. In addition to its local effect, gingival inflammation has recently been suggested to have an impact on general health.
OBJECTIVE: Determine the prevalence of gingivitis and its relationship to oral hygiene practices in high school children in Saudi Arabia.
DESIGN: Cross-sectional.
SETTING: High schools from different regions in Saudi Arabia.
PATIENTS AND METHODS: Periodontal examinations were conducted on a randomly selected sample of high school children between the ages of 15 and 19 years. Gingival and plaque indices, probing depth, clinical attachment level, oral hygiene practices and sociodemographic characteristics were recorded. Data were analyzed using descriptive statistics, chi-square and the independent t test.
MAIN OUTCOME MEASURE: Prevalence of gingivitis as defined by mean gingival index.
SAMPLE SIZE: 2435 high school students.
RESULTS: Twenty-one percent of the sample had slight gingivitis, 42.3% had moderate, and 1.8% had severe. Gender, toothbrushing, tongue brushing, plaque index, and the percentage of pocket depth (PD) ≥4 mm showed a significant relationship with the severity of gingivitis. Almost 39.3% of females had a healthy periodontal status when compared to males (30.7%). Thirty-five percent (35.5%) of students who brushed their teeth had a healthy periodontium compared to 26.9% who did not brush. The mean plaque index was significantly higher in students with severe gingivitis when compared to students with healthy periodontium (2.4 vs. 0.79, respectively).
CONCLUSION: Gingivitis prevalence was high compared with Western countries in a nationally representative sample of high school students in Saudi Arabia and was influenced by oral hygiene practices.
LIMITATIONS: The half-mouth study design may underestimate disease prevalence. Data on oral hygiene practices was self-reported and may thus have been affected by social desirability bias.
CONFLICT OF INTEREST: None.
Periodontal diseases have major public health importance due to their high prevalence rates and remarkable social impact. Recently, periodontal diseases have been linked to general population health. 1 Dental plaque is considered to be a risk factor for the initiation and progression of periodontal diseases. 2 A wide variety of organisms comprise the dental plaque biofilm collected from oral surfaces. Accumulation of plaque on the gingival margin initiates gingival inflammation that can become chronic. 3,4 This inflammatory condition, called gingivitis, is characterized by gingival redness, edema and bleeding on probing without detectable loss of alveolar bone or tooth-supporting structures. 5,6 Gingivitis is reversible without permanent damage if properly treated. However, if left untreated, it can progress to periodontitis, leading to destruction of alveolar bone, which may subsequently lead to tooth loss. Based on epidemiological and experimental studies, dentists recommend effective oral hygiene to control dental plaque for maintaining optimal oral health. 2,7,8 Therefore, gingivitis management is a crucial strategy to prevent the development of advanced periodontal disease. 9 Furthermore, gingival inflammation leads to the release of inflammatory mediators into the circulatory system, which may have a negative impact on overall health. 10,11

Gingivitis prevalence was 100% in a sample of adults aged between 18 and 40 years from a private college in Riyadh city 12 and in a sample of 272 children aged 5-12 years. 13 In another study, severity varied but gingivitis was nearly universal in adolescents and children; in children older than 7 years, gingivitis affected almost 70%. 14 However, a cross-sectional study that included a sample of Saudi males (n=685) aged 13-15 years in 2016 concluded that the severity of gingivitis was not associated with toothbrushing, but significantly increased in smokers and people who consumed a sugary diet, which indicates the effect of lifestyle on gingival health status and the need to encourage a healthy lifestyle in the population. 15 Another study also indicated that periodontal disease prevalence is lower in young subjects than in adults and that the incidence increases in adolescents aged 12 to 17 when compared to children aged 5 to 11. 16 The role of plaque control, which includes but is not limited to tooth and tongue brushing and flossing, and its association with gingivitis, has been widely studied. It has been globally agreed that dental floss has a positive effect on plaque removal. 17
Eighty percent of plaque deposits can be removed by flossing, as reported by the American Dental Association (ADA). 18 It is also universally accepted that oral health status is closely linked to socioeconomic status, which is in turn closely associated with oral health knowledge, attitudes and behaviors. 19,20 A study that analyzed data on self-reported oral hygiene measures showed that some oral hygiene behaviors were associated with an increased risk of gingivitis; these findings may be related to the population studied and the impact of regular preventive dental care. In a randomly selected sample in Nigeria, toothbrushing once daily was the most common practice, and the authors concluded that gingival health was influenced by socioeconomic status, oral hygiene frequency and toothbrush texture. 21

Early diagnosis and treatment of periodontal diseases in children and adolescents are important for better oral health in adulthood. Early periodontal diseases in children may develop into advanced periodontal diseases in adults, which may increase susceptibility to certain systemic diseases and conditions. 6,22 Prevention and treatment of most periodontal diseases are very effective and provide lifetime benefits. Patients, families, or populations at risk may be identified and included in special prevention or treatment programs. 23 The significance of implementing dental services should be emphasized through different channels, including schools, social media and oral health professionals. 24 There is a need to establish baseline information about oral health in the Saudi population to understand the prevalence of periodontal diseases in Saudi Arabia. Accordingly, our study evaluated the prevalence of gingivitis and its correlation with oral hygiene practices in a nationally representative sample of Saudi school children.

SUBJECTS AND METHODS

This cross-sectional descriptive study to assess the prevalence of gingivitis and its correlation with oral hygiene practices among school children in Saudi Arabia took place from September 2012 to January 2016. The study was approved by the Research Ethics Committee of the Faculty of Dentistry, King Abdulaziz University (073-09-12). The study included a random sample of healthy school children in grades 10 to 12 (15-18 years old) of both genders. Students or parents who refused to provide consent or rejected the periodontal examination, and students with medical conditions related to periodontitis, were excluded from the study. No children were admitted to the study without their parents' approval. Subject name, gender, age, marital status, address, contact information, and socioeconomic status were recorded on the consent form, which was signed by the parent. A detailed sampling design was reported in an earlier study. 25 We followed a multistage clustered sampling design to guarantee adequate representation of all children in the country within the specified school grades. The study focused mainly on large cities in each region. The relative number of subjects from each city was based on the population of the region where the city is located. Within each chosen city, a group of schools was randomly selected from various geographic areas to guarantee a mixture of various social and economic backgrounds. Within each selected school, all children in grades 10 to 12 were included in the sample. A detailed multilevel quality control procedure was used in this survey. A reference examiner trained the survey examiners and monitored them throughout the survey period.
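The multistage cluster design above (subjects allocated in proportion to regional population, schools drawn at random within each selected city, and all eligible students taken within each school) can be sketched as follows. The region names, weights, school counts and quotas are invented placeholders, not the survey's actual sampling frame:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical sampling frame: regions with relative population weights
# and a pool of school IDs per selected city (all values are placeholders).
regions = {"A": 0.40, "B": 0.35, "C": 0.25}
schools_per_city = {r: [f"{r}-school-{i}" for i in range(1, 21)] for r in regions}

total_subjects = 2435          # target sample size, as in the study
n_schools_per_city = 3

sample = {}
for region, weight in regions.items():
    quota = round(total_subjects * weight)     # subjects proportional to population
    chosen = rng.choice(schools_per_city[region],
                        size=n_schools_per_city,
                        replace=False)         # random schools within the city
    sample[region] = {"quota": quota, "schools": list(chosen)}

for region, info in sample.items():
    print(region, info)
```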
The examiners were evaluated before the survey began and were monitored during the survey period. At the examination visit, the examiners reviewed the medical history with the subjects and recorded the information. A dental history questionnaire was completed by each subject and reviewed with the examiner. The dental history included reference to the patient's oral hygiene regimen, including toothbrushing, brushing frequency, flossing and tongue brushing. All clinical examinations were performed by four dentists, who were calibrated to the exact procedures for disease diagnosis, including the proper use of Williams probes and the probe angulation, force and position for each tooth; these and the other examination criteria were prepared and made available to each examiner in a diagnostic manual. Adequate training and evaluation of the examiners was conducted to document that the examiners were scoring diseases accurately and consistently. Examiners were given didactic sessions to explain the proper use of the periodontal probe, including force, site and angulation of the probe. After the didactic sessions, all examiners were given hands-on training sessions to fill in the examination forms accurately. The intra- and inter-examiner reliabilities of the gingival index, probing depth and clinical attachment level were tested using intraclass correlation coefficients (ICC). The ICC values were >0.7 for all variables, which corresponds to excellent reliability as reported by Landis and Koch. 26

The gingival and periodontal examination consisted of measurement of the gingiva and periodontal supporting tissue, including gingivitis, attachment loss, and probing pocket depth. Probing depth and attachment loss were measured at six sites for each examined tooth (using a Williams probe). We randomly selected one maxillary and one mandibular quadrant using simple random sampling. Disease was evaluated at the mesiobuccal, mid-buccal and distolingual (MB-B-DL) sites of all teeth, excluding third molars, following a three-site partial-mouth protocol. 27 For oral hygiene evaluation, we used the Silness and Löe plaque index. 28 For the severity of gingival inflammation, we used the Löe and Silness gingival index. 29 The mean gingival index was used for the assessment of the severity of gingival inflammation in the study sample. Slight gingivitis was defined as a gingival index of 0.1-1.0, moderate gingivitis as a gingival index of 1.1-2.0, and severe gingivitis as a gingival index of 2.1-3.0. 30

The data were analyzed using IBM SPSS version 22.0.0 (IBM SPSS, Armonk, NY: IBM Corp). Simple descriptive statistics were used to describe the characteristics of the study variables: counts and percentages for the categorical and nominal variables, while continuous variables are presented as mean and standard deviation. To test for relationships between gingivitis and categorical and continuous variables, we used the chi-square and independent t tests, respectively. These tests were done under the assumption of a normal distribution. Statistical significance was set at P<.05.

RESULTS

Prevalence of gingivitis

The sample consisted of 2435 subjects (Table 1) with a mean (SD) age of 17.3 (1.0) years and a mean percentage of pockets >4 mm in depth of 1.85 (range, 0 to 66.7) (Figure 1). Of the 2435 study subjects, 209 (8.6%) had periodontitis, as reported earlier. 25 Table 2 shows the prevalence of slight, moderate, and severe gingivitis and the relationships of other variables to the severity of gingivitis in 2226 subjects.
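The severity grading and the chi-square testing described above were done in SPSS. For illustration, a Python sketch of both steps — mapping a subject's mean gingival index to the paper's severity bands, then a chi-square test of severity against gender — using synthetic site scores and a made-up contingency table (none of these numbers are the published data):

```python
import numpy as np
from scipy import stats

def classify_gi(mean_gi):
    """Severity bands used in the paper (Loe & Silness gingival index)."""
    if mean_gi < 0.1:
        return "healthy"
    if mean_gi <= 1.0:
        return "slight"    # GI 0.1-1.0
    if mean_gi <= 2.0:
        return "moderate"  # GI 1.1-2.0
    return "severe"        # GI 2.1-3.0

rng = np.random.default_rng(11)
# Synthetic GI site scores (0-3) at 3 sites x 14 teeth for 5 subjects
sites = rng.integers(0, 4, size=(5, 14 * 3))
for i, s in enumerate(sites, 1):
    gi = s.mean()
    print(f"Subject {i}: mean GI = {gi:.2f} -> {classify_gi(gi)}")

# Chi-square test of severity vs. gender on a made-up contingency table
# (rows: male, female; columns: healthy, slight, moderate, severe)
table = np.array([[320, 230, 450, 25],
                  [440, 250, 490, 21]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```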
The remaining subjects, who had periodontitis, another form of periodontal disease, were excluded and reported in an earlier study. 24 Gender, toothbrushing, tongue brushing, plaque index, and the percentage of PD ≥4 mm showed significant relationships with the severity of gingivitis. For instance, 39.3% of females had a healthy periodontal status compared to 30.7% of males. Table 3 shows that more females (96%) brushed their teeth than males (82.3%). Flossing (95.7%), tongue brushing (99.7%), plaque index (1.23 [0.8]) and gingival index (1.00 [0.8]) had statistically significant relationships with toothbrushing. Ninety-nine percent of students who brushed their tongue also brushed their teeth, whereas only 40% of students who brushed their teeth also brushed their tongue. Table 4 shows that the majority (83.5%) brushed their teeth once or twice per day, while only 16.5% brushed their teeth more than two times per day. Female gender (21.1%), flossing (27.5%), tongue brushing (20.3%), and the plaque and gingival indices had significant relationships with toothbrushing frequency. Table 5 shows that 89.5% of the students did not floss their teeth and that subjects who used dental floss tended to brush their tongue (16.2%) and to brush their teeth more than two times per day (18.8%). Females (13.7%) flossed more than males (7.8%), and the gingival indices had a significant relationship with flossing. Table 6 shows that gender, regular dental visits, toothbrushing, brushing frequency, flossing, previous dental treatment, plaque and gingival indices, and missing teeth had significant relationships with tongue brushing. For instance, 80% of male students did not brush their tongue compared to 50% of females. Table 7 shows that females (23.7%), students who brushed their teeth (20%) and students who brushed their tongues (23.8%) visited the dentist regularly.

DISCUSSION

Gingivitis, the most common form of periodontal disease, is characterized by inflammation of the soft tissue without evident clinical attachment loss. 31 Studies on gingivitis have been conducted in many parts of the world with people of different ethnic and cultural backgrounds, but periodic evaluation of the data is very much required. The presence of gingivitis in school children can be due to different food habits, the presence of mixed dentition, improper and unsupervised oral hygiene practices, and malocclusion. 16,32 To our knowledge, the present study is the first to report the prevalence of gingivitis and its correlation with oral hygiene practices in a representative sample from different regions of Saudi Arabia.

The prevalence of gingivitis varies between studies, which could be due to dissimilarities in age groups, study populations, and the case definition of gingivitis. In general, gingivitis starts in early childhood and becomes more prevalent and severe with age. In the present population, 21.3% had slight gingivitis, 42.3% had moderate gingivitis and 1.8% had severe gingivitis, with a total of 65.4% having some degree of gingivitis, which is consistent with most studies. In a study that described periodontal health in 14- to 17-year-old children who participated in the National Survey of Oral Health in US schoolchildren during 1986-87, the prevalence of gingivitis was approximately 60%, which is consistent with our data. 33 In contrast, the prevalence of gingivitis was higher in another study from Saudi Arabia, in which the prevalence was 100% in a sample of 385 adult subjects 18-40 years of age. 34 A study from Iran reported a gingivitis prevalence of 97.9%. 35
These inconsistencies among various studies could be attributed to differences in age groups, dietary habits, oral hygiene practices and/or demographic backgrounds. In our study, the severity of gingivitis was significantly related to toothbrushing, tongue brushing, plaque index, and the percentage of PD ≥4 mm. Subjects who brushed their teeth and tongue were less likely to have gingivitis. As is to be expected, subjects who did not brush had a higher plaque index and percentage of PD ≥4 mm compared to those who did. It has been shown that abstaining from oral hygiene measures for 21 days will result in the initiation of gingival inflammation. 36 This correlates well with the results of this study, specifically that 11.5% reported that they do not brush their teeth and 40.7% brush only once a day.

Our study indicated that there is a significant relationship between the severity of gingivitis and gender. Males were more likely to have gingivitis than females, which is consistent with several other studies. 24,37 One explanation, which is consistent with our data, is that males have poorer hygiene practices and worse attitudes toward oral health and visiting the dental office. In our study population, more males did not brush their teeth, floss, or attend regular dental visits compared to females. Higher rates of oral hygiene practices (tooth and tongue brushing, frequency of brushing, and flossing) were observed among females than among males in our study sample. This translated into a higher plaque index in the male population compared to the female population. Another study similarly reported that 82% of boys and 76% of girls were affected by gingivitis, and attributed this discrepancy to the greater cleanliness of the girls. 15 A study on the prevalence of periodontal disease in 19-year-old individuals in Sweden in relation to gender revealed that gender (male) and the particular county region were significant factors associated with high plaque and gingivitis scores. 38

Our data indicate that age and nationality (Saudis vs. non-Saudis) were not significantly associated with the severity of gingivitis, which is inconsistent with other studies 39,40 that showed worse oral hygiene status among older children. The inconsistency could be due to the older age group in the present study (15-19 years), while in the aforementioned studies the ages were 7-15 years, and the sample size of the non-Saudi students was small. In the present study, the mean plaque index was higher in those who reported not brushing their teeth (plaque index=1.59) than in those who did (plaque index=1.23). The same pattern was observed with the gingival index: the mean gingival index was 1.21 in those who did not brush their teeth and 1.00 in those who did. These parameters were also significantly related to the frequency of toothbrushing, tongue brushing and flossing. The plaque index was significantly associated with the severity of gingivitis, meaning that the higher the plaque index, the more severe the gingival inflammation. However, in a 3-5-year-old Flemish population, almost 30% to 40% of the children presented with noticeable plaque accumulation, while only 3% to 4% presented with clinical signs of gingival inflammation. 41 The differences may be due to the different age groups, as gingivitis in younger age groups usually tends to be plaque independent.
The major contributing factors for increased gingival and plaque indices were noncompliance with oral hygiene measures and lack of regular dental visits, which is consistent with a study of 3090 Saudi students, in which 22.6% had never visited the dentist. That study showed a correlation between the use of dental services and periodontal health status, especially if the services were suggested by the dentist or the patient was given oral hygiene instructions (i.e., taught how to brush by the dentist). 24 Toothbrushing is the most prevalent method of plaque control at home. A lack of plaque reduction despite satisfactory brushing frequency seems to be attributable to a lack of oral hygiene skills, which also influences the efficiency of self-performed mechanical plaque removal in adults. 42 Accordingly, studies on oral hygiene techniques indicate that appropriate, long-established techniques (the modified Bass, modified Stillman, and Charters techniques) are significant in the prevention of periodontal diseases. 43 Our study indicated that almost all students who brush their tongue brush their teeth. However, only 40% of students who brush their teeth brush their tongue. One study showed significant reductions in plaque levels after 10 and 21 days of tongue brushing. 44 It also showed that tongue brushing was equally effective in reducing plaque deposits in children. Therefore, all the other oral hygiene practices should be stressed as adjunctive to toothbrushing during educational programs to maximize the effect of plaque control measures. Although oral hygiene instruction has been considerably researched, most of the studies suffer from methodological deficiencies, such as missing control groups. 45,46 The importance of education was demonstrated in a study in Edinburgh. School children who received 20-minute information sessions and take-home educational material had statistically significant improvements in plaque scores and gingival health compared to those who did not. 47 The present study used the half-mouth study design, which may underestimate the disease prevalence. The data on oral hygiene practices were self-reported and may thus have been affected by social desirability bias. In conclusion, there was a high prevalence of gingivitis among the study sample, and it was related to oral hygiene practices. Therefore, emphasis on the importance of tooth and tongue brushing, flossing, and regular dental visits is recommended to prevent gingivitis. Moreover, community educational and preventive programs should be considered and re-implemented on a larger scale, covering both urban and suburban areas, with larger study populations and clearer instructional content. Although there is good evidence to support toothbrushing alone for children, flossing is recommended to develop the necessary skills and establish the habit. Further research on the application of oral health educational and preventive methods is required to improve the oral health status of the Saudi population. There is also good evidence to recommend the use of chlorhexidine oral rinse to help prevent gingivitis. 48
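The bivariate associations reported above (gender versus toothbrushing, flossing versus tongue brushing, and so on) are the kind of cross-tabulated relationships usually assessed with a chi-square test of independence. As a hedged illustration only, the sketch below runs such a test on an invented 2×2 table; the counts are placeholders, not the study's data, and the study's own statistical procedure is not detailed in this excerpt.

```python
# Minimal sketch: chi-square test of independence for a 2x2 cross-tabulation
# (e.g., gender vs. toothbrushing). Counts are hypothetical placeholders.
from scipy.stats import chi2_contingency

#            brushes  does not brush
table = [[480, 20],   # females (hypothetical)
         [420, 90]]   # males   (hypothetical)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
# A p-value below 0.05 would be reported as a significant relationship.
```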
Intradermal delivery of a synthetic DNA vaccine protects macaques from Middle East respiratory syndrome coronavirus Emerging coronaviruses from zoonotic reservoirs, including severe acute respiratory syndrome coronavirus (SARS-CoV), Middle East respiratory syndrome coronavirus (MERS-CoV), and severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), have been associated with human-to-human transmission and significant morbidity and mortality. Here, we study both intradermal and intramuscular 2-dose delivery regimens of an advanced synthetic DNA vaccine candidate encoding a full-length MERS-CoV spike (S) protein, which induced potent binding and neutralizing antibodies as well as cellular immune responses in rhesus macaques. In a MERS-CoV challenge, all immunized rhesus macaques exhibited reduced clinical symptoms, lowered viral lung load, and decreased severity of pathological signs of disease compared with controls. Intradermal vaccination was dose sparing and more effective in this model at protecting animals from disease. The data support the further study of this vaccine for preventing MERS-CoV infection and transmission, including investigation of such vaccines and simplified delivery routes against emerging coronaviruses. Introduction Middle East respiratory syndrome (MERS) coronavirus (MERS-CoV) is a positive-sense, single-stranded RNA coronavirus that infects the lower and upper respiratory tract, causing a viral pneumonia characterized by acute respiratory symptoms, such as fever, aches, shortness of breath, sore throat, cough, diarrhea, and vomiting (1). Since 2012, there have been 2566 laboratory-confirmed cases and 882 MERS-CoV-associated deaths (34.4% case fatality rate) (2). Human cases are frequently associated with close contact with infected camels; however, human-to-human transmission, nosocomial infections, and travel-associated cases have been observed. MERS-CoV has therefore become a global health priority concern. The 2015 South Korean outbreak originated from a single traveler who returned home from the Middle East. In total, 186 people were infected during the South Korean outbreak, with 36 MERS-associated fatalities (3) and a significant impact on the healthcare system. This outbreak highlights the importance of rapid infection control for emerging coronaviruses and other infectious diseases. The urgent need for accelerated vaccine development has become critical in light of the ongoing severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic, a betacoronavirus related to MERS-CoV. DNA vaccines are a nonlive, noninfectious platform that is re-administrable, easily scalable for manufacturing, heat stable, and has an established safety and tolerability profile (4,5). We previously described the rapid development of an anti-MERS synthetic DNA vaccine encoding a full-length MERS-CoV spike (S) antigen, which induced robust humoral and cellular immune responses and protected rhesus macaques from MERS-CoV challenge (6). This MERS DNA vaccine candidate (INO-4700/GLS-5300), delivered by intramuscular (i.m.) administration, was found to be safe and tolerable with a 3-dose injection regimen in a recently completed human phase I study (7) and is currently in expanded studies of a phase I/IIa trial in South Korea. Further study of low-dose delivery with shortened dosing regimens is important to rapidly induce protective immunity, particularly during an emerging outbreak (8). Here, we describe the i.m. and intradermal (i.d.) delivery, immunogenicity, and protective efficacy of the MERS DNA vaccine candidate INO-4700, using an abbreviated 2-dose immunization regimen in rhesus macaques. We observed induction of strong antibody titers against the full-length S protein as well as the receptor-binding domain (RBD), S1, and S2 regions of the S protein. We also observed induction of neutralizing antibody responses and cellular immune responses. Finally, the animals were challenged, and the effect of vaccination on infection following vigorous MERS-CoV challenge in nonhuman primates (NHPs) was studied. Macaques receiving this 2-dose vaccine demonstrated lower viral loads, with protection of the lung from inflammation, protection against elevated cytokine levels, and, most importantly, protection against clinical disease symptoms such as breathing difficulties. Even low-dose i.d. delivery afforded comparable efficacy to higher-dose i.d. and i.m. regimens, and both i.d. immunizations exhibited improved disease control compared with i.m. vaccination. The data support further evaluation of simple dose-sparing i.d.-delivered DNA vaccination regimens against MERS-CoV. These advances have important applicability for similar DNA vaccines and i.d. delivery against other emerging betacoronaviruses, such as SARS-CoV-2, as well as for future emerging infectious diseases. Results Immunogenicity of the i.d.-delivered MERS DNA vaccine. Very recent advances in formulations for i.d. delivery of synthetic DNA vaccines with adaptive electroporation (EP) have significantly improved the generation of antigen-specific immune responses, including long-term antibody and T cell responses induced in human trials, with responses persisting at least 1 year after vaccination (9,10,11). Delivery of DNA vaccines i.d. is tolerable, simple to administer, and potentially more immunogenic than i.m. delivery when given at the same dose in recent clinical studies (7,9,10). Therefore, we evaluated the efficacy of our previously described synthetic MERS DNA vaccine (6), which had been studied in NHPs using an i.m. 3-dose immunization regimen. Here, we studied an abbreviated 2-dose i.d. immunization regimen and compared this approach with i.m. delivery.
Rhesus macaques (n = 6/group) were first administered either a 0.2 mg dose (i.d.-low), a 1 mg dose (i.d.-mid), or a 2 mg dose (i.d.-high) of the MERS DNA vaccine by i.d. injection followed by adaptive EP. The i.m. group (n = 6) received a 1.0 mg dose. All vaccinated groups received a 2-dose regimen, spaced at a 4-week interval (Figure 1A). The control group (n = 6) was not vaccinated. Cellular and humoral immune responses were assayed following each immunization. Following the immunization studies, we selected 3 of the groups and 4 of the animals from each of the selected groups for MERS viral challenge, based on space limitations. We analyzed both humoral and cellular responses, as both adaptive immune compartments may be important for viral clearance and recovery from infection, as has been described for both SARS-CoV and MERS-CoV (12,13) and suggested by recent studies of human immune responses in convalescent patients with SARS-CoV-2 (14,15). We analyzed the induction of T cell responses by IFN-γ ELISpot 2 weeks after each immunization. T cell responses against peptide pools spanning the full-length S protein were readily detected in all groups (Figure 1; supplemental material available online with this article; https://doi.org/10.1172/jci.insight.146082DS1). Additionally, IFN-γ ELISpot assays were performed using full-length recombinant S protein for stimulation, as a tool to address rapid vaccine evaluation during an outbreak in the absence of synthetic peptide pools. Although fewer total spots were observed on average, strong T cell responses were induced in all groups, following a similar trend to those observed with peptide pools (Figure 1C), supporting the full-length antigen assay as an additional tool in the evaluation of vaccine immunogenicity. To address the question of antibody responses following in vivo processing of a full-length spike protein antigen, we assayed antibody endpoint titers against the full-length S as well as the S1, S2, and RBD proteins by total IgG binding ELISA. After 1 immunization, 67% of the animals had seroconverted; after the second immunization, all vaccinated animals seroconverted to the full-length S, S1, S2, and RBD proteins, except for 1 animal in the i.d.-mid group that did not seroconvert to the S2 protein (Figure 1D). Two weeks after the second immunization, geometric mean endpoint titers in all groups were approximately 10^4 for both the full-length S and S1 proteins. Geometric mean endpoint titers in all groups were approximately 10^2 to 10^3 for S2 and RBD, with a trend for slightly higher titers in the i.m. group, though there were no significant differences in endpoint titer values between vaccine groups. Overall, the antibody responses induced in this study demonstrate the consistency of synthetic DNA vaccination and the robust induction of antibody responses by the simple i.d. delivery. Notably, responses were also robust in the low-dose (0.2 mg) i.d.-delivered MERS DNA vaccine group (Figure 1D). Of the 30 total animals, 18 were selected for challenge with MERS-CoV, due to funding and space limitations. Based on the ELISpot and endpoint binding antibody titer data available at the time, a total of 12 vaccinated animals and the 6 naive control animals were moved into a challenge study (animals that were not selected for challenge are indicated by open shapes in Figure 1, C and D). There was no statistical difference in immune responses between the animals in each group that were challenged and those that were not challenged.
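To make the two read-outs above concrete, here is a small, self-contained sketch of how background-subtracted ELISpot counts and ELISA endpoint titers (with the 3-standard-deviation positivity cutoff described in the Methods) can be computed; all well counts, optical densities, and titers below are invented placeholders, not the study's data.

```python
# Sketch of the two immunogenicity read-outs described above. All values
# are placeholders; the subtraction and 3-SD cutoff follow the Methods.
import statistics
from math import exp, log

# ELISpot: antigen-specific spots = stimulated wells - DMSO background.
dmso_spots = [4, 6, 5]
peptide_spots = [230, 250, 245]
specific = sum(peptide_spots) / len(peptide_spots) - sum(dmso_spots) / len(dmso_spots)
print(f"antigen-specific spots per 2e5 PBMC: {specific:.0f}")

# ELISA endpoint titer: highest dilution whose OD exceeds a cutoff set
# 3 standard deviations above the mean OD of unvaccinated sera.
naive_od = [0.08, 0.10, 0.09, 0.11]
cutoff = statistics.mean(naive_od) + 3 * statistics.stdev(naive_od)
dilutions = [100, 200, 400, 800, 1600, 3200]
od = [1.9, 1.5, 0.9, 0.4, 0.15, 0.07]
endpoint = max(d for d, o in zip(dilutions, od) if o > cutoff)
print(f"endpoint titer: {endpoint}")

# Geometric mean endpoint titer across animals in a group.
titers = [800, 1600, 1600, 3200]
gmt = exp(sum(log(t) for t in titers) / len(titers))
print(f"geometric mean endpoint titer: {gmt:.0f}")
```

The geometric mean at the end corresponds to the group-level endpoint titers quoted above (approximately 10^4 for the full-length S and S1 proteins).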
Because the i.d.-low group exhibited robust immunogenicity, we wanted to compare its challenge outcome to the i.d.-high group, so 4 animals each from the i.d.-high and i.d.-low groups were chosen for the challenge. Four animals from the i.m. group served as a comparison with previous studies, which were 3-dose immunization studies (6). Neutralizing antibody titers for the challenged animals were assayed using MERS-CoV EMC/2012 (Figure 1E). Neutralizing activity was detected in the sera after the boost, peaking at week 6, with average titers of 50, 170, and 130 for the i.m., i.d.-high, and i.d.-low groups, respectively. By week 8, all vaccinated groups had comparable neutralizing antibody titers, demonstrating that binding and neutralizing antibody titers similar to those induced by the higher doses (1.0 mg i.m. vaccination and 2.0 mg i.d. vaccination) could be induced by low-dose (0.2 mg) i.d. vaccination. Delivery i.d. appears dose sparing based on this comparison, and a similar observation has recently been reported for an HIV DNA vaccine studied in the clinic, which was delivered by the Cellectra i.d. EP approach (11). [Figure 1 legend (excerpt): animals represented with closed symbols were challenged with MERS-CoV 4 weeks after the final immunization; open symbols depict responses for animals that were not selected for challenge. (E) Vaccine-induced neutralizing antibody titers in challenged animals (n = 4/vaccinated group, n = 6/naive); sera were evaluated for their ability to neutralize MERS-CoV, and reciprocal neutralizing antibody (nAb) titers are shown as boxes indicating the 25th percentile, median, and 75th percentile, with whiskers showing the minimum and maximum values.] Challenge outcome of i.d. versus i.m. MERS DNA vaccine regimens. Macaques were challenged by inoculation with 7 × 10^6 median tissue culture infectious dose (TCID50) of the MERS-CoV EMC/2012 strain through rigorous instillation via a combination of intratracheal, intranasal, oral, and ocular administrations, as previously established (16,17). NHPs were monitored for clinical signs of disease and also received chest x-rays on days 0, 1, 3, 5, and 6 after challenge, before they were euthanized and necropsied on day 6 for lung pathology and viral load determination. All immunized animals, except 1 i.m. animal (11 of 12), had a major reduction in clinical signs of disease as compared with the control group (Figure 2A and Supplemental Table 1), showing significant disease protection. An upE qRT-PCR assay was performed to detect viral loads present in the collected lung tissue. Overall, compared with the unvaccinated animals, all MERS DNA vaccine groups exhibited log reductions in viral loads across all regions of the lower airways (Figure 2B). Significantly decreased viral loads were observed in all vaccinated groups compared with control animals in the right bronchus, right middle lung, right lower lung, left upper lung, left middle lung, and left lower lung lobes (P values are listed in Supplemental Tables 2 and 3). Minimal virus was detected in the routes of instillation challenge. It is likely that residual virus from the instillation was being detected in these tissues, as 2 animals were still positive in the conjunctiva (the ocular administration route), a nonrespiratory tissue, at day 6. In both the vaccinated and control groups, radiographic signs of disease were minimal. Lung tissues from all challenged animals were examined with H&E staining and IHC against MERS-CoV antigen to evaluate virus-induced pathology (Figure 2E). Histological evidence of mild focal interstitial pneumonia was observed in 5 of 12 animals in the vaccinated groups, with multifocal moderate interstitial pneumonia in all 6 naive macaques. All 6 animals in the control group eventually developed multiple symptoms of disease, including difficulty breathing, as did 1 animal in the i.m. vaccinated group. No animals in the i.d. groups exhibited symptoms associated with lung involvement during the course of the challenge study. All of the control animals showed lung disease symptoms during the challenge course, as well as other symptoms. Finally, MERS-CoV antigen was detected through IHC in 4 of 6 lung specimens from unvaccinated macaques but was not observed in any vaccinated macaques (Figure 2E). Discussion In the last twenty years, 3 new CoVs have emerged from zoonotic reservoirs (MERS, SARS, and SARS-CoV-2). There are no licensed vaccines to prevent coronavirus infections in people; however, important products are advancing in this space. Vaccine candidates that are simple to deliver, well tolerated, do not induce anti-vector immunity, and can be readily administered in resource-limited settings could be important. There have been a few other vaccine candidates evaluated in NHP challenge studies for MERS. These include an rRBD-plus-adjuvant vaccine approach using 3 immunizations, reported by Lan et al., which induced partial protection in a short-term, 3-day challenge NHP model (18). A study by L. Wang et al., using combinations of DNA vaccines and protein boosts, showed a limited vaccine effect on infection by CT scan readout (19). A recent study by van Doremalen et al. compared single-dose and 2-dose vaccine regimens. Both dose regimens could affect viral disease and viral load, particularly in the lower respiratory tract, with the single-dose regimen exhibiting a smaller protective effect, with limited effect on pathogenesis, compared with the 2-dose regimen. Data from these reports are illustrative of the utility of this particular multiple-route-challenge NHP model developed at Rocky Mountain Laboratories (RML) for vaccine testing. It is reproducible and provides broad tissue sampling as well as disease readouts (6,16,17,20). Here, we investigated the immunogenicity and protective efficacy of an i.d.-delivered synthetic MERS DNA vaccine using a shortened 2-dose immunization schedule and compared this to an i.m.-delivered 2-dose DNA vaccine formulation. The immune analysis compared 3 different vaccine doses for their immune potency by i.d. delivery in parallel with i.m. delivery. The MERS DNA vaccines induced antibody responses against all regions of the S protein and robust neutralizing antibodies. Cellular immune responses were induced in all animals, which may be important for clearance of virally infected cells, limiting pathogenesis, and reducing viral loads. Vaccines that drive both antibody and T cell immunity could be important for preventing asymptomatic spread and protecting the lower airway, thus mitigating disease. For challenge, we downselected animals to focus on the i.d.-low, i.d.-high, and i.m. immunization groups, and due to cost constraints, we were limited in the number of animals in each challenge group, although the immune spread was overlapping. The challenge outcome showed that all 3 vaccination groups protected rhesus macaques against MERS-CoV EMC/2012 challenge compared with unvaccinated control animals; however, the i.d. groups, including the low-dose group, appeared to have the most robust effect on disease and symptomology.
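To make the titer definition used above concrete, here is a small sketch of how a reciprocal neutralization titer can be read out from CPE scoring of a 2-fold serum dilution series. The dilution series and scores are hypothetical; the rule itself, the reciprocal of the highest dilution that still inhibits replication, follows the Methods.

```python
# Sketch: reciprocal virus-neutralization titer from CPE scoring of a
# 2-fold serum dilution series. Data are hypothetical placeholders.
dilution_factors = [20, 40, 80, 160, 320, 640]   # 1:20, 1:40, ...
cpe_inhibited = [True, True, True, True, False, False]

# Titer = reciprocal of the highest dilution that still inhibits CPE.
titer = max(d for d, ok in zip(dilution_factors, cpe_inhibited) if ok)
print(f"neutralization titer: {titer}")   # -> 160
```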
To our knowledge, this is the first demonstration of protection with an i.d.-delivered MERS, or other coronavirus, vaccine candidate. Using a sensitive RT-PCR assay, we observed significant decreases in viral loads in vaccinated animals in the lower lung regions and a significant reduction in early inflammatory cytokines in response to viral infection, as well as protection against symptomology. Prior work with MERS vaccine candidates has focused primarily on i.m. delivery (18,19,21). Additionally, this study demonstrated that a 2-dose regimen with low-dose i.d. delivery was more impactful on disease than a higher-dose i.m. delivery. In this study, we observed that i.m. delivery of synthetic DNA vaccines induced somewhat higher cellular immune responses than i.d. delivery at the same dose, though i.d. delivery induced consistent IFN-γ ELISpot responses. In contrast, i.d. delivery appeared to induce faster seroconversion and higher binding antibody titers, as well as higher neutralizing antibody titers, than i.m. delivery. This trend can be seen in this study with a MERS DNA vaccine (Figure 1) as well as in recent clinical studies of DNA vaccines targeting HIV (11) and Ebola (10). In addition, it could be that there is a different induction of T cell trafficking by i.d. versus i.m. immunization, such as has recently been reported in a Leishmania model system (22). One hypothesis is that different cell populations are transfected in the muscle (myocytes) and the skin (keratinocytes, fibroblasts, dendritic-like cells, adipocytes, and potentially some myocytes) (23), resulting in different recruitment profiles for antigen-presenting cells to the site of immunization. Additional study in this regard is warranted. Delivery i.d. using synthetic DNA has significant advantages for rapid clinical development, is dose sparing with a simple administration procedure, and is associated with high tolerability. As MERS vaccine candidates progress through preclinical and clinical studies, questions regarding animal models and efficacy endpoints are important to address. In-country human efficacy trials may be challenging due to the low number of yearly cases (<300). Data from animal models such as this NHP model may therefore have value as a bridge, with human data coming from expanded-phase clinical trials. Understanding the relevance of rigorous instillation challenges in NHPs will be important, as it is unlikely that humans will encounter such a high infectious dose from multiple sites. It is possible that this model sets a high bar for vaccine-mediated sterilizing protection; however, the vaccines tested in this study exhibited substantial impact and protection from disease, which was more pronounced using the i.d. route of vaccination. The reproducibility of the NHP model of MERS-CoV infection, and the clear disease phenotype induced, mimicking aspects of human infection, suggest that such a model might also be useful with regard to studies of vaccines for SARS-CoV-2, the virus responsible for the COVID-19 pandemic that was first identified in China in 2019. Furthermore, questions have been raised by some vaccine studies in SARS and MERS challenge models reporting enhancement of viral pathogenesis in immunized animals compared with nonvaccinated controls in the absence of robust neutralizing antibodies. For example, Hashem et al. reported on an Ad5-MERS spike vaccine, which in a mouse model appeared to increase lung pathogenesis following viral challenge (24).
Such enhancement of disease has also been reported for an MVA-vectored spike SARS vaccine in an NHP challenge model, where immunized animals presented with diffuse alveolar damage after SARS-CoV challenge, whereas control-immunized animals showed only signs of minor inflammation after SARS-CoV infection (25). In the current study, no evidence of adverse lung pathology was observed in any of the dosing groups compared with unimmunized control animals. Assessment of a large panel of blood cytokines after challenge showed significant decreases in all such inflammatory mediators, consistently observed across the animals in this challenge, suggesting that the vaccines have a benefit in the prevention of virally induced destructive inflammation. In summary, our results illustrate that a MERS spike antigen synthetic DNA vaccine administered in a 2-dose i.d. EP regimen can have a positive impact in an important NHP challenge model, protecting against symptoms and pathology. A dose-sparing effect was shown, with no evidence of enhanced lung pathology and only limited virally induced systemic inflammation after i.d. delivery of a synthetic DNA vaccine encoding a full-length MERS-CoV spike. In addition, the vaccine induces antibody and cellular immune responses, both of which can contribute to protection and the clearance of virally infected cells, limiting pathogenesis and reducing viral loads in MERS-infected patients (13). Additional studies and comparison of immunogenicity data from human trials will be informative for MERS-CoV vaccines as well as for other emerging CoV infections. Methods Study design. Groups of 6 rhesus macaques (BIOQUAL Inc.) were vaccinated twice, 4 weeks apart, i.m. (1 mg at 1 site) or i.d. with various doses (2 mg: 1 mg at each of 2 sites; 1 mg: 1 site; 0.2 mg: 0.1 mg at each of 2 sites) of a synthetic DNA vaccine encoding a full-length MERS-CoV S antigen, with EP (6). A subset of animals (i.m., n = 4; i.d.-high, n = 4; i.d.-low, n = 4; control, n = 6) was transported from BIOQUAL Inc. to RML approximately 2 weeks before live-virus challenge. Humoral responses were similar for all selected animals, and selection was based on their cellular responses after immunization. Macaques that trended toward higher antibody and T cell levels were selected for challenge, although levels were not significantly different from those of animals that were not selected. Open symbols in Figure 1, C-E, indicate animals not selected for challenge. Animals were randomly assigned study numbers before arrival at RML, and all RML personnel were completely blinded to group assignments. Rhesus macaques were inoculated with 7 × 10^6 TCID50 of MERS-CoV EMC/2012 by a combination of intratracheal, intranasal, oral, and ocular routes (26). After challenge, the animals were observed twice daily for clinical signs of disease and scored using a previously described clinical scoring system (the same person, blinded to group assignments, scored the animals throughout the entire study) (27). On days 0, 1, 3, 5, and 6 after inoculation, clinical exams were performed on anesthetized animals by board-certified clinical veterinarians. Blood was collected for hematology, serum chemistry, and serological analysis. Ventral-dorsal and lateral radiographs were collected. On day 6 after inoculation, all animals were euthanized, and necropsy was performed on all animals by a board-certified veterinary pathologist.
Conjunctiva, nasal mucosa, mandibular lymph nodes, tonsils, pharynx, trachea, right and left bronchus, samples from all lung lobes, mediastinal lymph nodes, liver, spleen, kidney, and urinary bladder were collected for virological analysis; whole lungs were collected for histopathological analysis. Hematology and clinical chemistries. The total white blood cell count, lymphocyte, neutrophil, platelet, reticulocyte, and red blood cell counts, as well as hemoglobin and hematocrit values, were determined from EDTA blood with the IDEXX ProCyte DX analyzer (IDEXX Laboratories). Serum biochemistry (albumin, AST, ALT, GGT, BUN, creatinine) was analyzed using the Piccolo Xpress Chemistry Analyzer and Piccolo General Chemistry 13 Panel discs (Abaxis). PBMC isolation. Whole blood was collected from each NHP into sodium citrate cell preparation tubes (CPT; BD Biosciences) containing an anticoagulant and a gel barrier. Following collection and before same-day shipment, the tubes were spun to separate and concentrate PBMCs as per the manufacturer's instructions. Red blood cells and neutrophils pellet at the bottom of the tubes and are held in place by the gel barrier; plasma and lymphocytes remain above the gel barrier. Each CPT can hold approximately 8 mL of blood and is shipped at room temperature. The spun CPT tubes were processed for PBMC isolation. After red blood cell lysis with ammonium-chloride-potassium buffer, viable cells were counted using a ThermoFisher Countess Automated Cell Counter and resuspended in complete culture medium (RPMI 1640 supplemented with 10% FBS and 1% penicillin/streptomycin) (R10). After removing cells for the IFN-γ ELISpot and ICS assays, the remaining PBMCs were frozen in freezing media (10% DMSO, 10% RPMI, 80% FBS) in cryovials and stored long term in liquid nitrogen. ELISpot assay. To assess the cellular IFN-γ responses to vaccination, monkey IFN-γ ELISpot assays were performed using an IFN-γ ELISpotPRO kit (ALP) (catalog 3421M-2APW-10, Mabtech) following the manufacturer's instructions. Briefly, 96-well plates were blocked for a minimum of 2 hours with R10, and then 200,000 PBMCs from study animals were added to each well and incubated at 37°C in 5% CO2 in the presence of media with DMSO (negative control), cell stimulation cocktail (PMA/ionomycin, eBioscience) (positive control), peptide pools consisting of 15-mers overlapping by 9 amino acids and spanning the length of the MERS S protein (GenScript, custom made), or recombinant S protein (SinoBiological). After 18-20 hours, the plates were washed and spots were developed according to the manufacturer's instructions. Antigen-specific responses were determined by subtracting the number of spots in the DMSO-containing wells from that in the wells containing peptide or protein stimulation. ELISA. ELISA was performed to determine the antigen-specific antibody response in sera. 96-well ELISA plates (Nunc, 44-2404-21) were coated with 1 μg/ml recombinant MERS S, S1, S2, or RBD proteins (SinoBiological) in DPBS overnight at 4°C. Plates were then washed 4 times with PBS plus 0.05% Tween-20 (PBST) and blocked with 5% skim milk in PBST for 90 minutes at 37°C. After the blocking incubation, plates were washed, and serially diluted rhesus macaque sera were added in dilution buffer (5% skim milk in PBST) and incubated for 1 hour at 37°C.
Plates were washed, and a 1:10,000 dilution of secondary antibody HRP conjugate (4700-05, Southern Biotech, clone SB108a) was added and incubated for 1 hour at 37°C. Plates were washed, 1-step TMB (MilliporeSigma) was applied to the plates, and the reaction was stopped with 2 N sulfuric acid. Plates were then read for absorbance at 450 nm within 30 minutes using a Biotek Synergy 2 plate reader. Sera from 24 unvaccinated rhesus macaques were used to determine the background cutoff for calculating endpoint titers for each target protein. Serum samples were scored as positive for binding antibodies if they were 3 standard deviations above the average of the unvaccinated animals. Virus neutralization assay. Two-fold serial dilutions of heat-inactivated (30 minutes, 56°C) rhesus macaque sera were prepared in DMEM containing 2% fetal calf serum, 1 mM L-glutamine, 50 U/ml penicillin, and 50 μg/ml streptomycin, after which 100 TCID50 of HCoV-EMC/2012 virus was added. After a 1-hour incubation at 37°C, this mix was added to VeroE6 cells. At 5 days after infection, wells were scored for cytopathic effect. The virus neutralization titer was expressed as the reciprocal value of the highest dilution of the serum that still inhibited HCoV-EMC/2012 virus replication. Quantitative RT-PCR. Tissues (30 mg) were homogenized in RLT buffer, and RNA was extracted using the RNeasy kit (Qiagen) according to the manufacturer's instructions. For detection of viral RNA, 5 μl of RNA was used in a 1-step real-time RT-PCR upE assay (28) using the Rotor-Gene probe kit (Qiagen) according to the instructions of the manufacturer. In each run, standard dilutions with known copy numbers of a T7 in vitro-transcribed RNA standard were run in parallel to calculate the copy number of RNA present in the samples. Radiographs. Ventrodorsal and lateral (right and left) radiographs were obtained using a mobile digital radiography unit with a flat-panel digital detector (Sound Technologies tru/DR) and a portable x-ray generator (model PXP-HF, Poskom). Radiographs were interpreted by 2 board-certified clinical veterinarians. Histopathology. Histopathology and IHC were performed on macaque lungs. Tissues were placed in cassettes and fixed in 10% neutral buffered formalin for 7 days. Tissues were subsequently processed with a Sakura VIP-5 Tissue Tek on a 12-hour automated schedule, using a graded series of ethanol, xylene, and ParaPlast Extra. Embedded tissues were sectioned at 5 μm and dried overnight at 42°C prior to staining. Tissue sections were stained with H&E. Specific anti-CoV immunoreactivity was detected using an in-house polyclonal rabbit antibody against MERS-CoV EMC/2012 at a 1:1000 dilution. The tissues were then processed for IHC using the Discovery XT automated processor (Ventana Medical Systems) with a DAPMap kit (Ventana Medical Systems). Statistics. GraphPad Prism 7.02/8.0 was used to analyze and plot the data. Data are presented as a range from minimum to maximum value, with all data points shown. Where appropriate, the statistical difference between immunization groups at each time point was assessed using a parametric t test (2-tailed) or the nonparametric Mann-Whitney test, adjusted for multiple comparisons using a Bonferroni correction. Adjusted P < 0.05 was defined as significant. Study approval. All animal experiments were approved by the Institutional Animal Care and Use Committees at BIOQUAL Inc.
and at Rocky Mountain Laboratories, NIH, and were carried out by certified staff in Association for Assessment and Accreditation of Laboratory Animal Care International-accredited facilities, according to each institution's guidelines for animal use. The studies followed the guidelines and basic principles in the United States Public Health Service Policy on Humane Care and Use of Laboratory Animals (http://grants.nih.gov/grants/olaw/references/PHSPolicyLabAnimals.pdf) and the Guide for the Care and Use of Laboratory Animals (National Academies Press, 2011). The Institutional Biosafety Committee approved work with infectious MERS-CoV under BSL3 conditions. Sample inactivation was performed according to Institutional Biosafety Committee-approved standard operating procedures for removal of specimens from high containment. Author contributions. AP, ELR, HF, KM, KEB, and DBW contributed to the conception and design of the studies and reviewed data over the course of the studies. AP, ELR, FIZ, KYK, DPS, RS, FF, AO, KMW, EH, TT, RR, JL, PWH, KM, and GS performed experiments and analyzed data or generated reagents supporting the studies. JM managed the animal procurement and shipping. All authors interpreted/reviewed the findings. AP, ELR, ZX, and KM contributed to the first drafts of the manuscript; HF, AP, ELR, DBW, KEB, and LMH edited sections of the manuscript. All authors contributed to and approved the final version of the manuscript.
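As a hedged sketch of the quantification step in the upE qRT-PCR assay described in the Methods, the snippet below fits a standard curve of Ct against log10 copies of the in vitro-transcribed standard and interpolates an unknown sample; all Ct values are invented for illustration, not taken from the study.

```python
# Sketch: absolute quantification from a qRT-PCR standard curve.
# Standards with known copy numbers are run in parallel (per the Methods);
# the Ct values below are invented placeholders.
import numpy as np

log10_copies = np.array([7, 6, 5, 4, 3, 2], dtype=float)
ct_standards = np.array([14.1, 17.5, 20.9, 24.4, 27.8, 31.2])

# Fit Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(log10_copies, ct_standards, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0   # amplification efficiency

def copies_from_ct(ct: float) -> float:
    """Interpolate the copy number of an unknown sample from its Ct."""
    return 10 ** ((ct - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
print(f"sample at Ct 22.3 ~ {copies_from_ct(22.3):.2e} copies")
```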
On normalized generating sets for generalized quasi-twisted codes over finite fields In this short correspondence, we first determine the normalized generating set for generalized quasi-twisted codes over finite fields. Then we present an algorithm to construct it. As an application, some optimal or best-known linear codes are derived from generalized quasi-twisted codes by applying our method. We prove that GQT codes over finite fields also have normalized generating sets, and we construct some good GQT codes from the normalized generating sets. This paper is organized as follows. In Section II, we introduce some preliminary notations and definitions. Section III determines the normalized generating set for GQT codes and presents an algorithm to construct it. In Section IV, we construct some optimal or best-known GQT codes. Section V concludes the paper. PRELIMINARIES Let $\mathbb{F}_q$ be a finite field and $\mathbb{F}_q^*$ be the unit group of $\mathbb{F}_q$, where $q$ is a power of a prime. Let $\lambda \in \mathbb{F}_q^*$ and let $r_1, r_2, \ldots, r_\ell$ be positive integers with $n = r_1 + r_2 + \cdots + r_\ell$; a linear code $C$ of length $n$ over $\mathbb{F}_q$ that is invariant under the simultaneous $\lambda$-constacyclic shift of its $\ell$ coordinate blocks is called a $\lambda$-generalized quasi-twisted (GQT) code of length $n$ over $\mathbb{F}_q$. Note that if $r_1 = r_2 = \cdots = r_\ell$, then $C$ is a $\lambda$-quasi-twisted (QT) code, and if $\lambda = 1$, then $C$ is a generalized quasi-cyclic (GQC) code. Let $\pi_i$ be the $i$-th canonical projection of $C$. Set $R = \mathbb{F}_q[x]/(x^{r_1}-\lambda) \times \cdots \times \mathbb{F}_q[x]/(x^{r_\ell}-\lambda)$; then $R$ has an $\mathbb{F}_q[x]$-module structure given by the coordinatewise multiplication $*$. For a GQT code $C$ we have an isomorphism of $\mathbb{F}_q$-modules from $C$ to its image in $R$, mapping a codeword $(c_{1,0}, c_{1,1}, \ldots, c_{\ell,r_\ell-1})$ to $(c_1(x), \ldots, c_\ell(x))$ with $c_i(x) = \sum_{j} c_{i,j} x^j$. Then there exists a generating set $a_1, a_2, \ldots$ of $C$, and, further, $C$ has a corresponding form of generator matrix over $\mathbb{F}_q$. ON NORMALIZED GENERATING SETS FOR GQT CODES OVER FINITE FIELDS In this section, we will show that any generating set of a GQT code can be normalized. Further, by the normalized generating sets, we can construct some GQT codes with good parameters to address the open question in [9]. Theorem 1. Let $C$ be a GQT code of length $n$. Then $C$ has a generating set $a_1, a_2, \ldots$ satisfying the stated degree conditions (…). Such a generating set is called normalized here. Proof: When $\ell = 1$, $C$ is a $\lambda$-constacyclic code, and the result obviously holds. Assuming the result holds for fewer blocks, one shows that $C$ is generated by $a_1, a_2, \ldots$ satisfying the above conditions. By mathematical induction, we get the desired result. Corollary 2. If a GQT code $C$ has a normalized generating set as in Theorem 1, then the dimension of $C$ is determined by the degrees of the generators (…). It is then easy to see that the associated set $S$ generates $C$ and that the elements of $S$ are $\mathbb{F}_q$-linearly independent. By Theorem 1, we give an algorithm to construct a normalized generating set from any given generating set of the GQT code $C$. Step (a): by the Euclidean algorithm, find $\gcd(\ldots)$ of the relevant generator components (…). SOME GOOD GQT CODES In this section, we construct some good GQT codes over $\mathbb{F}_q$ by Theorem 1. Applying the Magma system [10], we obtain a $[12,8,3]$ code $C$ over $\mathbb{F}_3$. By the bounds on the minimum distance of linear codes [11], this $C$ is an optimal linear code over $\mathbb{F}_3$. By calculation, we also obtain a $[14,6,6]$ code over $\mathbb{F}_3$, which is an optimal linear code. CONCLUSIONS In this paper, we determined the normalized generating sets for GQT codes. Moreover, an algorithm for producing normalized generating sets has also been presented. As an application, some good GQT codes are derived by our construction.
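Since step (a) of the normalization algorithm rests on the Euclidean algorithm for polynomials over $\mathbb{F}_q$, here is a minimal self-contained sketch of a monic polynomial gcd over a prime field GF(p), with coefficient lists ordered from the constant term upward. It illustrates only that computational step under the assumption p is prime; it is not the paper's Magma code.

```python
P = 3  # work over GF(3); P is assumed prime

def trim(f):
    """Drop trailing zero coefficients (mod P)."""
    while f and f[-1] % P == 0:
        f = f[:-1]
    return f

def poly_mod(f, g):
    """Remainder of f divided by g over GF(P); lists are constant-term first."""
    f = trim([c % P for c in f])
    g = trim([c % P for c in g])
    inv_lead = pow(g[-1], P - 2, P)  # inverse of leading coefficient (Fermat)
    while len(f) >= len(g):
        shift = len(f) - len(g)
        factor = (f[-1] * inv_lead) % P
        for i, c in enumerate(g):
            f[i + shift] = (f[i + shift] - factor * c) % P
        f = trim(f)
    return f

def poly_gcd(f, g):
    """Monic gcd over GF(P) via the Euclidean algorithm."""
    f, g = trim([c % P for c in f]), trim([c % P for c in g])
    while g:
        f, g = g, poly_mod(f, g)
    inv_lead = pow(f[-1], P - 2, P)
    return [(c * inv_lead) % P for c in f]

# Example: gcd(x^4 - 1, x^2 - 1) over GF(3) is x^2 - 1, i.e. x^2 + 2.
print(poly_gcd([2, 0, 0, 0, 1], [2, 0, 1]))  # -> [2, 0, 1]
```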
Magnetic phenomena in 5d transition metal nanowires We have carried out fully relativistic full-potential, spin-polarized, all-electron density-functional calculations for straight, monatomic nanowires of the 5d transition and noble metals Os, Ir, Pt and Au. We find that, of these metal nanowires, Os and Pt have mean-field magnetic moments for values of the bond length at equilibrium. In the case of Au and Ir, the wires need to be slightly stretched in order to spin polarize. An analysis of the band structures of the wires indicates that the superparamagnetic state that our calculations suggest will affect the conductance through the wires, though not by a large amount, at least in the absence of magnetic domain walls. It should thus lead to a characteristic temperature- and field-dependent conductance, and may also cause a significant spin polarization of the transmitted current. I. INTRODUCTION There is presently a strong interest in the physics of metal nanowires of atomic dimensions. A freely hanging metallic nanowire is formed when two pieces of material, initially at contact, are pulled away from each other over atomic distances. In the process, a connective bridge or neck elongates and narrows. Experimentally, segments of such nanowires have been formed between tips, in particular of Au, 1,2 but very recently also in break junctions of Pt and Ir. 3 The one-dimensional character of nanowires causes several new physical phenomena to appear, like quantized conductance 4 and helical geometries. 1,5,6 With respect to the bulk metal, the freely hanging nanowires are of course unstable and therefore transient objects, which undergo thinning and eventual breaking. Nevertheless, the kinetics leading to that thinning process slows down when reaching special "magic" geometries, where long lifetimes of several seconds have been recorded. The occurrence of magic geometries has been proposed to correspond to local minima of the nanowire effective string tension. 6 Some of these wires are only a few atomic layers thick, but can extend in length up to 15 nm, which corresponds to roughly 50 gold-atom diameters. 1 Ultimately thin wires, consisting of only a single atomic strand, so-called monowires (necessarily much shorter than the above-mentioned thicker wires), have also been observed. 3,7,8 Besides these transient objects, another, stable type of nanowire also exists. Structurally stable nanowires can be grown on stepped surfaces, like for example the recently observed Co monatomic chains on a Pt substrate, 9 or inside tubular structures, like the Ag nanowires of micrometer lengths grown inside self-assembled calix[4]hydroquinone nanotubes. 10 An interesting question is of course if, when, and how magnetism may appear in nanowires and how, if that is the case, this affects the other properties. Metals that are magnetic already in bulk can be expected to be magnetic also as nanowires, Hund's rule being reinforced by the lower coordination. But can normally non-magnetic metals also magnetize in nanowire form? It has been suggested that even a jellium confined in a thin cylinder may in principle magnetize for certain radii of the cylinder. 11 However, the moment formation is confined to very special radii or electron densities, and the associated energy gain is very small. That is of course so because exchange interactions, as described by Hund's first rule, are not particularly strong in an sp-band metal (e.g., Na or Al), a typical system that might be thought of as a jellium.
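Whether a moment forms is, at the mean-field level, a competition between the exchange energy gained by spin polarization and the kinetic (band) energy it costs. As a hedged, schematic way to state this balance, one can invoke the standard Stoner criterion (a textbook mean-field condition, quoted here for orientation rather than taken from the calculations of this paper):

$$ I\,N(E_F) \geq 1, $$

where $I$ is the Stoner exchange parameter and $N(E_F)$ is the density of states at the Fermi level. Narrow bands, as in a low-coordination wire, enhance $N(E_F)$ and favor spontaneous spin polarization, while the broad bands of the bulk metals keep $I\,N(E_F) < 1$ and the ground state nonmagnetic.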
The situation is radically different for transition metals of the 4d and 5d series. Because of the partly occupied d orbitals, their ability to magnetize is much stronger and of a fundamentally different nature compared to the jellium. In bulk, the resulting large exchange interactions of these metals are overwhelmed by the large electron kinetic energies, resulting in very large bandwidths and a nonmagnetic ground state. In the present work, we concentrate on monatomic wires made up of the 5d elements Os, Ir, Pt and Au, investigating the possibility of ferromagnetism 12 in these nanowires. Of these metals, Os and Ir exhibit monolayer magnetism on Au or Ag substrates. 13,14 Os, Ir, and Pt might conceivably develop Hund's rule magnetism in free-hanging nanowire form, due to their partly empty d shells. Au, on the other hand, is basically an sp metal but with some d orbitals quite close to the Fermi level, making it a borderline case. If strong Hund's rule magnetism develops in these wires, the number of conductance channels would be greatly influenced, and the results could show up in the form of, e.g., a strong and unusual joint magnetic field and temperature dependence of the ballistic conductance. Of course, thermal fluctuations in nanowires are expected to be very large, which would destroy long-range magnetic order in the absence of an external magnetic field. Depending on temperature and on the external field, there will nevertheless be two different fluctuation regimes: a slow one and a fast one. Slow fluctuations, such as those attainable at low temperatures and/or in the presence of a sufficiently large external field, take a nanomagnet to a superparamagnetic state, where the magnetization fluctuates between equivalent magnetic valleys, separated by, e.g., anisotropy-induced energy barriers. If the barriers are sufficiently large, the nanosystem spends most of the time within a single magnetic valley, and will for many practical purposes behave as magnetic. We may under these circumstances be allowed to neglect fluctuations altogether, and to approximate some properties of the superparamagnetic nanosystem with those of a statically magnetized one. Experimentally, evidence of one-dimensional superparamagnetism with fluctuations sufficiently slow on the time scale of the probe was recently reported for Co atomic chains deposited at Pt surface steps. 9 At the opposite extreme (a situation reached, for example, at high temperatures and in zero external field), the energy barriers are so readily overcome that the magnetic state will be totally washed away by fast fluctuations, leading to a conventional paramagnetic state. A complete description of this high-entropy state is beyond our scope here, and we have chosen, as is usually done, to approximate it with the conventional T = 0 nonmagnetic, singlet solution of the Kohn-Sham electronic structure equations. In this paper we will only deal with straight undimerized wires. This might appear oversimplified, since, for instance, it has been calculated that infinite gold wires have a local energy minimum for a zigzag structure. 15 Similar structures are also possible for the other metals studied here. Our rationale for this simplification of the wire geometry is that wires extended between two tips are inevitably subject to stretching. The simple thermodynamics causing the wire-to-tip flow of atoms and driving the thinning 16 implies a finite string tension. 6
Thus, even if a free-ended wire favored a zigzag structure, this effect will be washed out in the ultimate wire hanging between tips just before breaking of the contact. Moreover, possible dimerization of the wires, an issue whose possible relevance is restricted to gold, will not be considered here. II. METHOD In the present density-functional-based 17 electronic-structure calculations we used the all-electron full-potential linear muffin-tin orbital method (FP-LMTO). 18 This method assumes no shape approximation of the potential or wave functions. The calculations were performed using the generalized gradient approximation (GGA). 19 As a test, some calculations were also performed using the local density approximation (LDA), 20 giving results very similar to the GGA ones. Further, some calculations were double-checked using the WIEN code, 21 again with very similar results. We chose an all-electron approach in order to rid our calculations of possible sources of doubt that may arise when using pseudopotentials in the presence of magnetism and in nonstandard configurations. The calculations were performed with inherently three-dimensional codes, and thus the system simulated was an infinite two-dimensional array of infinitely long, straight wires. A one-dimensional Brillouin zone was used, i.e., the k-points form a single line, stretching along the z-axis of the wire. The Bravais lattice in the xy-plane was chosen hexagonal. Furthermore, we used non-overlapping muffin-tin spheres with a constant radius in the calculations of the equilibrium bond lengths d. The magnetic moments, band structures, conductance-channel curves, and band widths were calculated using muffin-tin spheres scaling with the bond length. Convergence of the magnetic moment was ensured with respect to the k-point mesh density, Fourier mesh density, tail energies, and wire-wire vacuum distance. We performed both scalar relativistic (SR) calculations and calculations including the spin-orbit coupling as well as the scalar-relativistic terms. The latter will be referred to as "fully relativistic" (FR) calculations in the following, although we are not strictly solving the full Dirac equation or making use of current density functional theory. In the FR calculations, the spin axis was chosen to be aligned along the wire direction. A. Bond lengths and energetics The chemical bonding in a wire is, of course, quite different from the bonding in a bulk material. In a monowire, there are only two nearest neighbors, and it is therefore expected that the bond length minimizing the total energy be smaller than in the bulk. This is indeed the case, as can be seen in Table I, where calculated bond lengths for monowires and bulk are listed, together with the experimental bulk values. Our bulk GGA calculations for the equilibrium bond lengths are in very close agreement with the experimental values, and slightly underbinding. Our corresponding LDA calculations (not shown) yield, as expected, slightly shorter bond lengths, and overbind. Our nonmagnetic nanowire calculations compare well with existing ones in Refs. 15, 16, and 22. We should perhaps stress again that, strictly speaking, a tip-suspended wire will not have a quasi-stable configuration at the bond length which minimizes the total energy, but at a slightly larger value, since it is rather the string tension than the total energy which should attain a local minimum. 6
Nevertheless, for simplicity, in the remainder of this paper, the bond length which minimizes the total energy will be called the equilibrium bond length. Table I also shows our calculated mean-field magnetic moments at the equilibrium bond lengths. The scalar relativistic (SR) calculations predict the Os and Ir wires to be magnetic at the equilibrium bond length. In contrast, the fully relativistic (FR) calculations predict a much smaller moment for Os compared to the SR calculation, no moment at all for Ir, and then, quite unexpectedly, a substantial moment in the Pt wire. Thus, the spin-orbit coupling is seen to have a profound effect on the existence and magnitude of the magnetic moments. The rightmost column in Table I lists the experimental atomic ground state configurations, showing that the free Os, Ir, Pt and Au atoms have spin moments of 4, 3, 2, and 1 µ_B, respectively. Thus, the predicted wire moments are much smaller than the magnetic moments of the free atoms. An interesting side question is whether there exists a substantial magnetostrictive effect in the wires, i.e., whether the appearance of a magnetic moment in itself causes the equilibrium bond length to increase. Although the calculated wire magnetic moments are quite large in some cases, we find that this has almost no effect on the equilibrium bond length. The calculated equilibrium bond lengths for the magnetic wires are indeed always larger, but only very slightly so, typically one or two hundredths of an Ångström. In fact, the strictive effect of the spin-orbit coupling is as large or larger (while still a small effect). For the Os and Au wires, the bond length decreases when the spin-orbit coupling is taken into account, whereas in Ir and Pt it increases. De Maria and Springborg 23 also calculated a similar decrease of the Au monowire bond length. In order to analyze the stability of wire formation as well as the stability of the magnetism in the wires, we calculated the energy gain when the wire is allowed to spin polarize, and also the energy difference between wire and bulk. The results are displayed in Table II. For Au, a bulk atom is around 2 eV more stable than the monowire, whereas for Os, Ir and Pt, this energy difference is about twice as large. This rationalizes why wire formation is easiest in Au. Energy differences between monowire and bulk have been reported earlier for Pt and Au, and our results are in good agreement with those calculations. 15,16,22 The energy gain due to spin polarization is of course a much smaller quantity, and differs greatly from element to element. For example, in the scalar relativistic calculations, the energy gain for Ir is much greater than that in Os, although the moment is larger in Os than in Ir. It is also very sensitive to the spin-orbit coupling. In the case of Os, the relative stability of the magnetic solution drops from 18 to 6 meV when spin-orbit coupling is introduced. This drop for the Os wire is to a large extent due to the magnetic moment being much smaller in the FR calculation. Such a small magnetic energy gain suggests that cryogenic temperatures could be required in order for the slow fluctuation regime to be reached, and magnetism to be observable, in these nanowires. B. Magnetic moments The magnetic moments per atom of the monowires as a function of bond length are shown in Fig. 1. The solid lines refer to the fully relativistic (FR) calculations, and the dotted lines to the scalar relativistic (SR) calculations.
The first thing to note is that all the metals studied exhibit a magnetic moment for values of the bond length at or close to equilibrium. Ir and Au merely need a slight stretch in order to spin polarize. Another general feature is that the magnetic profiles for the SR and FR calculations are very different. For instance, the SR calculation for Pt predicts this metal to be magnetic only for stretched wires, whereas the FR calculation predicts it to be magnetic in the whole range of bond lengths studied (2.2Å to 3.2Å). Also the Os wire spin polarizes in the whole range of bond lengths studied. Unexpectedly, for this metal the FR calculation predicts the magnetic moment to initially decrease with stretching, whereas the SR calculation finds a monotonically increasing magnetic moment. For Os, Ir and Pt, the magnetic moment reaches a plateau value for very large bond lengths (around and beyond 3.2Å, so large that the wires are most probably long since broken). The value of this plateau magnetic moment is close to, even if still below, the atomic spin moment. In Au, the situation is quite different from that of the other metals. The Au wire acquires a very small magnetic moment, less than 0.1 µ_B in the FR calculation, for slightly stretched bond lengths. With further stretching, the moment disappears again. Of course, it will eventually reappear at larger (but unphysical) bond lengths, because the free Au atom has a filled 5d shell and one unpaired 6s electron, giving a pure s moment of 1 µ_B. In order to shed some light on the mechanisms behind the magnetic profiles displayed in Fig. 1, we will now analyze the electronic structure of the wires, using band structures and the energy positions of the s and d levels relative to one another and to the Fermi level. Relative positions of s and d levels A determining factor for the magnetic state of transition metal atoms is the close competition between the s and d states. According to the standard Aufbau principle of orbital filling, the (n + 1)s orbitals should fill before the nd orbitals, where n is the principal atomic orbital quantum number. However, this rule is often broken for heavier elements. The reason is that, due to relativistic effects influencing the kinetic energy of the orbitals, the relevant s and d levels are very close in energy, so which one becomes populated in the end may depend on a number of factors, such as the fine balance between the repulsion of the other orbitals in the shell, the energy gained from completing a d shell (if possible), the energy cost associated with populating both orbitals in the s shell, and the form of the orbitals (due to different n). In bulk, on a surface, or in a wire, the situation is further complicated by hybridization and the accompanying broadening of the atomic levels into bands. Magnetism may not even appear at all, since for broad enough bands, the exchange energy gain due to spin polarization cannot match the increased cost in terms of kinetic energy. This is the situation for the bulk 5d transition metals and also for wires with very short bond lengths. In order to quantify the relative positions of the s and d levels for our wires, we plotted the bottom and top of the s and d bands as a function of bond length, see Fig. 2. The bottom and top of a band have been estimated using the Wigner-Seitz rule, so that the top is taken as the energy where the wave function is zero, and the bottom of the band as that where the derivative of the wave function is zero.
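Stated as equations (a schematic restatement of the rule just described, with ψ_l the radial wave function of angular momentum l and s the sphere radius at which the boundary condition is imposed):

$$ \psi_l(s, E_{\mathrm{top}}) = 0, \qquad \left.\frac{\partial \psi_l(r, E_{\mathrm{bottom}})}{\partial r}\right|_{r=s} = 0, $$

so that the band top corresponds to an antibonding boundary condition (a node at the sphere boundary) and the band bottom to a bonding one (zero slope at the boundary).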
This qualitative measure of the bandwidth tells us the relative positions of the s and d states, especially for large bond lengths, where the bands narrow into atomic-like levels. Calculations must be taken up to very large bond lengths (6 Å) in order to recover a situation close to that of free atoms. As we will see, this analysis of the relative band positions catches the main trends for the wire magnetic moments. In Os and Ir, we see that the d level is slightly above the s level at the atomic limit, with the result that the s shell fills up, giving the atomic configurations d⁶s² and d⁷s², respectively. This matches the overall tendency of the magnetic moments in Os and Ir to increase as the wire is stretched (at least for large enough bond lengths) in the following way. Two mechanisms are at work. The first, valid as long as the band widths are still substantial, is that as the d band width decreases, the spin polarization within the d shell increases due to exchange. The second, valid in the atomic limit, is that as the s shell fills up, the number of d electrons decreases, which results in an increased magnetic moment. In Pt, the s and d levels are essentially degenerate, and consequently the s shell never fills up completely (atomic configuration d⁹s¹). In Au, the d level lies clearly beneath the s level, and so the d shell is fully occupied for large enough bond lengths. This is the reason why the d magnetism in the Au wire disappears at larger bond lengths.

Band structures

Some more detailed insight regarding the shape of the magnetic profiles can be gained by analyzing the band structures, and how they change as a function of bond length. Band structures for two different bond lengths, the equilibrium bond length and a larger one of 2.8 Å, roughly representing two magnetic regimes, are shown in Fig. 3 for each of our four elements. The bands run from the zone center, Γ, to the zone edge, A, in the direction of the wire. The character of the bands close to the Fermi level is of critical importance for the moment formation, and therefore we also show character-resolved bands, see Fig. 4. We found it useful to split up the d character into d_z², d_xz + d_yz, and d_xy + d_x²−y², and so Fig. 4 has four panels, displaying separately the s, d_z², d_xz + d_yz, and d_xy + d_x²−y² characters of the bands. The vertical error bars, or "thickness", of the bands indicate the relative character weight. The data in Fig. 4 have been taken from a calculation for Pt. However, for the other metals, the relative weight of the orbitals for each band is qualitatively similar to the one shown. From Fig. 4, we see that almost all bands in the vicinity of the Fermi level have predominantly d character. In fact, there are only two bands with some s character crossing the Fermi level (see upper left panel in Fig. 4). Of these, the higher-lying band crosses the Fermi level halfway between the zone center and zone edge. This band is almost purely s at that crossing. For Ir, Os and Pt, this band actually crosses the Fermi level twice. However, the degree of s character of this band diminishes rapidly as the wave vector approaches the zone center Γ, i.e., the second crossing is d-dominated, as is evident from Fig. 4. The second of the two s-containing bands crosses the Fermi level close to the zone edge (A). At that point, it has some s character, but is in fact dominated by d_z² character.
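The role of the zone center and zone edge as band-edge points, used repeatedly below, can be made explicit with a schematic one-dimensional tight-binding band (an illustrative model, not one of our computed bands):

\[
\varepsilon(k) = \varepsilon_0 - 2t\cos(ka), \qquad
\left.\frac{d\varepsilon}{dk}\right|_{k=0,\ \pi/a} = 0, \qquad
g(\varepsilon) \propto \left|\frac{d\varepsilon}{dk}\right|^{-1} \sim |\varepsilon - \varepsilon_{\mathrm{edge}}|^{-1/2},
\]

where a is the bond length and t a hopping integral. The group velocity vanishes at Γ (k = 0) and A (k = π/a), producing the square-root divergence of the one-dimensional density of states at the band edges.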
At Γ and A, both of them critical points by symmetry, all band dispersions are horizontal, giving rise to very sharp band-edge van Hove singularities, a feature due to the one-dimensionality of the systems. Since the bands have mostly d character at the edges, the exchange energy gain will be rather large if a band spin-splits so that one of the spin-channel band edges ends up above the Fermi level, and the other one below. Strictly speaking, the spin-orbit coupling will mix the two spin channels so that, in general, an eigenvalue will have both majority and minority spin character. However, in the present calculations, this mixing is so small, typically just a few percent, that it is irrelevant for the qualitative discussion we make here. Thus, if a band edge ends up sufficiently near the Fermi level, we may expect a magnetic moment to develop. While apparently similar to the magnetization of the jellium wire, the magnetism here is much more substantial, the d states involving a much stronger Hund's rule exchange. We now go through all four metals, starting with Os, analyzing how the band edges move as a function of bond length, and how this affects the magnetic state of the wires.

Os: The magnetism in the Os wire has two regimes, one for bond lengths below 2.6 Å, and one for bond lengths above this value. Below 2.6 Å, the magnetic moment actually decreases with increasing bond length. At the equilibrium bond length, only one band edge (of mostly d_z² character and some s character), at A, has spin-split around the Fermi level (see panel a in Fig. 3). This gives rise to a small moment of a few tenths of a µB. As the bond length increases, this band edge moves downward, through the Fermi level, and the magnetic moment is killed off. At the same time, the band edges (at Γ and A) of the rather flat d_xy + d_x²−y² band come sufficiently close to the Fermi level, causing a large splitting (see panel b). This results in a rapid increase of the magnetic moment, creating the second, large-moment, magnetic regime.

Ir: With one more electron than Os, the bands of the Ir wire lie generally deeper. At the equilibrium bond length, the band edge responsible for the low-moment regime in Os lies well below the Fermi level and is inactive. With increased bond length, the A edge of the flat d_xy + d_x²−y² band gradually sinks toward the Fermi level and eventually causes a large magnetic splitting. Thus, the whole magnetic regime in Ir is similar to the large-moment regime in Os.

Pt: In Pt, the very same flat d_xy + d_x²−y² band leading to Hund's rule magnetism in Os and Ir behaves in the opposite way. At very small bond lengths (2.2 Å), this band is entirely occupied, and it moves upwards (instead of downwards) with increased bond length. As the edge at A touches the Fermi level, a magnetic moment develops. Two other bands, a d_z²-dominated one with band edge at A and a d_xz + d_yz-dominated one with band edge at Γ, are also important. They are just slightly higher in energy than the first band edge, and with increasing bond length, they move to lower energies. Thus, these three band edges become increasingly degenerate with stretching, and split around the Fermi level at 2.4 Å, causing a rapid increase in the magnetic moment.

Au: For Au, the d bands causing the magnetism in Os, Ir, and Pt lie well below the Fermi level and cannot give rise to a magnetic moment. The magnetically active band edge is at Γ, and belongs to a band with relatively high dispersion and d_xz + d_yz character.
With increasing bond length, this band edge moves downward, and as it passes through the Fermi level it creates a small magnetic moment. As can be seen in Fig. 3, panel h, the spin splitting of the band edge is really very tiny, and the magnetism in Au is reminiscent of the magnetism of the jellium cylinder, i.e., a band-edge phenomenon rather than Hund's rule driven spin polarization. Further stretching causes this edge to sink below the Fermi energy, and the magnetic moment consequently disappears. It is not clear at present whether this moment has any real physical significance.

C. Ballistic conductance channels

As seen from the above discussion of the nanowire band structures, spin-splitting of bands does alter the number of bands, or channels, n crossing the Fermi level. By virtue of the Landauer formula

G = (e²/h) Σ_i τ_i ,

where τ_i is the transmission through channel i, the measured ballistic conductance has, in units of ½G₀ = e²/h, precisely the number of bands n crossing the Fermi level as its upper limit. Thus, the conductance through the wires should be affected by magnetism. Fig. 5 shows how the number n of conducting channels is influenced by nanowire spin polarization and bond length. For Os and Pt in their magnetized state at the equilibrium bond length, n is large, 11 and 8, respectively, against 12 and 10 in the nonmagnetic state. Magnetism has decreased the number of channels, but not dramatically so. Should all these channels transmit fully, a large ballistic conductance of 4 G₀ for Pt or 5.5 G₀ for Os would ensue, to be compared with nonmagnetic values of 5 G₀ and 6 G₀, respectively. In reality, however, most of the open channels have d character. While the conductance of the broad-band s channels is generally close to one (in units of ½G₀) owing to nearly complete transmission, that of the narrow-band d channels is much smaller, with a high reflection at the lead-wire junction, generally dependent on the detailed junction geometry. Of the conductance channels in these metals, two have s character, both in the spin-polarized and nonmagnetic calculations, bringing an expected contribution close to G₀ to the total conductance. All the other channels have predominantly d character. Their contribution to the conductance is therefore expected to be much smaller than ½G₀ per channel. We may thus expect these wires to have a conductance above G₀ but well below 4 G₀ and 5.5 G₀, respectively. Since the scattering of the d waves at the junctions depends strongly on the geometry, whose details will change at every realization, we also expect the conductance histograms to exhibit peaks that could be both broad and poorly reproducible. For Ir, our calculations indicate that the conductance at the equilibrium bond length should lie between G₀ and 5 G₀. For these three metals, according to our calculations, the number of conductance channels decreases, by and large, as the wire is stretched. However, the disappearing channels are always d-dominated. Of the metals Os, Ir and Pt, measured conductance histograms have been published only for Pt so far. Smit et al. [24] find a large, broad peak centered around 1.5 G₀ and a smaller bump centered around 2.2 G₀. The conductance histograms reported by Yanson [25] are similar in structure, but the positions are shifted, to around 1.7 G₀ and 3 G₀. Rodrigues et al. [26] find a peak centered around 1.4 G₀, and in addition a peak at very low conductance, around 0.5 G₀. In the Au wire, we find theoretically four open conductance channels.
Two of these are s dominated, just as for the other metals, and two are d dominated. However, the d channels merely touch the Fermi level, and are therefore expected to have a very marginal effect on the conductance. Experimentally, gold nanowires yield a rather sharp peak between 0.9 G₀ and G₀, confirming that the d influence is probably very small.

IV. CONCLUSIONS

In conclusion, our calculations suggest that the Os, Ir, and Pt monatomic nanowires should exhibit spontaneous Hund's rule superparamagnetism for values of the bond length at equilibrium or, in the case of Ir, slightly above. The energy gain connected with the magnetic state is small, less than 10 meV for Os and Pt at the equilibrium bond length. Au nanowires also magnetize in theory, but the calculated energy gain is an order of magnitude smaller than for the other metals. From a methodological point of view, the spin-orbit coupling is found to be crucial for a correct description of the magnetic state, as is probably the use of all-electron techniques [22]. How might this magnetism be detected experimentally? Merely measuring the conduction through the wire at one single temperature and magnetic field strength will most probably not give conclusive information regarding the magnetic state of the atoms in the wire, since the transmission through d channels is rather poor, varies greatly with geometry, and can hardly be regarded as quantized. A key experiment would be to measure ballistic conductance as a function of temperature and of external magnetic field. At high temperature and zero field, the nanowire should be nonmagnetic, due to fast fluctuations. High field and low temperature would take the nanowire to a magnetic, or in any case to a slowly fluctuating superparamagnetic, regime. In this transition the number of conductance channels should diminish, and so should the conductance, even if not by very much. At sufficiently low temperatures, the conductance should definitely be field sensitive. Such behavior would be a clear indication of a superparamagnetic state. In some situations, more majority bands may cross the Fermi level than minority bands, leading to partial spin polarization of the transmitted electron current. If this current could be measured, it would be a very direct way of confirming the existence of a superparamagnetic state. Fractional conductance peaks have been observed experimentally, for example the ½G₀ peak reported by Ono for Ni [27], and very recently by Rodrigues et al. for Co, Pd and Pt [26], at room temperature and zero field. These results are intriguing, since we expect that the s channel alone should yield a conductance larger than that. The peaks observed in Co, Pd, and Pt, centered around ½G₀, are rather broad, which suggests that they might not be caused by one single fully transmitting spin-polarized channel, but perhaps by several poorly conducting channels. We discussed in previous work [28] a possibility to obtain conductance G₀ from a magnetic transition metal nanowire with a magnetization reversal occurring inside the nanowire. This could further drop to ½G₀ in an asymmetrical situation, with a net prevalence of majority spins over minority spins. Although it is not inconceivable that this might occur in Co and Ni, we are unable to explain at the moment how that kind of state could be sustained in Pt, and by extension in Pd too, at the experimental conditions of zero field and room temperature.
It would anyway be interesting to see the effect of cooling and of an external field on these results.
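As a closing illustration of the channel-counting and energy-scale arguments above, the following minimal Python sketch puts numbers on them. The per-channel transmission values are hypothetical placeholders chosen only to reflect the qualitative expectation stated in the text (near-unit s transmission, strongly reflected d channels); they are not computed values from this work.

# Landauer estimate: each spin-resolved channel contributes at most
# (1/2) G0 = e^2/h; the conductance is (1/2) G0 times the sum of transmissions.
def conductance_G0(transmissions):
    """Ballistic conductance in units of G0 = 2e^2/h."""
    return 0.5 * sum(transmissions)

# Magnetic Pt wire at equilibrium: 8 open channels (2 s-like, 6 d-like).
# Assumed transmissions: s channels ~0.95, d channels ~0.2 (placeholders).
pt = [0.95, 0.95] + [0.2] * 6
print(f"Pt sketch: {conductance_G0(pt):.2f} G0 "
      f"(channel-count upper bound: {conductance_G0([1.0] * 8):.1f} G0)")

# Energy scale: magnetic energy gain of ~6 meV (Os, FR) versus k_B T,
# illustrating why cryogenic conditions may be needed to observe magnetism.
k_B = 8.617e-5  # Boltzmann constant in eV/K
for T in (300.0, 77.0, 4.2):
    print(f"T = {T:5.1f} K -> k_B T = {1e3 * k_B * T:5.2f} meV (gain ~ 6 meV)")

With these placeholder transmissions the estimate lands between G₀ and 4 G₀, consistent with the broad experimental peaks near 1.4–1.7 G₀ discussed above, and k_B T drops below the ~6 meV magnetic energy gain only at liquid-helium temperatures.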
Therapeutic Effects of PPARα Agonists on Diabetic Retinopathy in Type 1 Diabetes Models

Retinal vascular leakage, inflammation, and neovascularization (NV) are features of diabetic retinopathy (DR). Fenofibrate, a peroxisome proliferator–activated receptor α (PPARα) agonist, has shown robust protective effects against DR in type 2 diabetic patients, but its effects on DR in type 1 diabetes have not been reported. This study evaluated the efficacy of fenofibrate on DR in type 1 diabetes models and determined if the effect is PPARα dependent. Oral administration of fenofibrate significantly ameliorated retinal vascular leakage and leukostasis in streptozotocin-induced diabetic rats and in Akita mice. Favorable effects on DR were also achieved by intravitreal injection of fenofibrate or another specific PPARα agonist. Fenofibrate also ameliorated retinal NV in the oxygen-induced retinopathy (OIR) model and inhibited tube formation and migration in cultured endothelial cells. Fenofibrate also attenuated overexpression of intercellular adhesion molecule-1, monocyte chemoattractant protein-1, and vascular endothelial growth factor (VEGF) and blocked activation of hypoxia-inducible factor-1 and nuclear factor-κB in the retinas of OIR and diabetic models. Fenofibrate's beneficial effects were blocked by a specific PPARα antagonist. Furthermore, Pparα knockout abolished the fenofibrate-induced downregulation of VEGF and reduction of retinal vascular leakage in DR models. These results demonstrate therapeutic effects of fenofibrate on DR in type 1 diabetes and support the existence of the drug target in ocular tissues and via a PPARα-dependent mechanism. Diabetes 62:261-272, 2013

With the rising incidence of diabetes, the prevalence of the vascular complications of diabetes is increasing, in spite of recent advances in therapies targeting hyperglycemia, hypertension, and dyslipidemia (1,2).
Diabetic retinopathy (DR) is a feared and common microvascular complication of diabetes and one of the most common sight-threatening conditions in developed countries (3). DR is a chronic, progressive, and multifactorial disorder, primarily affecting retinal capillaries (4,5). Diabetes induces retinal inflammation, blood-retinal barrier breakdown, and increased retinal vascular permeability, leading to diabetic macular edema (DME) (6). In proliferative DR, overproliferation of capillary endothelial cells results in retinal neovascularization (NV), which can cause severe vitreous cavity bleeding, retinal detachment, and vision loss (7,8). Unlike in type 2 diabetes, obesity, the metabolic syndrome, and dyslipidemia are less common in type 1 diabetes, although when present in people with type 1 diabetes, they are risk factors for micro- and macrovascular complications (9,10). Retinopathy in both type 1 and type 2 diabetes involves retinal vascular leakage, inflammation, NV, and fibrosis (11). Even though it is well established that vascular endothelial growth factor (VEGF) mediates the pathologic processes of vascular leakage and angiogenesis in DR, anti-VEGF compounds are not effective in all patients with DR (12). This may be ascribed to the fact that DR is mediated by multiple angiogenic, inflammatory, and fibrogenic factors such as VEGF, tumor necrosis factor-α (13), intercellular adhesion molecule-1 (ICAM-1) (14), and connective tissue growth factor (15); thus, blockade of VEGF alone is not sufficient to ameliorate all of the perturbed signaling. Fenofibrate, a peroxisome proliferator-activated receptor α (PPARα) agonist, available clinically for >30 years for the treatment of dyslipidemia (16,17), is particularly effective in improving the lipid profile in hypertriglyceridemia and low HDL syndromes (18), and in reducing some cardiovascular events (19). Recent studies reported that activation of PPARα suppresses transforming growth factor-α-induced matrix metalloproteinase-9 expression in human keratinocytes (20), blocks tumor angiogenesis via vascular NADPH oxidase (21), modulates endothelial production of inflammatory factors (22), and improves wound healing in pediatric burn patients (23). In the retinal pigment epithelium, fenofibrate modulates cell survival signaling (24) and reduces diabetic stress-induced fibronectin and type IV collagen overexpression (25). Moreover, fenofibrate also prevents interleukin-1β-induced retinal pigment epithelium disruption through inhibition of the activation of AMP-activated protein kinase (26). Recent studies suggest that PPARα is an emerging therapeutic target in diabetic microvascular complications (27-29). Two recent, large, prospective, placebo-controlled clinical trials have demonstrated protective effects of fenofibrate against DR in type 2 diabetic patients. The Fenofibrate Intervention in Event Lowering in Diabetes (FIELD) Study reported that fenofibrate monotherapy significantly reduced the cumulative need for laser therapy for DR by 37% (30), nephropathy progression by 14% (31), and amputations (23), including microvascular amputations, by 37% (32) in type 2 diabetic patients. The Action to Control Cardiovascular Risk in Diabetes (ACCORD) Lipid Study of combination simvastatin and fenofibrate demonstrated a 40% reduction in progression of proliferative DR in type 2 diabetic patients over simvastatin only (33). Despite these exciting clinical findings, several unanswered questions remain.
Is fenofibrate effective against DR in type 1 diabetes? Is the fenofibrate effect on DR a direct action on retinal vasculature or a consequence of the systemic lipid-lowering effect? Are the ocular fenofibrate effects PPARα dependent? This study was designed to address these important questions. In the current study, we explore whether fenofibrate has therapeutic effects on DR in type 1 diabetes animal models and on ischemia-induced retinal NV, and whether such effects are dependent on PPARα.

RESEARCH DESIGN AND METHODS

Animals. Brown Norway (BN) rats, male Akita mice and their age-matched wild-type (Wt) littermates (C57BL/6 mice), and Ppara−/− mice were purchased from The Jackson Laboratory (Bar Harbor, ME). Rodents were kept in a 12-h light-dark cycle with an ambient light intensity of 85 ± 18 lux. Care, use, and treatment of the animals were in strict agreement with the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research, and local ethics committee approval was obtained.
Oral fenofibrate administration. Fenofibrate (Sigma-Aldrich, St. Louis, MO) was given as a 0.25 or 0.15% admixture with rodent chow (5001; LabDiet/TestDiet, Ft. Worth, TX). Diabetic or nondiabetic Wt mice, male heterozygous Akita mice at 1 week after diabetes onset, and male streptozotocin (STZ)-induced diabetic rats at 1 week after diabetes onset were fed chow with fenofibrate for 3 and 7 weeks (mice and rats, respectively).
Western blot analysis. For in vivo studies, two eyecups from each rodent were combined and homogenized, and protein concentration was measured by the Bradford method (34). The same amount (50 µg) of total protein from each rodent was used for Western blot analysis using the enhanced chemiluminescence system, as described previously (35).
Immunohistochemistry. Frozen retinal sections (4 µm) were incubated overnight (4°C) with primary antibodies, followed by several PBS washes. The sections were incubated (30 min) with secondary antibodies, and the nuclei were counterstained with DAPI (Sigma-Aldrich). The sections were then mounted in antifade medium and viewed on a laser-scanning confocal microscope (model LSM 510; Carl Zeiss Meditec, Jena, Germany). The secondary antibodies were fluorescein isothiocyanate (FITC) (or Texas Red)-conjugated goat anti-mouse IgG (Jackson ImmunoResearch Laboratory, Inc., West Grove, PA), fluorescent anti-rat IgG with mouse adsorbed (Vector Laboratories, Burlingame, CA) (or Texas Red-conjugated goat anti-rat IgG; Invitrogen, Carlsbad, CA), and Texas Red-conjugated goat anti-rabbit IgG (Jackson ImmunoResearch Laboratory, Inc.) at a dilution of 1:200.
The oxygen-induced retinopathy model and analysis of retinal NV. The oxygen-induced retinopathy (OIR) model was induced in BN rats as described previously (36). BN rats at postnatal day 7 (P7) were placed in a 75% oxygen chamber until P12. Fluorescein retinal angiography and quantification of preretinal vascular cells were performed at P18 as previously described (37).
STZ-induced diabetic rats. Experimental diabetes was induced by an intraperitoneal injection of STZ (50 mg/kg) into anesthetized BN rats (8 weeks old) after an overnight fast. To induce diabetes in mice, the mice received five daily injections of STZ. Blood glucose levels were measured 48 h after the STZ injection and monitored weekly thereafter. Only animals with consistently elevated glucose levels >350 mg/dL were considered diabetic. No exogenous insulin treatment was given.
Intravitreous injection.
In brief, animals were anesthetized with a 50:50 mix of ketamine (100 mg/mL) and xylazine (20 mg/mL), and pupils were dilated with topical phenylephrine (2.5%) and tropicamide (1%). A sclerotomy was created ~0.5 cm posterior to the limbus, and a glass injector (~33 gauge) connected to a syringe was used to deliver fenofibrate in 10% rat serum, 0.1% DMSO, and 0.9% NaCl into the vitreous, with the same volume of vehicle injected into the contralateral eye as control. Fenofibrate and GW7647 (Sigma-Aldrich) were dissolved in DMSO and diluted with 10% rat or mouse serum before injection.
Retinal angiography. Rats were anesthetized and perfused with 50 mg/mL 2 × 10⁶-molecular-weight FITC-dextran (Sigma-Aldrich) as described by Smith et al. (37). The animals were immediately killed. The eyes were enucleated and fixed with 4% paraformaldehyde in PBS for 10 min. The retina was separated and flat mounted, and the vasculature was then examined under a fluorescence microscope (Axioplan2 Imaging; Carl Zeiss) by an operator masked to treatment allocation. For quantification of preretinal vascular cells, eyes were fixed, sectioned, and stained as described previously (38). The preretinal nuclei were counted by an operator masked to therapy, averaged, and compared.
Retinal vascular permeability assay. Retinal vascular permeability was measured according to a documented method (39) with minor modifications. Evans blue (Sigma-Aldrich) was injected through the femoral vein (10 mg/kg body weight) under microscopic inspection. Two hours after the injection, the mice were perfused via the left ventricle with PBS (pH 7.4). Evans blue dye in the retina was measured and normalized by total retinal protein concentration.
Retinal vascular leukostasis assay. The assay followed a documented protocol (40). In brief, anesthetized rats were perfused with PBS to remove nonadherent leukocytes in vessels. The adherent leukocytes in the vasculature and the vascular endothelial cells were stained with FITC-conjugated concanavalin-A (40 µg/mL). The retinae were then flat mounted, and adherent leukocytes in the vasculature were counted under a fluorescence microscope by an operator masked to treatment allocation.
ELISA for retinal monocyte chemoattractant protein-1 and soluble ICAM-1. The eyecups or retinae were homogenized and centrifuged. Monocyte chemoattractant protein-1 (MCP-1) (Assay Design, Ann Arbor, MI) and soluble ICAM-1 (sICAM-1) levels (R&D Systems, Minneapolis, MN) were measured using ELISA according to the manufacturer's instructions and normalized by total protein concentration in the retina.
Triglyceride measurement. The measurement of triglyceride concentrations in plasma followed the manufacturer's procedure (Triglyceride Determine Kit; Sigma-Aldrich). In brief, blood was collected and centrifuged at 700g for 10 min at 4°C. Triglyceride hydrolysis in the plasma was initiated enzymatically by adding lipase and incubating at 30°C for 10 min to convert triglyceride to free fatty acids and glycerol. The released glycerol was subsequently measured by a coupled enzymatic reaction system with a colorimetric readout at 540 nm. To determine the total triglyceride level, the glycerol was further catalyzed at 37°C for 15 min by a reconstituted triglyceride reagent including ATP, glycerol kinase, glycerol phosphate oxidase, and peroxidase; the released quinoneimine dye, directly proportional to the triglyceride concentration of the sample, was measured and recorded at 540 nm.
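As a concrete illustration of the normalization step in the permeability assay just described, the Python sketch below computes Evans blue extravasation per unit of retinal protein. The function name, units, and numbers are illustrative assumptions, not values from this study.

# Sketch of the Evans blue permeability readout: retinal dye content
# normalized to total retinal protein (assumed units: ng dye, mg protein).
def evans_blue_leakage(dye_ng: float, protein_mg: float) -> float:
    """Evans blue per mg retinal protein (ng/mg)."""
    return dye_ng / protein_mg

# Hypothetical example values, for illustration only:
control = evans_blue_leakage(dye_ng=8.0, protein_mg=1.2)
diabetic = evans_blue_leakage(dye_ng=24.0, protein_mg=1.1)
print(f"control: {control:.1f} ng/mg; diabetic: {diabetic:.1f} ng/mg; "
      f"fold increase: {diabetic / control:.1f}x")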
Retinal endothelial cell tube formation. Primary bovine retinal endothelial cells (RECs) at passage five were used throughout the study. After pipetting 50 µL of ice-cold Matrigel into 12-well plates and allowing it to solidify at 37°C for 30 min, RECs were overlaid onto the Matrigel with or without 50 µmol/L fenofibrate in the media, and tube formation was examined at 6 h by an operator masked to treatment identity.
Endothelial cell scratch wound assay. Eighty-percent confluent RECs were wounded by drawing a line with a sterile 200-µL pipette tip across the monolayer surface. The cells were then cultured for 24 h. The average linear migration rate was calculated by tracing the border of the cell monolayer on both sides of the wound at 0 and 24 h, measuring the cell-free area over a fixed length along the wound, by an operator masked to cell treatment.
Transwell insert cell migration assay. The undersurfaces of Transwell motility chamber inserts of a 96-well Transwell (Neuro Probe, Inc., Gaithersburg, MD) were coated with or without 10 µg/mL mouse cellular fibronectin, and lipophilic carbocyanine (DiI)-labeled RECs (Invitrogen, Grand Island, NY) were cultured in the upper chamber of the inserts in the presence of fenofibrate at various concentrations (0, 50, 100, and 200 µmol/L). After 6 h incubation, the cells on the upper surface of the membrane were removed, and the fluorescence of the cell monolayer on the other side was determined.
Statistical analysis. Quantitative data were analyzed and compared using the Student t test for comparisons of two groups and one-way ANOVA for studies of more than two groups. Statistical significance in multiple groups was determined by Tukey post hoc analysis, and statistical significance was set at P < 0.05.

RESULTS

Fenofibrate attenuates retinal vascular permeability in type 1 diabetes models. To determine if fenofibrate decreases retinal vascular leakage in type 1 diabetic rodents, STZ-induced diabetic rats at 1 week after diabetes onset were fed chow containing fenofibrate for 7 weeks. Controls were age-matched nondiabetic rats and diabetic rats fed regular chow. Retinal vascular leakage was evaluated using Evans blue dye as tracer, with normalization to total retinal protein concentration. Oral fenofibrate treatment significantly reduced retinal vascular leakage in STZ-diabetic rats, compared with the untreated diabetic rats, to a level similar to nondiabetic rats (Fig. 1A). Similarly, 3 weeks of oral fenofibrate treatment of Akita mice, a genetic model of type 1 diabetes, also significantly reduced retinal vascular leakage in diabetic mice to a level similar to the age-matched nondiabetic mice (Fig. 1B).
Fenofibrate reduces retinal vascular leukostasis in type 1 diabetic rats. We examined fenofibrate effects on leukocyte adherence (leukostasis) in the retinal microvasculature in STZ-induced diabetic rats. Diabetic and nondiabetic rats were fed chow containing fenofibrate for 7 weeks, and retinal leukostasis was examined. Unlike the retinal vasculature in nondiabetic rats (Fig. 2A and B), multiple adherent leukocytes were observed in the retinal vasculature of untreated diabetic rats (Fig. 2C and D), but there were fewer leukocytes in the fenofibrate-treated diabetic rats (Fig. 2E and F). There were significantly fewer adherent leukocytes in the fenofibrate-fed diabetic rats compared with untreated diabetic rats (P < 0.001), to a level comparable to the retinae of nondiabetic rats (Fig. 2G).
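The group comparisons reported throughout these Results follow the scheme given under Statistical analysis above. A minimal Python sketch of that analysis, using placeholder data rather than the study's measurements:

# Sketch of the reported analysis: Student's t test for two groups,
# one-way ANOVA with Tukey post hoc for more than two groups (alpha = 0.05).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
nondiabetic = rng.normal(1.0, 0.2, 7)   # placeholder leakage values, n = 7/group
diabetic = rng.normal(2.5, 0.3, 7)
fenofibrate = rng.normal(1.4, 0.3, 7)

t_stat, p_two = stats.ttest_ind(diabetic, nondiabetic)       # two-group comparison
f_stat, p_anova = stats.f_oneway(nondiabetic, diabetic, fenofibrate)

values = np.concatenate([nondiabetic, diabetic, fenofibrate])
groups = ["nondiabetic"] * 7 + ["diabetic"] * 7 + ["fenofibrate"] * 7
print(f"t test: P = {p_two:.4f}; ANOVA: P = {p_anova:.4f}")
print(pairwise_tukeyhsd(values, groups, alpha=0.05))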
Fenofibrate attenuates overexpression of inflammatory factors in the retinae of type 1 diabetic animals. Levels of ICAM-1 and MCP-1, which promote leukostasis and leukocyte infiltration (41), were measured by Western blot analysis and ELISA in the retinae of age-matched Akita mice fed standard chow (control) and those fed fenofibrate chow. As shown in Fig. 3A and B, fenofibrate treatment significantly reduced retinal levels of MCP-1 and ICAM-1. As shown by immunohistochemistry, fenofibrate also decreased levels of NF-κB and attenuated NF-κB nuclear translocation in the retina of Akita mice (Fig. 3C-H).
Intraocular administration of fenofibrate reduces retinal vascular leakage in diabetic and OIR rats. As fenofibrate decreases hepatic VLDL production (42), we determined if fenofibrate's effects on DR were via its systemic effects or via direct effects on the retina. We injected 5 µL of 125 µmol/L fenofibrate into the vitreous of one eye and the same volume of vehicle (10% rat normal serum in DMSO) into the contralateral eye of STZ-induced diabetic rats. As shown by the retinal permeability assay, intraocular injection of fenofibrate significantly reduced retinal vascular leakage in the diabetic rats, when compared with the vehicle control (Fig. 4A). Similarly, 3 µL of 125 µmol/L fenofibrate was injected into the vitreous of OIR rats at age P12, a model of ischemia-induced retinopathy. As shown by the retinal vascular permeability assay at P16, intraocular injection of fenofibrate significantly reduced vascular leakage in the retina, compared with the contralateral eye injected with vehicle only (Fig. 4B).
Intraocular injection of fenofibrate attenuates retinal NV in OIR rats. To evaluate the effect of fenofibrate on retinal NV, fenofibrate was injected into the vitreous of OIR rats (3 µL of 125 µmol/L fenofibrate) at P12. Fluorescein angiography at P18 showed that the fenofibrate-treated eyes developed less severe retinal NV (Fig. 5), compared with the contralateral eyes injected with the vehicle only. Quantification of preretinal NV cells on cross-sections of OIR eyes showed that the eyes injected with fenofibrate developed significantly fewer preretinal vascular cells, relative to the vehicle control in the contralateral eyes (Fig. 5C-E), supporting an inhibitory effect of fenofibrate on ischemia-induced NV.
Intraocular injection of fenofibrate ameliorates retinal inflammation in OIR rats. To evaluate the direct effects of fenofibrate on VEGF overexpression, fenofibrate was injected into the vitreous of OIR rats at P12, and vehicle was injected into the contralateral eyes; retinal VEGF levels were measured at P16. As shown by Western blot analysis, fenofibrate greatly decreased retinal VEGF levels in the OIR rats, compared with vehicle control (Fig. 6A). Similarly, immunostaining showed that the immunosignals of VEGF and hypoxia-inducible factor-1α (HIF-1α), a transcription factor activating VEGF in ischemic conditions, were decreased by fenofibrate in the inner retina of OIR rats (Fig. 6B-G). These results indicate that intraocular administration of fenofibrate attenuated ischemia-induced HIF-1 activation and VEGF overexpression in the retina.
Fenofibrate inhibits REC tube formation and migration. In the tube formation assay, RECs cultured in the absence of fenofibrate aggregated to form a tube-like pattern on Matrigel, whereas RECs exposed to 50 µmol/L fenofibrate did not form the tube-like structures (Fig. 7A and B).
REC migration was also evaluated using the scratch wound healing assay in primary REC monolayers, which showed that fenofibrate-treated RECs had substantially decreased motility, as measured 24 h after wounding (Fig. 7C and D). Additionally, the Transwell cell migration assay demonstrated that the number of RECs that migrated through the filter was significantly decreased by fenofibrate, compared with the vehicle-only control (Fig. 7E-G). These results support that fenofibrate inhibits REC migration.
The therapeutic effects of fenofibrate on DR in type 1 diabetes models are PPARα dependent. To investigate if the therapeutic effects of fenofibrate on DR are via a PPARα-dependent mechanism, we activated PPARα by intravitreal injection of GW590735 (2 µL of 100 nmol/L), another PPARα agonist with a chemical structure different from fenofibrate, into diabetic rats. The permeability assay showed that GW590735 significantly reduced retinal vascular leakage in diabetic rats (Fig. 8A). We also used Ppara−/− mice for the OIR and diabetic models to evaluate fenofibrate's efficacy. Wt and Ppara−/− mice with 4 weeks of STZ-induced diabetes and age-matched Wt controls were fed fenofibrate chow (as described above) for another 6 weeks. The permeability assay showed that fenofibrate significantly reduced retinal vascular leakage in Wt mice with diabetes, but not in diabetic Ppara−/− mice (Fig. 8B). In the Ppara−/− mice with OIR, an intravitreal injection of fenofibrate did not decrease retinal VEGF levels (Fig. 8C), demonstrating that the beneficial effects of fenofibrate on DR are through PPARα activation.

FIG. 4. Intraocular injection of fenofibrate reduces retinal vascular leakage in STZ-induced diabetic rats and in OIR rats. A: STZ-induced diabetic rats were injected with 5 µL of 125 µmol/L fenofibrate into the vitreous of one eye and the same volume of vehicle into the contralateral eye, 6 weeks after diabetes onset. Four days after the injection, retinal vascular leakage was quantified by the vascular permeability assay (mean ± SD; n = 5). B: OIR rats were injected with 3 µL of 125 µmol/L fenofibrate into the right vitreous cavity at P12 (immediately after they were removed from 75% oxygen), and the same volume of vehicle was injected into the left vitreous cavity as control. At P16, retinal vascular leakage was quantified by the vascular permeability assay (mean ± SD; n = 5). **P < 0.01. DM, diabetes.

FIG. 5. Intraocular delivery of fenofibrate ameliorates ischemia-induced retinal NV in OIR rats. Rats were exposed to 75% oxygen from P7 to P12, returned to room air, and received an intravitreal injection of 3 µL per eye of 125 µmol/L fenofibrate into the vitreous cavity of the right eye, with the same amount of vehicle into the left vitreous cavity as control at P12.

DISCUSSION

Two independent, large clinical studies demonstrated substantial protective effects of fenofibrate against diabetic eye complications in type 2 diabetic patients (29,30). Here we provide the first evidence that fenofibrate also has therapeutic effects on DR in two type 1 diabetes models and on ischemia-induced retinal NV. Furthermore, we have demonstrated that the therapeutic effect of fenofibrate on DR can be achieved by intravitreal injection, suggesting that the drug's protective effects are independent of its systemic effects.
Toward its mechanism of action, we have shown that the anti-inflammatory effect of fenofibrate may be through downregulation of ICAM-1 and MCP-1 expression and inhibition of NF-κB signaling in the diabetic retina. We have also shown that the anti-NV effect of fenofibrate may be ascribed to its inhibition of hypoxia-induced activation of the HIF-1 pathway and, subsequently, attenuation of VEGF overexpression. More importantly, our results using another PPARα agonist, a PPARα antagonist, and Ppara−/− mice suggest that the therapeutic effects of fenofibrate on DR occur through a PPARα-dependent mechanism. Chronic inflammation, including increased vascular leukostasis, which damages the retinal endothelium and promotes vascular leakage, has been shown to play a major pathogenic role in DR. Leukostasis can also lead to retinal capillary closure, causing nonperfusion of vessels and local ischemia, which subsequently induces overexpression of VEGF and other proinflammatory factors and promotes further vascular leakage, leading to clinical DR and DME (43,44). Our results demonstrate that fenofibrate significantly decreases retinal leukostasis and retinal levels of VEGF and the proinflammatory factors ICAM-1 and MCP-1 in two type 1 diabetes models, STZ-induced diabetic rats and Akita mice. NF-κB signaling plays a key role in upregulation of inflammatory factors in DR (45). HIF-1 is a major transcription factor activating VEGF expression in ischemic conditions, such as in the diabetic retina, which contributes to vascular leakage and NV in DR (46). Our results show that fenofibrate inhibits activation of both NF-κB and HIF-1 signaling, which may account for its anti-inflammatory and anti-NV effects in DR models. Retinal vascular leakage is a major cause of DME. Our in vivo retinal vascular permeability assays have shown that oral administration of fenofibrate significantly reduces retinal vascular leakage in both STZ-diabetic rats and Akita mice, without significantly lowering blood glucose levels or body weights (Supplementary Fig. 4). In the human clinical trials, such as the FIELD study (30), fenofibrate use was not associated with significant differences in HbA1c or body weight relative to placebo use. Furthermore, intraocular injection of fenofibrate also attenuated retinal vascular leakage in OIR rats. These results support that fenofibrate may have therapeutic effects on DME in type 1 diabetes. They are also in keeping with the report that an early fibrate, clofibrate, reduces retinal hard exudates (47). To confirm that the beneficial effects of fenofibrate on retinal inflammation and retinal vascular leakage are a local ocular effect and not secondary to its systemic effects, we injected fenofibrate into the vitreous of STZ-diabetic and OIR rats. Direct ocular delivery of fenofibrate decreased retinal inflammation and retinal vascular leakage to a level comparable to the nondiabetic control rats. Ocular injection of fenofibrate also ameliorated retinal NV in the OIR model, a model of ischemia-induced retinal NV without diabetes and dyslipidemia. These results indicate that the drug targets for fenofibrate are present in and/or are accessible via the retina. Our studies provide supportive evidence that fenofibrate's benefit on DR in humans is chiefly due to direct effects on ocular tissues rather than a consequence of systemic effects, such as decreased hepatic VLDL production and clearance.
This notion is consistent with the findings from the FIELD and ACCORD studies that the ocular benefits of fenofibrate did not correlate with changes in the lipid profile (30,33). Fenofibrate has been used clinically for many years to treat dyslipidemia (16,17), but the recent FIELD and ACCORD clinical trials both reported the surprising finding of fenofibrate benefit on DR in type 2 diabetic patients (30,33). A natural question is whether the beneficial effects of fenofibrate on DR are through activation of PPARα or are off-target effects. To address this intriguing question, we conducted several experiments, including the use of another PPARα agonist, a PPARα antagonist, and PPARα knockout mice. First, intraocular delivery of another PPARα agonist with a chemical structure distinct from fenofibrate reduced retinal vascular leakage in a DR model, similar to fenofibrate. Second, the beneficial effects of fenofibrate on DR were diminished by a PPARα antagonist (Supplementary Fig. 2). Third, the therapeutic effects of fenofibrate were abolished when PPARα was deficient, as in the Ppara−/− mice. Taken together, these observations provide evidence that the beneficial effects of fenofibrate on DR are through PPARα activation. The present series of studies shows that oral and intraocular administration of fenofibrate are promising therapeutic approaches for the treatment or prevention of DR in type 1 diabetes. The exciting finding of direct ocular effects of fenofibrate on DR and DME in both types of diabetes is crucially important, since the FIELD trial is the first clinical study to identify an oral drug, other than oral hypoglycemic agents, that has clinical benefit on DR in any form of diabetes. Compared with anti-VEGF compounds, fenofibrate has several advantages, including low cost, oral administration, low toxicity, and protection against diabetic nephropathy (48,49) and amputation in diabetic patients (32,48). As not all patients can tolerate oral fenofibrate, ocular administration may be advantageous. The current study, with multiple model validation, is the first to reveal that fenofibrate's benefit on DME and DR is via direct effects on retinal inflammation, retinal vascular leakage, and retinal NV, and that it is relevant to type 1 diabetes as well (50). However, many questions, such as the mechanism of action of fenofibrate and the signaling pathway mediating the effect of PPARα on inflammation and angiogenesis, remain to be fully elucidated. Future studies should include clinical studies of fenofibrate in type 1 diabetes patients and basic science studies of the underlying cell signaling and molecular mechanisms by which fenofibrate ameliorates DR. Elucidation of the molecular mechanisms responsible for fenofibrate's effect may reveal a promising drug target and herald the opportunity for new classes of agents effective on DR and DME.

FIG. 8. A: STZ-induced diabetic rats at 4 weeks after diabetes onset received an intravitreal injection of 2 µL of GW590735 (100 nmol/L). The same volume of vehicle and fenofibrate (Feno) (50 µmol/L) were used as a negative and a positive control, respectively. Retinal vascular leakage was quantified by the vascular permeability assay using Evans blue dye as a tracer (mean ± SD; n = 7). **P < 0.01; *P < 0.05. B: STZ-induced diabetic Wt mice or Ppara−/− mice at 4 weeks after diabetes onset were fed chow with or without 120 mg/kg/day fenofibrate for 6 weeks. Retinal vascular leakage was quantified by the vascular permeability assay (mean ± SD; n = 7). C: Newborn Ppara−/− mice were exposed to 75% oxygen from P7 to P12. At P12, the OIR mice received an intravitreal injection of 3 µL per eye of 125 µmol/L fenofibrate. Age-matched Ppara−/− mice maintained at constant room air were used as controls. The retina was dissected at P16 and homogenized. The same amount of retinal protein from each mouse was used for Western blot analysis of VEGF, which was semiquantified by densitometry and normalized by β-actin levels. DM, diabetes; KO, knockout.
Measuring dementia carers' unmet need for services - an exploratory mixed method study

Background: To ensure carers of people with dementia receive support, community services increasingly use measures of caregiver (carer) burden to assess for unmet need. This study used Bradshaw's taxonomy of need to explore the link between measures of carer burden (normative need), service use (expressed need), and carers' stated need (felt need). Methods: This mixed method exploratory study compared measures of carer burden with community services received and unmet needs, for 20 community-dwelling carer/care-recipient pairs. Results: A simple one-item measure of carers' felt need for more services was significantly related to carer stress as measured on the GHQ-30. Qualitative data showed that there are many potential stressors for carers other than those related to the care-giving role. We found a statistically significant rank correlation (p = 0.01) between carers' use of in-home respite and the care recipient's cognitive and functional status, which is likely to have been related to increased requirements for carer vigilance and effort and to the isolation of spouse carers. Otherwise, there were no statistically significant relationships between carer burden or stress and level of service provision. Conclusion: When carers are stressed or depressed, they can recognise that they would like more help from services, even if measures of carer burden and care-recipient status do not clearly indicate unmet service needs. A question designed to elicit carers' felt need may be a better indicator of service need, and a red flag for recognising growing stress in carers of people with dementia. Assessment of service needs should recognise the fallibility of carer burden measures, given that carer stress may come not only from caring for someone with dementia but can be significantly compounded by other life situations.

Background

Assessment and monitoring of caregiver (carer) burden are increasingly seen as essential factors in ensuring that carers receive community support [1], but the outcomes of this approach are uncertain. While meeting carers' service needs has been the subject of increased policy and research interest, Bradshaw's [2] taxonomy of need has not been utilised in this context. Intervention studies often show non-significant findings, and researchers increasingly question the usefulness of existing measures of carer burden [3,4]. It follows, then, that using these same measures to assess carers' service needs may generate misleading outcomes. This study contributes to knowledge about the options for assessing carers' unmet needs [4-6]. We argue here that the complexity of carers' needs makes it difficult to rely on measures of carer burden and care-recipient dependency for assessing carers' needs. The focus of this study is carers of persons with dementia, who provide critical support for care recipients by improving their quality of life and delaying entry to care homes [7]. With estimates of 63 million people with dementia globally in 2010 [8], and in recognition of the important economic and quality of life benefits provided by carers, policy makers are urgently seeking ways to ensure the ongoing capacity of carers to provide the bulk of care. 'Carer burden' is a term used to describe the negative effects of caring on carers' physical, mental, social, and financial well-being. Such effects are clearly evident in the context of caring for people with dementia [9,10].
Factors such as carer resources, the tasks of care (such as managing the behavioural and psychological symptoms of people with dementia), carer attributes, and the relationship between carers and care recipients interact in complex ways with carers' coping and wellbeing [1,5,6,11-13]. Service providers have attempted to capture this complexity through an increasing array of carer and care-recipient assessments, striving to prevent carer burnout by meeting carers' service needs [14]. In comparison with the general carer population, carers of people with dementia exhibit higher levels of unmet need and lower levels of service use [3,15,16]. This is problematic because the resultant physical and emotional distress for carers [17,18] is strongly predictive of impending entry to care homes or the death of the care recipient [18,19]. Community services are commonly seen as a key intervention for reducing carer burden, despite the inconsistent nature of the research evidence [18,20]. Services such as respite, nursing assistance, domestic assistance, and personal care aim to support carers and people with dementia, so that ageing in place can be maintained as long as possible. Low levels of service use, however, suggest that there are unrecognised obstacles to assessing and meeting carer needs [14]. Respite care, for example, is one of the major forms of assistance directly targeted to carers [21], but in Australia this service is not used by up to 70% of carers [22]. Brodaty et al. [23] developed a typology of four reasons for service non-use in dementia carers: 'managing at the moment', 'reluctant to use services', 'service characteristics', and 'do not know about services'. Studies show that a complex range of socio-cultural factors is implicated in carers' 'reluctance to use services', including carer identity barriers, fear of role change, concerns for privacy, financial factors, and personal characteristics [11,13,23]. This complex causality of carers' service needs has to date been addressed through an increasing reliance on comprehensive assessments of carer need. Service organisations typically use a range of assessment tools at admission to assess carers' service needs, which generally act as a 'gateway' to services. In this approach, a care professional "begins with the identification of specific difficulties, accounts for the presence and efficacy of current help, recognises perceived need and finally specifie[s] the type of intervention required to meet those needs" [3, p. 323]. Within this paradigm, the care professional focuses assessment on the status of the person with dementia, the subsequent perceived burden on the carer, and any existing socio-economic support deficits [3]. However, given the increasing awareness of the complexity of factors that may be responsible for unmet need [18], such an approach promises increasing costs in terms of time and other resources. Given the recognised socio-cultural complexity of carers' unmet needs, alternatives to the professional assessment paradigm are needed [24]. We adopted Bradshaw's [2] sociological typology of need as one that might best capture the complexity of carers' situations. The categories of need defined by Bradshaw [2] are normative need, felt need, expressed need, and comparative need. The carer burden assessment approach described by Meaney et al. [3] is a normative view of need, which is professionally identified. The term felt need identifies needs articulated by potential service users themselves.
This need is captured by the simple process of asking people what services they would like. Expressed need is the actual demand for, or uptake of, services. The low rate of translation of felt need into expressed need in carers' orientation towards services can be explained by a range of socio-cultural factors. Finally, comparative need is assessed by comparing the characteristics of groups of service users with those of groups of non-service users. No single form of measure is likely to capture all carers' unmet needs, but the move towards person-centred care has increasingly challenged normative need as paternalistic and inappropriate [3]. This study aimed to explore the relationship between different types of carer service need using Bradshaw's [2] typology. The in-depth study used a mixed method design with a concurrent triangulation strategy [25] to investigate a group of carers and care recipients with dementia. Quantitative subjective and objective measures of carer burden and carer stress were administered and community service use measured, thereby capturing indicators of normative, felt, and expressed need. Qualitative in-depth interviews were conducted at three time points, in order to assess the context of service use and carer need, and to capture the complexity of socio-cultural contexts.

Methods

Participants
The study population consisted of 20 community-dwelling pairs of dementia carers and people with dementia. Carers were required to identify as a 'primary carer', that is, the person providing the greatest amount of care, and could be either co-habiting with a person with dementia or living apart. Carers (n = 24) known to the local Alzheimer's Australia organisation were contacted and invited to participate, with 20 consenting. This method of recruiting ensured that carers were caring for someone with a dementia diagnosis, and that they were linked into some formal support, irrespective of their concurrent use of other community services. Isolated carers without any links to community services were not represented in the sample. Ethics approval was received for this study from the Tasmanian Social Science Human Research Ethics Committee.

Procedures
Four visits took place with each carer over 12 weeks, at weeks 1, 4, 8 and 12. During the first visit, as data collection was occurring with carers, care recipients were seen by a psychologist in a separate location in their homes. Carers completed self-report measures of carer burden and stress (normative need measures), indicated their service wants (felt need measures), kept a service usage diary over the 12-week study period (expressed need measures), and participated in three semi-structured interviews, conducted at monthly intervals. Data were compared across participants.

Qualitative data
The interviews lasted between 30 and 90 minutes and elicited data on the nature, frequency and quality of carers' interactions with community service providers. Carers' experiences of their carer role were also explored, in order to provide a context for their interactions with service providers. Carers were engaged in a process of progressive disclosure during the interviews, in which semi-structured research questions guided them through a series of increasingly sensitive topics. For example, in interview 2, carers were asked to clarify the services they had been using (and had documented in the service usage diary) and to expand on their interactions with service providers.
In later interviews (3 and 4), questions centred on the carers' socio-economic circumstances, their felt need for more or different services, and their care-giving experience. Measures of Care Recipient Dependency Carer burden (normative need) was assessed by measuring the severity of dementia and functional dependency of the care recipient, as well as carers' subjective ratings of burden. Two tools were used to obtain an indication of the severity of dementia in the care recipient. The Dementia Rating Scale-2 (DRS-2) was administered, a tool widely used to screen cognition in people with a known or suspected dementia. This tool provides objective, psychometric measures of attention, construction, initiation/perseveration, conceptualisation and memory, with lower scores indicating higher deficits [26]. Carers also completed the Bayer Activities of Daily Living Scale (BADLS) [27]. This informant-rated questionnaire assesses functional disabilities across 25 everyday tasks, using a ten-point Likert scale to assess difficulty (1 = never has difficulty, 10 = always has difficulty), with the additional options of 'I don't know' and 'Not applicable'. Higher scores correspond to higher deficits. Measures of Carer Burden Indicators of carer burden and mental health status were obtained at time point 1 (normative need). A researcher-administered Carers' Checklist [28] captured carers' perceptions of burden in domains relevant to care recipients' and carers' functioning [29]. The Carers' Checklist addresses a number of domains relevant to the functioning of the person with dementia and his or her carer. For the person with dementia, this includes: cognitive and psychological symptoms, ADLs and self-care, inappropriate behaviours, social behaviours and safety issues. To assess the extent of behavioural and psychological problems in the care recipient, carers indicate whether any problem behaviours are exhibited by the person with dementia ("always", "sometimes", "never"). The Carers' Checklist also includes items on the care recipient's assistance needs for Activities of Daily Living (ADLs) and self-care, social behaviours and safety issues. 'Objective' burden is indicated by means of carers' reports as to how often these 26 dementia-related problems occur (BPSDs, ADLs etc.), while 'subjective' burden is measured by how stressful carers rate each of these problems [28]. Carer perceptions of overall burden are obtained through five scales, which ask carers to rate overall, physical, financial, emotional and social burden. Carers rate how burdensome they find caring on each scale from 1 (no burden at all) to 5 (a great burden) [28]. The Carers' Checklist has been used across a range of service settings, including the community [29], and has shown high internal consistency (Cronbach's alpha = 0.93) [28]. The General Health Questionnaire-30 item version (GHQ-30) was also administered at time point 1. This is designed as a first-stage screening tool for psychiatric illness, to provide an objective indicator of non-psychotic psychiatric disorders (typically anxiety and mood disorders) [30]. A GHQ score of 5 or above indicates an increased likelihood of a non-psychotic mental health problem which would warrant subsequent treatment measures [30]. Service Usage The final domain covered by the Carers' Checklist is carers' felt need for services.
This is captured through four items that ask carers whether they need more help from services than is given, want better access to services, want more information than is given, or feel that services should work together and communicate more effectively. Carers responded using a three-point scale (never, sometimes, always). Service usage (expressed need) was captured through service diaries. Carers reported the weekly hours of community services received, with new diaries provided each month. Weekly support phone calls and monthly face-to-face assistance at the time of the interviews formed the cornerstone of a progressive engagement approach, which was aimed at maintaining carers' engagement with the project and facilitating complete and accurate data collection. At the final interview, a question about the carers' experience of participating in the study aimed to bring a natural, seamless, and positive closure for them. Disengagement was also assisted by presenting an unexpected gift voucher in appreciation of participants' contribution. Analysis Bivariate descriptive analysis with categorical data used Kendall's tau-b for ordinal data with square tables, and Spearman's rho for variables with more categories (a minimal code sketch of this correlation step is given below). Service variables from the service usage diaries were categorised into: (i) practical assistance, encompassing domestic help, gardening help and physical care; (ii) in-home respite, incorporating any service delivered in the family home, primarily to provide diversional activities or supervision for the person with dementia; (iii) out-of-home respite, incorporating any service that was situated in the community, designed for people with dementia, and relieved carers of the caring role; and (iv) a new dichotomous variable that was created for each service type of any or no service use. All interview data were transcribed and subjected to case analysis by two members of the research team. Discussions about services and usage of services were highlighted. Cases that exhibited normative need but had low service usage relative to their burden measures, in comparison with other cases, were investigated in order to uncover possible explanations. Members of the research team engaged in peer debriefing, exploring rival explanations, probing biases, and clarifying the basis of interpretation, with a view to enhancing the credibility of the analysis [31]. Table 1 summarises the demographic data and Table 2 the clinical characteristics of the sample. Our sample of carers had characteristics consistent with the demographic profile of carers of people with dementia in Australia, although males were slightly under-represented [22]. Most were female (90%), co-resident carers (85%), the spouse of the care recipient (70%), and aged over 66 years. Our care-recipient sample was consistent with known characteristics of this group in terms of age and dementia cause [22]. The majority of care recipients had a formal diagnosis, with 55% having Alzheimer's disease, 15% vascular dementia, 20% dementia of an unknown/unspecified cause and 5% Parkinson's disease or frontotemporal dementia. Despite the relatively small sample size, participants' characteristics fitted within the range of those reported in studies of Australian carers and people with dementia, though we had a higher proportion of female carers and more male care recipients. Limited data about carers of people with dementia have been generated within Australian contexts.
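The bivariate step described in the Analysis subsection above can be made concrete with a minimal sketch. The study's raw scores are not published, so every numeric vector below is a hypothetical stand-in, illustrative only:

```python
# Minimal sketch of the bivariate analysis described in the Analysis
# subsection. All vectors are hypothetical stand-ins for the study's
# unpublished raw scores.
from scipy.stats import kendalltau, spearmanr

# Hypothetical ordinal codes for 10 carer/care-recipient pairs.
spouse_carer   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 1 = spouse, 0 = other carer
practical_help = [0, 0, 1, 0, 1, 0, 1, 1, 0, 0]  # 1 = any practical assistance used

# Kendall's tau-b for square ordinal tables (scipy's kendalltau
# computes the tau-b variant by default).
tau_b, p_tau = kendalltau(spouse_carer, practical_help)

# Spearman's rho for variables with more categories, e.g. DRS-2 totals
# against total in-home respite hours over the 12-week diary period.
drs2_total = [35, 52, 60, 41, 78, 66, 30, 85, 47, 55]
ihr_hours  = [40, 22, 12, 30,  4, 10, 44,  0, 25, 18]
rho, p_rho = spearmanr(drs2_total, ihr_hours)

print(f"tau-b = {tau_b:.3f} (p = {p_tau:.3f}); rho = {rho:.3f} (p = {p_rho:.3f})")
```

The same two calls cover both cases reported in the Results: dichotomous pairings such as carer type against service use, and graded pairings such as dementia severity against respite hours.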
Table 1 includes a column of available national statistics from the AIHW (2007). Care Burden - normative need The DRS-2, BADLS and Carers' Checklist scores indicate that care recipients were moderately to severely impaired with dementia. Seventy percent scored at or below the 1st percentile on the total DRS-2 score, indicating pronounced cognitive impairment significantly affecting a wide range of everyday activities. The most commonly reported behaviours were forgetfulness (11/20 always, and 8/20 sometimes) and always asking questions (9/20 always and 6/20 sometimes). Additionally, eleven care recipients "could not be left alone even for one hour" and eight "wandered at night". The total BADLS scores had a positive relationship (Spearman's rho = 0.649, p = 0.01) with the carer-reported dementia-related problems, as could be expected, but there was no relationship between mean behaviour scores and GHQ-30 scores. Results for measures of carer stress and strain were obtained through the Carers' Checklist and the GHQ-30, and are presented in Table 2. Carers self-rated a 'moderate strain' (mean of 3 out of 5) overall, and in the physical and social domains of caring. Emotional strain was rated at the point halfway between 'moderate strain' and 'a great strain', whilst financially, carers indicated a lower strain. The mean GHQ-30 score for carers was nearly double that of their non-caregiving peers (8 versus 4.72) [30], with two-thirds (compared to 33% of their peers) scoring 5 or over. These results indicate significantly elevated levels of psychiatric symptoms (likely to be predominantly anxiety and depression) in this sample, despite carers' subjective assessments of more moderate burden. We found no significant correlation between measures of dementia severity and the GHQ-30 scores. Overall, the mean carer self-rating for stress linked to dementia behaviours was 15, out of a total possible score of 52. At time point 1, the care recipient behaviours most commonly rated as 'very stressful' or 'quite stressful' by carers were 'forgets things which have happened' (7/20 very stressful; 8/20 quite stressful), 'always asking questions' (7/20 very stressful; 7/20 quite stressful), 'not safe to be in the house alone' (5/20 very stressful; 7/20 quite stressful), 'cannot be left alone even for one hour' (5/20 very stressful; 5/20 quite stressful) and 'wanders at night' (4/20 very stressful; 1/20 quite stressful). These measures demonstrated that, within our sample, carers exhibited high levels of normative need on objective carer burden measures, and moderate levels of normative need on subjective carer burden measures. Service Usage - expressed need Service usage summary data are detailed in Table 2. The most used service was out-of-home respite, which had a greater range of hours because the category included overnight respite. Most carers received a combination of services, with 25% (n = 4) of the sample receiving all three types of services, 30% (n = 6) receiving both out-of-home respite and practical assistance, and 15% (n = 3) using both in-home respite and out-of-home respite services. The relationships between measures of normative and expressed need were limited. Counter-intuitively, there was no relationship between dementia severity indicators (BADLS; DRS-2; dementia-related problems via the Carers' Checklist) and out-of-home respite or practical care.
In contrast, a negative correlation (Spearman's rho = -0.646, p = 0.01) did exist between DRS-2 scores and in-home respite hours, meaning that in-home respite service use increases as cognition deteriorates in the care recipient (see Figure 1). The total BADLS scores also had a moderate correlation with in-home respite hours (Spearman's rho = 0.574, p = 0.01), suggesting that both deterioration in cognition and deterioration in function are related to the need for in-home respite. Severity of cognitive impairment, behavioural and psychological symptoms, and impairment to everyday living in the care recipients bore no relationship to the amounts of practical care and out-of-home respite (OHR) used by carers. Carers' felt needs More than half of carers said they would like more help from services than they were currently receiving (felt need). Importantly, there was a correlation between felt need and carers' objective measures of stress (normative need). The carer-rated item 'I need more help from services than I am given' was positively related to carers' GHQ-30 (Spearman's rho = 0.625, p = 0.01), as was 'I need more information than I am given' (Spearman's rho = 0.557, p = 0.05), and so were the carer-rated stress related to these unmet service needs (Spearman's rho = 0.503, p = 0.05) and information needs (Spearman's rho = 0.634, p = 0.01). These were the only significant correlations between carers' subjectively and objectively rated burden and felt service need. This relationship suggests that carer-rated felt service needs are a useful indicator of carer psychological stress, while carer subjective burden is not. Significantly, carer felt need did not correlate with service use (expressed need), implying a high level of unmet service need. We found that some carer characteristics were related to service use, namely that carers other than spouses were more likely to receive practical help (tau-b = 0.535, p = 0.01), and that carers aged over 75 years were more likely to be using out-of-home respite (tau-b = 0.33, p = 0.05). The suitability of services supplied The qualitative data provided insights into why carers had unmet service needs and why dementia severity and carer stress were not directly linked to out-of-home respite or practical assistance received. A wide range of concerns led to resistance to, or refusal of, services, even for some objectively 'stressed' carers (i.e. those with high GHQ-30 scores). While the benefits of respite (time out, opportunity to complete other chores etc.) were generally acknowledged, the cost/benefit balance for out-of-home respite was sometimes too high, because of the 'effort' needed to get the care recipient to an out-of-home respite facility, the emotional burden of guilt or worry, or the financial cost. [Figure 1: Regression of the total hours of in-home respite received by carers over the 12-week period against care recipients' DRS-2 scores; fewer in-home respite hours were associated with higher DRS-2 scores (better cognition).] The effort involved in helping to prepare care recipients with more advanced dementia for visiting an external service could be too great, for reasons ranging from functional incapacity, resistance, agitation, or unreliable transport services.
Some care recipients with milder dementia did not like to attend out-of-home respite, and carers did not like to (or could not) force them, as in the following example: "I've talked to her about the [OHR] and she just won't have a bar of it" [(2) Int 2]. Carers felt 'caught' and 'trapped' when care recipients were reluctant to leave the home, as most care recipients could not be left alone. As a consequence, our sample of carers needed in-home respite as dementia severity increased, in order to address the basic requirements of their lives and households. One interviewee caring for a person with a DRS-2 score of 35 (i.e. severe dementia) reported that: "With [IHR] I can leave the house to go out and do the things that I am interested in" [(13) Int 2]. Another carer felt that in-home respite was necessary because: "You just go out, pay your bills, (then) you've got to get back again because you can't leave them on their own, they're not safe to be on their own" [(8) Int 3]. In instances of in-home respite, carers also felt reassured that their 'time out' had not been obtained at the cost of distressing the care recipient by exposure to new environments and unfamiliar circumstances. The interviews also suggested reasons why practical help was the least used service in this sample. Most of the carers were female spouses, and for many (but not all) of this group, caring fell within their normative expectations of the spousal role. The increased 'work' within the home (for example, extra cooking and washing) was accepted as an extension of the regular duties that this role implied. For carers such as this one, offers of practical help were deemed unnecessary and inappropriate: "All I ever wanted was someone to be here in the house so he was safe, to feed him. I never expected or wanted anyone to come in and do my housework or any of those types of thing." [(13) Int 1]. This quote illustrates the difference between the carer's felt need and a normative assessment of need, demonstrating that the carer is quite clear about the type of service she would like to receive and would accept. The interviews supported documented evidence that care-giving is stressful, but also showed that carers' stress may originate from events that are unrelated to the cognitive or functional status of care recipients. The carer sample proffered examples of other events that had generated stress in their lives, such as a carer diagnosed with a life-threatening illness or with a medical history of mental illness, the death of a pet, and grief for the loss of the partnership that the care recipient had once provided. These diverse life situations and expectations highlight problems inherent in basing carers' service needs on measures of carer burden or care recipient disease status, and explain why normative measures of need may fail to identify carers experiencing significant stress. The interviews also indicated that the process of finding out about existing services and about their own eligibility for those services could be onerous for carers. Some carers felt that assessments were time consuming and achieved no satisfying result, either because particular services were not what the carer wanted or because the care recipient was not deemed eligible. The following quote is from a carer whose care recipient had only moderate cognitive impairment, but whose GHQ-30 score was 13: "Well there's been five actual assessments and three interviews...
over the last four months...then there was the day care lady came and assessed her and that was fruitless" [(3) Int 3]. This quote also demonstrates how many resources can be applied to normative need assessments (five home visits), without necessarily benefiting stressed carers. Overall, the interview data highlighted the complexity of interactions between the care-giving situation and service use, and the limitations of relying on professional assessments. Discussion A key implication arising from our data is that felt needs expressed by carers of people with dementia are an important indicator of service need. The extensive list of possible causes of burden in carers' lives extends beyond the fact of caring for someone with dementia. Within the context of particularly challenging life circumstances, even modest care recipient demands may 'tip the scales' for carers and cause excessive stress. This means that relying on assessments of the status of care recipients with dementia, and on carer burden, may inadvertently exclude those for whom the basis of their need for services falls outside existing measures. While large data sets indicate that key factors such as cognition and ADL functions increase the probability of carers needing services [19], unknown and unpredictable contextual conditions also need to be taken into account. The significant correlation between our carers' mental health status and their stated need for more services suggests that felt need should be given priority over normative need in assessing service needs for carers of people with dementia. Our interview data shed light on the limited relationship between normative measures of need and the expressed need of service use in the case of dementia carers. Insufficiently acknowledged interactions between carer life circumstances and identity issues, and the disease status of care recipients, may mean that measures of normative need capture only a limited range of causes of carer stress and service needs. A more direct tool such as the GHQ-30 measures carer stress objectively, without professionals having to 'work backwards' from possible causes of stress in the manner of many burden measures. Given the complexity of carers' life situations, and the benefits that carers provide to the health care system in supporting people with dementia, it may be that offers of services should be based primarily on carers' felt needs. This suggestion is supported by previous studies finding that unmet service needs have complex causes [20], and that direct questions about unmet need were better predictors of impending admission to care homes or death of care recipients than other assessments [19]. The correlations between DRS-2 and BADLS scores and in-home respite suggest that the cognitive and functional aspects of care recipient deterioration are linked to in-home respite need. The reported social isolation of spouse carers [13] is an explanation supported by the study's qualitative data, complementing Mahoney's [32, p. 221] finding that carer burden emerges from the constant "vigilance" that is required of the carer in order to protect and care for the person with dementia. Assisted by the provision of only modest service hours, carers who participated in this study were able to support care recipients to remain at home until they reached moderate to severe stages of dementia.
Carers such as these make an important contribution to the health care system and save health and social services significant costs, thereby benefiting the public purse. In line with previous research [9,10], however, we found that caring for a person with dementia brings social, emotional, physical and financial costs to the carers themselves, with our participants identified as more stressed than their non-caregiving peers according to the GHQ-30. This stress highlights the importance of addressing carers' unmet service needs, if policy makers wish them to maintain care in the home for extended periods. Our results suggest that carers' stated (felt) service needs should be considered a 'red flag' by service providers. What of those who refuse services? These findings suggest carers could be refusing services because particular services are not suitable, or because the carer does not identify his or her stress as directly linked to the care recipient. In these cases, services may need to offer more flexible options, a conclusion also reached by other researchers [14]. These carers are also likely to benefit from approaches that involve them in decisions about services and provide choice. Having choice and being able to participate in service decisions are two factors desired by health service users [33] and found to delay entry to care homes for care recipients [34], but these options were missing in the experiences of our participants. The complexity of circumstances surrounding each carer indicates that they must be allowed to play a more pivotal role in needs assessments, and that services need to offer service choices that are flexible and responsive. In order to have choice, carers firstly require information about service availability. This is a confusing area even for experienced health professionals [35] which, as a crucial first stage in service provision, must be rendered less daunting for all stakeholders as a matter of priority. Consistent with other studies [29,36,37], we found that carers may be unaware of available services and would be likely to benefit from greater knowledge of available forms of assistance. Overall, it is likely that the provision of more comprehensive information, engagement in service needs assessments, and allowance for choice and service flexibility constitute the first steps to be taken in decreasing service refusal in situations of need. The small convenience sample used in this study is a limitation that raises the risk of Type II errors and prevents generalisability. The participants comprising our sample were already linked into and using some services, as could be expected given the dementia severity of their care recipients, and did not include isolated carers, or carers of persons with early-stage dementia. Our sample also consisted mostly of female spouses, and this clearly influenced their belief systems about the appropriateness of using services. However, the mixed-method design we utilised provides a different strength through triangulation of quantitative with qualitative data, in which interlinked contextual information informed the interpretation of measurement results. Conclusions Overall, this exploratory study suggests that normative need is not as useful as felt need when considering the health service needs of carers of people with dementia. A focus on measuring the care recipient's disease status and the carer's burden overlooks the influences that broad life circumstances have on carers' service needs.
Attempts to index carers' service needs via objective cataloguing of functional and cognitive impairments may therefore be inadequate. Our data suggest that felt need is a suitable indicator of carers' unmet needs, because it is significantly related to carers' mental health status. Felt need may therefore be an appropriate 'red flag' of carer burnout, even in the absence of obvious "flags" raised by behavioural, functional and cognitive decline. In view of the current client-centred focus of health services, and the acknowledged burgeoning numbers of people with dementia, our study contributes to the growing knowledge of how services can better support carers. Health services and professionals may need to reorient their approach away from traditional normative need assessments toward more participatory, felt-need approaches. The small sample of carers and care recipients used in this mixed-methods study means that a larger investigation is required to assess the relevance of our findings to the general population of carers of people with dementia. However, the similarity of our findings to larger data sets suggests that such a wider investigation needs to focus on felt carer need and service access, rather than on assessment.
Diffeomorphism Cohomology in Beltrami Parametrization II: The 1-Forms We study the 1-form diffeomorphism cohomologies within a local conformal Lagrangian field theory model built on a two-dimensional Riemann surface with no boundary. We consider the case of scalar matter fields, and the complex structure is parametrized by the Beltrami differential. The analysis is first performed at the Classical level, and then we improve the quantum extension, introducing the currents into the Lagrangian dynamics, coupled to external source fields. We show that the anomalies which spoil the current conservations originate from the holomorphy region of the external fields, and that only the differential spin 1 and 2 currents (as well as their c.c.) can be anomalous. Introduction The most transparent and useful formulation of a field theory is surely the one in which locality is manifest. Interest in this approach stems from the physical relevance of local symmetries, such as gauge invariance in particle physics and the role of diffeomorphisms in string and gravitational models. In practice, for systems of physical interest in which a local symmetry is realized, making locality explicit is a good starting point for investigating the deep meaning of the symmetry through the study of those local objects which, due to their invariance, have global properties. This is the reason why it has been necessary to introduce in the literature the so-called descent equations, which, embedded in the B.R.S. approach, gave outstanding results in hunting for anomalies, vertex operators, and so on. In this scheme it has been necessary to organize the local objects by form degree, in relation to their invariance. In a recent paper [1] we used this strategy to study the anomalies [2] and the vertex operators in two-dimensional diff-invariant models in Beltrami parametrization [3,4,8,5,11,12], seen as local densities with form degree equal to two, since they are to be integrated over two-dimensional manifolds. Indeed, the Beltrami parametrization is well suited to studying the chiral resolution into conformal blocks [6] and holomorphic factorization [7,3,4,5,8], since the parametrization of the complex structure is realized in an automatic way. In this paper we want to investigate the objects with form degree equal to one, that is, currents, which play an essential role in the realization of the symmetry and in the understanding of the links between the Lagrangian and Hamiltonian approaches. In [9,10] this analysis was carried out in the Hamiltonian spirit, so its improvement within a Lagrangian framework is required. In the B.R.S. setting (1.3), the ghost fields c^z(z,z̄), c^z̄(z,z̄) carry a Q_ΦΠ charge equal to one. So the diffeomorphisms will change the (Z,Z̄) coordinates by means of the action on the (z,z̄) ones. The Beltrami parametrization, with its compatibility condition (1.5), is particularly attractive since in this approach a conformal rescaling is seen as a diffeomorphism (Z,Z̄) → (Z′,Z̄′) of the surface into itself with the same µ. Indeed, when the Beltrami parameters are taken to be constant, the reparametrization operation is represented by complex analytic transition functions. Furthermore, an equivalence class of analytic atlases identifies a complex structure, and a conformal class of two-dimensional metrics. That is, identifying conformally invariant models on a Riemannian manifold requires a careful description in order to factor out this equivalence arbitrariness.
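For orientation, the Beltrami parametrization and its compatibility condition referred to above presumably take the standard form used in this literature; the following is a hedged reconstruction, not the authors' exact display:

$$dZ = \lambda\,\big(dz + \mu\, d\bar z\big), \qquad d\bar Z = \bar\lambda\,\big(d\bar z + \bar\mu\, dz\big), \qquad |\mu| < 1,$$

with the integrability requirement $d^2 Z = 0$ giving the compatibility condition between the integrating factor $\lambda$ and the Beltrami coefficient $\mu$:

$$(\bar\partial - \mu\,\partial)\,\lambda = (\partial\mu)\,\lambda .$$

On this reading, the second relation is the equation (1.5) that the fields λ and λ̄ are later stated to satisfy.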
However, not all diffeomorphisms of the surface into itself amount to just a conformal rescaling. The intrinsic geometry of the surface is determined by the metric tensor (which is a coordinate-free object), and other changes in the metric produce conformally inequivalent surfaces. Furthermore, in a Lagrangian field theory model the Beltrami differentials are the appropriate sources of the energy-momentum tensor components, whose short-distance products will define the algebraic construction [6] of the conformal current algebra. Our purpose here is to characterize in a cohomological way all the diffeomorphism conserved currents, first at the Classical level, and then to extend their properties (first of all their conservation) to the Quantum one. Since the Action is an invariant (1,1) tensor, we shall suppose that in a conformally invariant theory the matter fields are realized on the Riemannian manifold by local tensor fields Φ_{j,j̄}(Z,Z̄) dZ^j dZ̄^j̄ of weight (j,j̄), invariant under change of holomorphic charts: Φ_{j,j̄}(Z,Z̄)_α dZ_α^j dZ̄_α^j̄ = Φ_{j,j̄}(Z,Z̄)_β dZ_β^j dZ̄_β^j̄ (1.6). It is also possible to define, via the diffeomorphism action restricted to dilatations, the geometric dimensions. The B.R.S. realization of the infinitesimal diff-variations then reads as in (1.8), together with its complex conjugates; going over to little "z" indices it is written as in (1.9), with of course the complex conjugate expressions. The matter fields are parametrized as in (1.12). The previous variations (1.9) define a local B.R.S. operator δ, with δ² = 0, acting on the space of the previous fields and their derivatives, considered as independent monomial coordinates, as in Ref. [14]. So, even if the (Z,Z̄) frame describes the model, the use of little "z" coordinates is particularly useful (as remarked in [1,15]), since the derivative operator can be defined, in the above-mentioned scheme, by means of the δ operator and the "little" c ghosts, as we shall see in the following. Let Q_⋆ be the physical charges in each tensorial sector, where the ⋆ label sums up covariant and contravariant "big" indices. These charges derive from currents, so that the form degree is with respect to the "big" indices. Diffeomorphism invariance assures that there is a counterpart in the "little" indices (1.14). The aim of this paper is to study the diff-invariant charges, that is, the quantities Q_⋆ which verify δQ_⋆ = 0. It is well known that such a symmetry requires, at least at the quantum level, conserved currents, which is a "local" constraint well defined with respect to a reference frame; but the diff-invariance (1.1) puts on the same footing a large class of coordinate systems, so it is interesting to investigate how the symmetry, realized in a local way, generalizes current conservation to each chart. We shall find that the two-dimensional character of the theory, if it is defined on manifolds without boundary, requires, for the existence of the charge, the holomorphic factorization of the currents in the Z(z,z̄) (or its c.c.) variable. This fact will have many important consequences that we shall investigate in this paper: first of all, the fact that all the local currents will have definite covariance properties, that is, they will be (n+1,0) (or their c.c.) true tensors. Furthermore, our aim will be to extend to the Quantum level all the properties established at the Classical one.
For this reason we have to put the currents inside an invariant Action by coupling them to external fields, and we have to study the perturbative renormalization of the model. We shall show that only the spin 1 and 2 currents spoil conservation at the quantum level, while all the other ones maintain all the classical symmetries. The paper is organized as follows. In Section 2 we solve the 1-form descent equations deriving from the diff-mod d invariance; in particular, we show that the diff-current conservations can be derived from the diff-cohomology, so they cannot depend on the particular choice of coordinates. We establish these relations in the "true" (Z,Z̄) coordinates as well as in the reference frame (z,z̄), and we show that the condition for the existence of the charge on a Riemann surface with no boundary implies holomorphic constraints for the currents. In Section 3 we briefly recover the previous results in a Lagrangian two-dimensional dynamics, which forces a 2-form analysis. This artillery allows a perturbative quantum extension of the diff-invariance, in order to find the possible obstructions to both the current covariance and the current conservation. An Appendix is devoted to some computational details using the spectral sequences method [14,15]. 2 The 1-forms in the δ-cohomology In this section we want to analyze the descent equations of 1-forms, already treated in [9,10], but following the spirit of [1]. To be more accurate, we shall first relate the diff-mod d cohomology to the local unintegrated functions, by solving the ordinary differential action in terms of the B.R.S. operator for diffeomorphisms. Furthermore, we shall show that all the uncharged elements which are solutions of the 1-form descent equations identify conserved currents: more exactly, the diffeomorphism cohomology alone specializes current conservation both in the "little" and in the "big" coordinates. This fact has an important consequence due to the two-dimensional character of the theory: the current conservation condition admits an inversion formula and, on a manifold without boundary, the holomorphic factorization of the currents is obtained. We stress that this is, in our framework, a classical-level analysis, which is of particular importance for proving the stability properties of the theory; the quantum extension will require more care. The consistency equations and the current conservation We shall start from those objects (defined in the general reference frame (z,z̄)) Q_⋆ = ∫ (J_{z,⋆} dz + J_{z̄,⋆} dz̄) = ∫ J^0_{1,⋆}(z,z̄) (2.1) (the form degree being relative to the "little" indices) which are elements of the δ cohomology (2.2). In terms of local quantities the cocycle equation will imply (2.3), where the δ operator acts on the space of local unintegrated functions as described in (1.9); its complete description will be found in the Appendix (A.3). The previous equation characterizes J^0_{1,⋆}(z,z̄) as an element of the diff-mod d cohomology, so that the bottom current writes as in (2.4), where J^{♮,1}_{0,⋆}(z,z̄) is an element of the cohomology of δ in the space of the unintegrated functions. Writing the differential operator in terms of δ as in [15,1] (we remark that this is only true in the "little" coordinates reference frame), one gets, by a direct substitution in eq. (2.4), the fundamental formula which relates the elements of the diff-mod d cohomology to those of the δ one. It will be very useful to calculate this cohomological space: this will be done below.
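Schematically, and as a hedged sketch rather than the authors' exact equations, the cocycle condition for the integrated 1-form and the resulting two-step descent read:

$$\delta \int J^{0}_{1,\star} = 0 \;\Longrightarrow\; \delta J^{0}_{1,\star} + d\, J^{1}_{0,\star} = 0, \qquad \delta J^{1}_{0,\star} = 0,$$

where the upper index counts the Φ,Π (ghost) charge and the lower index the form degree. In this form, solving the diff-mod d problem at form degree one reduces to computing the δ-cohomology at form degree zero and one unit higher Φ,Π charge, which is the strategy pursued in the rest of the Section.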
Let us however recall the most important results coming from this calculation and show their consequences. First of all, denoting by N_z(↓) and N_z(↑) the counting operators of the lower and upper "little" indices respectively, we shall show in the Appendix a relation which, as a 1-form, implies that (2.7) reduces to (2.10), where N_Z(↓) and N_Z(↑) are, as can be easily understood, the counting operators of the "big" lower and upper indices respectively. Then (2.10) tells us that the current conservation is a direct consequence of the diffeomorphism invariance. Indeed, as pointed out in [9], applying d to eq. (2.3) we get, since d and δ anticommute, a relation (2.13) in terms of the δ-cohomology functions; we want here to show that the J^0_{2,⋆}(z,z̄)^♮ term on its r.h.s. is zero. From the very definition we have (2.14); on the one hand we obtain (2.15) from (2.6), while on the other hand, substituting directly in (2.14), we get (2.17). Comparing now eqs. (2.15) and (2.17) shows that the J^0_{2,⋆}(z,z̄)^♮ obstruction term, which a priori occurred in (2.13), does not appear. Moreover, the remaining term would contain negatively charged fields: so, since we remove the anti-ghost fields after imposing their equations of motion through the gauge fixing, we can assume that the only negatively Φ,Π-charged fields are those which occur in the Lagrangian coupled to the B.R.S. transformations. We shall show in the Appendix that no negatively charged field can appear in the δ cohomology space; hence, recalling that J^0_{1,⋆}(z,z̄) is a representative of an equivalence class, defined modulo arbitrary δ contributions, the current conservation is realized only for the elements J^0_{1,⋆}(z,z̄) − δJ^{−1}_{1,⋆}(z,z̄), which define "locally" the conserved current J̃^0_{1,⋆}(z,z̄), with dJ̃^0_{1,⋆}(z,z̄) = 0. The local δ cohomology As pointed out in formula (2.10), we have shown that the elements of the diff-mod d cohomology can easily be derived from those of the δ cohomology in the space of local functions. The aim of this part is to solve (2.21), where the upper index r labels the Φ,Π charge and the lower index the form degree. The Φ,Π charge sector we are interested in is the one with r = 1. If we decompose the cohomology spaces according to their underivated ghost content, in such a way that the coefficient functions contain no underivated ghosts, the condition (2.21) implies the system (2.28). In the Appendix we shall show that, in a general Lagrangian model in which, for the sake of simplicity, the matter fields are taken to be scalar and the only negatively Φ,Π-charged fields are the external ones coupled to the B.R.S. variations (the gauge terms being taken away by solving δ² = 0), the δ cohomology space does not contain negatively charged fields, so for r = 0 the solution of (2.28) is δ-trivial. Using this, one then defines quantities which are solved, according to the results given in the Appendix, in terms of elements of the δ cohomology space.
Next, defining suitable quantities and introducing the combination (2.43): since ∂_Z̄ and ∂_Z commute with δ, both ∂_Z̄ J^{♮,r−1}_{Z,0,⋆}(Z,Z̄) and ∂_Z J̄^{♮,r−1}_{Z̄,0,⋆}(Z,Z̄) are elements of the same space, so the only possibility to satisfy (2.43) is (2.44). Furthermore, since the δ cohomology does not depend on the external negatively charged fields, solving (2.44) yields the final decomposition (2.47). We emphasize that, in the "big" coordinates, the current J^{♮,r−1}_{Z,0,⋆}(Z,Z̄) always transforms as a scalar quantity, despite its tensorial ⋆ content. On the other hand, introducing the corresponding local quantities, they transform under the "little" c ghosts as a (1,0) tensor, while going over to the C(z,z̄) ghosts gives the analogous law; similarly for the c.c. counterpart, which is a (0,1) tensor. So we can rewrite (2.47) in the form (2.55), and a complete description in terms of the c's can be achieved by introducing suitable combinations, in terms of which (2.55) is re-expressed. Therefore our final result for the currents (2.10), specializing to r = 1, follows, both directly and in its covariant form. Finally, we want to remark that δJ^{♮,r}_{0,⋆}(z,z̄) = 0 implies, by using the decomposition (2.47), that diffeomorphism invariance enforces current conservation both for the "big" index current J^{♮,r−1}_{Z,0,⋆}(z,z̄) and for the "little" index one J^{♮,r−1}_{0z,⋆}(z,z̄), that is, dJ^0_{1,⋆}(z,z̄) = 0 as before. The charge definition and the holomorphic factorization of currents The two-dimensional character of the theory has an important consequence, since the current conservation condition can be inverted. In fact, from the condition (2.45) an inversion formally follows, where the inverse operator (∂_Z̄)^{−1} can be defined, only in two dimensions, using the Cauchy theorem. We remark that if we define the currents S^{♮,r−1}_{z,0}(z,z̄) and S̄^{♮,r−1}_{z̄,0,⋆}(z,z̄) from their previous B.R.S. variations, then the conditions (2.69) derive from the required conditions (2.6). In the Z coordinates these solutions imply the corresponding factorized expressions, and similarly for (2.78). We recall that the ⋆ index indicates arbitrary "big" Z,Z̄ indices, so the switch to "little" coordinates from S^0_{z,0,⋆}(z,z̄) ≡ S^0_{z,0,Z^n,Z̄^m}(z,z̄) is done with a suitable λ and λ̄ rescaling. It is easy to verify that the previous currents do not satisfy the local current conservation in µ and µ̄ (2.69) unless a particular tensorial content is realized: in particular, this is achieved in (2.79) only if S^0_{z,0,⋆}(z,z̄) contains only Z indices and S̄^0_{z̄,0,⋆}(z,z̄) only Z̄ ones, the signature of the holomorphicity in Z. The Classical Level The previous Section has introduced, from a heuristic point of view, the descent equations which define the 1-forms. Moreover, these equations play an important role in the dynamics, so they have to be embedded in a Lagrangian model whose quantum extension will provide information concerning the renormalization of those currents.
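The two-dimensional inversion invoked in the previous subsection rests on the Cauchy kernel; a hedged sketch of the standard formula (not the authors' exact display) is

$$\bar\partial_z\,\frac{1}{z-w} = \pi\,\delta^{2}(z-w) \quad\Longrightarrow\quad \big(\bar\partial^{-1} f\big)(z,\bar z) = \frac{1}{\pi}\int d^{2}w\;\frac{f(w,\bar w)}{z-w}\,,$$

so that, on a surface with no boundary, the conservation condition determines the current up to a holomorphic ambiguity, which is the origin of the holomorphic factorization discussed above.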
We have to introduce an invariant Classical Action Γ^0_{Cl} such that (3.1) holds. In [1] we have shown that the most general Classical Action invariant under diffeomorphisms takes the form (3.2). We shall treat here, for the sake of simplicity, the spin-zero case; in this case the most general invariant Lagrangian reads as in (3.3). We remark that, due to the dimensionless character of the scalar field, the diff-invariance requirement alone implies an infinite number of interaction terms at the Classical level, raising many problems concerning the physical meaning of the model; anyhow, several criteria can be established for the resummation of the interacting part, involving particular additional conditions on the definition of the model which do not destroy the reparametrization invariance. These aspects do not compromise our treatment, which holds for all these classes of models. According to the general prescription, we have to introduce the B.R.S. variations coupled to negatively Φ,Π-charged external fields. Furthermore, as said before, we put into the dynamics the 1-form currents coupled to external fields, where the "source" Action and the "current" Action read accordingly (together with their complex conjugate expressions), and we have to impose the invariance condition. The U.V. dimensions of the constituents of the model are [∂] = [∂̄] = 1 and [S^0_{z,0,z^n}(z,z̄)] = 1 + n (3.14). The external fields coupled to the holomorphic currents are introduced in the Lagrangian by fixing their variations in order to reproduce the descent equations seen in the previous Section. So we have: sρ_{z^n}(z,z̄) = (c·∂)ρ_{z^n}(z,z̄) − n(∂c(z,z̄) + µ∂c̄(z,z̄))ρ_{z^n}(z,z̄) (3.15), sβ_{z^n z̄}(z,z̄) = (c·∂)β_{z^n z̄}(z,z̄) + (∂̄c̄(z,z̄) + ∂c(z,z̄))β_{z^n z̄}(z,z̄) − (n+1)(∂c(z,z̄) + µ∂c̄(z,z̄))β_{z^n z̄}(z,z̄) + ∂̄ρ_{z^n}(z,z̄) − µ∂ρ_{z^n}(z,z̄) + n(∂µ)ρ_{z^n}(z,z̄) (3.16), and their c.c. expressions. That is, defining suitable combinations, we get the corresponding descent equations for the sources; the introduction of the previous fields allows us to reproduce at the Lagrangian level the right properties of the holomorphic currents S^0_{z,0,z^n}(z,z̄) at the classical level, when the symmetry is preserved. The role of the ρ_{z^n} field, as the inhomogeneous part of the β transformations, is of prime importance: it fixes the current conservation (2.69). The B.R.S. philosophy forces us to fix their covariance properties, so the Λ_{z,z̄,z^n}(z,z̄) term is a priori needed at the tree level (3.23): we shall show that this term is unessential at the Classical level but, on the other hand, is fundamental at the quantum level. So we can write the current Ward identities coming from the B.R.S. variation of the external sources ρ_{z^n}(z,z̄), β_{z^n z̄}(z,z̄). They reproduce in a functional approach the descent equations just encountered in the previous Section, written in terms of Λ_{z,z̄,z^n}(z,z̄), S^{r−1}_{z,0,z^n}(z,z̄) and their c.c.: [δ − c∂ − c̄∂̄ − (n+1)(∂c + µ∂c̄)] S^{r−1}_{z,0,z^n}(z,z̄) = 0 (3.25). Their solutions are carried out as before. But, since ∂_Z̄ S^{0,♮}_{Z,0,Z^n}(Z,Z̄) is an element of the δ-cohomology, the previous equation is consistent only if each term is identically zero: we have thus shown that the diff invariance implies the holomorphicity of S^{0,♮}_{Z,0,Z^n}(Z,Z̄). So the current conservation derives from the inhomogeneous part of the β variation, and a priori we have to require: ∂̄ρ_{z^n}(z,z̄) − µ∂ρ_{z^n}(z,z̄) + n(∂µ)ρ_{z^n}(z,z̄) = 0.
(3.31) The eq. (3.31) then implies the corresponding restriction on the source ρ. On the other hand, it is easy to realize from (3.24)-(3.25) that, in the quantum extension of the model, possible ρ- and β-dependent anomalies would spoil the current conservation and the covariance properties respectively. The B.R.S. approach consists in the study of the cohomology of the δ operator in the space of local functionals; the charge-zero space identifies the Classical Action, while the charge-one space gives the quantum anomalies. This analysis has to be done as the one carried out in [1], where, in a similar way, we related the diff-mod d cohomology space to that of δ within the class of local functions. Calling ∆^p_2(z,z̄) the most general element of the diff-mod d cohomology and labeling with the ♮ index the δ cohomology elements, we can find the 2-form extension of (2.7) as calculated in [1]. The novelty of this paper with respect to [1] consists in introducing the currents inside the dynamics of the Lagrangian, and (as is easy to realize from (3.24)-(3.25)) the quantum extension of the model might generate ρ- and β-dependent anomalies which could spoil the current conservation and the covariance properties respectively, as already stated. The next Section investigates this possibility. The Quantum Level The quantum extension of the model has to be done as in our paper [1]; first of all we have to parametrize the anomaly as ∫ dz ∧ dz̄ ∆^0(z,z̄) + Σ_n [ ρ_{z^n}(z,z̄) ∆^{0,n}_{z,z̄,z^n}(z,z̄) + ρ̄_{z̄^n}(z,z̄) ∆̄^{0,n}_{z,z̄,z̄^n}(z,z̄) + β̄_{z̄^n z}(z,z̄) ∆̄^{1,n}_{z,z̄^n}(z,z̄) + β_{z^n z̄}(z,z̄) ∆^{1,n}_{z̄,z^n}(z,z̄) ] (3.34), where ∆^{0,n}_{z,z̄,z^n}(z,z̄) has Φ,Π charge equal to zero and ∆^{1,n} has charge one; both have U.V. dimension 2 + n. The hunting of anomalies has to be done as in [1], and within this model we show in the Appendix the following: Theorem. The ghost sectors (Φ,Π charge sectors) of the δ-cohomology in the space of analytic functions of the fields, where the fields λ and λ̄ satisfy equation (1.5), completed with the terms {ln λ, ln λ̄} seen as independent fields, depend only on terms containing the underivated source ρ multiplying zero-ghost-sector elements of the cohomology. The zero-ghost sector is, on the other hand, non-trivial only in the part which contains matter fields. Its elements contain no free z and z̄ indices, i.e. they are "scalar-like" quantities with respect to the "little" indices, but they can carry tensorial content with respect to the "big" indices Z and Z̄. A generic element of this space is an (h,h̄)-conformal quantity built from an analytic (polynomial) function f. In this framework one can verify that such terms are coboundaries in a "non-local" basis, while in a local one the compensation mechanism is not possible and anomalies arise. Finally, the calculations are concluded by deriving the Ward anomalies: the Ward identity obstruction to the (1,0) current conservation, and the (2,0) obstruction which corresponds to that of the energy-momentum tensor. Conclusions In the present work we have studied the role of the diffeomorphism currents both at the Classical and at the Quantum level by computing some specific cohomologies. It has been shown, within the Beltrami parametrization of complex structures, that the holomorphic properties play a fundamental role in the dynamics of simple conformal models. This fact again underlines the relevance of the complex structure of the Riemann surface on which the field-theoretical model is built.
The locality requirements deeply govern the occurrence of anomalies at the quantum level. The study of the diffeomorphism currents is not completed here and deserves further careful work, in particular on the meaning of the anomalous Ward identities for correlation functions with diffeomorphism current insertions. Appendix A The δ cohomology The previous results heavily rely on the calculation of the cohomology of the δ operator on the space V of local functions with positive powers of the matter field φ_{0,0} and of the Φ,Π-charged fields, and analytic in the Beltrami fields. These constraints are required on the basis of the Φ,Π charge superselection rules and of the Lagrangian construction, and they play a relevant role, since the definition of a cohomology depends not only on the operator but also on its domain. For the construction given in the text, the space V does not contain the underivated Φ,Π ghosts c(z,z̄), c̄(z,z̄). We recall that δ acts on the space of the fields and their derivatives, considered as independent coordinates, as they stand for a local Fock representation of the model. The spectral sequence analysis [13,14,15] is a "perturbative-like" method which allows one to recover, by recursion, a space isomorphic to the cohomology space. First of all, an adjoint procedure is introduced into the game (just copying the Fock-like creation and destruction procedure [14]) by the formal replacement [15] of the formal derivatives with respect to the fields and their derivatives by formal multiplication by the same quantities, and vice versa. (A.29) So, if we enlarge the basis of the cohomology by introducing the non-local (in µ!) functions {ln λ, ln λ̄}, the δ-cohomology will be empty. On the other hand, in the Φ,Π uncharged space we have to verify the following: Theorem. The ghost sectors (Φ,Π charge sectors) of the δ-cohomology in the space of analytic functions of the fields, where the fields λ and λ̄ satisfy equation (1.5), completed with the terms {ln λ, ln λ̄} seen as independent fields, depend only on terms containing the underivated ρ multiplying zero-ghost-sector elements of the cohomology. The zero-ghost sector is, on the other hand, non-trivial only in the part which contains matter fields. Its elements contain no free z and z̄ indices, i.e. they are "scalar-like" quantities with respect to the "little" indices, but they can carry tensorial content with respect to the "big" indices Z and Z̄. A generic element of this space is an (h,h̄)-conformal quantity built from an analytic (polynomial) function f.
Study of perceived reasons for initiation and continuation of tobacco use among a rural population INTRODUCTION Tobacco consumption is a major public health issue globally, and the majority of the world's smokers (81%) live in low- and middle-income countries. Tobacco is the major risk factor for six leading causes of death, namely ischemic heart disease, cerebrovascular disease, tuberculosis, lower respiratory tract infections, chronic obstructive pulmonary disease, and cancers of the trachea, bronchus and lungs. [1] More than 5 million deaths are due to direct use of tobacco, and about 600,000 non-smokers die due to passive smoking. There are an estimated 12 million cases of preventable tobacco-related illness each year. [2] Most tobacco-related deaths occur in the 35-69 years age group, with an average loss of 20-25 years of life. It is estimated that the annual death toll may reach 8 million by the year 2030. [3] Tobacco use has a high impact on a growing economy and entails high expenditure on health. [4] There are more than 300 million smokers in India. [2] This includes more than 5 million child smokers, with 55,000 children taking up tobacco use every year. Tobacco use is an unhealthy habit, and it is essential to elicit the root cause: why do people initiate and continue tobacco consumption? The present study was therefore planned to throw light on the reasons for starting and continuing tobacco consumption among the rural people of this region. METHODS A community-based cross-sectional study was done in the rural field practice area of Kuppam, Chittoor District, Andhra Pradesh, from November 2012 to January 2014. A total of 1500 participants aged above 15 years were included after obtaining informed written consent. A pre-tested, semi-structured proforma was used to collect the data. A pilot study was undertaken among 30 subjects (aged 15 years and above) in a village; this helped to fine-tune the proforma. The finalized proforma was then administered to the study subjects. For the study, villages with a population of more than 1500 were noted. Three directions (north, south and east) were chosen randomly, and from each direction one such village was selected randomly for the study. A sample of 500 persons from each village was taken, so that a total sample of 1500 was achieved. By a systematic method, the households on the left-hand side of the villages were included in the study. House-to-house visits were made to contact the subjects.
After reaching the village, the first house on the left-hand side was visited, and subsequently the other houses were visited by following the left-hand principle until the target number of 500 persons was reached. In each household, all individuals aged 15 years and above who were willing to participate and were permanent residents of that village were selected for the study. Individuals below 15 years, and relatives or friends who were not residing in that area or who were not willing to give consent, were excluded. The WHO definition of current tobacco users was applied, that is, a person who gave a history of consumption of any tobacco product within the 30 days preceding the survey. Data were analyzed using Epi Info version 7, with proportions, percentages, ANOVA, the chi-square test and multivariate logistic regression. The results were discussed by comparison with similar studies collected in the review of literature, and a detailed report was prepared. RESULTS The total number of study subjects was 1500: 783 males (52.2%) and 717 females (47.8%). Most of them were in the age group of 20-29 years (32.3%), followed by 30-39 years (20.9%). The largest group, 605 (40.3%), belonged to nuclear families. Most were married (72%), and 780 (52.0%) were illiterate. The majority, 819 (54.6%), were agricultural laborers, and two-thirds belonged to class IV and class V socioeconomic status. The prevalence of tobacco consumption among the 1500 study subjects was 61.3% (919 persons). Among the 1496 Hindus the prevalence was 61.2%, and all 4 Muslims were tobacco consumers. Prevalence was highest among subjects who were separated/divorced/widowed (86.5%). Prevalence of tobacco consumption was higher among females (71.7%) than among males (52.2%), and this difference was statistically significant (Table 2; a worked sketch of this comparison is given below). Tobacco chewing was very common among women. The mean age at initiation of tobacco use was lower among females (17.6 years) than among males (21.6 years); the difference was statistically significant on applying the 't' test (Table 3). About 32.1% of subjects used chewing products less than 5 times per day, and 47.1% used them 10 times or more. Most subjects (54.4%) started tobacco consumption below the age of 20 years, followed by 28.9% of subjects in the age group of 20-29 years. Only 7.6% of subjects started to use tobacco after the age of 40 years (Table 4). Cultural factors In our study population, a few cultural practices were noticed. When a mother has delivered a baby, the relatives visit the house with taamboolam (betel leaves and areca nuts with tobacco), fruits, flowers and new dress materials. According to custom, the mother should consume taamboolam, and adolescent girls in the house are also given taamboolam. This may be a reason for the higher consumption of tobacco among females. Reasons for continuation of tobacco consumption The reasons given by the subjects for continuation of tobacco consumption were broadly similar to the reasons for initiation. The most common reason for continuation of tobacco consumption was relief of pain in any part of the body, head or teeth. Relief of tension was the next important reason. Other reasons included relief from the cold, getting extra energy, and keeping alert while working. DISCUSSION Tobacco consumption is a major public health issue and a major risk factor for six leading causes of death.
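As a worked illustration of the sex comparison reported in the Results, here is a minimal sketch in which the 2x2 cell counts are reconstructed from the published percentages; the reconstructed counts are assumptions, not the study's exact tabulation:

```python
# Minimal sketch of the sex-difference chi-square test reported in the
# Results. Cell counts are reconstructed from the published percentages
# (783 males, ~52.2% users; 717 females, ~71.7% users) and are approximate.
from scipy.stats import chi2_contingency

male_users,   male_nonusers   = 409, 374  # ~52.2% of 783 males
female_users, female_nonusers = 514, 203  # ~71.7% of 717 females

table = [[male_users,   male_nonusers],
         [female_users, female_nonusers]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.2e}")
```

With counts of this magnitude the test yields a very small p value, consistent with the statistically significant sex difference reported above.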
The younger generation is also getting addicted to it, increasing the economic burden. Moreover, it is preventable, so special attention should be given. If half of the smokers quit tobacco in the next twenty years, one third of tobacco deaths would be avoided. 4 To control the tobacco epidemic in India, the problem should be quantified and the various determinants of tobacco use should be identified. The present study was a community based study conducted among 1500 subjects aged 15 years and above in a rural area of Kuppam. In the present study, males were 52.2% and females were 47.8%. Most subjects (53.2%) were in the age group of 20-39 years, and most were illiterate (52.0%). Of the 1500 study subjects, 919 persons (61.3%) were consuming tobacco, that is, the prevalence of tobacco consumption; similar findings were reported by Sinha et al (63%) and Chandra (71%). 6,7 Tobacco chewing in our study was high among women (83.6%), which is similar to the study done at Bombay by Gupta et al. The prevalence of smoking in our study was about 31.8%, and similar findings were reported by Khokhar et al. 8,9 In our study it was found that tobacco consumption is directly related to increasing age. Similar findings were reported by Sinalkar et al. 10 In our study, 78.1% of tobacco users were illiterate, and similar findings were reported by Ansari et al. 11 Our study shows that 71.7% of agricultural labourers were tobacco consumers, comparable to the National Family Health Survey 2005-06, with similar findings by Rooban et al. 12 Tobacco usage was inversely related to socioeconomic status in our study, which was also found in the NFHS-3 reports. Thus, in our study tobacco use was significantly higher among the poor and less educated, both among men and women, which is similar to the study by Rani, Bonu, et al. 13 Our study shows that 54.4% initiated tobacco use below the age of 20, whereas 33.3% was reported by Sorensen, et al. 14 Studies done in Kolkata, Punjab and Gujarat found that subjects had started to consume tobacco by the age of 20 years, which is similar to our study findings. [15][16][17] In our study, the most common cause of tobacco consumption was peer pressure (50.4%), which is similar to a study in rural Wardha, where peer pressure was the commonest reason (47.3%). 18 Family plays a major role in the initiation of tobacco use. Tobacco use by the elders in the family increases the likelihood that a child begins smoking. The most common reasons for continuation of tobacco consumption included relief from pain (51.4%) and relief from tension (17.1%). A cross sectional study in Sikkim found that 50% of the users and 17.6% of the non-users had the wrong belief that there is a benefit from tobacco use by way of relief from stress, toothache, constipation, etc. 19 A study in Allahabad had also noted many reasons for continuation of tobacco use among the subjects, and the common reason was improvement of bowel movements (73.0%). 20 CONCLUSION The study found a high prevalence of tobacco use (61.3%). The present study found initiation of tobacco use before 20 years of age in most of the subjects. The common reasons for starting tobacco use found in this study were peer pressure and the influence of family members and relatives. It was found that there were certain beliefs and misconceptions that tobacco is helpful in relieving pain, tension, etc.
Recommendation Community based smoking cessation activities need to be conducted in this region to explain the adverse effects of tobacco consumption, as the prevalence is very high. Attention should be focused on younger age groups like school children and adolescents to control and prevent tobacco use in the community. Focused group discussions should be held in the target groups so that tobacco users may quit the habit and non-users do not take it up. Health education and behaviour change communication by medical and paramedical personnel should be used to dispel the misconceptions, with the help of local and community leaders, including celebrities, and all types of media.
2019-05-11T13:05:42.999Z
2017-09-22T00:00:00.000
{ "year": 2017, "sha1": "a3e9700032985b0c1323c5bfc42e5b2429de94df", "oa_license": null, "oa_url": "https://www.ijcmph.com/index.php/ijcmph/article/download/1798/1477", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9961a6291fd135575fde4e93ab7ed08f271518ad", "s2fieldsofstudy": [ "Sociology", "Medicine" ], "extfieldsofstudy": [ "Psychology" ] }
252505643
pes2o/s2orc
v3-fos-license
The Characteristics of Dissolved Organic Matter and Soil Microbial Communities in the Soils of Larix principis-rupprechtii Mayr. Plantations in the Qinling Mountains, China Soil microorganisms and dissolved organic matter (DOM) play vital roles in nutrient cycling and maintaining plant diversity. The aim of this study was to clarify the relationship between DOM component characteristics and microbial community structure in the soil of Larix principis-rupprechtii Mayr. plantations. We quantified the responses of the soil microbial and DOM characteristics to stand age in a plantation forest ecosystem using phospholipid fatty acid (PLFA) analyses, ultraviolet-visible spectroscopy, and fluorescence spectroscopy. Three humic-like components and a fulvic-like component were identified from the soil samples, and humic-like substances were the dominant component of the soil DOM of the stands of different ages. The fluorescence index showed that the sources of soil DOM in the stands of different ages throughout the growth stages may be mostly plant residues, with very little contribution from microbial sources. Furthermore, the results demonstrated that stand age and growth season had a significant effect on the contents of the soil PLFA biomarkers of L. principis-rupprechtii Mayr. Additionally, significantly higher contents of different species of soil PLFA biomarkers were observed in the young forest (17a) than in the sapling forest (7a) and half-mature forest (27a), suggesting that stand age differences in the quality and quantity of larch litter and soil physicochemical characteristics affect the microbial community structure. Redundancy analysis (RDA) showed that changes in the soil DOM quality and components that were driven by growth season and stand age were the major drivers of variations in the soil microbial community structure in the study region. Overall, the seasonal variations in DOM quality and components may contribute to the variability of soil microorganisms, and the soil microbial responses to tree age will depend upon the provisioning of these resources. Introduction Dissolved organic matter (DOM), a natural chemical substance in soil, accounts for only approximately 0.5-1% of the soil organic matter (SOM) [1]. DOM consists of various bioactive compounds that are easily utilized by soil microbes, but it also includes compounds that are difficult for microorganisms to degrade [2], encompassing protein-like, carbohydrate-like, polysaccharide-like, humic-like and fulvic acid-like compounds [3]. Soil DOM comes from multiple sources, such as plant residue, animal waste, root exudate, and soil organic matter decomposition [4]. Although soil DOM accounts for only a minor part of SOM, it is the most mobile and active component in the soil and is the direct source of carbon and potential substrates for soil microbes. Therefore, soil DOM plays an important ecological role in carbon cycling, as an energy source for soil microorganisms, and in the provision of plant-available nutrients [5]. Forest ecosystems are the largest carbon pool of terrestrial ecosystems and play an important role in maintaining the carbon balance [6]. Soil microbes are important parts of forest ecosystems and play an important role in SOM decomposition, nutrient cycling, and maintaining the structure and function of forest ecosystems [7]. Soil microorganisms are sensitive to changes in forest stand age [8], and forests of different ages may select for different soil microbial communities [9].
Differences in the composition and content of root secretions during forest development are an important factor altering the soil microbial community structure. Previous studies have shown that SOM is an important carbon source for soil microbes and is the dominant factor regulating the soil microbial community structure [10]. Furthermore, soil microbial metabolites and residues are major sources of soil DOM, and the composition and content of DOM greatly affect soil microbial activity [10,11]. As an important nutrient pool in the soil, DOM has a key influence on nutrient cycling in terrestrial ecosystems [12,13], and the formation, quality, and content of DOM in the soil are affected by many factors, including biological factors (microbiological activity), vegetation factors (vegetation type and stand age) and climatic factors (air temperature and hydrological conditions) [14,15]. Due to the sensitivity of soil DOM to changes in environmental factors, soil DOM characteristics are used as important indicators for evaluating soil quality [1]. The multifunctionality of DOM in maintaining soil functional sustainability and its ecological importance have increased scientific interest in studying the variation in DOM characteristics in response to different factors. Thus, information on the soil microbial responses to changes in soil DOM characteristics is important for understanding soil carbon cycling and nutrient cycling. The Qinling Mountains are located in the transitional zone between the subtropical and warm temperate zones of China and are the boundary between the northern and southern regions of China. The forest ecosystem of the Qinling Mountains is the ecological barrier of the northwest region of China and is also one of the main areas of ecological security in China. The Qinling Mountains are rich in natural forest resources, and the total forest area is 2.52 million hectares (the area of natural forest is 2.18 million hectares, and the plantation forest area is 0.34 million hectares). Over the past few decades, most of the natural forests in the Qinling Mountains have been logged and replaced by plantation forests [16]. Currently, in some areas of the Qinling Mountains (especially in the North Piedmont), plantation forest areas account for more than 50% of the total forest area, and Larix principis-rupprechtii Mayr. is the main tree species of plantation forests in this area. This plantation tree species is widely distributed in parts of northwestern China. However, larch plantation forests have exhibited significant declines in soil quality during growth, and the degradation of plantation forests in the Qinling Mountains has seriously affected their ecological function in recent years. Previous studies have indicated that soil microbes are sensitive to forest land-use changes and that the soil quality and fertility of plantation forests gradually decline with increasing forest age. A decline in soil quality will upset the dynamic balance of soil microbial communities, which will further aggravate soil degradation [17]. However, only a few studies have explored the relationship between the changes in soil microbes and the changes in soil DOM chemodiversity in plantation forest ecosystems. With the application of fluorescence excitation-emission matrix (EEM) spectroscopy to soil DOM analysis, we can quickly and accurately obtain substantial information on the fluorescence characteristics of soil DOM [18].
Although EEM spectroscopy cannot provide exact information about the chemical structure of DOM, this technique is suitable for studying the differences in soil DOM under different vegetation types [19]. Previous studies have indicated that DOM characteristics are site-specific and affected by the type of land use [20], and these studies have advanced our understanding of the soil biogeochemical cycle by evaluating the soil DOM characteristics. There is no clear information about how DOM characteristics vary with growth stage and stand age in a plantation forest ecosystem. Information on the influencing factors of DOM characteristics is critical for understanding soil carbon cycling and soil fertility in the Qinling Mountains. However, such information is very limited for soil DOM characteristics in the region. This limits our understanding of the carbon cycles and soil quality in the Qinling Mountains region because of potential differences in soil DOM characteristics resulting from differences in season and stand age. However, variations in DOM composition and content and their effects on the soil microbial community structure have not been described in larch plantation ecosystems with different stand ages. In this study, we investigated the relationship between the composition characteristics of DOM and the microbial community structure changes in the soils of L. principis-rupprechtii Mayr. using excitation-emission matrix fluorescence combined with parallel factor (EEM-PARAFAC) and phospholipid fatty acid (PLFA) analyses. The aims of this study are to: (1) characterize the composition and structure of soil DOM, (2) assess the dynamics of DOM components and the microbial community in the soil of L. principis-rupprechtii Mayr., (3) determine whether the soil microbial community structure correlates with the variations in the soil DOM components, and (4) determine whether the soil physicochemical properties and DOM characteristics correlate with the changes in the soil microbial community structure. Study Site The present study was conducted in a natural larch plantation area (34°02′18.1″ N, 107°20′51.1″ E) in Taibai County, Shaanxi Province, which is located in the northern foothills of the Qinling Mountains (Figure S1). The study area has a continental monsoonal climate, and the annual average precipitation is 1000 mm. The average annual temperature is 7.7 °C, with average temperatures of 19.0 °C in summer and −3.7 °C in winter. The average high temperatures from May to September 2021 were 20, 23, 25, 24 and 20 °C (monthly weather and the sampling time information are shown in Table S1). The study site lies at elevations of 1620-1700 m, and the soil layer is less than 60 cm thick and has been historically free from human disturbance. The soil type is a Luvisol in the FAO classification and is classified as brown soil [21]. Experimental Design At different locations within the study site, separate larch plantation forests of different ages were selected for sampling. In April 2021, three representative sampling sites with similar topographies, forest densities and ground cover plants were selected. The tree ages of these larch plantation forests were 7 years (7a, sapling forest), 17 years (17a, young forest) and 27 years (27a, half-mature forest), and the mean forest density of the larch plantation forests was 2500 trees·ha−1.
In this region, the germination period in larch plantation forests is in April, the fast-growth period is in July and August, and the late growth period is in September. In this study, soil sampling was conducted from the middle of May to September. To standardize soil sample collection each month, we chose each sampling day after there had been seven consecutive days without rainfall in the study region. The field experiment used a randomized design with three replicates in each stand age area, and three plots (20 × 20 m each) were established in each stand age area. Generally, most of the roots of L. principis-rupprechtii Mayr. are distributed 0-30 cm deep because it is a shallow-rooted tree species. Therefore, the soil samples were collected from the 0-30 cm soil depth. When sampling every month, we first removed the litter from the soil surface and used a soil auger (5 cm in diameter) to randomly collect soils from the same 0-30 cm layer. A multisampling method (10 sample points at each plot) was used to randomly collect soil samples (0-30 cm), three times from each stand age. The sampling points were located 30 cm from a tree trunk. To ensure the accuracy of the experiment, each sampling point was marked during sampling each month to avoid resampling from the same point. Soil samples from the same plot were mixed and combined into one mixed sample for each plot, and each mixed sample was divided into two parts: (1) The first part was air-dried at room temperature out of sunlight, and the dried samples were then ground and sieved through a mesh (2 mm). Then, the soil samples were ground into a fine powder, passed through a sieve (0.25 mm) and stored at 4 °C for chemical property analysis. (2) The second part of the samples was sieved (2 mm mesh), placed in sterile plastic bags, cryopreserved on dry ice, and shipped to the laboratory for PLFA analysis. Soil Physicochemical Properties and DOM Analysis Soil organic carbon (SOC) contents were analyzed by the K2Cr2O7-H2SO4 method [22]. Soil total nitrogen (TN) contents were measured by the semimicro-Kjeldahl method [23]. We used the water-soil oscillation method to obtain the soil DOM. For soil DOM extraction, 5 g of soil samples and 35 mL of deionized water were well mixed with a shaker (at a temperature of 60 °C) at 300 rpm for 0.5 h [1,24]. The soil-water mixture was centrifuged at 10,000 rpm for 7 min, then the supernatant was filtered using an acetate fiber membrane (0.45 µm), and the filtrate was stored at −20 °C for the analysis of dissolved organic carbon (DOC) content and fluorescence spectra. Generally, soil DOM is quantified on the basis of its DOC content [15]. In this study, the soil DOC content was analyzed using a TOC analyzer (Shimadzu, TOC-L, Japan). Analysis of the EEM fluorescence spectra was performed with a fluorescence spectrometer (Shimadzu, RF-6000, Japan), and the following parameters were used: for the light source, a xenon lamp and 700 V at room temperature; excitation wavelengths of 200-500 nm at a step length of 5 nm; emission wavelengths of 250-550 nm at a step length of 1 nm; and a scan speed of 6000 nm·min−1. The influences of inner filter effects (IFE) of all the EEMs were corrected by absorbance measurements, and the effects of Raman scattering were eliminated by subtracting the Milli-Q water blank. UV-Vis spectral parameters were measured with a 10 mm quartz cell at a 250 to 400 nm scanning wavelength by a UV-spectrophotometer (Shimadzu, UV-1780, Japan).
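The absorbance and EEM data collected this way feed directly into the spectral indices defined in the following paragraphs (SUVA254, FI, BIX, β:α and HIX), which are simple ratio or area computations. The sketch below illustrates one way such post-processing could be done in Python; the wavelength windows follow widely used literature definitions (e.g., FI as the 470/520 nm emission ratio at 370 nm excitation) and are assumptions here, since the paper does not spell out its exact formulas.

```python
import numpy as np

def optical_indices(absorbance, abs_wl, eem, ex_wl, em_wl, doc_mg_l, path_m=0.01):
    """Compute common DOM optical indices from UV-Vis and EEM data.

    absorbance : decadic absorbance values at the wavelengths abs_wl (nm)
    eem        : 2-D fluorescence matrix indexed as [excitation, emission]
    path_m     : cuvette path length in metres (10 mm cell -> 0.01 m)
    Wavelength windows follow common literature conventions (assumed here).
    """
    a = lambda wl: np.interp(wl, abs_wl, absorbance)
    f = lambda ex, em: eem[np.abs(ex_wl - ex).argmin(), np.abs(em_wl - em).argmin()]

    # SUVA254: absorbance at 254 nm per metre, normalized by DOC (L mg^-1 m^-1)
    suva254 = a(254) / path_m / doc_mg_l
    # FI: emission 470 nm over 520 nm at 370 nm excitation
    fi = f(370, 470) / f(370, 520)
    # BIX: emission 380 nm over 430 nm at 310 nm excitation
    bix = f(310, 380) / f(310, 430)
    # beta:alpha: emission 380 nm over the 420-435 nm maximum, 310 nm excitation
    band = (em_wl >= 420) & (em_wl <= 435)
    beta_alpha = f(310, 380) / eem[np.abs(ex_wl - 310).argmin(), band].max()
    # HIX: emission area 435-480 nm over area 300-345 nm at 254 nm excitation
    row = eem[np.abs(ex_wl - 254).argmin()]
    hix = row[(em_wl >= 435) & (em_wl <= 480)].sum() / row[(em_wl >= 300) & (em_wl <= 345)].sum()

    return {"SUVA254": suva254, "FI": fi, "BIX": bix, "beta:alpha": beta_alpha, "HIX": hix}
```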
In this study, we used Milli-Q water as the blank control. The UV-Vis spectral parameters specific UV-Vis absorbance at 254 nm (SUVA254) and slope ratio (SR) were selected to evaluate the aromaticity and molecular weight of soil DOM. The value of SUVA254 is positively correlated with the aromaticity of a soil DOM component. A high value of SUVA254 indicates that there are more benzene-like compounds in the component as well as more aromatic substances [25]. The SR value is negatively correlated with the molecular weight of a soil DOM component [26]. In addition, several important fluorescence spectral parameters (including the fluorescence index (FI), humification index (HIX), freshness index (β:α) and biological index (BIX)) were used. The FI is usually used to identify the origin of soil DOM. FI values ≤ 1.2 indicate DOM that originated from SOM and plant residues. FI values ≥ 1.8 indicate that the DOM originated from microbes, and FI values ranging from 1.2-1.8 indicate that the DOM originated from SOM, plant residues and microbes [27]. The HIX is positively correlated with the content of humus or the extent of humification of SOM and is closely related to the activity of soil microbes [24,27]. The BIX can indicate the source of DOM. BIX values ranging from 0.6-0.7 indicate that the DOM has only slight biological/microbial origins, values ranging from 0.7-0.8 indicate a transitional stage of biological/microbial sources, values ranging from 0.8-1 indicate a large proportion of biological/microbial sources, and values >1 indicate exclusively biological/microbial sources [28,29]. The β:α ratio represents the freshness of DOM; a high value of the β:α ratio indicates a high proportion of fresh DOM, and a change in the β:α ratio represents the amount of newly generated DOM [1]. PARAFAC Modeling Three-dimensional fluorescence spectral analysis was performed using the PARAFAC method. PARAFAC analysis was performed with DOM-Fluor v.1.7, a free toolbox for MATLAB 7.0 (MathWorks, Natick, MA, USA), to analyze the fluorescence characteristics of the soil DOM components. We used core consistency diagnostics and a split-half validation method to identify the DOM components [1], and the maximum peak intensity (Fmax) in Raman units (R.U.) was used to evaluate the relative content of each component of DOM. Statistical Analysis Prior to further analysis, the normality of the distribution of all data was tested using Kolmogorov-Smirnov analysis in SPSS version 23.0 (SPSS Inc., Chicago, IL, USA), and the data were log10-transformed to normalize their distribution. One-way analysis of variance (ANOVA) was used to analyze the significant differences in the contents of SOC, TN, DOC, fluorescent components, and fluorescence parameters among different growth stages and stand ages, using Tukey's multiple comparison post hoc tests. All data analyses were performed using SPSS 23.0. The figures were created using Sigma-Plot 14.0 and Origin 2021. Redundancy analysis (RDA) was performed using CANOCO 5.0. Seasonal Changes in the Soil Physicochemical Properties of Stands of Different Ages Significant seasonal variations in soil DOC and SOC contents were observed during the different months, whereas no significant variation in the soil total N contents and C:N ratio was observed during different seasons (Figure 1). Generally, DOC contents in the soils of stands of different ages significantly decreased with growth, and the highest soil DOC contents were observed in the early growth stage (Figure 1).
Furthermore, the soil of the sapling forest had a higher DOC content than that of the other larch stands (p < 0.05) (Figure 2). The SOC contents across larch stands peaked in August, but no obvious seasonal trend in SOC content was observed among the different larch stands (Figure 1). Overall, the soils of the young forest (17a) had higher DOC contents than those of the other larch stands, and the lowest SOC content was observed in the half-mature forest (27a) (Figure 2). In the fast-growing season (August), we observed higher soil TN contents and C:N ratios in the half-mature forest than in the sapling forest and young forest (Figure 1). However, the total N content and C:N ratio in soils of different stand ages did not show significant differences (p > 0.05) (Figure 2 and Table S2). Characteristics of the Different Components Identified via EEM-PARAFAC Analysis In this study, the soil DOM components from L. principis-rupprechtii Mayr. were decomposed into a four-component model. Table S3 shows that three humic-like structures (component 1, component 2 and component 3) and a fulvic-like structure (C4) were identified via EEM-PARAFAC analysis. The excitation wavelength (Ex) and emission wavelength (Em) loadings of the main peak locations and detailed information on the four components are summarized in Table S3. The proportions of these components reached their lowest values in July (17.22% and 18.07%, respectively), then increased and peaked in September (68.64% and 51.50%, respectively) (Figure 4 and Figure S2). However, no clear seasonal variation tendencies in the proportions of component 4 were observed in the sapling forest (7a) and young forest (17a) with the growth of the larch stands. Furthermore, obvious temporal changes were observed in the proportions of the four components. The Fmax values (fluorescence intensity at the peak maxima) of the different fluorescent components of DOM from the soil of L. principis-rupprechtii Mayr. are summarized in Figure 5 and Table S4. Variations in UV-Visible Absorbance and Fluorescence Spectra Indicators of DOM Generally, the SUVA254 values of all larch stands increased with growth and peaked in August. The young forest (17a) had the highest SUVA254 values among the larch stands of different ages, and the half-mature forest (27a) had the lowest SUVA254 values among all stands (Figure 6A). No significant seasonal trends in the SR values of the larch stands were observed. Generally, the young forest (17a) had the highest SR values, and the half-mature forest (27a) had the lowest SR values of all larch stands (Figure 6B). Significant seasonal changes in the FI values of the young forest (17a) were observed among the different growth seasons (p < 0.05), and the FI values decreased with growth. However, for the sapling forest (7a) and half-mature forest (27a), no clear seasonal trends with growth were observed (p > 0.05) (Figure 6C). Overall, the HIX values of all the stands in this study were significantly lower than two. The HIX values of all the stand ages showed a seasonal trend of increasing and then decreasing with growth, peaking in August and reaching the lowest values in September (Figure 6D). Generally, the HIX values of the young forest (17a) were higher than those of the other stand ages; however, no significant differences in HIX values were observed among the stands of different ages across growth seasons (except in May and August, p < 0.05). The β:α values of all larch stands showed a significant increasing trend with growth (p < 0.05) and peaked in September (Figure 6E). The BIX values of the young forest (17a) and the half-mature forest (27a) showed seasonal trends similar to those of the β:α values, increasing with growth and peaking in September (Figure 6F). Additionally, no significant differences in FI, β:α and BIX values were observed among the larch stands of different ages across growth seasons (p > 0.05). Seasonal Variations in Soil Microbial PLFAs in Stands of Different Ages Large variations in the PLFA biomarkers of different soil microorganisms were observed in the soils of all larch stands across the growth seasons (Figure 7). The species and contents of the PLFA biomarkers of bacteria, fungi, actinomycetes, and protozoans varied seasonally. Across the different growing seasons, the common PLFA biomarkers of all the larch stands were 14:0, i16:0, 17:1w9c, 18:1w7c, 18:2w6,9c, 18:3w6c and 20:5w3c. The soil of all the larch stands had higher contents of 14:0, i16:0, 18:1w7c, 18:2w6,9c and 18:3w6c across growth seasons, and the maximum concentrations of these PLFA biomarkers were observed in July.
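Deriving the group-level contents discussed next is essentially bookkeeping: each biomarker is assigned to a microbial group and the contents are summed. A minimal sketch is shown below; the marker-to-group assignments follow common literature conventions and are assumptions here, since the paper does not reproduce its full assignment table, and the sample values are purely illustrative.

```python
# Hedged sketch of PLFA group aggregation; the assignments below follow
# common conventions and are assumptions, not the paper's own table.
PLFA_GROUPS = {
    "14:0": "unspecific bacteria",    # straight-chain saturated
    "i16:0": "gram-positive (G+)",    # iso-branched
    "17:1w9c": "gram-negative (G-)",  # monounsaturated
    "18:1w7c": "gram-negative (G-)",
    "18:2w6,9c": "fungi",             # widely used fungal marker
    "18:3w6c": "fungi",
    "20:5w3c": "protozoa",            # polyunsaturated
}

def group_totals(sample):
    """Sum PLFA contents (e.g., nmol per g dry soil) by microbial group."""
    totals = {}
    for marker, content in sample.items():
        group = PLFA_GROUPS.get(marker, "unassigned")
        totals[group] = totals.get(group, 0.0) + content
    return totals

# Purely illustrative July values for the young forest (17a)
sample_17a_july = {"14:0": 4.2, "i16:0": 3.1, "18:1w7c": 6.5, "18:2w6,9c": 2.8}
print(group_totals(sample_17a_july))
```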
Furthermore, the content of PLFA biomarkers of different soil microorganisms in the young forest (17a) was higher than that in the sapling forest (7a) and half-mature forest (27a). The half-mature forest (27a) generally had the lowest content of PLFA biomarkers of different soil microorganisms among all the larch stands. Additionally, some PLFA biomarkers of soil microbes in all the larch stands were observed only in specific growth periods; for example, 10:0 2OH, a11:0, 12:0, 13:0, i14:0, 20:3w3c, 21:1w6c and 24:1w9c were observed only in August. In addition, the total abundance of different groups of PLFA biomarkers of soil microorganisms of all larch stands showed large seasonal variation (Figure 8). The total microbial abundance of different types of soil microbes across the larch stands increased with growth and peaked in July or August. Total microbial abundance (total PLFAs of different types of soil microbes) showed a distinct difference between different stand ages. Generally, the contents of PLFA biomarkers of different types of soil microbes were higher in the young forest (17a) than in the sapling forest (7a) and half-mature forest (27a) (except for the PLFA biomarkers of protozoa). For all larch stands, the highest contents of PLFA biomarkers of the unspecific bacteria, gram-positive bacteria (G+), gram-negative bacteria (G-), actinomycetes, fungi, and protozoa were observed in the fast-growth period (July or August). Furthermore, seasonal differences in PLFA biomarkers of the soil microbe community indicated that the highest proportions of PLFAs belonged to bacteria, of which gram-negative bacteria (G-) had the largest proportion, followed by the PLFA biomarkers of unspecific bacteria and gram-positive bacteria. Correlations between the Soil Physicochemical Factors, Characteristics of DOM and Soil Microbial Community Composition RDA suggested that changes in soil physicochemical factors and the characteristics of the soil DOM components played a key role in shaping the structure of the soil microbial communities in larch stands of different ages (Figure 9). In the present study, bacterial diversity remained steady over the different stand ages. For different stand ages, the factors that strongly correlated with soil microbial community characteristics were different; for example, the SUVA254 and HIX values of DOM and SOC contents were key factors influencing the soil microbial community characteristics of the sapling forest (7a), and the soil C:N ratio, HIX values of DOM, and component 1 and component 2 contents of soil DOM were major factors affecting the soil microbial community characteristics of the young forest (17a). However, for the half-mature forest (27a), the SOC, DOC and component 3 contents, and the β:α, SUVA254 and HIX values of DOM were major factors affecting the soil microbial community.
In particular, the HIX value of the DOM was the common factor affecting the structure and composition of the soil microbial community of all stand ages. Seasonal Variations in Soil Physicochemical Properties of Stands of Different Ages In this study, large ranges of SOC and DOC contents across different growing seasons were observed, and soil TN content and C:N were not affected by stand age and the growing seasons, indicating that growing season and stand age can significantly influence SOC turnover, which was consistent with findings in other reports [37][38][39]. Furthermore, this finding indicated that stand age plays a major role in forest ecosystem nutrient cycling and soil microbial community structure. Changes accompanying stand age and ambient temperature may alter the characteristics of the litter [40,41], affect the resource availability for soil microbes and the structure of the soil microbial community, and affect nutrient cycling and the turnover of SOM [18]. Therefore, the SOC contents across the larch stands peaked in August. In this study, the young forest (17a) had higher DOC contents in the soil than did the other stand ages, indicating that it had a better soil microbial community structure [1]. In forest ecosystems, litter decomposition by soil microorganisms is an important factor affecting the characteristics of soil C and N [24]. Generally, litter with a lower C:N ratio can be decomposed more quickly by soil microorganisms, resulting in a high rate of decomposition of SOM [42]. The optimal C:N ratio for litter to be decomposed by microorganisms is less than or equal to 25; however, the litter C:N of L. principis-rupprechtii Mayr. in this region was greater than 25 (>80) [43], making it difficult for soil microorganisms to decompose. Furthermore, the SOC and DOC contents decreased significantly with stand age, reflecting a purely consumption-based model of SOC and DOC change in this study. This result is opposite to the findings of [1], indicating that the consumption rate of SOC and DOC was higher than the input rate, which may have led to an imbalance in soil nutrient cycling and a consequent decrease in soil fertility in the study region. Furthermore, with soil DOC content as the indicator for evaluating soil DOM content, compared with other studies, we found that the DOC content in soils from different land-use types differed significantly. For example, the DOC contents in the soils from Eucalyptus urophylla plantation forests in Guangdong, China, and upland forests in Alaska, USA were 6.86 and 15.2 mg L−1, respectively (in this study, the DOC contents in the soils of larch stands of different ages were 40.3 to 95.93 mg L−1) [38,44]. However, the soil DOC content in soils from a Robinia pseudoacacia natural forest in Yan'an, China, was reported to be 201.25 mg kg−1 [1]. Significant differences in soil DOC content may be due to different tree types, soil types and soil-water contents [45,46]. To better understand the variation in the DOM characteristics caused by differences in environmental factors, we suggest that additional information on geography, environment, climatic factors, and geology should be considered when studying the biogeochemical cycle of DOM. Effects of Seasonal Variations on Soil DOM Characteristics of Stands of Different Ages Generally, DOC is used as an indicator in the quantitative evaluation of DOM content in soil [27].
In this study, significant differences were observed in the soil DOC content of larch stands of different ages, and the soil DOC increased and then decreased with stand age. Previous research has indicated that the activity of soil microorganisms can affect litter decomposition and then affect the content of DOC in soil [47]. A high DOC content in soils can enhance soil microbial activity and microbial abundance [10,14], which means that there were significant differences in the soil microbial activity among larch stands of different ages in this study. The results indicated that the soil microbial activity of the young forest (17a) was higher than that in the other stands. This was confirmed by the soil of the young forest (17a), which had the highest content of total PLFA among the different stand ages. Additionally, the SOC content showed a change trend with stand age similar to that of the DOC content in the soil, suggesting that SOC is a regulating factor of DOC content in soil [14]. In this study, three humic acid-like substances and one fulvic acid-like substance were identified from the soil samples. In particular, the main component of the soil DOM of different stand ages was humic acid-like substances, indicating that humic acid-like components were mainly derived from the decomposition of the litter of L. principis-rupprechtii Mayr. Differences in soil microbial activity and soil organic matter content led to the differences in the contents of humic acid-like components and fulvic acid-like components among different stand ages [10,48]. Although no differences in the DOM components were observed among all larch stands, the relative distribution of the four components exhibited obvious seasonal variations due to the seasonal variations in the quantity and consumption of DOM [49,50]. In the present study, the main components of soil DOM of all the larch stands were humic-like components, and the proportions of the fluorescence intensity of humic-like components were significantly higher than those of the fulvic-like component, indicating that the soil DOM was probably mainly derived from the decomposition of litter [51]. Previous research has suggested that the SUVA254 and SR values of soil DOM are significantly affected by soil nutrient characteristics [1]. Higher SOC and DOC contents could accelerate the degree of soil humification and increase the proportion of aromatic components and large molecular weight substances in soil DOM [14,38]. The SUVA254 and SR values of all the larch stands showed similar upward trends, indicating that the aromaticity and humification degree of the soil DOM increased with growth and peaked in the fast-growth stage, which was confirmed by the seasonal variations in SOC content and HIX indexes. Previous research suggests that litter decomposition by microorganisms is the main source of soil DOM in forest ecosystems, and the degree of humification and soil microbial abundance and activity are positively correlated [10]. Therefore, the young forest (17a) had the highest SUVA254 values of all the larch stands, indicating higher microbial activity in the soil [52]. Seasonal changes in the FI values of the differently aged L. principis-rupprechtii Mayr. stands indicated that in the early growth stage (May and June), the soil DOM was derived from mixed sources (as reflected by FI values ranging from 1.20-1.38 and BIX values < 1).
However, in the middle and later growth stages (July to September), the soil DOM in all the stands originated mainly from plant residues and SOM rather than from biological/microbial sources (as reflected by FI values for all stands < 1.2 and BIX < 1). Additionally, the BIX further showed that the microbially derived DOM contribution to the total DOM pool was limited in all growth stages relative to the contributions of other DOM sources (as reflected by BIX values for all stands < 1). The narrow range of BIX values in all the larch stands across different growth stages indicated that the soil DOM in our study region originated from a relatively uniform source. The higher HIX values in all the stands in May represented a higher degree of humification than that observed in the other growth stages. However, the HIX values of all the stands decreased with growth, indicating that the degree of humification also decreased with growth. This was due to the decrease in soil DOC content with growth, which limited the microbial activity in the soil [18]. Previous studies have shown that the degree of humification is negatively correlated with the proportion of fresh DOM [53]. Therefore, the highest β:α ratio at all the larch stands generally appeared in the late growth stage in the present study. Effects of Seasonal Variations on Microbial Community Structure In this study, seasonal variation and differences in stand age significantly affected the soil microbial PLFAs of the larch plantation forests. Previous studies have shown that, within a certain range, soil microbial activity is positively correlated with ambient temperature [41]. Generally, high ambient temperature results in high soil microbial activity. The biomass and activity of soil microbes are higher during the vigorous growing period, which is closely related to the increase in carbon supply due to plant root growth [54]. Suitable soil nutrients can obviously affect the abundance and structure of soil microorganisms; accompanied by the release of a large number of photosynthetic products into the soil, the activity of soil microorganisms during the fast-growth season is promoted [52]. Therefore, the contents of PLFAs of different species of soil microbes were higher in July or August (the fast-growing season) in this study. However, the actinomycetes content in the soil of all the larch stands peaked in July and then declined. This may be because the soil environment was more conducive to actinomycete growth in July, and the abundance of actinomycetes decreased due to the enhanced inhibition of actinomycetes by other microorganisms in the soil in August [55]. Generally, high-quality soils benefit from a stable soil microbial community structure due to the high abundance of soil microbes [56]. The variations in the soil microbial community structure reflect the adaptability of soil microorganisms to the soil environment. In this study, the PLFA contents of the different groups of soil microbes followed the trend bacteria > fungi > actinomycetes > protozoa across growth seasons. This finding showed that bacteria had strong adaptability to changes in soil nutrients and indicated that bacteria play a key role in soil nutrient cycling [57]. Compared with the PLFA biomarker contents of other soil microorganisms, gram-negative bacteria (G-) accounted for the largest proportion of total PLFAs across all the larch stands, which may be because gram-negative bacteria (G-) are better adapted to soil environmental changes [58].
In this study, higher abundances of different groups of PLFAs were observed in the soil of the young forest (17a), indicating that the habitat in the young forest (17a) is suitable for soil microbial growth. Additionally, the highest SOC content was observed in the young forest (17a), with the quantity of organic carbon inputs contributing to soil microbial growth [59]. Effect of Soil Environmental Factors on the Soil Microbial Community The stand age of L. principis-rupprechtii Mayr. greatly altered the structures of the soil microbial community and affected the soil DOM characteristics (including the availability and molecular characterization of DOM). The interactions of stand age with soil microbes are vital to soil C dynamics [60]. Overall, the relationship between the soil microbial community and DOM characteristics in the half-mature forest (27a) was notably stronger than that in the sapling forest (7a) and young forest (17a) in this study. The results indicated that stand age altered the soil DOM components and soil microbial community structure, further affecting SOM decomposition [11]. The RDA results indicated that the abundance of soil microbes was significantly linked to soil DOM characteristics in all the larch stands. These results suggest that factors affecting soil microbial communities during different growth periods and stand age heterogeneity should be considered to better understand the response of biogeochemical processes to variations in soil resources in future studies [61]. Across all the soil samples of different stand ages, HIX explained 27.5% (7a), 12.2% (17a) and 2.4% (27a) of the soil microbial PLFA variation. This finding further confirmed that stand age had a stronger influence on soil microbial communities by changing the availability of DOM. In addition, in the present study, the changes in soil DOM characteristics due to the difference in stand age were the most important factors explaining the observed variation in the soil bacterial community structure. Several factors were strongly correlated with the soil microbial community structure, including SUVA254, HIX, SOC, DOC, C:N, β:α and the DOM components. However, for stands of different ages, there were significantly different drivers of soil microbial community variation. This is due to the shifts in the soil physicochemical conditions and availability of DOM among stands of different ages, and these factors may be important in shaping the soil microbial community structure [11]. Conclusions In this study, we observed that the soil physicochemical factors, DOM characteristics and different groups of PLFA biomarker abundances of soil microbes in all the larch stands were closely related to growth seasons and stand ages. The characteristics of the fluorescence parameters indicated that humic-like components were the dominant components in the soil DOM of all stand ages and that soil DOM originated primarily from terrestrial plants (plant residues) and SOM rather than from microbes. Additionally, stand age had significant effects on the abundances of PLFA biomarkers of soil microbes. Generally, soil DOM characteristics explained the largest changes in the soil microbial community response to growth season and stand age in this study. These results could help clarify the dynamics of the soil microbial community structure and the biogeochemical cycle of DOM in plantation forest ecosystems.
Collectively, in the process of plantation forest management (such as soil nutrient management), the effect of the heterogeneity of the growth season and stand age on soil microbes must be considered in the assessment of plantation forest ecosystem functions, and different management methods should be adopted for different stand ages. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/su141911968/s1, Table S1: Variation in precipitation and mean monthly temperature. All data were collected from the China Meteorological Data Service Centre. Table S2: Summary of the soil physicochemical properties. Table S3: Four fluorescent components of DOM in the soil of L. principis-rupprechtii Mayr. identified in this study. Table S4: Summary of the different components (Fmax) identified in this study. Figure S1: Map of the study site. Figure S2: Seasonal changes in the proportions of four fluorescent components of the soil DOM from L. principis-rupprechtii Mayr. of different ages.
2022-09-25T15:16:08.509Z
2022-09-22T00:00:00.000
{ "year": 2022, "sha1": "7153f8ec1ee2a6b956077ae143f7610b12215074", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/14/19/11968/pdf?version=1663854802", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "694dc320eead9505b0ffd3be2f259a2050c5fef6", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Environmental Science" ], "extfieldsofstudy": [] }
46974925
pes2o/s2orc
v3-fos-license
FEM Analysis of Sezawa Mode SAW Sensor for VOC Based on CMOS Compatible AlN/SiO2/Si Multilayer Structure A Finite Element Method (FEM) simulation study is conducted, aiming to scrutinize the sensitivity of the Sezawa wave mode in a multilayer AlN/SiO2/Si Surface Acoustic Wave (SAW) sensor to low concentrations of Volatile Organic Compounds (VOCs), that is, trichloromethane, trichloroethylene, carbon tetrachloride and tetrachloroethene. A Complementary Metal-Oxide Semiconductor (CMOS) compatible AlN/SiO2/Si based multilayer SAW resonator structure is taken into account for this purpose. In this study, first, the influence of the AlN and SiO2 layers' thicknesses over the phase velocities and electromechanical coupling coefficients (k2) of two SAW modes (i.e., Rayleigh and Sezawa) is analyzed, and the optimal thicknesses of the AlN and SiO2 layers are chosen for the best propagation characteristics. Next, the study is further extended to analyze the mass loading effect on the resonance frequencies of the SAW modes by coating a thin Polyisobutylene (PIB) polymer film over the AlN surface. Finally, the sensitivity of the two SAW modes is examined for VOCs. This study concluded that the sensitivity of the Sezawa wave mode for 1 ppm of the selected volatile organic gases is twice that of the Rayleigh wave mode. Introduction The monitoring of volatile organic compounds (VOCs) is very crucial in various application fields like industrial safety, fire detection, indoor air quality and health monitoring [1,2]. VOCs are hazardous air pollutants because of their toxic characteristics [3]. Being exposed to VOCs for a short period of time can badly affect the human nervous system, with symptoms like nausea, headaches, visual illnesses, allergies and respiratory tract itching. It can cause permanent damage if one remains exposed for a considerably long time. Thus, the monitoring and detection of VOCs at an early stage is essential. VOCs evaporate even at room temperature [3]. The sources of emission of VOCs can be fuel, vehicle smoke, industrial sources and paints/cleaning reagents. The existing monitoring systems for VOCs are usually based on infrared spectroscopy, gas chromatography and mass spectrometry. Although these approaches provide good sensitivity and selectivity, they are not very suitable for real time monitoring [3], as these technologies comprise sensitive electronic apparatus and need carrier gas flow throughout the operation. Conversely, Surface Acoustic Wave (SAW) sensors, which have received great attention of late, are quite suitable for real time monitoring. SAW sensors are popular because of their real time monitoring capability. This work intends to analyze the Sezawa wave mode, resulting from the SiO2/Si structure (slow on fast) in an AlN/SiO2/Si multilayer SAW, for gas sensing application. Towards that, we first study the effect of the normalized AlN and SiO2 thin film thicknesses on the proposed SAW device's propagation properties (i.e., SAW velocity and electromechanical coupling coefficient) without a sensing layer and validate our results with published results. In the second part, we present the mass loading effect of the polymer film on an AlN/SiO2/Si multilayer SAW device and present a comparative study of both surface modes (Rayleigh and Sezawa) in terms of gas sensitivity. A. Geometry of the Problem In this work, FEM simulation of a layered SAW device is performed by using COMSOL Multiphysics. The FEM study for different piezoelectric materials has been used, validated and reported by researchers [7,24,25].
In the first phase of our study, a 2D FEM model of the AlN/SiO2/Si SAW resonator structure without a sensing layer is investigated. The purpose of using 2D modelling is to reduce computational complexity. Moreover, a plane strain assumption is used for the solid mechanics; consequently, the out-of-plane strain component is zero. The variation in the out-of-plane direction can be considered minimal if the acoustic waves are generated in the plane of the model. The dimensions of the SAW device used in the simulation are summarized in Table 1, and the geometry of the layered SAW resonator is shown in Figure 1. The IDTs are periodic in nature; therefore, one period of the electrodes is enough to model the whole resonator. For this purpose, a unit cell of 1λ is considered for the SAW resonator, as shown in Figure 1. The depth of the unit cell is chosen as 10λ, as the mechanical displacements of the Rayleigh and Sezawa wave modes are mostly confined near the surface and nearly die out at the lower boundary. The metallization ratio is selected as 50% (2a/λ) and the relative thickness of the Aluminum (Al) electrode (b/λ) is chosen as 2.5%, where a and b are the electrodes' width and height, respectively. The material properties of Al are taken from the built-in COMSOL library, that is, a density of 2700 kg/m3, Young's modulus of 70 GPa and Poisson's ratio of 0.33. The mechanical boundaries of the top and sides of both electrodes are free. The left electrode's boundaries are set to electrical ground, and the right electrode's boundaries are set to a floating potential with zero surface charge accumulation. The edge effects of the electrode can be ignored, as the length of the electrode is much greater than its width. After performing a mesh convergence study, an extremely fine mesh (i.e., the maximum number of automatically generated physics-defined triangular elements) was chosen for all the FEM simulations to get more accurate results.
The boundary conditions used in the simulation are summarized in Table 2 and the material constants used in the simulation are summarized in Table 3. Primarily, the simulations for the eigenfrequency analysis of the layered SAW resonator are performed to analyze the AlN/SiO2/Si acoustic velocity and electromechanical coupling coefficient (k2) for the Rayleigh and Sezawa modes as a function of the normalized AlN and SiO2 thicknesses. The SiO2 layer provides electrical isolation between AlN and Si, CMOS compatibility and the fast-slow-fast structure. By varying the AlN and SiO2 film thicknesses, their effect on the resonance frequency of the SAW resonator is assessed. In general, the acoustic wave velocity is calculated by v = fo·λ (1), where fo is the center frequency and λ is the wavelength of the acoustic wave. In Eigen mode, the value of v is calculated by v = (fres. + fant.)·p (2), where fres. and fant. are the resonant and anti-resonant frequencies, respectively, which are obtained from the eigenfrequency simulation analysis, and p is the pitch (electrode width + spacing between electrodes). In our simulation, the value p = 2 µm (λ/2) is used throughout all simulations. The value of k2 is calculated by [17] k2 = 2(vo − vm)/vo (3), where vo is the free surface velocity with an electrically free boundary condition and vm is the phase velocity with an electrically shorted boundary condition. The calculated acoustic wave velocity and the electromechanical coupling coefficient as a function of the normalized thickness are presented in the next section. B. Gas Sensing Phenomena and Equations The gas sensing phenomenon is simple: when a PIB coated SAW gas sensor is exposed to different gas concentrations, gas molecules are physically adsorbed at the PIB layer. There are two types of adsorption, that is, chemical adsorption and physical adsorption. Chemical adsorption is irreversible, whereas physical adsorption makes the process reversible, and desorption occurs when the gas leaves. Thus, physical adsorption based sensors exhibit good repeatability [26]. The rate of diffusion of the adsorbed gas in the sensing layer defines the response and recovery times. Generally, a thin sensing layer takes less recovery time, as the diffused gas molecules rapidly leave the surface upon removing the gas; thus the sensor quickly reaches its equilibrium state [27]. The sensing layer with adsorbed molecules results in an increase of the net mass loading over the sensor. The partial density of the adsorbed gas in the thin PIB layer can be calculated as [10] ρads = K·M·c (4), where M is the molar mass of the gas, K is the PIB/air partition coefficient of the gas and c is the concentration of the gas in air. The gas concentration in air can be calculated as [10] c = co·10^−6·P/(R·T) (5), where co is the concentration of gas in parts per million (ppm), air pressure is denoted by P, the gas constant by R and the air temperature by T. Other than density, all effects are neglected. The adsorption of gas is described as a minor rise in the overall density of the PIB film, which can be expressed as ρPIB(total) = ρPIB + ρads (6). The above equation gives the total density of PIB after gas adsorption. The resonance frequencies of both surface modes with different heights of PIB film are recorded. SAW Propagation Analysis Initially, in order to validate our simulation method for the AlN/SiO2/Si (fast-slow-fast) structure, we first used the AlN/Diamond (slow-fast) structure, which was presented in Reference [19]. In that work, the SAW propagation characteristics were theoretically calculated by PC acoustic wave software from McGill University.
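To make the mass-loading arithmetic concrete, the short sketch below evaluates Equations (4)-(6) for a hypothetical 1 ppm exposure to trichloromethane. The partition coefficient and PIB density used here are assumptions for illustration (values of this kind appear in the COMSOL example cited as Reference [10]), not results of this paper.

```python
# Hedged evaluation of Eqs. (4)-(6); K and rho_pib are illustrative
# assumptions, not values reported in this paper.
R = 8.314          # gas constant, J/(mol*K)
P = 101325.0       # air pressure, Pa
T = 298.15         # air temperature, K

M = 0.11938        # molar mass of trichloromethane, kg/mol
K = 10 ** 1.4821   # PIB/air partition coefficient (assumed)
c0_ppm = 1.0       # gas concentration in air, ppm

c = c0_ppm * 1e-6 * P / (R * T)   # Eq. (5): concentration in air, mol/m^3
rho_ads = K * M * c               # Eq. (4): partial density of adsorbed gas, kg/m^3

rho_pib = 918.0                   # assumed density of PIB, kg/m^3
rho_total = rho_pib + rho_ads     # Eq. (6): PIB density after adsorption

print(f"c = {c:.3e} mol/m^3, rho_ads = {rho_ads:.3e} kg/m^3, "
      f"rho_total = {rho_total:.6f} kg/m^3")
```

Under these assumed values, the added density at 1 ppm is only of order 10^-4 kg/m^3, which is why the shift of the resonance frequency, rather than any static property, serves as the practical readout.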
SAW Propagation Analysis

Initially, in order to validate our simulation method for the AlN/SiO2/Si (fast-slow-fast) structure, we first used the AlN/Diamond (slow-fast) structure presented in Reference [19]. In that work, the SAW propagation characteristics were theoretically calculated by the PC acoustic wave software from McGill University, which is based on a transfer matrix method for calculating SAW propagation in multilayer structures. We extracted their numerical data by using a semi-automated open-source plot digitizer tool, WebPlotDigitizer v. 3.12, which has been used in many published works. In our study, we performed an FEM simulation analysis of the AlN/Diamond structure by using the unit cell as in Figure 1 (but with two layers, i.e., AlN/Diamond) and the boundary conditions as in Table 2. The same material constants were used as described in Reference [19]. The SAW propagation properties are calculated by (1), (2) and (3). The simulated results are shown in Figure 2 and are quite close to those of Reference [19]. The minor error can be attributed to the difference in the methods used: Reference [19] relied on a transfer matrix method, which approximates the behavior of interfacial layers by assuming several thin layers, while this paper uses the FEM. In order to justify the sufficiency of the mesh density, a mesh convergence study was also conducted until the values of the SAW velocity became constant. The mesh profile for a specific thickness of the AlN layer (i.e., t_AlN/λ = 0.4) is summarized in Figure 3. As shown in Figure 4, the SAW velocity becomes constant at 6951 m/s, the value that has been used in this work. It may be noted that the velocity of 7056.5 m/s in Reference [19] was obtained at a much coarser mesh. In order to obtain accurate results for our study, the maximum number of mesh elements was chosen (i.e., 120,503).
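The convergence criterion described above can be summarized as a small sketch: refine the mesh until successive SAW-velocity values agree within a tolerance. The coarser (elements, velocity) pairs below are illustrative placeholders; only the final pair (120,503 elements, 6951 m/s) comes from the text.

# (elements, velocity in m/s); intermediate rows are hypothetical
runs = [(8_000, 7021.0), (30_000, 6968.0), (75_000, 6953.0), (120_503, 6951.0)]

def converged(prev_v, curr_v, tol=5.0):
    """Stop refining once the velocity changes by less than tol (m/s)."""
    return abs(curr_v - prev_v) < tol

for (n0, v0), (n1, v1) in zip(runs, runs[1:]):
    if converged(v0, v1):
        print(f"converged at {n1} elements: v = {v1} m/s")
        break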
The next simulation is performed to determine the velocity and electromechanical coupling coefficient (k²) of the AlN/SiO2/Si structure for the modes polarized in the sagittal plane (Rayleigh type). The typical Sezawa wave mode is generated in a slow-fast structure. On the same principle, in the AlN/SiO2/Si (fast-slow-fast) structure, the Sezawa mode is generated by the SiO2/Si (slow on fast) layers while the piezoelectric AlN layer generates the acoustic waves. The simulation results for the propagation characteristics in the AlN/SiO2/Si structure are presented in Figure 5. The phase velocity of the Rayleigh mode increases with increasing normalized AlN film thickness (t_AlN/λ), while that of the Sezawa mode remains nearly constant. In this simulation, the SiO2 thickness is kept constant.
In Figure 5a, the phase velocity of the Rayleigh mode changes from 4181 m/s to 5537 m/s when 0.01 < t_AlN/λ < 2 and t_SiO2/λ is 0.25. For 0.01 < t_AlN/λ < 0.1, the acoustic velocity of the Rayleigh mode decreases from 4181 m/s to 3941 m/s, which suggests that initially the acoustic wave is confined largely in SiO2 and also, to some degree, in Si, as shown in Figure 6a. Therefore, the net velocity is intermediate between the SAW velocities of SiO2 (3750 m/s [28]) and Si (5000 m/s [28]). With a further increase in t_AlN/λ, the acoustic velocity gradually increases as the acoustic wave becomes confined more in the AlN layer and in SiO2 and no longer in Si, as shown in Figure 6b. It keeps doing so until the whole acoustic wave is confined only in the AlN layer (as in Figure 6c), reaching a velocity of 5539 m/s at t_AlN/λ = 2, close to the theoretical acoustic velocity of AlN (5600 m/s [28]).
The acoustic velocity of the Sezawa mode remains unchanged for 0.01 < t_AlN/λ < 2. This is because the Sezawa mode depends on the slow-on-fast structure, that is, SiO2/Si, which is constant in this case while the AlN thickness varies. In Figure 5b, for the Rayleigh wave, k² first increases with an increase in t_AlN/λ, reaches its maximum value of 0.55% when t_AlN/λ is 0.5, and then starts reducing with a further increase of t_AlN/λ.
In the Sezawa wave mode, k² remains unchanged except for a small peak at t_AlN/λ = 0.875, as t_SiO2/λ = 0.25 is not sufficient to generate the Sezawa mode. Next, the AlN layer thickness is kept constant at t_AlN/λ = 0.5 and the effect of varying t_SiO2/λ on the SAW velocity is analyzed, as shown in Figure 5c. It is clear that the Sezawa mode exhibits a higher acoustic velocity than the Rayleigh mode. For both wave modes, the acoustic wave velocity reduces with increasing SiO2 layer thickness, as the acoustic energy becomes confined more in the SiO2 layer, which has the lowest SAW velocity in the proposed structure. In Figure 5d, for the Rayleigh wave, k² first increases with an increase of t_SiO2/λ, reaches its maximum value of 0.55% when t_SiO2/λ is 0.375, and then starts reducing with a further increase of t_SiO2/λ. In the Sezawa wave mode, k² initially increases with an increase of t_SiO2/λ, reaches its maximum value of 0.44% at t_SiO2/λ = 0.75, and reduces upon a further increase of t_SiO2/λ. In our later study of mass loading sensitivity, we used the peak value of k² for each mode, which are the optimal points for both modes.

The displacement profiles of the device are summarized in Figure 7. The mode shapes of the displacement profile are helpful in recognizing the Rayleigh and Sezawa wave modes. The results in Figure 7 are recorded at the resonance and anti-resonance modes of the eigenfrequency analysis for t_AlN = 2 µm and t_SiO2 = 2.8 µm. The resonance (Figure 7a) and anti-resonance (Figure 7b) eigenfrequencies of the Rayleigh mode were observed at 1.167 GHz and 1.172 GHz, respectively. Similarly, for the Sezawa wave mode, the eigenfrequencies of the resonance (Figure 7c) and anti-resonance (Figure 7d) modes are recorded at 1.2 GHz and 1.214 GHz, respectively.
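As a quick consistency check (ours, not from the paper), applying Eq. (2) with p = 2 µm to the eigenfrequency pairs just reported reproduces the expected ordering of the two modes:

p = 2e-6  # pitch, m
modes = {"Rayleigh": (1.167e9, 1.172e9), "Sezawa": (1.200e9, 1.214e9)}

for name, (f_res, f_ant) in modes.items():
    v = (f_res + f_ant) * p  # Eq. (2)
    print(f"{name}: v = {v:.0f} m/s")

# Rayleigh -> 4678 m/s, Sezawa -> 4828 m/s: the Sezawa mode is indeed faster.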
Mass Loading Analysis

In the next stage of the study, a thin film of PIB is coated over the surface of the AlN film and the resonance frequencies of both modes with different thicknesses of the PIB film are recorded. PIB is preferred as a sensing film due to its low crystallinity, high permeability, low density and good adhesion properties [3,29]. Moreover, in gas sensing applications, PIB has proven to be more sensitive than other polymers [4]. First, the mass loading sensitivity analysis of the SAW device is performed without exposing the PIB to any gas. For this purpose, a thin PIB film is placed over the entire surface of the SAW resonator (as shown in Figure 1). The PIB parameters used in the simulation are the same as in Reference [12], that is, a density of 918 kg/m^3, a Poisson's ratio of 0.48, a Young's modulus of 10 GPa and a relative permittivity of 2.2. The thickness of the PIB film (t_PIB) over the SAW resonator's surface is varied from 110 nm to 150 nm in steps of 10 nm. The lower limit of t_PIB is chosen as 110 nm, which is in accordance with the electrode height (100 nm in this case), and the upper limit of t_PIB is chosen as 150 nm, as attenuation occurs with a further increase in t_PIB. To study the implications of t_PIB on the SAW velocity, different thicknesses of PIB are considered and the results are shown in Figure 8. It is clear from Figure 8 that mass loading on the surface of AlN/SiO2/Si decreases the acoustic velocity of both SAW modes and that the Sezawa mode is more sensitive to surface mass loading than the Rayleigh mode. The effects of t_SiO2/λ and t_AlN/λ variation on the resonant frequency shifts of the Rayleigh (Δf_R) and Sezawa (Δf_S) modes are summarized in Table 4. From this analysis, it is clear that at t_SiO2/λ = 0.5 the Rayleigh mode is almost 2 times more sensitive than the Sezawa mode, but at t_SiO2/λ = 0.75 the Sezawa mode is more than 3 times more sensitive than the Rayleigh mode. It is concluded that an optimal value of sensitivity can be obtained by carefully selecting the t_SiO2/λ and t_AlN/λ values.
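For intuition about the swept film thicknesses, the sketch below (our illustration) converts each t_PIB value into the areal mass density it adds to the resonator surface, using the quoted PIB density of 918 kg/m^3.

rho_pib = 918.0  # kg/m^3, from Reference [12]

for t_nm in range(110, 151, 10):              # t_PIB sweep: 110-150 nm
    sigma = rho_pib * t_nm * 1e-9             # areal mass density, kg/m^2
    print(f"t_PIB = {t_nm} nm -> {sigma * 1e6:.0f} mg/m^2")

The 110 nm film thus adds roughly 101 mg/m^2, and each 10 nm step adds about 9 mg/m^2 more.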
Gas Sensitivity Analysis

The study is extended to analyze the sensitivity of both surface modes to organic gases. For this purpose, we selected t_AlN/λ = 0.5, t_SiO2/λ = 0.5 for the Rayleigh mode and t_AlN/λ = 0.5, t_SiO2/λ = 0.75 for the Sezawa mode, as these appear to be the best choices for optimum sensitivity of the Rayleigh and Sezawa modes, respectively. The PIB thickness is chosen as 110 nm, to utilize minimum mass loading and fast equilibration of the sensor. The simulation is performed for selected organic gases, that is, trichloromethane, trichloroethylene, carbon tetrachloride and tetrachloroethene. The PIB/air partition coefficient (K), molar mass of the gas (M) and partial densities (ρ_gas,PIB) of the adsorbed gases are shown in Table 5. The measurement is performed for gas concentrations in the range of 1 ppm to 10 ppm. This ppm range is selected because most organic gases become hazardous in the range of a few ppm [30]. The results of gas sensing are summarized in Figure 9. The Rayleigh mode Δf_R/ppm and Sezawa mode Δf_S/ppm for trichloromethane, carbon tetrachloride, trichloroethylene and tetrachloroethene are summarized in Table 6. It can be observed that the sensitivity of the Sezawa mode for 1 ppm of volatile organic gases is 2 times greater than that of the Rayleigh wave mode, so the Sezawa mode SAW sensor proves more sensitive than the Rayleigh mode SAW sensor. The shifts in resonant frequencies are easily detectable with existing circuit topologies; for example, the authors of Reference [31] presented a circuit topology that detects frequency shifts with a resolution of 6.2 mHz.

Figure 9. Plot of resonance frequency shift versus gas concentration in ppm.
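The per-gas loading behind Figure 9 follows directly from Eqs. (4)-(6). The sketch below walks the 1-10 ppm range for one gas; the partition coefficient K is a hypothetical placeholder (the actual values are in Table 5), while M = 119.4 g/mol is the molar mass of trichloromethane and the ambient conditions are assumed to be standard.

R, P, T = 8.314, 101.325e3, 298.15   # gas constant, air pressure (Pa), temperature (K)
K = 10 ** 2.5                        # hypothetical PIB/air partition coefficient
M = 0.1194                           # kg/mol, trichloromethane
rho_pib = 918.0                      # kg/m^3

for c0 in range(1, 11):              # concentration in ppm
    c = c0 * 1e-6 * P / (R * T)      # Eq. (5): mol/m^3 of gas in air
    rho_gas = K * M * c              # Eq. (4): density added to the PIB film
    rho_total = rho_pib + rho_gas    # Eq. (6): total PIB density after adsorption
    print(f"{c0:2d} ppm: rho_gas = {rho_gas * 1e3:.2f} g/m^3")

Even at 10 ppm the added density is a few g/m^3 against a 918 kg/m^3 film, which is why the adsorption is treated above as a minor perturbation of the PIB density.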
The achieved sensitivity of the Rayleigh mode in the CMOS-compatible AlN/SiO2/Si-based SAW sensor for VOCs is in good agreement with the sensitivity of the Rayleigh mode ZnO/SiO2/Si-based SAW sensor in [32]. There, a ZnO/SiO2/Si-based SAW sensor was designed to detect VOCs, with polyepichlorohydrin (PECH) used as the sensing film and the toluene VOC targeted for analysis. The achieved sensitivity of that sensor is ~2 Hz/ppm with the ZnO film prepared at a 10% O2 concentration. However, there is no evidence of a Sezawa mode ZnO/SiO2/Si-based SAW gas sensor in the literature. The results illustrate that the Sezawa mode in the AlN/SiO2/Si multilayer SAW structure is a good candidate for highly sensitive SAW gas sensor applications. In our targeted application scenario of indoor air quality measurement, we did not require selectivity to a specific gas. In scenarios where selectivity to a gas is required against a background of interfering gases, a number of methods are possible, summarized in Reference [1], as follows: (i) using a multi-sensor array and pattern recognition; (ii) using an analytical tool such as a gas chromatography (GC) tube to separate the various gases; and (iii) using a dynamic operation such as temperature cycling. Our proposed methodology will be beneficial in attaining a healthier and safer environment.

Conclusions

This study analyzes the Sezawa wave mode propagation characteristics in the AlN/SiO2/Si structure and its potential for use in gas sensing applications. It is observed that not only does the Sezawa mode exist in an AlN/SiO2/Si structure, but it also exhibits a high SAW velocity with a moderate electromechanical coupling coefficient compared to the Rayleigh mode. The selection of the AlN and SiO2 layer thicknesses is very important for obtaining optimum sensor performance. Moreover, PIB mass loading and its sensitivity towards VOCs are analyzed, and it is found that the Sezawa mode is more sensitive to VOCs than the Rayleigh mode. The sensitivity of the Sezawa wave mode to VOCs is shown to be twice that of the Rayleigh wave mode.
2018-06-21T14:10:36.024Z
2018-05-24T00:00:00.000
{ "year": 2018, "sha1": "9609dbc5db33a71e8450146b73bab672f74f98aa", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/18/6/1687/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9609dbc5db33a71e8450146b73bab672f74f98aa", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Computer Science", "Medicine" ] }
11757192
pes2o/s2orc
v3-fos-license
tRNA/mRNA Mimicry by tmRNA and SmpB in Trans-Translation

Since accurate translation from mRNA to protein is critical to survival, cells have developed translational quality control systems. Bacterial ribosomes stalled on truncated mRNA are rescued by a system involving tmRNA and SmpB referred to as trans-translation. Here, we review current understanding of the mechanism of trans-translation. Based on results obtained by using directed hydroxyl radical probing, we propose a new type of molecular mimicry during trans-translation. Besides such chemical approaches, biochemical and cryo-EM studies have revealed the structural and functional aspects of multiple stages of trans-translation. These intensive works provide a basis for studying the dynamics of tmRNA/SmpB in the ribosome.

Introduction

Translation from the genetic information contained in mRNA to the amino acid sequence of a protein is performed on the ribosome, a large ribonucleoprotein complex composed of three RNA molecules and over 50 proteins. The ribosome is a molecular machine that catalyzes the synthesis of a polypeptide from its substrate, aminoacyl-tRNA. Ribosomes that translate a problematic mRNA, such as one lacking a stop codon, can stall at its 3′ end and produce an incomplete, potentially deleterious protein. Trans-translation is a highly sophisticated system in bacteria that recycles ribosomes stalled on defective mRNAs and adds a short tag-peptide to the C-terminus of the nascent polypeptide as a degradation signal [1-4] (Figure 1). Thus, the tagged polypeptide from truncated mRNA is preferentially degraded by cellular proteases including ClpXP, ClpAP, Lon, FtsH and Tsp [1,5-7], and the truncated mRNA is released from the stalled ribosomes to be degraded by RNases [8]. The process of trans-translation is facilitated by transfer-messenger RNA (tmRNA, also known as 10Sa RNA or SsrA RNA), which is a unique hybrid molecule that functions as both tRNA and mRNA (Figure 2). It comprises two functional domains, the tRNA domain partially mimicking tRNA [9] and the mRNA domain, which includes the coding region for the tag-peptide, surrounded by four pseudoknot structures [10-14]. As predicted from the tRNA-like secondary structure, the 3′ end of tmRNA is aminoacylated by alanyl-tRNA synthetase (AlaRS) like that of canonical tRNA [15,16]. The function as tRNA is a prerequisite for the function as mRNA, indicating the importance of the elaborate interplay of the two functions [2]. Thus, "trans-translation" has been proposed: Ala-tmRNA somehow enters the stalled ribosome, allowing translation to resume by switching from the original mRNA to the tag-encoding region on tmRNA. Various questions about the molecular mechanism of this process have been raised. How does tmRNA enter the stalled ribosome in the absence of a codon-anticodon interaction? How is tmRNA switched from the original mRNA in the ribosome? How is the resume codon on tmRNA for the tag-peptide determined? How does tmRNA, 4- or 5-fold larger than tRNA, work in the narrow space in the ribosome? Here, we review recent progress in our understanding of the molecular mechanism of trans-translation facilitated by tmRNA and SmpB, which is being revealed by various chemical approaches such as directed hydroxyl radical probing and chemical modification as well as other biochemical and structural studies.
In Vitro Trans-Translation System

A cell-free trans-translation system coupled with poly(U)-dependent polyphenylalanine synthesis was developed using Escherichia coli crude cell extracts [2]. Later, several trans-translation systems were developed using purified factors from E. coli [31,34,35] or from Thermus thermophilus [25]. These systems have revealed that EF-Tu and SmpB, in addition to the stalled ribosome and Ala-tmRNA, are essential and sufficient for the first few steps of trans-translation, including the binding of Ala-tmRNA to the ribosome, peptidyl transfer from peptidyl-tRNA to Ala-tmRNA, and decoding of the first codon on tmRNA for the tag-peptide. Besides, these systems have also provided a basis for investigating the molecular mechanism of trans-translation by chemical approaches.

Molecular Mimicries of tRNA and mRNA Revealed by Directed Hydroxyl Radical Probing

Ivanova et al. [36] performed chemical probing to analyze the interaction between SmpB and the ribosome. Bases of rRNA are protected from chemical modification with dimethylsulfate or kethoxal by SmpB, indicating that there are two SmpB-binding sites on the ribosome; one is around the P-site of the small ribosomal subunit and the other is under the L7/L12 stalk of the large ribosomal subunit. The capacity of two SmpB molecules to bind to a ribosome is in agreement with results of other biochemical studies [37,38]. Gutmann et al. [29] reported a crystal structure of Aquifex aeolicus SmpB in complex with the tmRNA fragment corresponding to the TLD, which confirmed results of earlier biochemical studies showing that the TLD is the crucial binding region of SmpB [23]. It also suggested that SmpB orients toward the decoding center of the small ribosomal subunit and that SmpB structurally mimics the anticodon arm. This is in agreement with cryo-EM maps of the accommodated-state complex of ribosome/Ala-tmRNA/SmpB [39-41]. Truncation of the unstructured C-terminal tail of SmpB leads to a loss of trans-translation activity [42,43]. In spite of its functional significance, cryo-EM studies have failed to identify the location of the C-terminal tail of SmpB in the ribosome due to poor resolution. We performed directed hydroxyl radical probing with Fe(II)-BABE to study the sites and modes of binding of E. coli SmpB to the ribosome (Figure 3). Fe(II)-BABE is a specific modifier of the cysteine residue of a protein, which generates hydroxyl radicals to cleave the RNA chain. Cleavage sites on RNA can be detected by primer extension, allowing mapping of amino acid residues of a binding protein on an RNA-based macromolecule. This is an excellent chemical approach to study the interaction of a protein with the ribosome [44-47]. We prepared SmpB variants each having a single cysteine residue for attaching an Fe(II)-BABE probe. Using directed hydroxyl radical probing, we succeeded in identifying the location of not only the structural domain but also the C-terminal tail of SmpB on the ribosome [48]. It was revealed that there are two SmpB-binding sites in a ribosome, which correspond to the lower halves of the A-site and P-site, and that the C-terminal tail of A-site SmpB is aligned along the mRNA path towards the downstream tunnel, while that of P-site SmpB is located almost exclusively around the region of the codon-anticodon interaction in the P-site. This suggests that the C-terminal tail of SmpB mimics mRNA in the A-site and P-site and that these binding sites reflect the pre- and post-translocation steps of trans-translation.
The probing signals appear at intervals of 3 residues in the latter half of the C-terminal tail, suggesting an α-helix structure, which has been predicted from the periodic occurrence of positively charged residues [42]. Consequently, the following model has been proposed. The main body of SmpB mimics the lower half of tRNA, and the C-terminal tail of SmpB mimics mRNA both before and after translocation, while the upper half of tRNA is mimicked by the TLD. Upon entrance of tmRNA into the stalled ribosome, the C-terminal tail of SmpB may recognize the vacant A-site free of mRNA to trigger trans-translation. After peptidyl transfer to Ala-tmRNA, occurring essentially in the same manner as in canonical translation, translocation of peptidyl-Ala-tmRNA/SmpB from the A-site to the P-site may occur. During this event, the extended C-terminal tail folds around the region of the codon-anticodon interaction in the P-site, which drives out the mRNA from the P-site.

Early Stages of Trans-Translation

Ala-tmRNA/SmpB forms a complex with EF-Tu and GTP in vitro, and this quaternary complex is likely to enter the empty A-site of the stalled ribosome [22]. This complex forms an initial binding complex with the stalled ribosome, like the ternary complex of aminoacyl-tRNA, EF-Tu and GTP does with the translating ribosome. In normal translation, the correct codon-anticodon interaction is recognized by the universally conserved 16S rRNA bases G530, A1492 and A1493, which form the decoding center. When a cognate tRNA binds to the A-site, A1492 and A1493 flip out from the interior of helix 44 of 16S rRNA, and G530 rotates from a syn to an anti conformation to monitor the geometry of the correct codon-anticodon duplex [53]. This induces GTP hydrolysis by EF-Tu, allowing the CCA end of tRNA to be accommodated into the peptidyl transferase center. In the context of tRNA mimicry, SmpB should orient toward the decoding center in trans-translation. We have recently shown that interaction of the C-terminal tail of SmpB with the mRNA path in the ribosome occurs after hydrolysis of GTP by EF-Tu [49]. According to a chemical probing and NMR study, SmpB interacts with G530, A1492 and A1493 [54]. How these bases recognize SmpB to trigger the following GTP hydrolysis is yet to be studied. It should be noted that recent crystal structures have revealed that these bases recognize the A-site ligands (aminoacyl-tRNAs, IF-1, RF-1, RF-2 and RelE) in different ways during translation [50,55,56]. Cryo-EM reconstructions of the pre-accommodated state of the ribosome/Ala-tmRNA/SmpB/EF-Tu/GDP/kirromycin complex of T. thermophilus have shown two SmpB molecules present in the complex, one binding to the 50S ribosomal subunit at the GTPase-associated center and the other binding to the 30S subunit near the decoding center [39,41]. The latter SmpB is not found in the accommodation complex of T. thermophilus and E. coli [39-41]. Thus, the following model has been proposed: two molecules of SmpB are required for binding of Ala-tmRNA to the stalled ribosome, and one of them is released from the ribosome concomitant with the release of EF-Tu after hydrolysis of GTP, so that the 3′ end of tmRNA is oriented toward the peptidyl transferase center.
However, several reports have argued against the requirement of two SmpB molecules for trans-translation: SmpB has been reported to interact with tmRNA in a 1:1 stoichiometry in the cell [57,58], and crystal structures of SmpB in complex with the TLD have been reported to exhibit a 1:1 stoichiometry of tmRNA and SmpB [29,59]. Further studies are required to assess the stoichiometry of SmpB in the pre-accommodation state complex. We have recently shown that the C-terminal tail of SmpB is required for the accommodation of Ala-tmRNA/SmpB into the A-site rather than for the initial binding of Ala-tmRNA/SmpB/EF-Tu/GTP to the stalled ribosome [49]. We have also shown that the tryptophan residue at position 147 in the middle of the C-terminal tail of E. coli SmpB has a crucial role in the step of accommodation. Our results further suggest that the aromatic side chain of Trp147 is required for interaction with rRNA upon accommodation. It has been shown that trans-translation can occur in the middle of an mRNA in vitro, although the efficiency of trans-translation is dramatically reduced with increasing length of the 3′ extension from the decoding center [34,35]. This may be a result of competition between the 3′ extension of mRNA and the C-terminal tail of A-site SmpB for the mRNA path. The ribosome stalled in the middle of an intact mRNA in a cell might be rescued by trans-translation via cleavage of the mRNA at the A-site [60] or by alternative ribosome rescue systems [61-63].

[Figure 3. (a) Coding region for the tag-peptide [48,49]. The N-terminal globular domain of SmpB mimics the lower half of tRNA in the A-site. The tertiary structures of TLD-SmpB from T. thermophilus [50] and the 70S ribosome from E. coli [51] were used. (b) Location of the C-terminal tail of SmpB from directed hydroxyl radical probing. Cleavage sites by Fe(II)-tethered A-site and P-site SmpB are colored yellow and green, respectively. The C-terminal tails are located on the mRNA path, suggesting that the C-terminal tail of SmpB mimics mRNA in both the A-site and the P-site. P-site SmpB and mRNA are colored red and pink, respectively. The tertiary structure model of the 70S ribosome from T. thermophilus [52] is used.]

Determination of the Resume Codon

In trans-translation, the ribosome switches template from a problematic mRNA to tmRNA. How does the stalled ribosome select the first codon on tmRNA without an SD-like sequence? It is reasonable to assume that some structural element on tmRNA is responsible for positioning the resume codon in the decoding center just after translocation of peptidyl-Ala-tmRNA/SmpB from the A-site to the P-site. In E. coli, the coding region for the tag-peptide starts from position 90 of tmRNA, which is 12 nucleotides downstream of PK1. Indeed, PK1 is important for the efficiency of trans-translation [14], whereas changing the span between PK1 and the resume codon does not affect determination of the initiation point of tag-translation [64]. A genetic selection experiment has revealed strong base preference in the single-stranded region between PK1 and the resume codon, especially at −4 and +1 (position 90) [65]. The importance of this region has also been shown by an in vitro study [64]. Several point mutations in this region, encompassing −6 to −1, decrease the efficiency of tag-translation, while some of them shift the tag-initiation point by −1 or +1 to a considerable extent [59,60], indicating that the upstream sequence contains not only an enhancer of trans-translation but also a determinant of the tag-initiation point.
Evidence for interaction between the upstream region and SmpB has been provided by a study using chemical probing [66]. E. coli SmpB protects the U at position −5 from chemical modification with CMCT. The structural domain of SmpB rather than the C-terminal tail is involved in this protection. The protection at −5 was suppressed by a point mutation in the TLD critical to SmpB binding, suggesting that SmpB serves to bridge two separate domains of tmRNA to determine the resume codon for tag-translation. Mutations that cause −1 and +1 shifts of the start point of tag-translation also shift the site of protection at −5 from chemical modification by −1 and +1, respectively, indicating the significance of the fixed span between the site of interaction on tmRNA with SmpB and the resume point of translation: translation of the tag-peptide starts from the position 5 nucleotides downstream of the site of interaction with SmpB. Such a functional interaction of the upstream region in tmRNA with SmpB is also supported by the results of another genetic study showing that an A-to-C mutation at position 86 of E. coli tmRNA that inactivates trans-translation both in vitro and in vivo is suppressed by some double or triple mutations in SmpB [67]. In agreement with these studies, recent cryo-EM studies have suggested that the upstream region in tmRNA interacts with SmpB in the resume (post-translocation) state [68,69]. The initiation shift of tag-translation can also be induced by the addition of a 4,5- or 4,6-disubstituted class of aminoglycoside such as paromomycin or neomycin [70,71], which usually causes miscoding of translation by binding to the decoding center on helix 44 of the small subunit to induce a conformational change in its surroundings [72]. Aminoglycosides also bind at helix 69 of the large subunit, which forms the B2a bridge with helix 44 in close proximity to the decoding center in the small subunit, to inhibit translocation and ribosome recycling by restricting the helical dynamics of helix 69 [73]. Taken together, these findings suggest the significance of interaction of the proximity of the decoding center with some portion of SmpB or tmRNA for precise tag-translation. It should be noted that hygromycin B, which binds only to helix 44, does not induce an initiation shift of tag-translation [71].

Trajectories of tmRNA/SmpB

Along with the functional mimicry of TLD/SmpB, a similar behavior of tmRNA/SmpB to that of canonical tRNA+mRNA in the ribosome, through several hybrid states (A/T, A/A, A/P, P/P and P/E), has been assumed.

[Figure 4. The C-terminal tail of SmpB is not located on the mRNA path in the processes before accommodation. After GTP hydrolysis by EF-Tu, the C-terminal tail is located on the mRNA path, mimicking mRNA, to recognize the stalled ribosome free of mRNA. Following translocation of tmRNA/SmpB from the A-site to the P-site, the C-terminal tail undergoes a drastic conformational change to accommodate the resume codon of tmRNA into the decoding center. SmpB and the tag-encoding region are shown in red and blue, respectively. White circles indicate amino acids encoded by the truncated mRNA, and a white square indicates the amino acid designated by the resume codon of tmRNA.]

Cryo-EM studies have shown the location of the complex of tmRNA with the main body of SmpB in the A/T and A/A states [39,40], and directed hydroxyl radical probing has revealed the positions of SmpB in the A/A and P/P states [48].
The existence of stable SmpB binding sites in the A-site and P-site suggests the requirement of translocation, as in canonical translation. It might possibly involve EF-G. Concomitantly with translocation, the mRNA and P-site tRNA are released from the stalled ribosome [74]. Considering the different C-terminal tail structures of A-site SmpB and P-site SmpB, the C-terminal tail would somehow undergo a conformational change from the extended form to the folded form [48]. The next translocation is thought to move tmRNA/SmpB to the E-site. These ribosomal processes should involve extensive changes in the conformation of tmRNA [75] as well as in the modes of interactions of tmRNA with SmpB and the ribosome [76,77]. According to chemical probing studies, secondary structure elements of tmRNA remain intact in a few steps of trans-translation, including the pre- and post-translocation states [77-79]. Another study has suggested a 1:1 stoichiometry of tmRNA to SmpB throughout the processes of translation for the tag-peptide [80]. Recently, the movement of tRNA during translocation has been revealed by using time-resolved cryo-EM [81]. Not only classic and hybrid states but also various novel intermediate states of tRNAs were revealed. Although the intermediate states during trans-translation remain unclear, results of future structural studies including chemical approaches should reveal tmRNA/SmpB and ribosome dynamics.

Conclusion

Various chemical approaches, in addition to cryo-EM and X-ray crystallographic studies, have been revealing the molecular mechanism of trans-translation. tmRNA forms a ribonucleoprotein complex with SmpB, which plays an essential role in trans-translation. Based on directed hydroxyl radical probing of SmpB, we have proposed a novel molecular mechanism of trans-translation (Figure 4). In this model, an elegant collaboration of a hybrid RNA molecule of tRNA and mRNA and a protein mimicking a set of tRNA and mRNA facilitates trans-translation. Initially, a quaternary complex of Ala-tmRNA, SmpB, EF-Tu and GTP may enter the vacant A-site of the stalled ribosome to trigger trans-translation, when a set of the Ala-TLD of tmRNA and the main body of SmpB, mimicking the upper and lower halves of aminoacyl-tRNA, respectively, recognizes the A-site free of tRNA. After hydrolysis of GTP by EF-Tu, the C-terminal tail of SmpB, mimicking mRNA, interacts with the decoding center and the downstream mRNA path free of mRNA, allowing Ala-TLD/SmpB to be accommodated. While several proteins including SmpB have been proposed to mimic tRNA or a portion of it, SmpB is the first protein that has been shown to mimic mRNA. SmpB is also the first protein whose stepwise movements in the ribosome are assumed to mimic those of tRNA in the translating ribosome. Our model depicts an outline of the trans-translation processes in the ribosome, although the following issues should be addressed. How do the intermolecular interactions between tmRNA and the ribosome, between tmRNA and SmpB, and between the ribosome and SmpB, as well as the intramolecular interactions within tmRNA and within SmpB, change during the course of the trans-translation processes? Is EF-G required for translocation of tmRNA/SmpB, which has neither an anticodon nor the corresponding codon, from the A-site to the P-site? If EF-G is required, how does it promote translocation? These questions remain to be answered in future works.
2014-10-01T00:00:00.000Z
2011-01-05T00:00:00.000
{ "year": 2011, "sha1": "ba51bcc3b9e97eaacf1be75b2952404b14d7eee9", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/jna/2011/130581.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6f21bdde5fd4699e08cb05aea2e5f60ba0a22583", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
267573249
pes2o/s2orc
v3-fos-license
Healthcare provisions associated with multiple HIV-related outcomes among adolescent girls and young women living with HIV in South Africa: a cross-sectional study

Abstract

Introduction: Adolescent girls and young women (AGYW) living with HIV experience poor HIV outcomes and high rates of unintended pregnancy. Little is known about which healthcare provisions can optimize their HIV-related outcomes, particularly among AGYW mothers.

Methods: Eligible 12- to 24-year-old AGYW living with HIV from 61 health facilities in a South African district completed a survey in 2018-2019 (90% recruited). Analysing surveys and medical records from n = 774 participants, we investigated associations of multiple HIV-related outcomes (past-week adherence, consistent clinic attendance, uninterrupted treatment, no tuberculosis [TB] and viral suppression) with seven healthcare provisions: no antiretroviral therapy (ART) stockouts, kind and respectful providers, support groups, short travel time, short waiting time, confidentiality, and safe and affordable facilities. Further, we compared HIV-related outcomes and healthcare provisions between mothers (n = 336) and nulliparous participants (n = 438). Analyses used multivariable regression models, accounting for multiple outcomes.

Results: HIV-related outcomes were poor, especially among mothers. In multivariable analyses, two healthcare provisions were "accelerators," associated with multiple improved outcomes, with similar results among mothers. Safe and affordable facilities, and kind and respectful staff, were associated with higher predicted probabilities of HIV-related outcomes (p<0.001): past-week adherence (62% when neither accelerator was reported to 87% with both accelerators reported), clinic attendance (71%-89%), uninterrupted ART treatment (57%-85%), no TB symptoms (49%-70%) and viral suppression (60%-77%).

Conclusions: Accessible and adolescent-responsive healthcare is critical to improving HIV-related outcomes, reducing morbidity, mortality and onward HIV transmission among AGYW. Combining these provisions can maximize benefits, especially for AGYW mothers.

Introduction

Adolescent girls and young women (AGYW) aged 15-24 represented nearly one-quarter of new HIV infections in 2022 in sub-Saharan Africa [1]. In parallel, AGYW living with HIV in sub-Saharan Africa continue to experience early pregnancy: nearly 30% of AGYW have had a child before age 20 [2,3]. Recent multi-country analyses of nationally representative Eastern and Southern Africa datasets identified strong associations between HIV prevalence and early motherhood, highlighting the importance of these overlapping vulnerabilities for HIV programming and maternal care [2,4]. AGYW living with HIV experience multiple adverse HIV-related health outcomes, including poorer adherence to antiretroviral therapy (ART), retention in care, clinic attendance and viral suppression compared to older women [5]. Enhancing their health and survival requires simultaneously improving multiple health outcomes. Until recently, no systematic reviews identified interventions that effectively improved multiple HIV-related outcomes among adolescents living with HIV (ALHIV) [6,7]. Recently, promising community-based interventions have emerged, such as peer-facilitated psychosocial support for adolescents and young people living with HIV in Zimbabwe, and a livelihoods-focused support package in Uganda [7-9].
However, no studies have identified which healthcare factors can improve HIV-related outcomes among AGYW living with HIV, nor among mothers. The World Health Organization has set valuable and aspirational global standards for quality healthcare services for adolescents [10]. However, in over-burdened, under-resourced health systems, such as South Africa's, successfully achieving these standards at scale remains challenging. Studies with ALHIV have identified potential healthcare factors associated with improved adolescent HIV-related outcomes: no medication/ART stockouts, kind and respectful healthcare providers, confidentiality, short travel time to facilities, short waiting time at facilities and accessible facilities [11,12]. We need to understand which factors have the greatest impact on multiple HIV-related outcomes in real-world healthcare delivery: government primary healthcare settings within high HIV-burden communities. Identifying healthcare provisions that act as "healthcare accelerators", provisions that improve multiple health outcomes concurrently, can help us maximize healthcare investments and guide provider training and facility improvements.

This study was co-designed with the South African National Department of Health and the UNICEF Eastern and Southern Africa Office, responding to a need to identify which healthcare provisions to prioritize for AGYW living with HIV, including AGYW who are mothers [13]. This analysis investigates associations of seven healthcare provisions with five HIV-related outcomes in a large study of AGYW living with HIV in South Africa. We conduct additional sub-analyses focusing on AGYW living with HIV who are mothers, an understudied and priority population [4].

Procedures and data

We analysed data from all AGYW living with HIV in the Mzantsi Wakho and HEY BABY studies in the Eastern Cape province of South Africa. All AGYW living with HIV, aged 12-24, from 52 government clinics and nine maternity obstetric units in a health district in South Africa's Eastern Cape Province were invited to participate in the study. Although conventionally AGYW refers to 15- to 24-year-olds, we included participants <15 years old given the importance of this growing cohort of very young mothers, and limited research on their outcomes. Over 90% of eligible participants were enrolled in the study in each facility type (90.1% and 96%, respectively [14,15]) and completed a survey on their multidimensional experiences of health, HIV and healthcare. HIV status was ascertained through medical records, including either a confirmed HIV-positive test result, CD4 count or viral load (VL) at treatment initiation prior to the interview [16]. We interviewed both AGYW mothers living with HIV (participants who had had their first child before age 20, excluding n = 18 who had their first child after age 20) and nulliparous AGYW living with HIV (participants who have never given birth to a live child). A final sample of n = 774 AGYW living with HIV participated in the study from 2018 to 2019. Self-reported questionnaires, using validated tools where available, were piloted with AGYW living with HIV, including AGYW who were mothers [17].
Participant medical records on VL, CD4 count and WHO staging were individually linked using unique study identifiers, following stringent data protection, consent and management protocols. Medical records data extracted from n = 67 health facilities were supplemented by routine laboratory test data from the National Health Laboratory Services (NHLS) data warehouse, which archives all routinely collected public sector laboratory data from South Africa's National HIV Programme. Demographic information (name, surname, sex, date of birth, health facility location) for participants in the study accessing public sector HIV care was linked to laboratory test records in the NHLS and used to extract adolescents' HIV VL records, following consent by participants and caregivers. Laboratory tests performed both within and outside of the Eastern Cape were accessed through national record linkages. Test record linkages in the data warehouse are achieved by a rule- and probabilistic-matching-based algorithm using patient demographics to assign multiple tests to a single patient [18]. In total, VL results were available for 60% (467/774) of participants included in this analysis. Following the merge, data were de-identified before analyses. If multiple records were available for a participant, the record closest to the interview date (within a year) was included in this analysis.

Voluntary informed consent was obtained from adolescents, and from their caregivers when adolescents were under 18 years, following international and national guidelines, including data linkages approval. Ethical approvals were obtained from the Universities of Oxford (R48876/RE001, SSD/CUREC2/12-21) and Cape Town (HREC226/2017, CSSR2013/4, CSSR 2017/01), the Eastern Cape Departments of Health and Basic Education, the NHLS Academic Affairs and Research Management System (2019/08/07) and participating health and educational facilities. Participants received a certificate and a gift pack selected by the study's Teen Advisory Group, including toiletries for AGYW and their infants.
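A minimal pandas sketch of the record-selection rule noted above (column names are hypothetical): for each participant, keep the laboratory record closest to the interview date, provided it falls within one year.

import pandas as pd

def closest_record(records: pd.DataFrame) -> pd.DataFrame:
    """records needs columns: pid, test_date, interview_date (datetimes), vl."""
    gap = (records["test_date"] - records["interview_date"]).abs()
    eligible = records[gap <= pd.Timedelta(days=365)].copy()
    eligible["gap"] = gap.loc[eligible.index]
    keep = eligible.groupby("pid")["gap"].idxmin()   # one row per participant
    return eligible.loc[keep].drop(columns="gap")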
Measures
We measured five HIV-related outcomes: (i) past-week ART adherence; (ii) consistent clinic attendance; (iii) uninterrupted ART treatment; (iv) viral suppression; and (v) no tuberculosis (TB) symptomatology. Past-week ART adherence was defined based on self-report of currently taking ART and not having missed any doses in the past 7 days (including weekdays and weekends) [19]. Self-reported consistent clinic attendance was measured as participants reporting missing none of their clinic appointments in the last year. Past-year uninterrupted ART treatment was coded as 1 if participants self-reported no treatment interruptions of 2+ days in a row, or 0 otherwise [20]. Viral suppression was defined as having a VL <1000 copies/ml at the most recent VL measurement up to 12 months following the interview date. Given very low rates of TB testing among ALHIV, no TB symptomatology was measured using an algorithm based on the five most common pulmonary TB symptoms (i.e. dry cough for >2 weeks, weight loss, night sweats, chest pain, fever) [21]. Participants experiencing TB symptoms based on the algorithm were coded as 0, and those with no symptoms as 1. Self-reported ART outcomes data can pose reliability and validity challenges, but in this study, longitudinal analyses indicated strong associations between the above self-reported HIV-related outcomes and VL data over time, for the subsample with available VL data [22]. Socio-demographic factors included: age (grouped: <15, 15-19 and 20-24 years); residence (urban/rural, using the 2011 South African census); housing (informal/formal); household poverty, measured as missing one of the seven highest socially perceived necessities for adolescents (e.g. enough clothes), validated in South Africa; and food insecurity, measured by combining (1) the participant did not have enough food for the entire week and (2) the participant could not afford three daily meals at home. Participants who reported having had at least one child before age 20 were coded as AGYW mothers.

Other HIV-related factors were included as control variables: time on treatment, measured as years since ART initiation (based on medical records or, when no records were available, self-reported age at ART initiation) and coded as recently initiated (0-3 years on treatment) versus all others; and mode of HIV acquisition (recent/perinatal), computed via an algorithm based on age at ART initiation, validated with self-reported data such as age at first sex, orphanhood cause and experiences of sexual assault [23].
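As an illustration of how two of these binary codings can be implemented (the study used Stata; this Python sketch is only a translation of the stated rules, and all column names are hypothetical):

```python
import pandas as pd

TB_SYMPTOMS = ["cough_2wks", "weight_loss", "night_sweats", "chest_pain", "fever"]

def code_outcomes(df: pd.DataFrame) -> pd.DataFrame:
    """Binary coding of two outcomes from 0/1 survey indicator columns."""
    out = df.copy()
    # No TB symptomatology: 1 only if none of the five symptom flags is set.
    out["no_tb_symptoms"] = (out[TB_SYMPTOMS].sum(axis=1) == 0).astype(int)
    # Past-week adherence: currently on ART and no missed doses in 7 days.
    out["past_week_adherence"] = (
        (out["on_art"] == 1) & (out["missed_doses_7d"] == 0)
    ).astype(int)
    return out
```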
Seven healthcare provisions were co-identified with adolescent advisors during piloting, based on a participatory activity to design the Dream Clinic [13]. All provisions were coded so that 1 = positive healthcare experiences, and all were measured in the past year. No ART stockouts was computed as a dichotomous variable, coded as 1 if the participant reported experiencing no ART stockouts at the clinic in the past year. Kind and respectful providers was measured based on adolescent satisfaction with the quality of care and reporting "never in the past year" for either of the two negative clinic experiences: "Clinic staff got angry and scolded me because of how I take my pills," and "Clinic staff got angry with me because I am having sex and shouted at me." Support group attendance was measured using adolescent self-report of attending a facility-linked HIV support group in the past year. Short travel time and short waiting time were measured as <60 minutes typically spent travelling to a health facility and waiting to see a healthcare provider, respectively. Confidentiality was computed based on adolescents' responses to the following question, linked to facility-based services: "I felt that my information would be kept confidential (never, once or twice, several times, and most of the time)." Participants who responded "several times" or "most of the time" were coded as experiencing confidentiality. Given the importance of confidentiality in shaping adolescent healthcare access, we conducted sensitivity analyses with different cut-off levels (never vs. any experience of confidentiality); however, the results did not change considerably. Safe and affordable facilities were defined based on adolescents reporting whether they could afford to get to the doctor, clinic or hospital, and whether they felt safe at the clinic/hospital in the past year.

Analysis
Analyses were conducted in Stata Version 17.0. First, descriptive statistics for all variables (HIV-related factors, socio-demographic characteristics, HIV-related outcomes and healthcare provisions) were computed comparing AGYW mothers living with HIV (n = 336) to nulliparous AGYW living with HIV (n = 438) using Chi-square tests. Supplementary analyses explored differences between participants with matched medical records. Matched participants (n = 467) were, on average, a few months younger than those with no VL data (n = 307), more likely to live in rural settings, less likely to live in a poor household and more likely to have acquired HIV recently through presumed sexual exposure (Table S1). We adjusted for these factors in the subsequent analysis. Analysis for the viral suppression outcome was conducted in the sub-sample of participants with available data (n = 467), of whom n = 193 were AGYW mothers living with HIV and n = 274 were nulliparous AGYW living with HIV. Second, multicollinearity checks for the five outcomes of interest were conducted using tetrachoric correlations, since all the outcomes were categorical. All correlations were weak to moderate (<0.7), except for the correlations between past-week adherence, consistent clinic attendance and uninterrupted ART treatment. To adjust for these correlations, p-values for each model were adjusted for multiple outcome testing using the Benjamini-Hochberg approach [24]. Third, associations between each HIV-related outcome and the healthcare provisions were explored using multivariate logistic regression models, controlling for socio-demographic characteristics, motherhood and HIV-related factors. We used multivariate logistic regression analysis to examine associations between the above seven healthcare provisions and five HIV-related outcomes among (i) AGYW living with HIV, and (ii) disaggregated by motherhood. We followed an empirical approach to identify "development accelerators" (provisions associated with two or more outcomes simultaneously), refined and used with multiple observational datasets available in the Open Science Framework [25,26].
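The Benjamini-Hochberg step above can be illustrated with a short, self-contained function (a standard textbook implementation, not the study's Stata code):

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values) for raw p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotone non-increasing q.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Outcomes with q < 0.05 remain significant at a false discovery rate of 5%.
print(benjamini_hochberg([0.013, 0.007, 0.0009, 0.045, 0.2]))
```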
In the final step, healthcare provisions associated with two or more HIV-related outcomes in the above regressions were considered "healthcare accelerators" and included in a model controlling for the above HIV and socio-demographic covariates. To explore associations between healthcare accelerators and HIV-related outcomes for AGYW mothers, we conducted sub-group analyses by motherhood status, quantifying associations between accessing the seven healthcare provisions and the five HIV-related outcomes for nulliparous and mothering AGYW living with HIV in two separate models. Finally, predicted probabilities modelling the impact of healthcare provisions, alone and in combination, on each HIV-related outcome were computed only for healthcare accelerators, for the full sample and for the two subgroups by motherhood status (mothers and nulliparous AGYW living with HIV) separately.

Sample characteristics, HIV-related factors and healthcare provisions
Of all AGYW living with HIV (N = 774), 43.4% were mothers. Socio-demographic characteristics and HIV-related outcomes by motherhood are shown in Table 1. AGYW mothers were more likely to be older, to have recently acquired HIV and to have recently initiated treatment (<3 years). They were also more likely to report past-week food insecurity, living in informal housing and poorer households. Regarding HIV-related outcomes, AGYW mothers living with HIV were more likely to report lower rates of past-week ART adherence (p = 0.013), consistent clinic attendance (p = 0.007), uninterrupted ART treatment (p<0.001) and TB symptoms (p<0.001). About 60% of all participants had a VL test, and there was no difference in available VL data by motherhood status (Table S1). A lower proportion of VL data was available among participants with recently acquired HIV and those who lived in poorer households. Among those with VL results, viral suppression rates were lower among mothers compared to nulliparous AGYW living with HIV (p<0.001). AGYW mothers living with HIV were less likely to report receiving four of the seven healthcare provisions, compared to nulliparous AGYW living with HIV: adolescent-responsive services (p = 0.018), support group attendance (p = 0.005), accessible care (p = 0.033) and short waiting times (p<0.001).

Multivariable associations between healthcare provisions and HIV-related outcomes
In multivariable analyses, three of the seven healthcare provisions were associated with at least one HIV-related outcome (Table 2). No ART stockouts was associated with one outcome: uninterrupted ART treatment (aOR: 2.56, 95% CI 1.12−5.81, p = 0.025). Kind and respectful providers, and safe and affordable clinics, were associated with improvements in three of the five included HIV-related outcomes. Adolescent-responsive services were associated with higher odds of consistent clinic attendance (aOR: 1.87, 95% CI 1.01−3.44, p = 0.045), uninterrupted ART treatment (aOR: 1.95, 95% CI 1.11−3.42, p = 0.019) and no TB symptoms (aOR: 1.82, 95% CI 1.03−3.20, p = 0.038). Accessing support groups, short travel and waiting times (<1 hour each), and confidential healthcare services were not associated with any of the HIV-related outcomes when controlling for covariates.
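The adjusted-probability computation described in the analysis (and reported in the next section) can be sketched with standard tools; the data here are simulated and all variable names are hypothetical, so the study's actual Stata specification may differ:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 774
df = pd.DataFrame({
    "past_week_adherence": rng.integers(0, 2, n),
    "kind_respectful": rng.integers(0, 2, n),
    "safe_affordable": rng.integers(0, 2, n),
    "rural": rng.integers(0, 2, n),
    "mother": rng.integers(0, 2, n),
})

model = smf.logit(
    "past_week_adherence ~ kind_respectful + safe_affordable + rural + mother",
    data=df,
).fit(disp=False)

# Average adjusted probability: set both accelerators to 0 vs. 1 for everyone,
# keep the other covariates as observed, and average the predictions.
neither = df.assign(kind_respectful=0, safe_affordable=0)
both = df.assign(kind_respectful=1, safe_affordable=1)
print(model.predict(neither).mean(), model.predict(both).mean())
# With this random toy data the two probabilities are similar; with real data
# the gap corresponds to the percentage-point differences reported below.
```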
Healthcare accelerators for AGYW living with HIV: individual and combined effects of healthcare provisions on HIV-related outcomes
To investigate the individual and combined effects of accessing the healthcare accelerators identified above on multiple HIV-related outcomes, we modelled adjusted probabilities for each of the five HIV-related outcomes. The adjusted probabilities when comparing the scenario with neither adolescent-responsive services nor accessible care (i.e. no healthcare accelerator) to the scenario with a combination of both healthcare accelerators increased from 62% to 87% (a 25 percentage point [pp] increase) for past-week ART adherence, from 71% to 89% (+18 pp) for consistent clinic attendance, from 57% to 85% (+28 pp) for uninterrupted ART treatment, from 49% to 70% (+21 pp) for no TB symptoms and from 60% to 77% (+17 pp) for viral suppression.

In a stratified analysis, the effect of the healthcare accelerators for all AGYW living with HIV by motherhood status differed only for TB symptomatology: mothers were less likely to experience TB symptoms. However, accessing both healthcare accelerators compared to accessing neither had a greater effect for nulliparous AGYW than for mothers: the adjusted probability of no TB symptoms increased from 32% to 54% (+22 pp) for nulliparous AGYW living with HIV, compared to an increase from 72% to 87% (+15 pp) among AGYW who were mothers (Figure 1).

DISCUSSION
AGYW are at considerable risk of mortality and morbidity due to HIV-related illness [27]. Supporting them to survive and thrive is an urgent public health and human rights concern [28,29]. This paper explored healthcare provisions associated with improved HIV-related outcomes among AGYW living with HIV, including AGYW who are mothers as a priority group. First, we investigated rates of HIV-related outcomes among our sample of n = 774 AGYW living with HIV by motherhood. A high proportion of study participants reported suboptimal HIV-related outcomes, similar to other studies from the region [30]. Differences in TB symptoms by motherhood status should be investigated in cohort studies with medical TB records. Three healthcare provisions were associated with improvements in at least one HIV-related outcome: no medication stockouts, kind and respectful providers, and safe and affordable facilities. Fully stocked facilities were associated with uninterrupted ART treatment, highlighting the need for continued access to life-saving medication and echoing findings with younger ALHIV in South Africa [11]. This provision remains tenuous even as the COVID-19 pandemic, during which many supply chains were interrupted, has subsided [31]. Addressing medication/ART stockouts promptly is critical to keeping AGYW living with HIV alive. Advocacy and support for providers are needed to help them report medication/ART stockouts rather than informally manage them [12].
The other two healthcare provisions were each associated with improvements across multiple HIV-related outcomes for AGYW living with HIV, meeting the study's criteria for "healthcare accelerators." Kind and respectful providers were associated with improvements in four of the five HIV-related outcomes: past-week adherence, consistent clinic attendance, uninterrupted ART treatment and viral suppression. These findings support prior studies showing associations between respectful healthcare providers and retention in HIV care among younger ALHIV [11,32]. Our findings further demonstrate improvements across multiple HIV-related outcomes among AGYW living with HIV, including among mothers.

The second healthcare accelerator, reporting safe and affordable facilities, was associated with improvements in clinic attendance, uninterrupted treatment and TB symptoms. This finding supports and expands prior evidence of associations between affordable transport and retention in care among younger adolescents [11], and evidence from a randomized trial in Uganda showing positive impacts of an economic intervention on viral suppression [9]. New models of HIV care delivery, such as differentiated service delivery, peer- and community-based care and mobile clinics, may increase acceptability, affordability and safety for AGYW [33-36], and may be important steps towards improving their HIV outcomes.

AGYW living with HIV who were mothers reported consistently poorer adherence and more ART treatment interruptions than nulliparous AGYW. The same two healthcare provisions were associated with improved outcomes for AGYW living with HIV independent of motherhood status: kind and respectful providers, and safe and affordable facilities. However, AGYW mothers reported lower levels of access to these healthcare provisions than nulliparous AGYW living with HIV. This finding suggests that all adolescent girls living with HIV will benefit from the same healthcare provisions, but that additional efforts may be needed to ensure that they reach AGYW who are mothers [35].

Importantly, the two healthcare accelerators identified in this study are similar to healthcare factors that improve outcomes among all adolescents [11,37]. This finding suggests that strengthening healthcare systems and quality of care for adolescents in general may benefit the highest-risk groups, such as AGYW living with HIV, including young mothers, an important finding given the higher likelihood of poorer HIV-related outcomes among AGYW living with HIV who are mothers. AGYW living with HIV, particularly mothers, who experienced both kind and respectful staff and safe and affordable facilities were more likely to report higher rates of multiple HIV-related outcomes, including viral suppression, than those accessing each individual healthcare accelerator. Future analyses should investigate whether combinations have additive or synergistic relationships using longitudinal and experimental designs.
This study has several limitations. First, the data were collected through a cross-sectional survey, so all findings must be interpreted with caution. However, we were careful in selecting variables and their reporting or recall periods to minimize the risk of reverse causality. Second, HIV-related measures were self-reported, as VL data were only available for part of the sample, due to limited VL monitoring and data quality in the study catchment area. Although self-reported adherence and related measures can have poor reliability and validity, longitudinal analyses of these measures showed a strong association with VL data over time [22]. Efforts are underway to improve data infrastructure, which would allow for longitudinal analyses to explore the long-term outcomes of these healthcare provisions. Participants living in poorer households and those who had recently acquired HIV were less likely to have matched VL measures, so we accounted for these two variables in the multivariate analysis. Third, the data were collected just before the COVID-19 pandemic, and the pandemic may have shifted some of the dynamics we observed in these data, especially access to VL testing. Additional research is needed to document the post-pandemic experiences of AGYW living with HIV, particularly mothers. Fourth, while we accounted for covariates, there may be unmeasured factors, including household and community services and support. However, we conducted multivariate analyses controlling for key covariates and accounted for multiple outcome testing. Finally, this study was conducted in the Eastern Cape province of South Africa, and its findings may not be generalizable to other settings. However, the challenges facing our participants, including HIV risk, poverty and early motherhood, are similar to those in other parts of sub-Saharan Africa.

CONCLUSIONS
Despite these limitations, our study has important implications for supporting a large and growing cohort of AGYW living with HIV in the region. In light of increasing rates of adolescent motherhood [28], we urgently need research on which interventions are effective in supporting healthcare providers to deliver adolescent-sensitive services and in reducing morbidity and mortality among AGYW living with HIV. Reaching AGYW living with HIV, particularly mothers, with safe, affordable HIV and health services requires kind, responsive healthcare provision and care that respects their dignity and supports them to reach their full potential.

AUTHORS' CONTRIBUTIONS
ET and LC conceptualized the overall study. ET designed the analyses for this manuscript, which were conducted by SZ and ET, with support from WR and BHB. CW, NL and JJ were involved in study realization and data preparation and, together with LS, WS, CAL, AA and LG, contributed to data interpretation and writing. SZ and OE led the NHLS data merge with GS. All authors have reviewed the manuscript.

Figure 1. Associations between healthcare accelerators and HIV-related outcomes by motherhood.

Table 1. Socio-demographic characteristics, HIV-related outcomes and healthcare services by motherhood status. (a) Self-reported measures. (b) Analyses for the viral suppression outcome were conducted in a sub-sample of participants with available data (n = 467), with n = 193 AGYW mothers living with HIV and n = 274 nulliparous AGYW living with HIV.
Table 2. Summary of associations between healthcare provisions and HIV-related outcomes (N = 774). Outcomes: past-week adherence (a); consistent clinic attendance (a); uninterrupted ART treatment (a); no TB symptoms (a); viral suppression (<1000 copies/ml) (b). (a) Self-reported measures. (b) Analyses for the viral suppression outcome were conducted in a sub-sample of participants with available data (n = 467), with n = 193 AGYW mothers living with HIV and n = 274 nulliparous AGYW living with HIV. (c) Adjusted for multiple testing using the Benjamini-Hochberg approach with a false discovery rate (FDR) of 5%.
Hierarchical Design Based Intrusion Detection System For Wireless Ad hoc Network
In recent years, wireless ad hoc sensor networks have become popular in both civilian and military applications. However, security is one of the significant challenges for sensor networks because of their deployment in open and unprotected environments. As cryptographic mechanisms are not enough to protect sensor networks from external attacks, an intrusion detection system needs to be introduced. Although intrusion prevention is one of the major and most efficient defences against attacks, there may be attacks for which no prevention method is known. Besides protecting the system from known attacks, an intrusion detection system gathers information about attack techniques and thereby supports the development of intrusion prevention systems. In addition to reviewing the attacks known in wireless sensor networks, this paper examines current efforts towards intrusion detection systems for wireless sensor networks. We propose a hierarchical architectural design based intrusion detection system that fits the current demands and restrictions of wireless ad hoc sensor networks. In the proposed intrusion detection system architecture, we follow a clustering mechanism to build a four-level hierarchical network, which enhances network scalability to large geographical areas, and we use both anomaly and misuse detection techniques for intrusion detection. We introduce a policy based detection mechanism as well as intrusion response, together with the GSM cell concept, for the intrusion detection architecture.

INTRODUCTION
There has been a lot of research on preventing or defending WSNs from attackers and intruders, but very limited work has been done on detection. Without detection, it is difficult for the network administrator to be aware of intrusions. Several intrusion detection systems have been proposed or designed for wireless ad hoc networks. Most of them work in a distributed environment; that is, they run on individual nodes independently and try to detect intrusion by studying abnormalities in their neighbours' behaviour. They therefore require the nodes to consume more processing power, battery backup and storage space, which makes such IDSs more expensive, or even infeasible, for most applications. Some IDSs use mobile agents in a distributed environment [8]. Mobile agents support sensor mobility and intelligent routing of intrusion data throughout the network, and they eliminate the network's dependency on specific nodes. However, this mechanism is still not popular for IDSs because of mobile agents' inherent architectural security vulnerabilities and heavy weight. Some IDSs are attack-specific, which limits them to one type of attack [1]. Others use a centralized framework, which lets the IDS exploit a personal computer's high processing power, large storage capacity and unlimited battery backup [21]. Most IDSs target the routing layer only [7] [21], but detection can be extended to other networking layers as well. Most architectures are based on anomaly detection [18] [2], which relies on statistical analysis of node activities. Most IDS techniques utilize system log files, network traffic or packets in the network to gather information for intrusion detection. Some only detect intrusions, while others do more, such as acquiring additional information, e.g. the type of attack or the location of the intruder.
Though a considerable number of IDS mechanisms have been proposed for wireless ad hoc networks, very few of them are applicable to wireless sensor networks because of their resource constraints. Self-Organized Criticality & Stochastic Learning based IDS [2], IDS for clustering based sensor networks [3], a non-cooperative game approach [4] and decentralized IDS [5] are distinguished among them.

EXISTING CHALLENGES
Existing intrusion detection systems are not adequate to protect WSNs from inside and outside attackers, and none of them is complete. For example, most approaches offer clustering techniques without specifying how clusters will be formed or how they will behave with the rest of the system. Most existing IDSs were designed for wired architectures rather than their wireless counterparts, and the architecture of a WSN is even more sophisticated than that of an ad hoc wireless network. So, an IDS is needed that is capable of detecting inside and outside, known and unknown attacks with a low false alarm rate. Existing IDS architectures that are specifically designed for sensor networks suffer from a lack of resources, e.g. high processing power, large storage capacity and unlimited battery backup.

WIRELESS SENSOR NETWORK - AN OVERVIEW
According to NIST (National Institute of Standards and Technology), "a wireless ad hoc sensor network consists of a number of sensors spread across a geographical area" [8]. The term sensor network refers to a system that combines sensors and actuators with some general purpose computing elements. A sensor network can have hundreds or even thousands of sensors, in mobile or fixed locations, deployed to control or monitor [7]. A wireless sensor network comprises sensor nodes that sense data from their surroundings and pass it on to a centralized controlling and data collecting entity called the base station. Typically, base stations are powerful devices with a large storage capacity to store incoming data. They generally provide gateway functionality to another network, or an access point for a human interface [21]. A base station may have an unlimited power supply and high bandwidth links for communicating with other base stations. In contrast, wireless sensor nodes are constrained to low power, low bandwidth and short range links.

SECURITY THREATS AND ISSUES
Various security issues and threats that are considered for wireless ad hoc networks can be applied to WSNs, as noted in previous research. But the security mechanisms used for wireless ad hoc networks cannot be deployed directly in WSNs because of their architectural differences. First, in an ad hoc network, every node is usually held and managed by a human user, whereas in a sensor network all the nodes are independent and communication is controlled by the base station. Second, computing resources and batteries are more constrained in sensor nodes than in ad hoc nodes. Third, the purpose of sensor networks is very specific, e.g. measuring physical quantities (such as temperature or sound). Fourth, node density in sensor networks is higher than in ad hoc networks [10]. The architecture of a WSN can also strengthen its security mechanisms, since the base station can be used intelligently.
According to the basic security needs they violate, attacks in WSNs can be categorized as:
• DoS and DDoS attacks, which affect network availability
• Eavesdropping and sniffing, which threaten confidentiality
• Man-in-the-middle attacks, which affect packet integrity
• Signal jamming, which affects communication

Much research has been done on these significant security problems. A summary of existing well-known threats follows.
Selective Forwarding: the attacker forwards messages on the basis of some preselected criterion.
Simple Broadcast Flooding: the attacker floods the network with broadcast messages.
Simple Target Flooding: the attacker tries to flood the network through some specific nodes.
False Identity Broadcast Flooding: similar to simple broadcast flooding, except that the attacker deceives with a false source ID.
False Identity Target Flooding: similar to simple target flooding, except that the attacker deceives with a false source ID.
Misdirection Attack: the attacker misdirects incoming packets to a distant node.

IDS ARCHITECTURE
According to the Network Security Bible, "Intrusion detection and response is the task of monitoring systems for evidence of intrusions or inappropriate usage and responding to this evidence" [22]. The basic idea of an IDS is to observe user as well as program activities inside the system via an auditing mechanism. Depending on the data collection mechanism, IDSs can be classified into two categories. A host based IDS monitors log files (applications, operating system etc.) and compares them with signatures of known attacks from an internal database. A network based IDS works differently: it monitors packets in communication and inspects suspicious packet information. Depending on how attacks are detected, IDS architectures can be categorized into three types. A signature based IDS monitors for occurrences of signatures or behaviours that match known attacks. This technique may exhibit a low false positive rate, but it is not good at detecting previously unknown attacks. An anomaly based IDS defines a profile of normal behaviour and classifies any deviation from that profile as an intrusion; the profile of normal system behaviour is updated as the system learns. This type of system can detect unknown attacks, but it exhibits a high false positive rate. In [11] another type of intrusion detection has been introduced: a specification based IDS defines a protocol's or a program's correct operations, and intrusion is indicated by deviations from those constraints. This type of IDS may detect unknown attacks while showing a low false positive rate. In [11] wireless ad hoc network architecture is divided into three basic categories, which can be adapted to IDS architecture in WSNs. Stand alone: each node acts as an independent IDS and detects attacks for itself only, without sharing information with other IDS nodes or cooperating with other systems, so all intrusion detection decisions are based on information available to the individual node. Its effect is rather limited; this architecture is best suited to environments where all the nodes are capable of running an IDS [11]. Distributed and cooperative: each node runs its own IDS, but the nodes collaborate to form a global IDS. This architecture is more suitable for flat wireless sensor networks, where a global IDS is initiated when individual nodes detect inconclusive intrusions. Hierarchical: this architecture has been proposed for multi-layered wireless networks.
Here the network is divided into clusters with cluster-heads. A cluster-head acts like a small base station for the nodes within its cluster. It also aggregates information from the member nodes about malicious activities. The cluster-head detects attacks, as member nodes could potentially reroute, modify or drop packets in transmission. At the same time, all cluster-heads can cooperate with a central base station to form a global IDS. To build an effective IDS model, several considerations apply. The first is detection tasks: how will they be divided between local and global agents? Whether local or global, an IDS needs to consider how these agents will analyse threats, and what the right sources of information are. A local agent detects vulnerabilities from a node's internal information. It is supposed to be active 100% of the time to ensure maximum security; physical/logical integrity, measurement integrity, protocol integrity and neighbourhood are analysed from the node's status. A global agent detects anomalies from information external to a node, to achieve 100% coverage of the sensor network; the main challenges here are balancing tasks and network coverage. In a hierarchical network, the cluster head (CH) controls its section of the network and is itself part of the global network. In a flat network, the Spontaneous Watchdogs concept is applied. The premise is: "For every packet circulating in the network, there is a set of nodes that are able to receive both that packet and the packet relayed by the next hop." The second consideration is sharing information between agents. Information between agents can be transmitted through cryptography, voting mechanisms or trust, depending on the network's resource constraints. The third consideration is how to notify users. Generally, users are behind base stations, so different algorithms can be used to notify the base station; e.g. uTesla uses a secure broadcast algorithm. There are different techniques for IDSs in wireless sensor networks (WSNs); some existing IDS models for WSNs were mentioned above.

OUR MODEL
In this paper we propose a new IDS model which concentrates on saving the power of sensor nodes by distributing the responsibility of intrusion detection across three layers of nodes with the help of a policy based network management system. The model uses a hierarchical overlay design (HOD). We divide each area of sensor nodes into hexagonal regions (like GSM cells). Sensor nodes in each hexagonal area are monitored by a cluster node. Each cluster node is in turn monitored by a regional node, and the regional nodes are controlled and monitored by the base station.

Figure 1: Hierarchical Overlay Design

This HOD based IDS combines two intrusion detection approaches (signature and anomaly detection) to fight existing threats, as sketched in the example below. Signatures of well known attacks are propagated from the base station down to the leaf level nodes for detection, and the signature repository at each layer is updated as new forms of attack are found in the system. As the intermediate agents are also activated with predefined rules of system behaviour, anomaly detection can act on deviations from the predefined specification. Thus the proposed IDS can identify known as well as unknown attacks.
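As a rough illustration of how a node-level agent might combine the two techniques, the following minimal Python sketch checks an event against a signature record first and falls back to a threshold-based anomaly test. The signature tuples, field names and threshold factor are hypothetical, not part of the proposed system:

```python
# Hypothetical signature record: (event type, mode) pairs for known attacks.
KNOWN_ATTACK_SIGNATURES = {
    ("flood", "broadcast"),
    ("flood", "target"),
    ("misdirect", "distant-node"),
}

def classify_event(event: dict, baseline_rate: float, factor: float = 3.0) -> str:
    """Misuse (signature) check first, then a simple rate-based anomaly check."""
    if (event["type"], event["mode"]) in KNOWN_ATTACK_SIGNATURES:
        return "known-attack"              # misuse detection hit
    if event["packet_rate"] > factor * baseline_rate:
        return "anomaly"                   # deviation from learned behaviour
    return "normal"
```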
Detection Entities
Sensor nodes have two types of functionality: sensing and routing. Each sensor node senses the environment and exchanges data with other sensor nodes and its cluster node. As sensor nodes have severe resource constraints, in this model no IDS module is installed in the leaf level sensor nodes. The cluster node acts as a monitor node for the sensor nodes; one cluster node is assigned to each hexagonal area. It receives data from the sensor nodes, analyses and aggregates the information, and sends it to the regional node. It is more powerful than the sensor nodes and has intrusion detection capability built into it. The regional node monitors and receives data from neighbouring cluster heads and sends combined alarms to the upper layer base station. It is also a monitor node, like the cluster nodes, with all the IDS functionalities, and it makes the sensor network more scalable: if thousands of sensor nodes are deployed at the leaf level, the whole area is split into several regions. The base station is the topmost part of the architecture, empowered with human support. It receives information from the regional nodes and distributes it to users on demand.

Policy based IDS
A policy is a predefined action pattern that is repeated by an entity whenever certain conditions occur [13]. The architectural components of a policy framework include a Policy Enforcement Point (PEP), a Policy Decision Point (PDP) and a policy repository. The policy rules stored in the policy repository are used by the PDP to define rules or to produce results. The PDP translates or interprets the available data into a device-dependent format and configures the relevant PEPs; the PEP executes the logical entities decided by the PDP [12]. These capabilities provide powerful functions for configuring the network, as well as for automatically re-configuring the system in response to network conditions. In a large WSN with hierarchical network management, policy mechanisms can achieve survivability, scalability and autonomy simultaneously: in case of failure, the system enables one component to take over the management role of another. One of the major architectural advantages of a hierarchical structure is that any node can dynamically take over the functionality of another node to ensure survivability, and a flexible agent structure allows dynamic insertion of new management functionality. Hierarchical network management integrates the advantages of the central and distributed management models [14] and uses intermediate nodes (regional and cluster) to distribute the detection tasks. Each intermediate manager has its own domain, called a regional or cluster agent, which collects and processes information from its domain and passes the required information upwards. Policies are disseminated from the BPDP to the RPAs to the LPAs, as they are propagated from PDP to LPA. The policy agents described here help the IDS by reacting to network status changes globally or locally, and they allow the network to be reconfigured automatically to deal with faults and performance degradation according to the intrusion response.

Structure of Intrusion Detection Agent (IDA)
The hierarchical architecture of policy management for a WSN comprises several hierarchical layers, each containing an Intrusion Detection Agent (IDA): the Base Policy Decision Point (BPDP), Regional Policy Agents (RPAs), Local Policy Agents (LPAs) and sensor nodes (SNs). An IDA consists of the following components: a pre-processor, a signature processor, an anomaly processor and a post-processor. Their functionalities are described as follows.
Figure 3: Intrusion Detection Agent Structure

The pre-processor either collects the network traffic of the leaf level sensors, when the agent acts as an LPA, or receives reports from lower layer IDAs. Collected sensor traffic data are abstracted into a set of variables called a stimulus vector, to make the network status understandable to the higher layer processors of the agent. The signature processor maintains a reference model or database, called the signature record, of typical known unauthorized malicious threats and high risk activities, and compares the reports from the pre-processor against the known attack signatures. If a match is found, a misuse intrusion is detected; otherwise the signature processor passes the relevant data to the next higher layer for further processing. The anomaly processor analyses the vector from the pre-processor to detect anomalies in network traffic; usually statistical methods or artificial intelligence are used to detect this kind of attack. The profile of normal activity, which is propagated from the base station, is stored in the database. If the activities arriving from the pre-processor deviate from the normal profile in a statistically significant way, or exceed some particular threshold value, attacks are flagged. The intrusion detection rules are essentially policies which define the standard access mechanisms and uses of sensor nodes; here the database acts as a Policy Information Base (PIB), or policy repository. The post-processor prepares and sends reports for the higher layer agent or the base station, and it can be used to display the agent status through a user interface.

Selection of IDS node
Activating every node as an IDS wastes energy, so the number of nodes running intrusion detection should be minimized. In [15] three strategies are mentioned for selecting intrusion detection nodes. Core defense selects IDS nodes around a centre point of a subset of the network; it is assumed that no intruder can break into the central station of any cluster. This type of model defends the innermost part first and then retaliates towards the outer area. Boundary defense selects nodes along the boundary perimeter of the cluster; it defends against intruders breaking into the cluster from outside the network. Distributed defense has an agent node selection algorithm which, in this model, follows the voting algorithm from [16]; the node selection procedure follows a tree hierarchy. Our model follows the core defense strategy, with the cluster-head as the centre point for defending against intruders. In core defense the ratio of alerted nodes to the total number of nodes in the network drops, which keeps energy consumption very low and makes the strategy economical, as it generates the fewest broadcast messages in case of attack. It provides a strong defense of the inner network. However, the IDS needs to wait for the intruder to reach the core area [16], which is one of the drawbacks of this strategy, as nodes can be captured without notice.

IDS mechanism in sensor nodes
Intrusions can be detected at multiple layers in sensor nodes (the physical, link, network and application layers). At the physical layer, jamming is the primary attack. A jamming attack can be identified from the Received Signal Strength Indicator (RSSI) [17] [18], the average time required to sense an idle channel (carrier sense time), and the packet delivery ratio (PDR). In a wireless medium, the received signal strength is related to the distance between nodes.
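A minimal sketch of such an RSSI check, assuming the LPA keeps a per-node baseline recorded at initialization (as described in the next paragraph); the tolerance value and return labels are hypothetical:

```python
def check_rssi(node_id: str, observed_rssi: float,
               baseline: dict, tolerance: float = 6.0) -> str:
    """Flag a link whose RSSI deviates from the value stored at initialization.

    `baseline` maps node_id -> RSSI (dBm) recorded by the LPA during setup;
    `tolerance` (dB) is a hypothetical tuning parameter.
    """
    expected = baseline.get(node_id)
    if expected is None:
        return "unknown-node"      # no baseline: treat as suspicious
    if abs(observed_rssi - expected) > tolerance:
        return "rssi-anomaly"      # possible jamming, spoofing or relocation
    return "ok"
```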
Node tampering and destruction are other physical layer attacks, which can be prevented by placing nodes in secured locations. During the initialization process, the cluster node's LPA stores the RSSI values for communication between the cluster node and the leaf level sensor nodes, and between sensor nodes. Later, during monitoring, the anomaly processor in the LPA checks whether a received value is unexpected; if so, it feeds back to the RPA by generating an appropriate alarm.

Figure 4: IDS mechanism

Link layer attacks include collision, denial of sleep and packet replay. Here S-MAC and Time Division Multiple Access (TDMA) can be used to detect anomalies. TDMA [18] is a digital transmission scheme in which each cluster node assigns different time slots to the different sensor nodes in its region; during its slot, each sensor node has access to the radio frequency channel without interference. If an attacker sends a packet using the source address of some node, e.g. A, in a slot not allocated to A, the LPA's anomaly processor can easily detect the intrusion. The S-MAC [18] protocol is used to assign wakeup and sleep times to the sensor nodes; as sensors have limited power, S-MAC can be implemented for energy conservation. If any packet is received from a source, e.g. A, during A's sleeping period, the LPA can easily detect the inconsistency. At the network layer, route tracing is used to check whether a packet really arrived over the best route. If a packet reaches the destination via a path other than the expected one, the anomaly processor can flag a possible intrusion according to predefined rules. The application layer uses three levels of watchdogs: in the base station, the regional nodes and the cluster nodes. Sensor nodes are monitored by the upper layer cluster node watchdog, cluster nodes are monitored by the regional node watchdog, and finally the top level watchdog, the base station, monitors the regional nodes. So, if any node is compromised by an attacker, a higher layer watchdog can detect the attack and generate an alarm.

INTRUSION RESPONSE
There are differences between intrusion detection and intrusion prevention. If a system has intrusion prevention, it is assumed that intrusion detection is built in. IDSs are designed to let intrusions get into the system, whereas an Intrusion Prevention System (IPS) actually attempts to prevent access to the system from the very beginning. An IPS operates similarly to an IDS with one critical difference: "IPS can block the attack itself; while an IDS sits outside the line of traffic and observes, an IPS sits directly in line of network traffic. Any traffic the IPS identifies as malicious is prevented from entering the network [19]." So in the case of an IDS, "intrusion response" is the right title for recovery. There are two different approaches to intrusion response, contrasted in the sketch below: hot response and policy based response [20]. A hot response reacts by launching a local action on the target machine to end a process, or on the target network component to block traffic, e.g. killing a process or resetting a connection. It does not prevent the occurrence of the attack in the future. A policy based response, on the other hand, works at a more general scope: it considers the threats reported in the alert and the constraints and objectives of the network's information system, and it modifies or creates rules in the policy repository to prevent the attack in the future.
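The contrast between the two response styles can be sketched as follows; the rule format and method names are hypothetical, and a real policy repository (PIB) would be far richer:

```python
class ResponseEngine:
    """Toy contrast between a hot response and a policy based response."""

    def __init__(self):
        self.policy_repository = []   # stands in for the PIB

    def hot_response(self, node_id: str, connections: dict) -> None:
        # Immediate local action: drop the offending node's live connection.
        # Nothing is learned, so the same attack can recur later.
        connections.pop(node_id, None)

    def policy_response(self, alert: dict) -> None:
        # Longer-term action: derive a new rule and store it so the same
        # attack pattern is blocked in the future.
        rule = {"match": alert["pattern"], "action": "block"}
        self.policy_repository.append(rule)
```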
In our proposed IDS, the base station's policy decision point and the other policy decision modules take part in the response mechanism together. Intrusion can be detected either in a cluster node or in a regional node, and the base station can be involved at any time if the network administrator wants, or to update the signature database or the policies stored in the intermediate agents. Intrusions are detected automatically according to the policies implemented by the BPDP. The reaction is also automatic, but the administrator may re-design the architecture according to requirements. In [21] a novel intrusion detection and response system is implemented; we apply their idea in our response mechanism with some modifications. Our IDS classifies each sensor node into one of five classes: Fresh, Member, Unstable, Suspect or Malicious (see the state machine sketch following this section). The Local Policy Agents, the Regional Policy Agents and finally the Base Policy Decision Point take decisions about a sensor node's class placement. The Routeguard mechanism uses the Pathrating algorithm to keep each node within these five classes [21]. In our model, policies or rules defined in the base station's BPDP assign each node to one of these five classes, as shown in Figure 4. When a new node arrives, it is classified as Fresh and remains in the Fresh state for a preselected period of time, during which the LPA checks whether the node misbehaves. In this period the node is permitted to forward and receive packets from other sensor nodes, but not to send its own generated packets. After the preselected time, its classification changes automatically to Member if no misbehaviour is detected; otherwise it is changed to the Suspect state. In the Member state, nodes are allowed to create, send, receive and forward packets, and they are monitored by the watchdog at the LPA in the cluster node. If a Member node misbehaves, its state is changed to Unstable for a short span of time. In the Unstable state, nodes are permitted to send and receive packets, except their own packets, and are kept under close observation by the LPA; if the node behaves well, it is transferred back to the Member state. A node in the Unstable state is converted to the Suspect state in two cases: either the node has flipped between the Member and Unstable states a particular number of times (a threshold value defined in the LPA) within a predefined period, or the node has been misbehaving for a long time (another threshold value). The LPA's post-processor sends a "danger alert" to the RPA whenever a Suspect node is encountered. The suspected node is completely isolated from the network: it is not allowed to send, receive or forward packets, it is temporarily banned for a short time, and any packets received from it are simply discarded. After a certain period of time the node is reconnected and monitored closely for an extensive period by the Intrusion Detection Agents in all three layers. If the watchdogs report good behaviour, the node's status is changed to Unstable; if it continues misbehaving, it is labelled Malicious. Once a node is declared malicious, it is permanently banned from the network. To ensure that a malicious node can never reconnect, its MAC address or other unique ID is added to the signature record database of the LPA.

Figure 5: Operation of Intrusion Response
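A simplified sketch of the five-class transition logic described above; thresholds and exact triggers are condensed (e.g. prolonged misbehaviour in the Unstable state is collapsed into a single flag), so this is an illustration rather than the full Routeguard/Pathrating logic:

```python
from enum import Enum

class NodeState(Enum):
    FRESH = "fresh"
    MEMBER = "member"
    UNSTABLE = "unstable"
    SUSPECT = "suspect"
    MALICIOUS = "malicious"

FLIP_FLOP_LIMIT = 3  # hypothetical threshold defined in the LPA

def next_state(state: NodeState, misbehaving: bool,
               grace_over: bool = False, flip_flops: int = 0) -> NodeState:
    """One transition of the five-class scheme (simplified)."""
    if state is NodeState.FRESH:
        if misbehaving:
            return NodeState.SUSPECT
        return NodeState.MEMBER if grace_over else NodeState.FRESH
    if state is NodeState.MEMBER:
        return NodeState.UNSTABLE if misbehaving else NodeState.MEMBER
    if state is NodeState.UNSTABLE:
        if misbehaving or flip_flops >= FLIP_FLOP_LIMIT:
            return NodeState.SUSPECT
        return NodeState.MEMBER
    if state is NodeState.SUSPECT:
        # After the temporary ban, the node is re-observed closely.
        return NodeState.MALICIOUS if misbehaving else NodeState.UNSTABLE
    return NodeState.MALICIOUS  # permanent ban; ID added to signature record
```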
Survivability is one of the major properties expected of any such system. We consider base stations to be failure free, but regional nodes or cluster nodes may become unreachable due to failure or battery exhaustion. So, in case of failure or physical damage of a regional node or cluster node, control of that node should be taken over by another stable node. In our proposed architecture, if any regional node fails, its control is shifted dynamically to a neighbouring regional node, so control of the cluster nodes and sensor nodes belonging to the failed regional node is shifted automatically to the neighbour (a minimal sketch of this failover step follows this paragraph). In the same way, if any cluster node fails, control of that cluster node is transferred to a neighbouring cluster node. Thus, if any LPA is unreachable due to failure or battery exhaustion of its cluster node, a neighbouring LPA takes charge of the leaf level sensor nodes that were in the area of the failed cluster node; likewise, on a regional node failure, a neighbouring regional node's RPA dynamically takes over the functionality of all the cluster nodes' LPAs and the sensor nodes that belonged to the faulty regional node. As mentioned before, cluster nodes and regional nodes have no direct communication between themselves, so how will a cluster node or regional node learn about the failure of its neighbour? In the proposed architecture, the base station has direct or indirect connections to all of its leaf nodes, and a direct connection to each regional node. So if any regional node fails, the base station can identify the problem and select one of the neighbouring nodes dynamically, according to predefined rules in the BPDP; the BPDP then supplies the policies, rules and signatures of the failed node to the newly selected neighbour regional node. In the same way, if any cluster node fails, its neighbouring cluster nodes will not be informed of the failure, so in this case the regional node takes the necessary action of selecting a suitable neighbour cluster node. Here the policies, rules and signatures of the failed cluster node are supplied by the BPDP through the relevant RPA, so the RPA's only responsibility is to select an appropriate neighbour LPA for the unreachable LPA; the rest of the work belongs to the BPDP of the base station. As the base station is a much more powerful node with large storage, all the signatures, anomaly detection rules and policies are stored primarily as backups in the base station. This backup system increases the reliability of the whole network.
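A minimal sketch of the failover step, assuming the BPDP keeps a backup of each node's rule set and a simple neighbour-selection rule; both the data layout and the selection rule are hypothetical:

```python
def handle_failure(failed: str, topology: dict, bpdp_backup: dict) -> dict:
    """Shift a failed regional/cluster node's children to a neighbour.

    topology: node -> {"neighbours": [...], "children": [...]}
    bpdp_backup: node -> rule set (policies/signatures) held at the base station
    """
    info = topology[failed]
    takeover = info["neighbours"][0]       # stand-in for the BPDP's selection rule
    topology[takeover]["children"].extend(info["children"])
    info["children"] = []
    # The takeover node is re-provisioned with the failed node's rule set.
    return {"takeover": takeover, "rules": bpdp_backup[failed]}
```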
CONCLUSION
WSNs are prone to intrusions and security threats. In this paper, we propose a novel IDS architecture for ad hoc sensor networks based on a hierarchical overlay design, together with a response mechanism matching the proposed architecture. Our design improves on other related designs in the way it distributes the total task of detecting intrusions: it decouples the work of intrusion detection into a four level hierarchy, which results in a highly energy saving structure. Each monitor needs to watch only a few nodes within its range and thus need not spend much power on monitoring. Due to the hierarchical model, the detection system works in a very structured way and can detect intrusions effectively. Every area is commanded by one cluster head, so detection is fast, and alarms ripple up to the base station via the regional head, enabling it to take proper action. In this paper we consider cluster nodes and regional nodes to be more powerful than ordinary sensor nodes. Though this increases the total cost of the network set-up, the cost is tolerable in order to enhance the reliability, efficiency and effectiveness of an IDS for a large geographical area with thousands of sensor nodes. Policy based mechanisms are a powerful approach to automating network management. The management system for intrusion detection and response described in this paper shows that a well structured reduction in management traffic is achievable through policy management. This policy based architecture improves the adaptability and re-configurability of the network management system, which has good practical research value for large, geographically distributed network environments. IDSs for wireless sensor networks remain an important research topic, and there is still no complete IDS for WSNs. Many previously proposed systems were based on a three layer architecture, but we introduce a four layer overlay hierarchical design to improve the detection process and bring in the GSM cell concept. We also introduce a hierarchical watchdog concept: the top layer base station, the regional nodes and the cluster nodes act as three hierarchical watchdogs. Our proposal applies intrusion detection at multiple layers to make the system architecture robust.

FUTURE WORK
This paper provides a first-cut solution for a four layer hierarchical, policy based intrusion detection system for WSNs, so there is much room for further research in this area. The proposed IDS is highly extensible: as new attacks or attack patterns are identified, new detection algorithms can be incorporated into the policy. Possible avenues for future work include:
• Extending the present model by exploring secure communication between the base station, regional nodes and cluster nodes.
• Setting the management functions of the manager stations more precisely.
• An election procedure to select cluster and regional nodes: instead of choosing the cluster node and regional node manually, an election process would automatically determine them.
• Implementing a risk assessment system in the manager stations to improve the reaction capability of the intrusion detection system.
• In this paper we focus on the general idea of the architectural design of the IDS and how a policy management system can be integrated with it; extensive work is still needed to define the detection and response policies themselves.
• Overall, more comprehensive research is needed to measure the current efficiency of the IDS, in terms of resources and policy, so that improvements in future versions are possible.
• Further study is required to determine the IDS's scalability; to the best of our knowledge, its scalability is highly correlated with the scalability of the WSN application and the policy management in use.
• Building our own simulator: as all previous research was based on three layer architectures, we plan to create our own simulator to simulate our four layer design.
Patient Preferences and Osteoarthritis Care: What Do We Know About What Patients Want from Osteoarthritis Treatment?
Patient-centred care for people with osteoarthritis requires shared decision making, and understanding and considering patients' preferences for osteoarthritis treatments is central to this. In this narrative review, we present an overview of existing research exploring patient preferences for osteoarthritis care, and discuss clinical and research implications of existing knowledge and future research directions. Stated preference studies have identified that patients place more importance on reducing or eliminating negative side effects than on reducing pain, other clinical benefits or cost. Patients' treatment preferences are influenced by characteristics such as age, symptom severity and beliefs about their osteoarthritis. Preferences appear to be largely stable over time and are not easily altered by single-point interventions. Research exploring patient preferences for osteoarthritis treatments has increased in recent years. Treatment preferences appear to be primarily driven by patients' wish to avoid adverse side effects and by symptom severity. Individualised, evidence-based information about potential treatments, delivered over the course of disease, is required.

Introduction
Osteoarthritis (OA) is one of the leading causes of disability worldwide [1]. As there is currently no cure for OA, treatments primarily aim to reduce joint pain and maintain mobility and quality of life. Evidence-based guidelines recommend non-pharmacological treatments (specifically therapeutic exercise, weight management and information and support) as core treatments, with pharmacological management alongside if required, at the lowest effective dose for the shortest possible time [2]. Referral for consideration of joint replacement is recommended if non-surgical management has not been effective and joint symptoms are substantially impacting the individual's quality of life [2]. Given the increasing rates of hip and knee joint replacement for OA, finding ways to optimise the effectiveness of non-surgical treatments is urgently needed [3,4]. Decisions about which OA treatments to use have historically been made by medical professionals, adopting an authoritative patient-practitioner relationship. More recently, the concept of shared decision making has been recognised as essential for patient-centred care. Shared decision making involves information exchange between patients and healthcare professionals: healthcare professionals bring technical information about the disease and available treatments, while patients bring their personal experience and their concerns, expectations and preferences about treatments, in order to make treatment decisions together [5]. Preferences are defined as "the expression of values for alternative options for action after informed deliberation of their risks and benefits" [6]. Considering patients' preferences for OA treatments is central to shared decision making. Each treatment option for OA differs in terms of benefits and risks, so patients must frequently make decisions about what they need, what they prefer, and how they value the different aspects of each treatment. Patients' preferences and beliefs about treatments are particularly important when there is a lack of certainty about treatment outcomes, or when there are multiple treatment options and patients need to balance the benefits and risks of each [2].
At an individual level, discussing and addressing patients' concerns about treatments, and involving them in treatment decision making, may improve treatment adherence and consequently treatment effectiveness [7••]. Moreover, patients' perspectives are becoming increasingly important in all aspects of OA care, from policy decisions to designing and evaluating healthcare programs and establishing treatment guidelines [8]. Improving our understanding of patients' preferences for treatment is therefore critically important. Several different methods have been used to examine patients' preferences for OA treatments. This narrative review presents an overview of existing research exploring patient preferences for osteoarthritis care, including patient use of, and satisfaction with, OA treatments as measured by survey studies; findings of studies using stated preference methods; results of qualitative studies (which offer an in-depth exploration of individual patient preferences); and patient preferences in trials and practice. Brief clinical and research implications of existing knowledge and future research directions are also discussed. Use of, and satisfaction with, OA treatments Several studies have used survey measures to explore patient use of, and/or satisfaction with, treatments for OA. Findings highlight a mismatch between the treatments most frequently used by patients with OA and those recommended in guidelines. An Internet-based survey among people with knee OA in France, Germany, Spain, and the United Kingdom (UK) (n=2073) reported that the most common treatments respondents had used were non-prescription oral pain medication (74%), exercise (70%) and physical therapy (68%) [9]. Gökçe Kutsal et al. reported similar findings in a cross-sectional survey of OA patients in Turkey (n=305) [10•]. The most frequently reported treatments were oral drugs (80%), topical drugs (74%), a home-based exercise program (63%) and outpatient physical therapy (61%) [10•]. In the UK, Mitchell and Hurley surveyed 415 patients who had consulted a primary care physician for knee pain of more than 6 months' duration. They also found that drugs (analgesics or NSAIDs) were the treatment most frequently received (83%), followed by physiotherapy (41%). All other therapies were used by less than 10% of respondents [11]. Hinman and colleagues explored the use of American College of Rheumatology (ACR) recommended non-drug, non-operative interventions by people with hip and/or knee OA (n=591) in Australia [12]. The most common interventions that respondents were currently using were making efforts to lose weight (50%) and shoe orthoses (30%). Strengthening (26%) and stretching exercises (23%) were the interventions that participants had most commonly previously used [12]. Of note, 12% of respondents had never used any of the interventions. Similarly, among 202 patients awaiting orthopaedic consultation for hip or knee OA in Australia, Haskins et al. found that 22% of respondents had not previously used any form of non-pharmacological conservative management. When responses were compared to clinical guidelines, 33% indicated that they had never used any of the non-pharmacological management strategies that were classified as core guideline recommendations [13]. Patient satisfaction with treatments for their OA has been found to be variable. Haskins et al. reported that only 20% of respondents felt that they had received sufficient education about the diagnosis, their treatment options and prognosis [13].
When asked to rank which of the treatments they had used was most beneficial, respondents in the survey by Gökçe Kutsal et al. ranked physical therapy the highest, followed by oral drugs and home-based exercise programs [10•]. Similarly, physical therapy was the most preferred treatment (41%) among respondents in Mitchell and Hurley's survey, while only 4% reported drugs as their most preferred option [11]. Surveys have also been used to examine patient preferences for the delivery of OA treatments. Ackerman and colleagues explored preferences for, and use of, disease-related education and support by younger people with hip and knee OA via a cross-sectional postal questionnaire in Australia (n=147) [14]. In relation to obtaining OA information, social media had been used by only a small proportion of respondents (5%), as had group self-management programs (3%) or telephone helplines (2%). Information packs delivered by post and online education programs were rated as the most useful by respondents, while social media was rated as the least useful and accessible. Both mailed and online information have the advantage that people can access information at a time that suits them [14]. Studies using stated preference methods Stated preference studies originated in economics but are increasingly being used in healthcare to capture individual preferences related to services, treatments, and outcomes [15]. An important assumption of stated preference methods is that a treatment can be broken down into attributes (such as effectiveness in reducing pain, length of treatment required and risk of adverse side effects), and that the value of a treatment depends on the levels of the attributes [8]. The idea behind stated preference methods is that they resemble the many decisions that people make daily when choosing between potential options [7]. Several different stated preference methods can be used, including discrete choice experiments (DCE), conjoint analysis (CA), best-worst scaling, adaptive conjoint analysis (ACA) and adaptive choice-based conjoint (ACBC). All approaches require participants to compare two or more hypothetical treatments with different levels of the attributes of interest and make trade-offs to select which treatment they prefer. Stated preference methods allow researchers to quantify the relative importance of the different attributes that make up a treatment by quantifying the trade-offs that respondents make [8]; a toy illustration of this calculation is sketched below. A recent systematic review of studies using CA techniques to explore patients' preferences for OA treatments included 16 studies, with sample sizes ranging from 11 to 3895 [7]. The majority of the included studies investigated the side effects and features of medications, specifically NSAIDs, disease-modifying drugs and supplements. Overall, patients placed more importance on eliminating or reducing negative side effects (both common and rare) than on reducing pain, time to benefit, costs, how the medication was administered or the medication label [7]. Where investigated, studies found that patient characteristics including age and severity of OA symptoms had a significant impact on preferences. Respondents who were older appeared to be more willing than younger respondents to accept a higher risk of negative side effects in exchange for improvement in OA symptoms. People who reported fewer OA symptoms were more influenced by the potential side effects associated with NSAIDs than those who reported more severe OA symptoms.
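To make concrete how stated preference studies turn fitted utilities into the relative-importance percentages quoted in this review, here is a minimal sketch of the standard range method used in conjoint analysis; the attribute names, levels and part-worth utilities below are hypothetical illustrations of my own, not values taken from any cited study.

```python
# Hypothetical part-worth utilities, as might come from a fitted choice model.
part_worths = {
    "pain reduction":       {"none": 0.0, "moderate": 0.8, "large": 1.3},
    "risk of side effects": {"1%": 0.0, "5%": -1.1, "10%": -2.4},
    "monthly cost":         {"low": 0.0, "medium": -0.4, "high": -0.9},
}

# Relative importance of an attribute = its utility range / sum of all ranges.
ranges = {attr: max(levels.values()) - min(levels.values())
          for attr, levels in part_worths.items()}
total = sum(ranges.values())
for attr, r in sorted(ranges.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: {100 * r / total:.1f}%")

# With these toy numbers, side-effect risk carries about half of the total
# importance (2.4 / 4.6 ~ 52%), mirroring the qualitative pattern reported
# across the studies reviewed above.
```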
Interestingly, when respondents were asked to choose between an exercise program and OA medications, the potential side effects were still more influential on their decision than the potential benefits [7]. When presented with surgical treatments compared to non-surgical treatments, respondents with the highest pain levels, those whose function was the most limited and those of younger age were more likely to opt for the surgical option [7••]. Almost 20 years ago, Ratcliffe and colleagues were among the first to use stated preference methods to explore patient preferences for attributes of a number of treatment options for OA [16]. Survey respondents (n=412) appeared to place greater importance on the risk of serious negative side effects (including rare side effects) than on mild to moderate side effects. When the authors analysed preferences by subgroup, they identified significant variation. The level of importance respondents placed on relief of joint aches increased with increasing severity of the respondents' OA symptoms. As was identified in the systematic review, increasing age was associated with increasing willingness to accept a higher risk of serious side effects in exchange for improvement in OA symptoms. Respondents in lower income brackets appeared to place more importance on treatments easing joint aches and increasing their mobility compared to those in higher income brackets. Respondents who had previously experienced gastrointestinal side effects from treatments were more willing to accept a higher risk of them than those who had not [16]. In their 2008 paper titled "If you want patients with knee osteoarthritis to exercise tell them about NSAIDS," Fraenkel and Fried reported that exercise was the most preferred and NSAIDs were the least preferred treatment options, and that the risk of negative side effects more strongly influenced patients' preferences than the likelihood of benefits [17]. Similarly, Pinto et al. found that respondents were more likely to choose exercise rather than drug treatments [18]. They found that the combined risk of indigestion and bleeding ulcer accounted for a relative importance of 41.3%, compared to 28.9% for decreased pain and improved strength combined [18]. An earlier study explored the maximum acceptable risk increments (MARI) that respondents were willing to accept for various potential adverse effects from OA medications, using a probabilistic threshold technique [19]. Heart attack/stroke had the lowest MARI (between 3 and 5%, depending on initial risk and the level of pain relief) and dyspepsia had the highest (23% to 35%). Higher initial-risk levels were associated with increased willingness to accept a higher level of risk if it was coupled with pain relief benefits [19]. Two recent stated preference studies have focused on the preferences of stakeholders from different parts of health systems. In a DCE conducted in the Netherlands, patients with knee or hip OA, those who had previously had a joint replacement, healthcare providers, and insurance company employees evaluated six attributes of OA treatments: waiting times, out-of-pocket costs, travel distance, involved healthcare providers, duration of consultation and access to specialist equipment [20•]. Findings showed that patients and healthcare providers placed the most importance on lower out-of-pocket costs, while insurance company employees rated a joint consultation by a GP and an orthopaedic consultant as the most important.
The duration of consultation was less important to patients than it was to healthcare providers and insurance company employees [20•]. In a multi-criteria decision analysis survey in New Zealand and Australia, Chua and colleagues compared stakeholders' preferences for interventions to manage knee OA with existing guideline recommendations and published evidence [21•]. Fifteen guideline-recommended interventions were rated by patients with knee OA, indigenous health advocates, healthcare providers, policy informants and OA researchers. Land-based exercise, topical NSAIDs and total joint replacement were rated the highest. Concerningly, weight management and self-management education, both recommended core interventions, were ranked 11th and 15th out of the 15 interventions. Notably, preferences did not differ between the included stakeholder groups [21•]. Qualitative studies Qualitative studies offer an in-depth exploration of patient preferences for OA care. While the nature of qualitative studies means that they include considerably smaller sample sizes, they allow a more nuanced examination of patient preferences, and can include insight from patients as to why they have such preferences and what might be done to improve management. A systematic review of qualitative studies exploring patient beliefs about exercise interventions identified several points that patients felt would improve the delivery and uptake of exercise interventions [22]. The identified points included: providing better information and recommendations about the safety and importance of exercise, providing individually tailored exercise, and challenging unhelpful health beliefs [22]. Bunzli and colleagues interviewed patients with end-stage knee OA awaiting total knee replacement (TKR) in Australia, to explore why patients may feel that non-surgical interventions are not valuable for treating knee OA [23]. Participants who believed their knee joint to be "bone on bone," or that the damage was caused by "wear and tear" that would be aggravated by increased loading through the knee and would worsen over time, tended to avoid physiotherapy and exercise interventions. These participants instead sought experimental or surgical treatments which they believed would replace lost cartilage and consequently cure their knee pain [23]. In a separate analysis of the same interviews, the researchers explored which patient factors impacted on the decision to progress to TKR [24]. Participants described the referral from a GP or other health professional to an orthopaedic surgeon as simple, whereas non-surgical intervention pathways were described as complex and unknown. Participants commonly felt that non-surgical interventions were "Band-Aid fixes" that would not repair the damage in their knee. In contrast, surgery was viewed as the "only true-blue fix" and was felt by many participants to be "inevitable." Participants who actively took part in exercise and saw this as the best way to manage their pain most commonly described themselves as having been very active in the past. Ease of referral pathway was highlighted as a determining factor for participants [24]. Yeh et al. specifically interviewed patients with knee OA who reported that they were undecided about whether to go ahead with a TKR which had been recommended by a surgeon [25].
They found that participants' indecision was most related to four areas: concerns related to treatments, concerns related to their physical condition, concerns related to surgical outcomes, and concerns related to post-surgical care. Participants who felt that they had not had their concerns addressed during the decision-making process reported that they wished to have access to more information regarding preparation for surgery, care after surgery, medicines and rehabilitation [25]. Patient preferences in trials and practice Given the perceived importance of patient preference, it would be useful to know whether outcomes from OA treatments are better if treatment allocation is based on choice or preference. To the authors' knowledge, there are currently no published preference randomised controlled trials (RCTs) in the OA field. A small number of RCTs have completed exploratory secondary analyses to examine relationships between patient preferences and clinical outcomes from OA treatment. Foster et al., in an RCT comparing an exercise intervention to acupuncture among 352 patients with knee OA, assessed treatment preferences at baseline [26]. They found that 20% of participants reported a treatment preference; of these, 10% preferred advice and exercise, 13% preferred acupuncture and 44% reported that they would prefer combined treatment. No evidence was observed of a relationship between the patients' baseline treatment preferences or expectations and pain reduction at 6 or 12 months [26]. Moreton and colleagues developed and tested the utility of a multicriteria patient decision aid for people who were in the process of deciding on treatments for their OA [27•]. A shorter (n=625 respondents) and a longer form (n=180 respondents) of the decision aid were tested. The most important treatment outcomes across both forms of the decision aid were serious side effects, pain and function. Strength training was the highest rated treatment option overall, and arthroscopy was the lowest rated. Only one-third of respondents reported that the decision aid had changed their view about treatment. Interestingly, almost half of respondents (48%) felt that the decision aid would improve their future decision making about OA treatments [27•]. In a retrospective cohort study in the USA, Hurley et al. explored whether including a decision aid in primary care consultations was associated with changes in patients' treatment preferences compared to including a decision aid in orthopaedic consultations [28•]. Results showed that almost 20% of patients with knee OA and 17% of patients with hip OA reported that they were still uncertain about their treatment preferences after completing the decision aids. Subgroup analyses found that patients who reported higher pain levels and those who were older were more likely to express a strong preference for surgery. Older patients who completed the decision aids during primary care consultations were less likely to prefer surgery afterwards compared to those who completed the decision aids during an orthopaedic consultation. The authors concluded that patients' treatment preferences were generally stable over time, and that a single-point decision aid may not necessarily shift preferences [28•]. Findings also highlighted that initiating treatment conversations in primary care settings, rather than only during secondary care consultations, may have important implications for engaging patients with shared decision making, and with using decision aids.
In the first RCT to use a DCE as an intervention in the OA field, Dowsey et al. are evaluating the effect of administering a DCE containing information on risks of postoperative complications and health status to patients awaiting TKR, compared to a control survey, on patient-reported pain, function and satisfaction following TKR [29]. Results of the trial are pending. Discussion Considering patients' preferences for OA treatments is a core component of shared decision making and patient-centred care. Research exploring patient preferences for OA treatments has increased over recent years. Survey studies have highlighted a mismatch between the most commonly used OA treatments and those that are recommended in evidence-based guidelines. Satisfaction with OA treatment is also variable. Stated preference studies have commonly identified that reducing or eliminating adverse side effects is the primary driving force behind patient preferences for treatments, rather than reducing pain, cost, or increasing other clinical benefits. Patient characteristics appear to significantly influence treatment preferences, and preferences appear to require sustained and tailored input to change. However, the role of patient preferences in determining outcomes from OA treatments remains unknown. Several reasons might explain the mismatch between the most used OA treatments and those that are recommended in evidence-based guidelines. These include lack of a robust evidence base, lack of awareness of guidance on the part of healthcare providers, beliefs of healthcare professionals, the structure of healthcare systems, and also patient preferences. Patient preferences may also partially explain lack of satisfaction with care. In some studies medication was shown to be one of the least preferred treatment options, but one that is commonly provided. Qualitative studies suggest some people do not like analgesics for OA due to concern over side effects, and a belief that medication masks rather than cures the problem [30]. These beliefs could therefore reduce adherence and could be a contributing factor to the overall small treatment effects seen in RCTs and meta-analyses of simple painkillers such as paracetamol [31]. The identified importance of potential side effects in determining patient preferences for treatments highlights the need for patients to have access to clear evidence-based information about potential treatments. The impact of presenting information to patients focusing specifically on the associated risks of adverse side effects (or the minimal risk of such effects in the case of treatments such as exercise) alongside the expected benefit is worth further exploration. However, communicating risk is difficult to achieve well, and there is currently no best practice approach [32]. Careful consideration should be given to the commonly held fear of numbers and lack of understanding of statistical concepts among both clinicians and patients, loss framing versus gain framing, presenting more versus fewer data points, and whether to present relative risk versus absolute risk [32]. Being guided by patient preferences in selecting treatment options theoretically offers great potential to increase engagement and adherence to OA treatments, which in turn could optimise outcomes. Existing exploratory secondary analysis of OA treatment RCT data suggests no association, but this evidence is limited and underpowered.
Beyond OA, a 2019 systematic review and meta-analysis of the effect of treatment preferences across all RCTs found that allowing patients to select which treatment they took part in resulted in better clinical outcomes for mental health and pain compared to assigning patients to their non-preferred treatment [33]. For patients to make informed decisions about which OA treatments they prefer, they need to fully understand the treatment options. As Haskins and colleagues identified, many patients may feel that they have not received enough education about their diagnosis, their treatment options and the short- and long-term prognosis for their condition, highlighting the importance of evidence-based information provision [13]. Patient characteristics appear to significantly impact treatment preferences, suggesting individualised information is required. Decision aids, which are commonly presented as a printed pamphlet, videos or an online program, offer great potential for providing education to patients about their conditions and helping them to be active in decision making. However, the study by Hurley et al. found that up to 20% of participants remained uncertain after completing the decision aid [28•]. The authors acknowledged that this may have been due to the non-randomised nature of the study. When planning pragmatic implementation, careful consideration needs to be given to organisational contexts, and specifically to factors such as cultural differences, competing demands and the presence of champions that may influence patient engagement with decision aids [28•]. An additional complexity is that treatment preferences seem stable over time and difficult to change with a single-timepoint intervention. Future research utilising tailored information interventions and decision aids provided over multiple timepoints is needed. Over recent years, an increasing number of studies has focused on using stated preference methods to explore patient preferences for OA treatments. These methods have several advantages, including the ability to replicate real-life choices, the ability to gather data across large numbers of participants and the ability to adapt to participant responses as they complete the questionnaire. However, there are also disadvantages to these methods that should be considered. The usefulness of any stated preference method is reliant on appropriate design, in particular the selection of suitable attributes and levels. Stated preference methods are commonly conducted online, meaning that those without internet access, or those who are not able to use an online platform, are excluded. Alternative methods such as questionnaires and qualitative studies continue to offer important insight into patient preferences alongside stated preference methods. Gaps in knowledge and future work Whilst there has been an increasing focus on understanding patient preferences in the field of OA, gaps in knowledge remain around the optimal content and delivery of core treatments including self-management, exercise and weight loss. Greater understanding of what people want to know to support self-management and weight loss might help these treatments to be perceived as more important. Greater knowledge about preferences around types of therapeutic exercise (e.g. strengthening, general aerobic and mind-body exercise) and mode of delivery (e.g. supervised versus unsupervised, exercise setting and exercise deliverer) might facilitate the design of exercise interventions that are most acceptable to patients.
This could increase engagement and exercise adherence, thus improving effect sizes on pain and physical function from therapeutic exercise, which currently, in comparison to non-exercise controls, are small and reduce over time [34]. This would need to be tested in a new clinical trial. Future research should also explore whether outcomes from OA treatments are better if treatment allocation is based on patient preference. Conclusions Exploring and considering patient preferences are essential for shared decision making for OA treatments. Treatment preferences appear to be primarily driven by patients' wish to avoid adverse side effects and by symptom severity. Individualised, evidence-based information about potential treatments, delivered over the course of disease, is required. Conflict of interest Philippa Nicolson declares that she has no conflict of interest. Melanie Holden declares that she has no conflict of interest. Human and Animal Rights and Informed Consent This article does not contain any studies with human or animal subjects performed by any of the authors. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2023-07-11T00:12:48.110Z
2023-06-19T00:00:00.000
{ "year": 2023, "sha1": "310fe689553d80c5d7668ea3b1ad56d3a27f3f50", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40674-023-00208-w.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "cc368acb479a5655798d88833f54af2b82a74c4f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
119041336
pes2o/s2orc
v3-fos-license
On the CMB circular polarization. I. The Cotton-Mouton effect Generation of cosmic microwave background (CMB) elliptic polarization due to the Cotton-Mouton (CM) effect in a cosmic magnetic field is studied. We concentrate on the generation of CMB circular polarization and on the rotation angle of the CMB polarization plane from the decoupling time until the present. For the first time, a rather detailed analysis of the CM effect for an arbitrary direction of the cosmic magnetic field with respect to the photon direction of propagation is done. Considering the CMB linearly polarized at the decoupling time, it is shown that the CM effect is one of the most substantial effects in generating circular polarization, especially in the low part of the CMB spectrum. It is shown that in the frequency range $10^8$ Hz $\leq \nu_0\leq 10^9$ Hz, the degree of circular polarization of the CMB at present for perpendicular propagation with respect to the cosmic magnetic field is in the range $ 10^{-13}\lesssim P_C(t_0)\lesssim 7.65\times 10^{-7}$, or Stokes circular polarization parameter $2.7 \times 10^{-13}$ K $\lesssim |V(t_0)|\lesssim 2 \times 10^{-6}$ K, for values of the cosmic magnetic field amplitude at present in the range $10^{-9}$ G $\lesssim B\lesssim 8\times 10^{-8}$ G. On the other hand, for non-perpendicular propagation with respect to the cosmic magnetic field we find $10^{-15}\lesssim P_C(t_0)\lesssim 6\times 10^{-12}$ or $2.72 \times 10^{-15}$ K $\lesssim |V(t_0)| \lesssim 10^{-11}$ K, for the same values of the cosmic magnetic field amplitude and the same frequency range. Estimates of the rotation angle of the CMB polarization plane $\delta\psi_0$ due to the CM effect and constraints on the cosmic magnetic field amplitude from current constraints on $\delta\psi_0$ due to a combination of the CM and Faraday effects are found. Introduction In the last two decades, many observational facts about the nature and properties of the CMB and their possible implications in cosmology have been established. Among these, it has already been established that the CMB has a linear polarization, with a degree of polarization at present of the order P_L(t₀) ∼ 10⁻⁶. This linear polarization is believed to have been generated at the decoupling time mostly due to the Thomson scattering of the CMB photons on electrons. In general, if the incident electromagnetic radiation has an isotropic intensity distribution, Thomson scattering does not generate a net linear polarization. In the specific case of the CMB, the fact that linear polarization was initially observed by the DASI, WMAP and BOOMERANG collaborations [1] and then re-confirmed by other collaborations implies that at the decoupling time the CMB intensity did not have an isotropic distribution, a fact which is widely confirmed by the observation of the CMB temperature anisotropy. Another important consequence of Thomson scattering is that it does not generate circular polarization in the case when the electrons are assumed to be unpolarized. Based on this fact, during these years it has been erroneously assumed, at least from the theoretical point of view, that the CMB does not have a circular polarization at all, even though there have been initial studies that might support its existence [2] and also initial experimental efforts to detect it [3].
In recent years there have been several other theoretical studies exploring the possibility of CMB circular polarization from standard and non-standard effects, and also new experiments such as MIPOL [4] and SPIDER [5] aiming to detect it. The MIPOL [4] collaboration reported an upper limit on the degree of circular polarization at present of P_C(t₀) ≲ 7 × 10⁻⁵ − 5 × 10⁻⁴ at the frequency 33 GHz and at angular scales between 8° and 24°. On the other hand, the SPIDER collaboration reported an upper limit on the CMB circular polarization power spectrum ℓ(ℓ + 1)C_ℓ^{VV}/(2π) < 255 (µK)² for multipole momenta 33 < ℓ < 307 at the CMB frequencies ν₀ = 95 GHz and ν₀ = 150 GHz. From the theoretical point of view, studies based on non-standard effects that generate circular polarization include: the interaction of the CMB with a vector field via a Chern-Simons term [6], non-commutative geometry [7] and free photon-photon scattering due to the Euler-Heisenberg Lagrangian term [8]. On the other hand, some theoretical studies of standard effects include: the electron-positron scattering in magnetized plasma at the decoupling time [9], the propagation of the CMB photons in the magnetic field of supernova remnants of the first stars [10], the scattering of the CMB photons with the cosmic neutrino background [11] and also the alignment of the cosmological matter particles in the post-decoupling epoch, which results in an anisotropic matter susceptibility tensor [12]. For a recent, though not complete, review of the CMB circular polarization see Ref. [13]. Apart from the circular polarization generation effects mentioned above, there is a class of effects called magneto-optic effects which generate CMB circular polarization as well. In Ref. [14] and Ref. [15], I studied the most important magneto-optic effects which can generate CMB circular polarization when the CMB interacts with large-scale cosmic magnetic fields. Among the effects which I studied, one is a standard effect, namely the CM effect, while the other effects are non-standard and include the vacuum polarization in an external magnetic field due to one-loop electron-positron, one-loop millicharged fermion-antifermion and the photon-pseudoscalar mixing in a magnetic field. For all these effects to occur, the presence of a magnetic field is necessary; it gives rise to birefringence effects due to the fact that each of the photon states acquires a different index of refraction in the presence of the magnetized plasma. While it is well known that a magnetic field exists in galaxies and galaxy clusters with a magnitude of a few µG, it is still not known if such a field is present also in the intergalactic space. The only information that we have about intergalactic magnetic fields is in the form of upper and lower limits on the field magnitude at the present epoch. The upper limits on the magnetic field amplitude are found from observations of the CMB temperature anisotropy and from the rotation angle of the CMB polarization plane due to the Faraday effect. The temperature anisotropy upper limit is usually stronger than the Faraday effect limit, as reported by the Planck collaboration [16], where the limit from CMB temperature anisotropy is B_e0 ≲ 3 nG while the limit from the Faraday effect is B_e0 ≲ 1380 nG.
One important aspect of these limits is that they differ from each other by roughly three orders of magnitude; most importantly, these limits do not mutually exclude each other, for the simple fact that they are model dependent. For a general review of large-scale cosmic magnetic fields see Ref. [17]. One key aspect which distinguishes the CMB linear polarization from the CMB circular polarization is that the former, being generated at the decoupling time by Thomson scattering, does not depend on the CMB photon frequency because of the nature of Thomson scattering, while the latter in most cases strongly depends on the CMB frequency. Because of this frequency dependence of the circular polarization, there is in some sense a kind of uncertainty on how to use and interpret the current limits obtained by experiments such as MIPOL and SPIDER, since their limits are usually derived by observing the CMB at a specific frequency, and it is not known how substantial the signal could be at other frequencies. In order to study and detect CMB circular polarization, it is very important to first identify the (possibly standard) circular polarization effects that generate substantial CMB circular polarization and to identify the frequency band where their signal is the strongest. So far, there has been a tendency in the literature to study the circular polarization in the high-frequency range, namely for frequencies above ten to a few hundred GHz. This tendency has been partially influenced by the fact that the most important CMB experiments such as WMAP and Planck operate at these frequencies, and therefore their data at these frequencies might be useful in some way. In addition, there are some effects such as photon-photon scattering in a magnetic field [14] and the free photon-photon scattering [8], [12] which are linearly proportional to the CMB frequency, and one might hope that the higher the frequency, the stronger the circular polarization signal. Even though this is true, the signal for such effects is still too weak, even at very high frequencies, to be detected in the near future. Based on the facts discussed above, it is rather logical to explore the CMB circular polarization at low frequencies and study the magnitude of the signal. In this work, I study such a possibility and concentrate on the CM effect in a large-scale magnetic field. As we will see, the CM effect is proportional to the square of the magnetic field amplitude, B², and inversely proportional to the third power of the CMB frequency, namely ν⁻³. It is especially this ν⁻³ scaling law with frequency which makes the CM effect one of the most important effects in generating circular polarization of the CMB. I partially studied this effect in a previous work [14], where some estimates of the degree of circular polarization were made for a specific configuration of the magnetic field with respect to the photon direction of propagation. In this work, I study the CM effect in detail for an arbitrary configuration of the magnetic field direction and for an arbitrary magnetic field amplitude profile. By generalizing the CM effect to an arbitrary direction of the magnetic field with respect to the observer's direction, the system of differential equations for the Stokes parameters has additional terms with respect to the case studied in Ref. [14]. In addition, I also study in detail the impact that the CM effect has on the rotation angle of the CMB polarization plane and its interaction with the Faraday effect.
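Since the B² and ν⁻³ dependence just described drives the whole case for looking at low frequencies, a short numerical sketch may help fix the orders of magnitude; this is my own illustration rather than code from the paper, and the reference values B_ref = 1 nG and ν_ref = 10⁸ Hz are arbitrary normalizations.

```python
# Relative size of the Cotton-Mouton term, which scales as B^2 / nu^3.
def cm_scaling(B, nu, B_ref=1e-9, nu_ref=1e8):
    """Size of the CM term relative to (B_ref, nu_ref), in arbitrary units."""
    return (B / B_ref) ** 2 * (nu_ref / nu) ** 3

print(cm_scaling(B=1e-9, nu=1e8))   # 1.0   (reference point)
print(cm_scaling(B=1e-9, nu=1e11))  # 1e-09 (moving from 0.1 GHz to 100 GHz)
print(cm_scaling(B=8e-8, nu=1e8))   # 6400  (raising the field from 1 nG to 80 nG)
```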
This paper is organized in the following way: in Sec. 2, I discuss in a concise way the propagation of electromagnetic radiation in a magnetized plasma and derive the elements of the photon polarization tensor in a cold magnetized plasma. In Sec. 3, I derive the system of differential equations for the Stokes parameters in an expanding universe. In Sec. 4, I find perturbative solutions of the equations of motion in various regimes. In Sec. 5, I calculate in detail the generation of the CMB circular polarization due to the CM effect at present. In Sec. 6, I study the rotation angle of the CMB polarization plane due to the CM effect alone and also due to a combination of the CM and Faraday effects. In Sec. 7, I conclude. In this work I use the metric with signature η_µν = diag[1, −1, −1, −1] and work with the rationalized Lorentz-Heaviside natural units (k_B = ℏ = c = ε₀ = µ₀ = 1) with e² = 4πα. In addition, in this work we use the values of the cosmological parameters found by the Planck collaboration [18], with Ω_Λ ≃ 0.68, Ω_M ≃ 0.31, h₀ ≃ 0.67 and zero spatial curvature, Ω_κ = 0. Propagation of the electromagnetic waves in a magnetized plasma In this section we give a detailed description of the propagation of electromagnetic waves in a cold magnetized plasma. This description is useful because it allows us to understand in detail how electromagnetic waves propagate in a cold magnetized plasma and which are the most common effects which give rise to birefringence in the medium. In this section we use the same notation as in Ref. [19], where the basics of the propagation of electromagnetic waves in a cold magnetized plasma are presented in the appendix. When electromagnetic waves (photons) propagate in a medium, several effects manifest themselves, including dispersion, absorption and scattering of the electromagnetic radiation. In connection with the dispersion phenomena, the effects of the medium on the incident electromagnetic wave are usually described in terms of the photon polarization tensor Π_ij (i, j = x, y, z) with components in a given cartesian coordinate system where the medium is at rest. Consequently, in a medium the free Maxwell equations in momentum space, in the absence of external currents, get modified to Eq. (1) for a plane electromagnetic wave travelling in the medium. Here ω is the photon energy and we used the expression k_ij = ωn_ij, with k_ij being the photon momentum tensor and n_ij being the index of refraction tensor of photons in the medium. We may see that the role of Π_ij in (1) is to give to photons an "effective mass" in the medium. In the case when the medium is isotropic, we have that n_ij is a diagonal tensor with diagonal entries corresponding to the photon indexes of refraction in the medium, where n_ii ≠ 1. In the case when photons propagate in vacuum, we have that n_ij = δ_ij and we get the on-shell photon relation ω = |k|, where k is the photon wave-vector and Π_ij = 0. The explicit expression of the photon polarization tensor Π_ij depends on the induced currents that enter a given problem. In this work we are interested in a cold magnetized plasma, which is a quite common situation in astrophysics and cosmology. We assume that the magnetized plasma is almost collisionless, globally neutral and homogeneous.
In addition, there is no external electric field, namely E_e = 0, and the presence of the external magnetic field B_e locally breaks the isotropy of the plasma since it singles out a preferred direction in a given region of space where the plasma is located. In the cold magnetized plasma approximation, consider now an incident electromagnetic wave propagating along the observer's z axis, which points to the East, in a magnetized plasma with external magnetic field vector B_e = B_e n̂. Here n̂ = [cos(Θ), sin(Θ)cos(Φ), sin(Θ)sin(Φ)] is a unit vector in the direction of the external magnetic field B_e, and Θ, Φ are, respectively, the polar and azimuthal angles between the magnetic field B_e and the x and y axes. As shown in Ref. [19], the medium polarization vector P satisfies the equation of motion (2), where E is the electric field of the incident electromagnetic wave, ω_pl = √(4πα n_e/m_e) is the plasma frequency, n_e is the free electron number density and ω_c = eB_e/m_e is the cyclotron frequency. In Eq. (2) the dot symbol (·) above P denotes the derivative with respect to time. Assume that the fields evolve in time harmonically at a given point x, so that we can write them in the form (3), where ω is the incident electromagnetic wave energy. By using the expressions in (3) in Eq. (2) and then solving for the components of P, we get the solution (4) in terms of the incident electric field components E_j, in the case when ω ≠ 0 and ω ≠ ω_c, where χ_ij(ω) are the components of the electric susceptibility tensor given in (5), with χ_yx = χ*_xy. The expressions for the components of χ_ij in (5) are valid for an incident electromagnetic wave with an arbitrary direction of propagation with respect to B_e. In addition, the components χ_ij do not explicitly depend on x but only implicitly through B_e(x, t), which enters in ω_c. Another fact is that the expressions for χ_ij in (5) are valid for an arbitrary external magnetic field profile B_e(x, t). After these general comments about (5), let us find the components of the photon polarization tensor in a cold magnetized plasma. In order to do that we have to relate the components of χ_ij with Π_ij. It is well known that the components of the index of refraction tensor n_ij are related to the relative permittivity tensor ε_ij through the relation n²_ij = ε_ij. On the other hand, the relative permittivity tensor ε_ij is related to the electric susceptibility tensor χ_ij through the relation χ_ij = ε_ij − δ_ij. By using these relations in (1), we get Π_ij = −ω² χ_ij, which is Eq. (6). By using the expressions for χ_ij in (5) into (6), we get the components of Π_ij in (7). Now by using the constraint (10) in the (i = x, y) components of the electric field in (9), we get Eq. (11) for the transverse components of the electric field, where Π̃_ij for (i, j = x, y) is the effective photon polarization tensor (12) of the transverse electromagnetic field in the cold magnetized plasma, with the components of Π_ij given in (7). The effective expression for the polarization tensor in (12) takes into account the mixing of the longitudinal electromagnetic wave in plasma with the usual transverse electromagnetic waves. From expressions (12) and (7), we find the expressions (13) for the components of Π̃_ij. The expressions given in (13) are the most general form of the elements of Π̃_ij in a cold magnetized plasma. As already mentioned above, they take into account the mixing of the longitudinal electric field with the usual transverse electric field.
We may note that this contribution in (13) is inversely proportional to ω⁴ − ω²ω_pl² − ω²ω_c² + ω_c²ω_pl² sin²(Θ) sin²(Φ) ≠ 0. The latter condition is satisfied as far as ω > 0 and ω² does not coincide with the roots of the corresponding quadratic equation in ω², where we must have ω_pl² + ω_c² ≥ 2ω_c ω_pl |sin(Φ) sin(Θ)| in order to have real and positive roots of the quadratic equation. Another important question to ask is: for what minimum frequencies do we have propagating transverse electromagnetic waves? This can be seen by requiring that all spatial derivatives on the left-hand side in Eq. (9) are zero, namely a non-propagating electric field in space. In that case we would have (Π_ij − ω²δ_ij)E_j = 0, where a nontrivial solution exists only if det(M_ij) = 0, with M_ij ≡ Π_ij − ω²δ_ij. However, the solution of det(M_ij) = 0 in terms of ω would be quite complicated in the case when all components of Π_ij ≠ 0. For this reason it is convenient to rotate the coordinate system in such a way that Φ = π/2 and Θ = π/2, namely B_e is along the direction of propagation of the electromagnetic wave. Under a rotation of the coordinate system we have that M′_ij in the new coordinate system is related to the old M_ij through M′_ij = R_il R_jm M_lm and E′_j = R_jk E_k, where R_il is an orthogonal rotation matrix with unit determinant. In the rotated coordinate system the equation M_ij E_j = 0 becomes M′_ij E′_j = M′_ij(Φ = π/2, Θ = π/2)E′_j = 0. Consequently, the condition det(M_ij) = 0 is equivalent to det[M′_ij(Φ = π/2, Θ = π/2)] = 0. Now by requiring that det[M′_ij(Φ = π/2, Θ = π/2)] = 0, and after doing some algebra, we find that the lower bounds on the frequencies for propagation are ω > ω_pl and ω > ±ω_c/2 + √(ω_c²/4 + ω_pl²). Solutions of the equations of motion of the Stokes parameters In the previous section we derived the most general form of the elements of the photon polarization tensor in a cold magnetized plasma for an arbitrary direction of propagation of the electromagnetic waves with respect to the external magnetic field B_e. In this section, we focus our attention on deriving the equations of motion of the Stokes parameters in an expanding universe and provide perturbative solutions of the equations of motion. As in the previous section, let us consider an electromagnetic wave propagating along the z direction in a cartesian reference system with wave vector k = (0, 0, k) in a cold magnetized plasma with an arbitrary direction of the external magnetic field B_e. The linearized equations of motion for the vector potential transverse components A_x and A_y in an unperturbed FRW metric for the CMB photons are given by Eq. (15) [14], where A_x and A_y are respectively the transverse components of the vector potential A of the CMB photons with respect to the x and y axes, H(t) = ȧ(t)/a(t) is the Hubble parameter, I is a 2 × 2 identity matrix and M is the mixing matrix given in (16), where M_x = −Π̃_xx/(2ω), M_y = −Π̃_yy/(2ω) and M_CF = −Π̃_xy/(2ω). The term M_CF = M_C + iM_F takes into account the combination of the CM and Faraday effects in a magnetized plasma. In order to describe the polarization of the light, and more precisely in our case of the CMB photons, it is better to work with the Stokes parameters rather than with the wave equation (15).
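As a small concreteness check of the definitions just given, the 2 × 2 mixing matrix M can be assembled numerically from the components of the effective polarization tensor; the Π̃ values and the frequency below are placeholders of my own choosing, and the point is only that the Hermiticity of Π̃ (with Π̃_yx = Π̃*_xy) carries over to M, as used in the next step.

```python
import numpy as np

# Placeholder components of the effective polarization tensor (illustrative only).
Pi_xx, Pi_yy = 1.0e-18, 1.2e-18   # real diagonal entries of Pi-tilde
Pi_xy = 3.0e-19 + 1.0e-19j        # complex off-diagonal entry
omega = 2 * np.pi * 1e8           # assumed photon angular frequency (rad/s)

M_x = -Pi_xx / (2 * omega)
M_y = -Pi_yy / (2 * omega)
M_CF = -Pi_xy / (2 * omega)       # M_CF = M_C + i M_F (Cotton-Mouton + Faraday)

M = np.array([[M_x, M_CF],
              [np.conj(M_CF), M_y]])  # off-diagonals conjugate since Pi_yx = Pi_xy*
print(np.allclose(M, M.conj().T))    # True: M is Hermitian, photon number conserved
```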
The procedure for obtaining the equations of motion of the Stokes parameters has been presented in [14] and consists of two steps: first, write the equations of motion for the polarization density matrix ρ based on the wave equation (15); second, express the polarization density matrix in terms of the Stokes parameters in order to get the equations of motion of the latter quantities. The equations of motion of the polarization density matrix in an unperturbed FRW metric are given by Eq. (17) [14], where D = (3/2)H(t)I is the damping matrix which takes into account the damping of the electromagnetic waves in an expanding universe due to the Hubble friction. In our case the field mixing matrix M is Hermitian, namely M = M†, since we do not include any process which might change the number of photons due to decay or absorption in the medium. Now by using the connection between the Stokes parameters and the polarization density matrix elements as shown in Ref. [20], see also the appendix of Ref. [14], we get the equations of motion of the effective Stokes parameters (18); the first of these reads İ(k, n̂, t) = −3H(t)I(k, n̂, t), and in the remaining equations we have defined ∆M ≡ M_y − M_x, with the dot sign above the Stokes parameters indicating the time derivative with respect to the cosmological time t. For simplicity, in (18) we have dropped the symbols B_e, Φ and Θ which do appear in the elements of M. The system of linear differential equations (18) can be written in the more compact form Ṡ(k, n̂, t) = A(k, t)S(k, n̂, t), where S = (I, Q, U, V)ᵀ is the Stokes vector formed with the Stokes parameters and A(k, t) is the time-dependent coefficient matrix given in (19). In most cases it is more convenient to express the quantities in A as a function of the photon temperature T rather than of the cosmological time t; in this case one needs to express the time derivative in an expanding universe as ∂_t = −HT∂_T in the equations of motion of the Stokes vector, namely S′(k, n̂, T) = Ã(k, T)S(k, n̂, T). At this stage it is more convenient to write the matrix Ã(k, T) as the sum Ã(k, T) = B(k, T) + (3/T)I_{4×4}, where in an expanding universe the wave-vector k = k(T) is a function of the temperature T. We may note that, with respect to the case when the direction of B_e is in the xz plane as studied in Ref. [14], for an arbitrary magnetic field direction the terms 2M_C appear in the matrix B. The appearance of these terms, which make possible the mixing of the Q parameter with the U and V parameters, complicates the situation with respect to the case when M_C = 0. Series solution of the polarization equations of motion In the previous section, Sec. 3, we found the equations of motion of the Stokes parameters in an expanding universe for an arbitrary direction of the external magnetic field B_e with respect to the electromagnetic wave direction of propagation. In this section we focus our attention on perturbative solutions of the equations of motion in some limiting cases. Before aiming to find these solutions, it is very important to explicitly calculate each term which enters the matrix B(k, T), since this will be very useful in what follows. Let us recall the definitions M_F ≡ −Im{Π̃_xy}/(2ω), M_C ≡ −Re{Π̃_xy}/(2ω) and ∆M ≡ M_y − M_x = (Π̃_xx − Π̃_yy)/(2ω).
Now by using the expressions of the photon polarization tensor given in (13), we get the expressions (21) for the elements of the matrix B. The expressions in (21), which are the most general ones for arbitrary magnetic field direction and magnitude, can be further simplified by making some reasonable assumptions on the parameters. Since in this work we concentrate on the CMB frequency spectrum, we have that ω ≫ ω_pl and ω ≫ ω_c. In order to see this, let us calculate explicitly the numerical values of the parameters. The numerical value of the angular plasma frequency which enters the expressions in (21) can be written as ω_pl = 5.64 × 10⁴ √(n_e/cm⁻³) (rad/s), or ν_pl = ω_pl/(2π) = 8976.33 √(n_e/cm⁻³) (Hz) for the frequency. On the other hand, the numerical value of the cyclotron angular frequency is given by ω_c = 1.76 × 10⁷ (B/G) (rad/s), or ν_c = 2.8 × 10⁶ (B/G) (Hz). However, in the case of CMB photons propagating in an expanding universe, we can express the time t in terms of the cosmological temperature T as t = t(T), as we did in the previous section. Therefore, the conditions ω ≫ ω_pl and ω ≫ ω_c in an expanding universe are respectively satisfied when the conditions (22) hold, where we expressed ν(t) = ν₀[a(t₀)/a(t)] = ν₀(T/T₀), with ν₀ being the frequency of the electromagnetic radiation at the present time t = t₀ at the temperature T = T₀, a(t) being the universe expansion scale factor and B₀ = B(t₀) = B(T₀) the magnetic field strength at the present time. Here we expressed the number density of free electrons as n_e(t) = n_e(T) ≃ 0.76 n_B(T₀)X_e(T)(T/T₀)³, where n_B(T₀) is the total baryon number density at the present time and X_e(T) is the ionization function of the free electrons. The factor of 0.76 takes into account the contribution of hydrogen atoms to the free electrons in the post-decoupling epoch. By taking for example n_B(T₀) ≃ 2.47 × 10⁻⁷ cm⁻³, as given by the Planck collaboration [18], and expressing a(t₀)/a(t) = T/T₀, we can write the conditions (22) as in (23). Given the fact that the present-day CMB photon frequencies lie above ν₀ ≥ 10⁸ Hz, the condition given in (23) is well satisfied for physically reasonable values of X_e(T) and B₀; a quick numerical check is sketched below. With these considerations in mind, we can simplify (21) for ω ≫ ω_pl and ω ≫ ω_c, and write the expressions in (21) as in (24). From the expressions (24) we may note that each expression within the square brackets is composed of a first term of trigonometric functions and a second term which is the product of trigonometric functions with terms ω_pl²/ω² or ω_pl²ω_c²/ω⁴. However, since we are in the regime where ω ≫ ω_pl and ω ≫ ω_c, we also have ω_pl²/ω² ≪ 1 and ω_pl²ω_c²/ω⁴ ≪ 1. This fact tells us that in the case when the trigonometric functions in the first and second terms within the square brackets in (24) are different from zero, the second term is usually much smaller than the first term. In order to see this, let us consider the case when Θ = 0, namely when the magnetic field has components only along the x axis. In this case the ratio of the second to the first term is of order ω_pl²/ω² ≪ 1, so the contribution coming from the second term can be completely neglected. By making similar examples, one can see that the contribution of the second terms within the square brackets in (24), which arise due to the mixing of the longitudinal electromagnetic wave with the transverse waves, can be neglected with respect to the first terms.
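As a quick numerical sanity check of these conditions (a sketch of my own, using the frequency formulas quoted above; the (T/T₀)² scaling assumed for a frozen-in magnetic field and the choice X_e = 1 are simplifying assumptions):

```python
import numpy as np

def nu_pl_Hz(n_e_cm3):
    # nu_pl = 8976.33 * sqrt(n_e / cm^-3) Hz, as quoted above.
    return 8976.33 * np.sqrt(n_e_cm3)

def nu_c_Hz(B_gauss):
    # nu_c = 2.8e6 * (B / G) Hz, as quoted above.
    return 2.8e6 * B_gauss

T_ratio = 1090.0                          # T/T0 at decoupling
n_e = 0.76 * 2.47e-7 * 1.0 * T_ratio**3   # n_e ~ 0.76 n_B(T0) X_e (T/T0)^3, X_e = 1
B = 1e-9 * T_ratio**2                     # 1 nG field today, scaled as (T/T0)^2
nu = 1e8 * T_ratio                        # a nu_0 = 1e8 Hz photon at decoupling

print(f"nu    = {nu:.2e} Hz")             # ~1.1e11 Hz
print(f"nu_pl = {nu_pl_Hz(n_e):.2e} Hz")  # ~1.4e5 Hz
print(f"nu_c  = {nu_c_Hz(B):.2e} Hz")     # ~3.3e3 Hz
# nu exceeds nu_pl and nu_c by many orders of magnitude, so omega >> omega_pl
# and omega >> omega_c hold throughout the post-decoupling history.
```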
Consequently, in the regime studied in this work, ω ≫ ω_pl and ω ≫ ω_c, we arrive at the simplified expressions (25). Neumann series solutions Here we present Neumann series solutions of the equations of motion by making use of perturbation theory. Let us concentrate on the full equation S′(k, n̂, T) = [B(k, T) + (3/T)I_{4×4}]S(k, n̂, T) and omit from now on the dependence of the Stokes vector on n̂ and k and of the matrix B on k. In the equation of motion of the Stokes vector, the term 3/T takes into account the damping of the fields in an expanding universe. In the case when there is no magnetic field, the solution of the equation is simply S(T) = (T/T_i)³ S(T_i). It is worth stressing from now that the effective scaling of the Stokes vector in an expanding universe is not (T/T_i)³ but (T/T_i)², as discussed in detail in Ref. [14]. In the case when the magnetic field is present, namely when B(T) ≠ 0, it is convenient to work with the rescaled Stokes vector S̃(T). In this case, the equations of motion for S′(T) = [B(T) + (3/T)I_{4×4}]S(T) in components become (26). The system of first-order linear differential equations given in (26) cannot be solved exactly except in some particular cases. However, one of the main characteristics of a linear system of first-order differential equations is that its general solution is given by S̃(T) = M̃(T)S̃(T_i), where M̃(T) is the solution matrix. Consequently, if we insert the general solution S̃(T) = M̃(T)S̃(T_i) in (26), we get that the solution matrix M̃ satisfies the equation (27), with the initial conditions M̃_lj(T_i) = I_{4×4}. Therefore the solution of the system (26) is reduced to the solution of the differential equations for the matrix M̃_lj in (27). The system of differential equations given in (27) can formally be solved as a convergent Neumann series, given in (28), in the case when the non-zero elements of the matrix B_lm(T) satisfy ∫_{T_i}^{T} dT′ B_lm(T′) < 1; a toy sketch of this truncated series is given below. In order to find the parameter space arising from the conditions ∫_{T_i}^{T} dT′ B_lm(T′) < 1, we need to evaluate explicitly each element in the matrix B_ij(T). In each element of B_ij(T) enters the product H(T)T, where the Hubble parameter in the case of zero spatial curvature is given by (29), and where after the decoupling epoch the contribution of relativistic particles to the total energy density, and consequently to the Hubble parameter, can be safely neglected. In addition, since the contribution of the cosmological constant to the Hubble parameter is important only at low redshifts, we may approximate the Hubble parameter in our calculations as in (30). The conditions that must be checked are |M_F(T)| < 1, |M_C(T)| < 1 and |∆M(T)| < 1. The condition |M_F(T)| < 1 is satisfied by the following stronger condition (31), where we used the fact that |sin(Θ) sin(Φ)| ≤ 1 in |M_F(T)| < 1. So, the condition (31) is a stronger condition on the parameter space with respect to the case when the term |sin(Θ) sin(Φ)| is taken into account. In the case when |sin(Θ) sin(Φ)| → 0, the condition |M_F(T)| < 1 is in principle satisfied for any finite value of the parameters B_e0, ν₀ and T. On the other hand, the conditions |M_C(T)| < 1 and |∆M(T)| < 1 are respectively satisfied by the much stronger conditions (32), which involve the frequency scale 6.05 × 10³¹ Hz for ν₀; here we used the fact that |sin(2Θ) cos(Φ)| ≤ 1 in |M_C(T)| < 1 and that |sin²(Θ) cos²(Φ) − cos²(Θ)| ≤ 1 in |∆M(T)| < 1. Again, in the cases when |sin(2Θ) cos(Φ)| → 0 and sin²(Θ) cos²(Φ) − cos²(Θ) → 0, the conditions |M_C(T)| < 1 and |∆M(T)| < 1 are in principle satisfied for any finite values of the parameters B_e0, ν₀ and T.
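The toy sketch of the truncated Neumann series announced above follows; the matrix B(T) used here is a placeholder of my own, with an assumed 1/T profile and coefficients chosen small enough that the convergence condition holds, a zero first row, first column and diagonal reflecting the structure of B noted in the text, and an antisymmetric (Q, U, V) block standing in for the actual M_F, M_C and ∆M.

```python
import numpy as np

# Toy B(T): the coefficients and the 1/T profile are placeholders, not the real
# M_F, M_C, dM; they are small enough that |integral of B dT| < 1 (convergence).
def B_matrix(T, mF=0.03, mC=0.01, dM=0.005):
    return np.array([
        [0.0,    0.0,    0.0,   0.0],
        [0.0,    0.0,  -2*mF,  2*mC],
        [0.0,  2*mF,    0.0,   -dM],
        [0.0, -2*mC,    dM,    0.0],
    ]) / T

# M(T) ~ I + int B dT' + int B(T') [int_{Ti}^{T'} B dT''] dT'  (second order).
def neumann_second_order(Ti, T, n=20000):
    Ts = np.linspace(Ti, T, n)
    dT = Ts[1] - Ts[0]
    inner = np.zeros((4, 4))    # running value of the inner integral
    second = np.zeros((4, 4))
    for Tp in Ts:
        Bp = B_matrix(Tp)
        second += Bp @ inner * dT   # B(T') times the accumulated inner integral
        inner += Bp * dT
    return np.eye(4) + inner + second

S_i = np.array([1.0, 0.1, 0.05, 0.0])          # toy (I, Q, U, V) at decoupling
print(neumann_second_order(2970.0, 2.725) @ S_i)
```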
In order to find the parameter space for the conditions ∫_T^{T_i} dT′ B_lm(T′) < 1, it is necessary to know the free-electron ionization function X_e(T). This function satisfies a complicated differential equation, as shown in Ref. [21], and is in general calculated by solving that equation numerically. In Fig. 1a, the plots of X_e(T), X_e(T)T^{1/2} and X_e(T)T^{3/2} as functions of the CMB temperature T are shown. In the temperature interval 57.22 K ≤ T ≤ 2970 K, the curve of the ionization function X_e(T) is obtained by solving the differential equation for X_e(T) given in Ref. [21], where the lower limit T = 57.22 K corresponds to the start of the reionization epoch at redshift z_ion ∼ 20 and the upper limit corresponds to the CMB decoupling temperature T_i = 2970 K at redshift 1 + z ≃ 1090. Complete reionization is reached at approximately z_ion ≃ 7. The evolution of X_e(T) in the interval 21.8 K ≤ T ≤ 57.22 K has been obtained by a smooth interpolation between the curve of X_e(T) in 57.22 K ≤ T ≤ 2970 K and X_e(T) = 1 in 2.725 K ≤ T ≤ 21.8 K. Using the numerical solution for X_e(T) described above and plotted in Fig. 1a, we obtain the value 4.45 × 10⁶ K^{5/2} for the temperature integral of X_e(T)T^{3/2}, together with the analogous value for the other integral entering the conditions. With these values of the integrals, the stronger conditions (31) and (32) are respectively satisfied when (33) holds.

Having found the parameter space where the condition for convergence of the Neumann series certainly holds, we are now in a position to calculate the Stokes parameters. Since the Stokes vector is given by S̃(T) = M̃(T) S̃(T_i), all we need in order to calculate S̃(T) are the elements of the matrix M̃(T) given in (28). Since we are in the regime ∫_T^{T_i} dT′ B_lm(T′) < 1, it is sufficient for our purposes to truncate the Neumann series (28) at second order. Looking at the structure of the matrix B(T) in (20), we note that B_1j = B_j1 = B_jj = 0, with the remaining elements different from zero. Let us define, for convenience, the quantities in (34). Then the vanishing elements of M̃ are M̃_1j = M̃_j1 = 0, while the nonzero elements are given by (35). The matrix elements in (35) allow us to find explicit expressions for the Stokes vector S̃(T) up to second order in perturbation theory; the Stokes parameters are then given by (36). It is very important to stress that the expressions (36) are valid for an arbitrary direction of the external magnetic field with respect to the photon propagation, namely for arbitrary Θ, Φ, and for arbitrary profiles of B_e(T) and n_e(T).

Power series solution for dominant Faraday effect

In the previous section we found Neumann series solutions in the case when the conditions |M_F(T_0)| < 1, |M_C(T_0)| < 1 and |ΔM(T_0)| < 1 are satisfied. However, we made no specific assumption on the relative magnitudes of |M_F(T_0)|, |M_C(T_0)| and |ΔM(T_0)|, namely we did not specify which of these terms is bigger than the others. Consider now the case when the Faraday term dominates, with the conditions given in (37). Inserting the expansion (38) and B(T) = B_1(T) + B_2(T) into equation (27), and collecting the terms with the appropriate power of the expansion parameter ε, we obtain the following matrix system of equations (39), where for simplicity we suppressed the matrix-element indexes on B_1, B_2 and M̃.
The system of equations (39) has to be solved with the initial conditions M̃⁽⁰⁾(T_i) = I₄ₓ₄ and M̃⁽ᵐ⁾(T_i) = 0 for m ≥ 1. In order to solve the system in (41), let us multiply the third equation by the imaginary unit i and then add it to the second equation; this gives (42). We may observe that (42) is a first-order non-homogeneous linear differential equation for the combination of matrix elements on its left-hand side, with solution (43). By equating the real and imaginary parts of the left-hand side of (43) with those of the right-hand side, and directly integrating the first and fourth equations in (43), we obtain (44). The expressions in (44) allow us to find recursively the elements of M̃_ij(T) at order m + 1 when the elements at order m are known. Since we already know the elements of M̃_ij at order m = 0, as given in (40), we can recursively calculate those at order m + 1. Let us define for simplicity the quantities in (45). Using the definitions in (45) and the expressions in (44), we get for m = 0 the matrix elements of M⁽¹⁾ in (46). We can proceed in the same way to find the matrix elements of ε²M⁽²⁾_ij starting from the elements of M⁽¹⁾_ij given in (46); however, for our purposes it is sufficient to keep the elements of M̃_ij up to first order in ε. Using S̃_j(T) = M̃_jl(T) S_l(T_i) ≃ [M⁽⁰⁾_jl(T) + M⁽¹⁾_jl(T)] S_l(T_i), we obtain for the elements of the Stokes vector the expressions (47). It is worth recalling that the expressions in (47) are valid for M_F(T) ≠ 0, that is, when |sin(Θ) sin(Φ)| ≠ 0.

Let us now introduce a transformed Stokes vector that absorbs the dominant Faraday term B_1(T); the equation S̃′(T) = [B_1(T) + B_2(T)] S̃(T) then becomes (48). Note that so far we have made no assumption on the matrix B_1(T); in principle it can even be the null matrix, depending on the situation. After some lengthy calculations we get (49). We can solve Eq. (48) as a convergent Neumann series, as we did in Sec. 4.2, as long as ∫_T^{T_i} dT′ M_ij(T′) < 1. Using the matrix expression (49) in (48), we obtain the following solution for S̃(T) up to first order, Eq. (51). We may note that the solution (51) exactly coincides with the solution (47) found using regular perturbation theory. The solution (51) has been obtained without any restriction on the magnitude and sign of M_F(T), in contrast with the result of Sec. 4.2, where we worked under the assumption M_F(T) ≠ 0, which, for fixed nonzero values of B_e0, ν_0 and T, is equivalent to |sin(Θ) sin(Φ)| ≠ 0. This tells us that the condition M_F(T) ≠ 0 on the Faraday term is not necessary in order to obtain the solution (51): it arises only within the regular perturbation theory. On the other hand, in order to use the Neumann series expansion in this section, we required that ∫_T^{T_i} dT′ M_ij(T′) < 1.

Degree of circular polarization

In the previous sections we found perturbative solutions of the equations of motion of the Stokes parameters in two different regimes. In this section we focus on the generation of circular polarization; specifically, we study the expected degree of circular polarization at the present time and the expected rotation angle of the CMB polarization plane. We separate our analysis by first studying the solutions found in Sec. 4.1 and then those found in Sec. 4.2. In what follows, we consider the evolution of the CMB polarization and of the rotation angle of the polarization plane from the decoupling epoch, at temperature T = T_i, until the present time, at temperature T = T_0.
Moreover, we consider the CMB at the decoupling epoch to be partially polarized: it acquires only a linear polarization, due to Thomson scattering of the CMB photons off electrons, with no initial circular polarization, namely Q_i ≠ 0, U_i ≠ 0 and V_i = 0, as studied in Ref. [14].

Case when |M_F(T_0)| < 1, |M_C(T_0)| < 1 and |ΔM(T_0)| < 1

Let us first consider the generation of circular polarization and calculate its degree at the present time, T = T_0, in the case when |M_F(T_0)| < 1, |M_C(T_0)| < 1 and |ΔM(T_0)| < 1. Using the expressions for the Stokes parameters Ĩ(T) and Ṽ(T) found in (36), the degree of circular polarization of the CMB at present is given by the ratio of |Ṽ(T_0)| to Ĩ(T_0). It is convenient at this stage to normalize the CMB intensity at the decoupling time to unity, Ĩ_i = 1; in addition, we have that Ĩ(T_0) ≃ Ĩ_i. In what follows, we assume that V_i = 0 unless specified otherwise. In order to calculate P_C(T_0), we need to calculate explicitly the matrix elements M̃_42(T_0) and M̃_43(T_0): the first and second terms entering M̃_42(T_0) are given in (54) and (55), while the corresponding terms of M̃_43(T_0) are given in (56) and (57). Since all terms in M̃_42 and M̃_43 depend on the angles Θ and Φ, and because some of these terms vanish when averaged over Θ and Φ, it is more convenient to calculate the root mean square of the degree of circular polarization rather than its mean value. Using the expressions (54)-(57) in M̃_42 and M̃_43, we get (58) (suppressing the units for the moment). One important point about expression (58) is that the second terms, proportional to Q_i² and U_i², must be smaller in magnitude than the first terms: they correspond to second-order terms in perturbation theory, whose magnitudes must be smaller than the first-order terms for the series to converge. This implies that care must be taken in choosing the values of B_e0 and ν_0 when evaluating P_C^rms(T_0). However, since we are in the regime where the constraints (33) are satisfied, there is usually no reason to worry: values of ν_0 and B_e0 that satisfy (33) automatically keep the magnitudes of the second-order terms smaller than the first-order ones.

In Figs. 2 and 3, plots of the root mean square of the degree of circular polarization P_C^rms(T_0) as a function of the magnetic field amplitude B_e0 and of ν_0 are shown. In obtaining the plots we used expression (58), writing U_i = rQ_i, where r is a parameter that can have either sign and whose value is not known a priori. In addition, we chose values of B_e0 and ν_0 satisfying the constraints (33); usually, if the stronger constraint on the Faraday term (the first one in (33)) is satisfied, the remaining two stronger constraints, arising from |M_C(T_0)| < 1 and |ΔM(T_0)| < 1, are satisfied as well. We may observe from Figs. 2 and 3 that in this regime the acquired degree of circular polarization remains very small, at best of order 10⁻¹⁷.

Case when |M_F(T_0)| = 0 and |M_C(T_0)| < 1, |ΔM(T_0)| < 1

In the case when |M_F(T_0)| = 0 and |M_C(T_0)| < 1, |ΔM(T_0)| < 1, the constraints on ν_0 and B_e0 are much less stringent than in the previous section. In fact, for the finite values of ν_0 and B_e0 that interest us, the condition |M_F(T_0)| = 0 can hold only when |sin(Θ) sin(Φ)| = 0, which occurs either for Θ = nπ or Φ = nπ with n ≥ 0. In both cases the direction of the magnetic field is perpendicular to the direction of photon propagation, and M_F(T_0) = 0 and M_C(T_0) = 0 (before turning to this transverse-field case in detail, a small Monte-Carlo sketch of the angle-averaging argument used above is given below).
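The sketch below Monte-Carlo-averages a toy first-order expression built from the angular factors quoted in the text (the sin(2Θ)cos(Φ)-type and sin²(Θ)cos²(Φ) − cos²(Θ)-type structures) over a uniformly oriented magnetic field: the mean vanishes while the root mean square does not, which is why the RMS is the meaningful statistic. The amplitudes a and b are placeholders, not the paper's coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

a, b = 1e-8, 1e-8           # hypothetical first-order amplitudes (placeholders)
Qi = 1e-6                   # initial linear polarization
r = 0.5                     # U_i = r*Q_i; sign and size of r not known a priori
Ui = r * Qi

# Sample the field orientation uniformly on the sphere
n = 200_000
theta = np.arccos(rng.uniform(-1.0, 1.0, n))
phi = rng.uniform(0.0, 2.0 * np.pi, n)

f_C = np.sin(2 * theta) * np.cos(phi)                          # M_C-like factor
f_dM = np.sin(theta)**2 * np.cos(phi)**2 - np.cos(theta)**2    # Delta-M-like factor

V = a * f_C * Qi + b * f_dM * Ui        # toy first-order V(T0)
print("mean:", V.mean())                # ~0: the terms average out over angles
print("rms :", np.sqrt((V**2).mean()))  # finite: hence P_C is quoted as an RMS
```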
Consequently, the constraints in (33) reduce to the single constraint (Hz/ν_0)³ (B_e0/G)² < 8.35 × 10⁻³⁹, which corresponds to the stronger constraint on |ΔM(T_0)| < 1, namely the region within the black line in Fig. 1b. In order to calculate the degree of circular polarization, let us use the results obtained in Sec. 4.3, which were derived without any restriction on the magnitude of M_F(T_0). For absent Faraday effect, M_F(T_0) = 0, and consequently vanishing M_C(T_0) = 0, the degree of circular polarization is given by (59). As we can see from (59), the degree of circular polarization for a transverse magnetic field depends only on ΔM(T_0). Most importantly, we no longer have the constraints on M_{F,C}, but only the one from |ΔM(T_0)| < 1. In Figs. 4 and 5, plots of the degree of circular polarization for a transverse magnetic field as a function of ν_0, B_e0 and |r| are shown. We may observe in Fig. 4 that for higher values of B_e0 and lower values of ν_0 the acquired degree of circular polarization of the CMB is quite substantial, and for some values of the parameters is comparable with that of the linear polarization. For example, as we can see from Fig. 4, for B_e0 = 8 × 10⁻⁸ G we get P_C(T_0) ≃ 7.65 × 10⁻⁷ for ν_0 = 10⁸ Hz and P_C(T_0) ≃ 7.65 × 10⁻¹⁰ for ν_0 = 10⁹ Hz. It is worth pointing out that the expression (59) can also be obtained using the perturbative approach of the previous section.

Case when |M_F(T_0)| ≥ 1

In the case when |M_F(T_0)| ≥ 1, the situation is more complicated than in the previous cases. One aspect is that in (33) only the last two inequalities must be satisfied, while the first one need no longer hold. This means that the allowed region of parameters is the one within the black line in Fig. 1b, excluding the region within the dotted line. In this case the degree of circular polarization can be calculated using the results of Sec. 4.3, derived for arbitrary values of M_F(T); it is given by (60). The expression (60) is valid for any value of M_F(T) and for |M_C(T_0)| < 1, |ΔM(T_0)| < 1, even though in this section we study the case |M_F(T)| ≥ 1. The main difficulty in calculating P_C(T_0) analytically is that all the terms M_F, G_C and ΔG contain the ionization function, which has no known analytic expression. In order to find an analytic expression for P_C, in this section we approximate X_e(T) ≃ X̄_e, where X̄_e is an appropriately averaged value of the ionization function. We then obtain (62), where we used the identity cos(α − β) = cos(α) cos(β) + sin(α) sin(β). Defining x ≡ AT^{3/2}, so that dT = (2/3) A^{−2/3} x^{−1/3} dx, we obtain the corresponding analytic expression for P_C.

(Figure: P_C(T_0) as a function of ν_0 (Hz) for B_e0 = 10⁻⁹ G, with |r| = 0.1 and |r| = 10.)

Rotation angle of the polarization plane

In the previous section we studied the generation of CMB circular polarization by calculating P_C(T_0) explicitly in various regimes. In this section we focus on the rotation angle of the CMB polarization plane from the decoupling epoch until today. Apart from generating circular polarization, the CM effect also generates linear polarization, with nonzero Stokes parameters Q̃(T) and Ũ(T). At a given cosmological temperature T, the rotation angle of the polarization plane is given by ψ(T) = (1/2) arctan[Ũ(T)/Q̃(T)], where we must have Q̃(T) ≠ 0. Let us write ψ(T) = ψ(T_i) + δψ(T), where ψ(T_i) is the angle of the CMB polarization plane at the temperature T_i = 2970 K, corresponding to the decoupling time, in the common reference frame used to study the CMB, and ψ(T) is the angle of the polarization plane at temperature T < T_i.
Here δψ(T) is the rotation angle of the polarization plane accumulated from the decoupling time until the time corresponding to temperature T, and it is the quantity that interests us. Since, for the frequency range of interest in this work, the magnitudes of the effects we study are in general small, namely |M_F(T)| < 1, |ΔM(T)| < 1, |M_C(T)| < 1, and since experimentally δψ(T_0) is constrained to be a small quantity (in radians), we expect the rotation angle of the CMB polarization plane from decoupling until the present to be small as well, |δψ(T)| ≪ 1. In this case, using the trigonometric identity for the tangent of a sum of angles, we can write (70). In order to calculate δψ(T_0), we need to evaluate each matrix element in (70) at T = T_0. We have the expression (71), where we numerically calculated ∫_{T_0}^{T_i} dT X_e(T) T^{3/2} ∫_T^{T_i} dT′ X_e(T′) T′^{3/2} = 9.88 × 10¹² K⁵. We also obtain the expressions (72) and (73), where we used the numerically integrated value of the corresponding ionization integral. The last expression to calculate is (74). One important point about the expressions (73) and (74) is that the second-order iteration terms must be smaller than unity, because we are in the regime 0 ≤ |M_F(T)| < 1, |ΔM(T)| < 1 and |M_C(T)| < 1; consequently, the parameter values must be chosen so that these conditions are met. As long as the conditions in (33) are satisfied, however, this is automatic. On the other hand, for the terms appearing in (71) and (72), the second-order iteration terms can be equal to or even larger than the first-order terms, since here only 0 ≤ |M_F(T)| < 1, |ΔM(T)| < 1 and |M_C(T)| < 1 must hold, and no further condition on the relative magnitudes of these terms is needed at this stage.

All told, let us first consider the case M_F(T_0) = 0, which occurs when Θ = 0, namely absent Faraday effect; in this case we also have B = 0. From expression (70), dropping the units for simplicity, we obtain (75), where we neglected the subleading term proportional to δψ² on the left-hand side of (70). Evidently, since we work under the condition |ΔM(T_0)| < 1, the right-hand side of (75) is less than one for any value of r. In Fig. 8, plots of the present-epoch rotation angle of the polarization plane, δψ_0 = δψ(T_0), given by expression (75), are shown for various values of the parameters. We note that substantial rotation of the polarization plane occurs only at low frequencies and for higher values of B_e0; for higher frequencies and lower magnetic field amplitudes, δψ_0 is extremely small.

When the Faraday effect is present, the corresponding expression for the rotation angle is (76). There are several interesting facts about (76) that need clarification. First, δψ(T_0) depends on the angles Θ and Φ through the temperature-independent constants A, B and C. If for simplicity we first take the average over the angles Θ and Φ, for which ⟨A⟩ = 0 and ⟨BC⟩ = 0, and then the absolute value, we get

|2δψ(T_0)| ≃ [r/(1 + r²)] [1 − 1.6 × 10⁶ (9A²/(4X̄_e²)) − 9.88 × 10¹² C² X̄_e⁻²] / [1 − 1.6 × 10⁶ (9A²/(4X̄_e²)) − 9.88 × 10¹² B² X̄_e⁻²],

where we again used a geometric-series expansion and neglected the terms proportional to (B² − C²)A² and to (B² − C²)B². (A short numerical sketch of how δψ is extracted from the Stokes parameters is given below.)
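As announced, here is a minimal sketch of how δψ would be extracted numerically from the Stokes parameters. It uses only the definition ψ = (1/2) arctan(Ũ/Q̃) and the standard angle-difference identity; all numerical inputs are toy values, not results of this work.

```python
import numpy as np

def delta_psi(Qi, Ui, Qf, Uf):
    """Rotation of the polarization plane between the initial state (Qi, Ui)
    and the final state (Qf, Uf), from psi = 0.5*atan2(U, Q) and the identity
    tan(2*dpsi) = (Uf*Qi - Qf*Ui) / (Qf*Qi + Uf*Ui)."""
    return 0.5 * np.arctan2(Uf * Qi - Qf * Ui, Qf * Qi + Uf * Ui)

Qi, r = 1e-6, 0.5
Ui = r * Qi
Qf, Uf = Qi, Ui + 2e-9   # toy case: a little extra U generated along the way
print(np.degrees(delta_psi(Qi, Ui, Qf, Uf)))   # small rotation angle in degrees
```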
Therefore, when we take the average value over the angles Θ and Φ, the average rotation angle is still dominated by the CM effect, and the contribution of the Faraday effect is completely negligible at second order in perturbation theory. Second, when we do not take the average over the angles, δψ(T_0) is dominated by the Faraday term, and from (76) we can derive (78), where we again used a geometric-series expansion together with the fact that, when |M_F(T_0)| < 1, we also have |ΔM(T_0)|, |M_C(T_0)| ≪ |M_F(T_0)|. The latter condition is easily understood: for those parameter values that satisfy |M_F(T_0)| < 1, the values of |ΔM(T_0)| and |M_C(T_0)| are much smaller than |M_F(T_0)|, as long as the angles Θ and Φ are nonzero.

In the case when |M_F(T)| > 1, |ΔM(T)| < 1 and |M_C(T)| < 1, we can no longer use the expressions above, precisely because |M_F(T)| > 1. Instead, we can use the expressions for the Stokes parameters found in Sec. 4.3, valid up to first order in perturbation theory for an arbitrary value of M_F(T). The expressions for Q̃(T) and Ũ(T) are given by (51) and read as in (79); note that at first order in perturbation theory there is no contribution to the linear polarization from the CM effect. From (79) we get the rotation angle of the polarization plane, Eq. (81).

In Fig. 9, plots of the root mean square of the CMB rotation angle of the polarization plane given in (81), due to the Faraday effect, are shown. In Fig. 9a, the magenta region 0.36° ≤ δψ_0 ≤ 4.3° marks where experimental constraints on δψ_0 exist; the black points represent the values of these constraints found by different experiments at different frequencies. The first black point from the top in Fig. 9a is the constraint found by the BOOM3 experiment [22] at the frequency ν_0 = 145 GHz, |δψ_0| = 4.3°. The second point from the top is the constraint found by BICEP1 [23] at ν_0 = 129 GHz, |δψ_0| = 2.77°. The third and fourth points from the top are the constraints found by the QUaD collaboration [24] at 100 GHz and 150 GHz, respectively |δψ_0| = 1.89° and |δψ_0| = 0.83°. The fifth point from the top is the constraint found by the WMAP9 collaboration [25] at ν_0 = 53 GHz, |δψ_0| = 0.36°; see also Ref. [26] for a general discussion of the constraints on δψ_0. It is worth stressing that in Fig. 9 we chose the parameter values such that the conditions |M_C(T_0)| < 1 and |ΔM(T_0)| < 1 are satisfied, see Fig. 1b. We may observe from Fig. 9a that most experimental constraints on |δψ_0|, represented by the black points, lie within the grey region between the magnetic field values 10⁻⁸ G ≤ B_e0 ≤ 8 × 10⁻⁸ G. The only exception is the constraint found by WMAP9, for which the magnetic field amplitude corresponding to |δψ_0| = 0.36° is, by equation (81), B_e0 = 7.47 × 10⁻¹⁰ G. In Fig. 9b, plots of the root mean square of δψ_0 as a function of the CMB frequency are shown. As in Fig. 9a, the black points represent the constraints on |δψ_0|: the first point from the bottom corresponds to the WMAP9 constraint [25], the second to the QUaD constraint [24], and the third to BICEP1 [23] (a simple consistency check of how these constraints map onto B_e0 is sketched below).
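The values quoted for Fig. 9 admit a simple consistency check. Assuming the Faraday-dominated rotation follows the standard δψ ∝ B_e0/ν_0² frequency dependence, calibrating the proportionality on the QUaD point (0.83° at 150 GHz for B_e0 = 1.38 × 10⁻⁸ G, quoted just below) reproduces the other field amplitudes given in the text. The function is our hedged reconstruction, not the paper's expression (81).

```python
def B_from_dpsi(dpsi_deg, nu_GHz, dpsi_ref=0.83, nu_ref=150.0, B_ref=1.38e-8):
    """Field amplitude (G) implied by a rotation-angle constraint, assuming
    delta_psi = k * B_e0 / nu0^2 with k fixed by the QUaD reference point."""
    return B_ref * (dpsi_deg / dpsi_ref) * (nu_GHz / nu_ref) ** 2

for name, dpsi, nu in [("BICEP1", 2.77, 129.0), ("WMAP9", 0.36, 53.0)]:
    print(name, f"B_e0 ~ {B_from_dpsi(dpsi, nu):.3g} G")
# BICEP1 gives ~3.4e-8 G and WMAP9 ~7.5e-10 G, matching the values in the text.
```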
Each of the points lies on the frequency line where the corresponding measurement was done. For example, the QUaD constraint |δψ_0| = 0.83° is consistent with B_e0 = 1.38 × 10⁻⁸ G, while the BICEP1 constraint |δψ_0| = 2.77° is consistent with B_e0 = 3.4 × 10⁻⁸ G.

Conclusions

In this work we have studied the generation of CMB circular polarization and the rotation angle of the CMB polarization plane due to the CM effect in a large-scale cosmic magnetic field. We worked with the Stokes parameters and derived a system of differential equations for their evolution in an expanding universe. In the equations governing the evolution of the Stokes vector, we included all the standard magneto-optic effects that manifest themselves in a magnetized plasma, namely the CM and Faraday effects. We then looked for solutions of the equations of motion of the Stokes parameters in different regimes, using several perturbative approaches such as regular perturbation theory and the Neumann series expansion. The equations of motion we found in (18) generalize those of Ref. [14] to an arbitrary direction of B_e with respect to the photon direction of propagation and to an arbitrary magnetic field profile. For an arbitrary direction of B_e, the equations of motion (18) include two additional terms proportional to M_C(T), which would be absent in the particular case when the magnetic field B_e lies in the same plane as the wave-vector k. These two terms proportional to M_C(T) make possible the mixing of the Q(T) and V(T) parameters with each other.

The magnitude of the degree of circular polarization for the CM effect depends on several parameters, the most important being the CMB frequency ν_0 and the magnetic field amplitude B_e(x, t_0); the angles Θ and Φ also play an important role. Consequently, depending on the values of these parameters, we divided our analysis of the CM effect into three major regimes. In the regime |M_{C,F}(T_0)| < 1 and |ΔM(T_0)| < 1, the degree of circular polarization assumes the lowest values, as shown in Figs. 2 and 3, reaching at best P_C(T_0) ≃ 10⁻¹⁷. The reason for such low values of P_C(T_0) is that the condition |M_F(T_0)| < 1 drastically restricts ν_0 to very high frequencies and B_e0 to very low values. In the case when the Faraday effect is completely absent, which happens when the direction of B_e is perpendicular to the direction of propagation of the CMB photons, we essentially have M_F(T) = 0 and the generation of circular polarization is maximal: the absence of the Faraday effect for this specific configuration results in an enhancement of the generated CMB circular polarization. For this case we found in Sec. 5.2 that the degree of circular polarization can reach values close to the CMB degree of linear polarization in the low-frequency part of the spectrum. The maximum values are reached at ν ≃ 10⁸ Hz where, depending on the magnetic field amplitude, the degree of circular polarization lies in the range 1.19 × 10⁻¹⁰ ≲ P_C(T_0) ≲ 7.65 × 10⁻⁷ for magnetic field values 10⁻¹⁰ G ≤ B_e0 ≤ 8 × 10⁻⁸ G. These results are plotted in Figs. 4 and 5 for different values of the parameters (a small extrapolation of the quoted frequency scaling is sketched below).
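The two transverse-field values quoted above for B_e0 = 8 × 10⁻⁸ G (7.65 × 10⁻⁷ at 10⁸ Hz and 7.65 × 10⁻¹⁰ at 10⁹ Hz) suggest P_C ∝ ν_0⁻³ at fixed field amplitude. The sketch below calibrates this inferred scaling on the first point and extrapolates; the scaling law is our inference from the quoted numbers, not a formula taken from the paper.

```python
def P_C_transverse(nu0_Hz, P_ref=7.65e-7, nu_ref=1e8):
    """P_C(T0) for B_e0 = 8e-8 G, assuming the inferred nu0^-3 scaling."""
    return P_ref * (nu_ref / nu0_Hz) ** 3

for nu in (1e8, 1e9, 1e10):
    print(f"nu0 = {nu:.0e} Hz -> P_C ~ {P_C_transverse(nu):.2e}")
# 1e9 Hz reproduces the quoted 7.65e-10; 1e10 Hz extrapolates to ~7.65e-13.
```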
In the case when the Faraday effect is present, and in particular when |M_F(T)| ≥ 1, the generation of CMB circular polarization is strongly suppressed with respect to the case of absent Faraday effect, M_F(T) = 0. However, the generation of CMB circular polarization for |M_F(T)| ≥ 1 is usually much more efficient than in the case 0 < |M_F(T)| < 1 studied in Sec. 5.1. Also for |M_F(T)| ≥ 1 the degree of circular polarization depends on B_e0 and ν_0; in some specific range of these parameters, it scales with the frequency as P_C(T_0) ∝ ν_0⁻¹ and with the magnetic field amplitude as P_C(T_0) ∝ B_e0, see for example expression (67). As shown in Fig. 6b, the degree of circular polarization can reach values in the range 3 × 10⁻¹⁵ ≲ P_C(T_0) ≲ 2.5 × 10⁻¹³ for magnetic field values 10⁻⁹ G ≤ B_e0 ≤ 8 × 10⁻⁸ G at the frequency ν_0 ≃ 10⁸ Hz, |Q_i| = 10⁻⁶ and |r| = 1. At ν_0 ≃ 10⁹ Hz, the values of P_C(T_0) decrease by exactly an order of magnitude, since P_C(T_0) ∝ ν_0⁻¹ in the frequency range considered. On the other hand, P_C(T_0) ∝ |r|, so higher values of |r| give higher values of P_C(T_0), and vice versa for smaller |r|.

Apart from generating circular polarization, the CM effect also generates linear polarization, a fact evident in all the expressions of the Stokes parameters found in Sec. 4. In connection with the linear polarization, we studied the rotation angle of the CMB polarization plane due to the CM effect, both in the case when the Faraday effect is absent and in combination with the Faraday effect when it is present. When only the CM effect is present, the rotation angle scales as δψ(T_0) ∝ ν_0⁻⁶ B_e0⁴; consequently, significant rotation of the polarization plane occurs in the low-frequency part of the CMB spectrum and for higher values of the magnetic field amplitude. We found in Sec. 6 that at ν_0 ≃ 10⁸ Hz the rotation angle, in units of degrees, lies in the range 10⁻³ ≤ δψ(T_0) ≤ 1 for magnetic field amplitudes 10⁻⁹ G ≤ B_e0 ≤ 8 × 10⁻⁸ G, |r| = 1 and |Q_i| = 10⁻⁶, see Fig. 8. For higher frequencies, |δψ(T_0)| acquires extremely small values, uninteresting for any practical purpose. When the rotation angle δψ(T_0) is due to a combination of the CM and Faraday effects, the situation changes slightly with respect to the case of absent Faraday effect. If we take the average value of δψ(T_0), the Faraday effect gives a null contribution, while the CM effect gives on average the same contribution as in the absence of the Faraday effect. If we do not take the average value of δψ(T_0), or if we take the root mean square, the Faraday effect usually dominates over the CM effect when it is present. One important aspect is that, taking the root mean square of δψ(T_0), the Faraday effect generates significant rotation of the polarization plane, depending on the CMB frequency and the magnetic field amplitude. As shown in Fig. 9, the Faraday effect can generate substantial rotation of the polarization plane, especially in the low-frequency part of the spectrum, ν_0 ≲ 10¹⁰ Hz. In the high-frequency part, namely for frequencies above 10 GHz, the rotation angle is still large, depending on the magnetic field amplitude.
An interesting fact is that most of the experimentally found constraints on δψ(T_0) correspond to magnetic field amplitudes in the range 10⁻⁸ G ≲ B_e0 ≲ 8 × 10⁻⁸ G. If we regard the current limits on δψ(T_0) as a potential indicator of the existence of a large-scale magnetic field, and consequently of a nonzero rotation angle of the polarization plane, these limits allow us to make some predictions for the circular polarization signal due to the CM effect. Indeed, under the hypothesis that the rotation angle is due to the Faraday effect only (root mean square value), and given that most experimental constraints on δψ(T_0) suggest a magnetic field with amplitude approximately 10⁻⁸ G ≲ B_e0 ≲ 8 × 10⁻⁸ G, the circular polarization signal for these values of B_e0 would be quite substantial. For these values of the magnetic field, in the case when the field is perpendicular to the photon direction of propagation, we would have a present-day circular polarization signal in the range 3 × 10⁻⁸ K ≲ V(T_0) ≲ 2 × 10⁻⁶ K at ν_0 ≃ 10⁸ Hz, and a signal of 3 × 10⁻¹¹ K ≲ V(T_0) ≲ 2 × 10⁻⁹ K at ν_0 ≃ 10⁹ Hz, see Fig. 4. In the case when the magnetic field is not perpendicular, the circular polarization signal is reduced by many orders of magnitude and lies in the range 10⁻¹⁴ K ≲ V(T_0) ≲ 10⁻¹³ K, depending on the angles Θ and Φ, see Fig. 5. Based on the arguments presented so far, it seems quite plausible that the CM effect is probably the most substantial effect in generating CMB circular polarization. However, the strongest circular polarization signal is located in the CMB frequency range 10⁸ Hz ≲ ν_0 ≲ 10⁹ Hz. In the high-frequency range, the signal of circular polarization due to the CM effect is much smaller than in the low-frequency part of the spectrum, but it is still not negligible and can be comparable with the circular polarization signal from vacuum polarization in a magnetic field and with that due to free photon-photon scattering. If there is no major difficulty in arranging an experiment aimed at detecting circular polarization in the low-frequency part of the CMB spectrum, then it is quite logical to concentrate on this part of the spectrum, where the signal is the strongest and most likely to be detected in a relatively short time.
On Error-detecting Open-locating-dominating sets

An open-dominating set S for a graph G is a subset of vertices such that every vertex has a neighbor in S. An open-locating-dominating set S for a graph G is an open-dominating set such that each pair of distinct vertices in G has a distinct set of open neighbors in S. We consider a type of fault-tolerant open-locating-dominating set called an error-detecting open-locating-dominating set. We present further results on the topic, including an NP-completeness proof, extremal graphs, and a characterization of cubic graphs that permit an error-detecting open-locating-dominating set.

Introduction

An open-locating-dominating set can model a type of detection system which determines the location of a possible "intruder" in a facility, or of a possible faulty processor in a network of processors [12]. A detection system is an extensively studied graphical concept, also known as a watching system or discriminating code [1,2]. Various detection systems have been defined based on the functionality of each detector in the system. Other well-known and much studied detection systems include identifying codes [8] and locating-dominating sets [13] (see Lobstein's bibliography [9] for a list of the articles in this field). In this paper, we consider a fault-tolerant variant of an open-locating-dominating set called an error-detecting open-locating-dominating set. We present further results on the topic, including an NP-completeness proof, extremal graphs, and a characterization in cubic graphs.

Notations and definitions

We will use terms such as "at least k-dominated" to denote j-dominated for some j ≥ k. There are several fault-tolerant variants of OLD sets. For example, a redundant open-locating-dominating set is resilient to a detector being destroyed or going offline [10]: an open-dominating set S ⊆ V(G) is called a redundant open-locating-dominating (RED:OLD) set if ∀v ∈ S, S − {v} is an OLD set. The focus of this paper is another variant of an OLD set, called an error-detecting open-locating-dominating (DET:OLD) set, which is capable of correctly identifying an intruder even when at most one sensor or detector incorrectly reports that there is no intruder. Hence, DET:OLD sets allow an intruder to be uniquely located in a way which is resilient to up to one false negative. The following theorem (Theorem 1.1) characterizes OLD, RED:OLD, and DET:OLD sets; it is useful for constructing such sets and for verifying whether a given set meets their requirements. Naturally, our goal is to install a minimum number of detectors in any detection system. For finite graphs, the notations OLD(G), RED:OLD(G), and DET:OLD(G) denote the cardinality of the smallest possible OLD, RED:OLD, and DET:OLD sets on the graph G, respectively [7,10,11]. Figure 1 shows OLD, RED:OLD, and DET:OLD sets on a given graph G; one can verify that these sets of detectors meet the requirements specified in Theorem 1.1. No sets with fewer detectors exist for any of the three parameters, so we get OLD(G) = 6, RED:OLD(G) = 7, and DET:OLD(G) = 10. For infinite graphs, instead of the cardinality, we measure the density of the subset, defined as the ratio of the number of detectors to the total number of vertices; the notations OLD%(G), RED:OLD%(G), and DET:OLD%(G) denote the minimum density of such a set on G. Note that the notion of density is also defined for finite graphs. (A small verification sketch based on the DET:OLD characterization is given below.)
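As an illustration of how such verification can be automated, the sketch below checks the DET:OLD property for a small graph, assuming the characterization used later in this paper: every vertex must be at least 2-dominated, and every pair of vertices 2#-distinguished, which we take here to mean that the symmetric difference of their open neighborhoods contains at least two detectors. Function and variable names are ours.

```python
from itertools import combinations

def is_det_old(adj, S):
    """Check whether S is a DET:OLD set for the graph given as an adjacency
    dict {v: set_of_neighbors}. Assumes the characterization: every vertex is
    2-dominated, and every vertex pair is 2#-distinguished (the symmetric
    difference of their open neighborhoods meets S in >= 2 vertices)."""
    S = set(S)
    N = {v: adj[v] & S for v in adj}         # open neighborhoods within S
    if any(len(N[v]) < 2 for v in adj):      # 2-domination
        return False
    return all(len(N[u] ^ N[v]) >= 2         # 2#-distinguishing
               for u, v in combinations(adj, 2))

# Sanity check consistent with the results below: a 4-cycle admits no DET:OLD
# set, since opposite vertices have identical open neighborhoods.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_det_old(C4, {0, 1, 2, 3}))   # False: vertices 0 and 2 are open twins
```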
In Section 3, we present a proof that the problem of determining DET:OLD(G) for an arbitrary graph is NP-complete. Section 2 shows extremal graphs with the highest density. Sections 4 and 5 discuss DET:OLD sets in infinite regular graphs and in cubic graphs, respectively.

Extremal Graphs on DET:OLD sets

In this section, we consider extremal graphs with DET:OLD(G) = n. Let S ⊆ V(G) be a DET:OLD set for G; because non-detectors do not aid in locating intruders, S must also be a DET:OLD set for G − (V(G) − S), implying that the smallest graph permitting a DET:OLD set will have DET:OLD(G) = n. The following theorem shows that the smallest graphs with a DET:OLD set have n = 7.

Proof. Assume to the contrary that n ≤ 6. Clearly G has a cycle, because DET:OLD requires δ(G) ≥ 2. First, we consider the cases when the smallest cycle in the graph is C_n with n = 6, 5, 4. If the smallest cycle is a C6 subgraph abcdef, then a and c cannot be distinguished without having n ≥ 7, a contradiction. Suppose the smallest cycle is a C5 subgraph abcde. To distinguish a and c, by symmetry we can assume that ∃u ∈ N(a) − N(c) − {a, b, c, d, e}. Similarly, to distinguish b and e, we can assume by symmetry that ∃v ∈ N(b) − N(e) − {a, b, c, d, e}; if u = v, there would be a smaller cycle, so we can assume u ≠ v, implying n ≥ 7, a contradiction. Suppose the smallest cycle is a C4 subgraph abcd. To distinguish a and c, we can assume ∃x, y outside {a, b, c, d}, and to distinguish b and d we can likewise assume ∃p, q outside {a, b, c, d}. We know that {p, q} ∩ {x, y} = ∅, because otherwise we create a smaller cycle; thus n ≥ 8, a contradiction. Otherwise, we can assume that G has a triangle.

Next, we show that if G contains a K4 subgraph, then n ≥ 7; let abcd be the vertices of a K4 subgraph in G. To distinguish the pairs of vertices in abcd, without loss of generality we can assume that ∃x ∈ N(a), ∃y ∈ N(b), and ∃z ∈ N(c). Clearly {x, y, z} ∩ {a, b, c, d} = ∅, because x, y, and z are used to distinguish the vertices a, b, c, and d; however, we do not yet know whether x, y, and z are distinct. If x = y = z, then distinguishing a, b, and c requires at least two more vertices, so we would have at least 7 vertices and would be done; otherwise, without loss of generality we can assume x ≠ z. Suppose x = y; then a and b are not yet distinguished, and since n = 6, without loss of generality let bz ∈ E(G) to distinguish a and b. If {dx, dz} ∩ E(G) = ∅, then the pairs (d, x) and (d, z) cannot be distinguished; otherwise, without loss of generality assume dx ∈ E(G). Now a and d are not distinguished, but they are symmetric, so without loss of generality let dz ∈ E(G) to distinguish a and d. Then d and b become closed twins and cannot be distinguished. Otherwise, we can assume x ≠ y and, by symmetry, y ≠ z; thus n ≥ 7, and we would be done. Hence, if G has a K4 subgraph, we are done.

Next, we show that the existence of a "diamond" subgraph, a K4 subgraph minus one edge, also implies n ≥ 7. Let abcd be a C4 subgraph and assume ac ∈ E(G) but bd ∉ E(G), which forms said diamond subgraph. To distinguish b and d, we can assume by symmetry that ∃u, v ∈ N(b) − N(d) − {a, b, c, d}. Similarly, to distinguish a and c, we can assume that ∃w ∈ N(a) − N(c) − {a, b, c, d}. We know that u ≠ v by assumption, so n ≤ 6 requires, by symmetry, that w = u. We can assume that uc ∉ E(G), because otherwise this would create a K4 subgraph and fall into the previous case. We see that cv ∈ E(G) is required to distinguish u and c. To distinguish u and v, by symmetry we can assume uv ∈ E(G).
Now we see that a and v cannot be distinguished without creating a K4 subgraph, so we are done with the diamond case. For the final case, we know there must be a triangle abc, but no K4 or diamond subgraphs. To distinguish the vertices of abc, we can assume by symmetry that ∃u ∈ N(a) − {a, b, c} and ∃v ∈ N(b) − {a, b, c}; further, u ≠ v, because otherwise this would create a diamond subgraph. We know that {av, bu, cu, cv} ∩ E(G) = ∅, because any of these edges would create a diamond subgraph. Suppose uv ∈ E(G); then distinguishing a and v within the bound n ≤ 6 requires ∃w ∈ N(a) − N(v) − {a, b, c, u, v}. Similarly, distinguishing u and b requires w ∈ N(b); however, this creates a diamond subgraph, so we would be done. Otherwise, we can assume uv ∉ E(G). To 2-dominate u and v, we require ∃p ∈ N(u) − {a, b, c, u, v} and ∃q ∈ N(v) − {a, b, c, u, v}; however, n ≤ 6 requires p = q. We see that u and v cannot be distinguished without creating a diamond subgraph or having n ≥ 7, completing the proof.

Let G_{n,m} have n vertices and m edges. Then G_{7,11}, as shown in Figure 2, is the first graph that permits a DET:OLD set in the lexicographic ordering of (n, m) tuples, i.e., the graph with the smallest number of edges given the smallest number of vertices. Next we consider extremal graphs with DET:OLD(G) = n having the fewest edges.

Proof. Because G has a DET:OLD set, we know δ(G) ≥ 2; let p be the number of degree-2 vertices in G. By Observation 2.1, every degree-2 vertex v must have at least one neighbor u of degree at least 3, and said neighbor u is not adjacent to any degree-2 vertex other than v. Thus, we can pair each of the p degree-2 vertices with a unique vertex of degree 3 or higher. From this we know that p ≤ ⌊n/2⌋, and the n − 2p vertices that are not in pairs all have degree at least 3. Thus, Σ_{v∈V(G)} deg(v) ≥ (2 + 3)p + 3(n − 2p) = 3n − p ≥ 3n − ⌊n/2⌋. However, the degree sum of any graph must be even, so we can strengthen this bound to the smallest even integer that is at least 3n − ⌊n/2⌋. Dividing the degree sum by 2 completes the proof.

The lower bound given in Theorem 2.2 on the minimum number of edges in a graph with a DET:OLD set is sharp for all n ≥ 9, and Figure 3 shows a construction for an infinite family of graphs achieving the extremal value for 9 ≤ n ≤ 20.

NP-completeness of Error-detecting OLD

It has been shown that many graphical parameters related to detection systems, such as finding the cardinality of the smallest IC, LD, or OLD set, are NP-complete problems [3,4,5,12]. We now prove that determining the smallest DET:OLD set is also NP-complete. For additional information about NP-completeness, see Garey and Johnson [6].

Proof. Clearly, DET-OLD is in NP, as every candidate solution can be generated non-deterministically in polynomial time (specifically, O(n) time), and each candidate can be verified in polynomial time using Theorem 1.1. To complete the proof, we show a reduction from 3-SAT to DET-OLD. Let ψ be an instance of the 3-SAT problem with M clauses on N variables. We construct a graph G as follows. For each variable x_i, create an instance of the F_i graph (Figure 5); this includes a vertex for x_i and one for its negation x̄_i. For each i ∈ {1, ..., N}, let {x_i x_k, x̄_i x̄_k} ⊆ E(G) for k = (i mod N) + 1. For each clause c_j of ψ, create a new instance of the H_j graph (Figure 5).
For each clause c_j = α ∨ β ∨ γ, create an edge from the y_j vertex to each of α, β, and γ in the variable graphs, each of which is some x_i or x̄_i; for an example, see Figure 6. The resulting graph has precisely 8N + 6M vertices and 16N + 12M edges, and can be constructed in polynomial time. To complete the problem instance, we define K = 7N + 6M. Suppose S ⊆ V(G) is a DET:OLD set on G with |S| ≤ K. By Lemma 3.1, we require at least the 6N + 6M detectors shown by the shaded vertices in Figure 5. Additionally, Lemma 3.1 gives the further requirement that each b_i vertex be dominated by at least one additional detector outside of its G_6 subgraph; thus, for each i, {x_i, x̄_i} ∩ S ≠ ∅, giving at least N additional detectors. Hence |S| ≥ 7N + 6M = K, implying |S| = K, and so |{x_i, x̄_i} ∩ S| = 1 for each i ∈ {1, ..., N}. Applying Lemma 3.1 again to the clause graphs yields that each y_j vertex must be dominated by at least one additional detector outside of its G_6 subgraph. As no more detectors may be added, each y_j must be dominated by one of its three neighbors in the F_i graphs; therefore, ψ is satisfiable. For the converse, suppose we have a solution to the 3-SAT problem ψ; we show that there is a DET:OLD set S on G with |S| ≤ K. We construct S by first including all of the 6N + 6M vertices shown in Figure 5. Next, for each variable x_i, if x_i is true we let the vertex x_i ∈ S; otherwise, we let x̄_i ∈ S. The fully constructed S thus has |S| = 7N + 6M = K. Each b_i has its required external dominator due to x_i ∈ S or x̄_i ∈ S. Additionally, because S was constructed from a satisfying truth assignment for the 3-SAT problem, each y_j vertex is also dominated by one of its (external) term vertices in the F_i subgraphs. Therefore, each G_6 subgraph in G satisfies Lemma 3.1, and so is internally sufficiently dominated and distinguished. Since all G_6 subgraphs are sufficiently far apart, all vertex pairs in distinct G_6 subgraphs are distinguished as well. Indeed, it can be shown that all vertices are 2-dominated and 2#-distinguished, so S is a DET:OLD set for G with |S| ≤ K, completing the proof. (Figure 6 shows an example of the construction with N = 5, M = 4, K = 59.)

The solutions for the infinite hexagonal grid (HEX), the infinite square grid (SQR), and the infinite triangular grid (TRI) are tight bounds [7,10], while the exact value for the infinite king grid (KNG) is currently unknown. Subfigure (c) gives the best (lowest-density) solution we have found for KNG. We believe that standard discharging share arguments can be used to prove a lower bound of 60/151 for KNG.

Extremal cubic graphs

In this section, we characterize cubic graphs that permit a DET:OLD set. We also consider extremal cubic graphs with respect to DET:OLD(G).

Proof. Let abcd be a 4-cycle in G. We see that a and c cannot be distinguished, a contradiction. For the converse, we show that S = V(G) is a DET:OLD set for a C4-free cubic graph. Let u, v ∈ V(G); we know that |N(u)| = |N(v)| = 3 and, since G is C4-free, N(u) ≠ N(v), so the symmetric difference of their open neighborhoods contains at least two vertices and u and v are distinguished.

Proof. Assume to the contrary that there exist u, v ∈ V(G) − S such that u ∈ T_2(v) ∪ T_4(v). We know that u ≠ v, because G is C4-free due to the existence of a DET:OLD set. We consider the three non-isomorphic ways to form a trail of length 2 or 4 from u to v, as illustrated in Figure 8. We see that in Figure 8(a) x cannot be 2-dominated, in (b) x and y cannot be distinguished, and in (c) v and x cannot be distinguished. All three cases contradict the existence of S, completing the proof.

Theorem 5.1.
Let G be a C4-free cubic graph, and let S ⊆ V(G) be such that for all distinct u, v ∈ S we have u ∉ T_2(v) ∪ T_4(v). Then S̄ = V(G) − S is a DET:OLD set.

Proof. Let u ∈ V(G). We see that for any distinct x, y ∈ N(u), x ∈ T_2(y). Thus, by assumption it must be that |N(u) ∩ S| ≤ 1, implying that dom(u) ≥ 2. Hence all vertices are at least 2-dominated by S̄. Next, we consider three cases, depending on the distance between a pair of vertices, and show that each pair is 2#-distinguished. Case 1: suppose u, v ∈ V(G) with d(u, v) = 1. Because G is twin-free (due to being a C4-free cubic graph) and regular, it must be that ∃x ∈ N(u) △ N(v), from which it can be seen that u and v must be distinguished. Otherwise, we can assume N(u) ∩ N(v) = ∅. If u ∈ S, then N(v) − N[u] ⊆ T_2(u) ⊆ S̄, so v is distinguished from u and we would be done; otherwise we assume u ∉ S and, by symmetry, v ∉ S. We see that for any p ∈ N(u), {u′, u″, u‴} − {p} ⊆ T_2(p) ⊆ S̄. Thus u and v must be distinguished, completing the proof.

The upper bound on DET:OLD(G) for cubic graphs is known to be (45/46)n [11], and we can improve it using Corollary 5.1.

Proof. We show that we can construct a set S with the property that ∀v ∈ V(G), ∃u ∈ S such that v ∈ T_0(u) ∪ T_2(u) ∪ T_4(u). Because |T_0(u)| = 1, |T_2(u)| ≤ 6, and |T_4(u)| ≤ 24, this construction results in a detector set S̄ = V(G) − S with density at most (6 + 24)/(1 + 6 + 24) = 30/31. Assume to the contrary that we have a maximal such set S, but ∃x ∈ V(G) such that x ∉ T_0(u) ∪ T_2(u) ∪ T_4(u) for every u ∈ S, implying x ∉ S. Then, by Corollary 5.1, S ∪ {x} still satisfies the requirements of our set S, contradicting the maximality of S. Therefore, we have a DET:OLD set S̄ with density at most 30/31, completing the proof.

Corollary 5.2. If G is a cubic graph that permits a DET:OLD set, then DET:OLD(G) ≤ n − 1.

Figure 9 shows extremal cubic graphs with the highest density on n vertices for 16 ≤ n ≤ 24. The n = 22 graph shown in Figure 9 has the highest density we have found so far, namely 21/22, and we conjecture the density
Partial Deficiency of Sphingosine-1-Phosphate Lyase Confers Protection in Experimental Autoimmune Encephalomyelitis

Background
Sphingosine-1-phosphate (S1P) regulates the egress of T cells from lymphoid organs; levels of S1P in the tissues are controlled by S1P lyase (Sgpl1). Hence, Sgpl1 offers a target to block T cell-dependent inflammatory processes. However, the involvement of Sgpl1 in models of disease has not yet been fully elucidated, since Sgpl1 KO mice have a short life-span.

Methodology
We generated inducible Sgpl1 KO mice featuring partial reduction of Sgpl1 activity and analyzed them with respect to sphingolipid levels, T-cell distribution, and response in models of inflammation.

Principal Findings
The partially Sgpl1-deficient mice are viable but feature a profound reduction of peripheral T cells, similar to the constitutive KO mice. While thymic T cell development in these mice appears normal, mature T cells are retained in thymus and lymph nodes, leading to reduced T cell numbers in spleen and blood, with a skewing towards increased proportions of memory T cells and T regulatory cells. The therapeutic relevance of Sgpl1 is demonstrated by the fact that the inducible KO mice are protected in experimental autoimmune encephalomyelitis (EAE). T cell immigration into the CNS was found to be profoundly reduced. Since S1P levels in the brain of the animals are unchanged, we conclude that protection in EAE is due to the peripheral effect on T cells, leading to reduced CNS immigration, rather than to local effects in the CNS.

Significance
These findings suggest that bilirubin levels... rather, the data suggest Sgpl1 as a novel therapeutic target for the treatment of multiple sclerosis.

Introduction
Sphingosine-1-phosphate (S1P) is a pluripotent lipid signaling molecule with important functions in health and disease across a broad range of organ systems [1][2][3][4]. S1P has been well characterized as an agonist of five G-protein coupled receptors, named S1P1 to S1P5 [5,6]. Among these receptors, S1P1 is of particular interest as a target in immunomodulation; the drug fingolimod (FTY720, Gilenya™), licensed for the treatment of relapsing multiple sclerosis, acts in its phosphorylated form as an S1P1 modulator and thus regulates the migration of selected lymphocyte subsets into the central nervous system [7]. More recently, direct intracellular targets of S1P have been characterized that may offer additional points for pharmacological intervention [8,9]. As opposed to interfering with the molecular targets of S1P, modulation of its concentration constitutes an alternative approach to capture the therapeutic benefit of inhibiting or enhancing the functions of S1P. This appears achievable in at least three different ways: (i) by using anti-S1P antibodies to reduce extracellular S1P [10]; (ii) by inhibiting or enhancing the activity of the intracellular sphingosine kinases which produce S1P [11,12]; (iii) by blocking the S1P-degrading enzymes, namely the S1P phosphatases or S1P lyase [13]. Drug candidates from all three approaches, namely an S1P antibody [10], sphingosine kinase inhibitors [14,15], and a lyase inhibitor [16,17], are currently under evaluation in clinical trials. S1P lyase (Sgpl1), a microsomal enzyme ubiquitously expressed in mammalian tissues, carries out the irreversible degradation of S1P to 2-hexadecenal and phosphoethanolamine [13,18]. Thus, this enzyme is considered to be a major control point for regulating S1P concentrations in cells.
Indeed, constitutive knock-out of Sgpl1 in mice leads to a pronounced increase of S1P levels in tissues and serum [19]; new-born Sgpl1 KO mice do not thrive, feature major derailment of lipid metabolism and innate immune functions, and die early in life [19][20][21][22]. However, partial inhibition of Sgpl1, which may lead to less pronounced and more benign increases of S1P levels, has been proposed as a therapeutic modality, in particular in autoimmune disease [16,19,[23][24][25]. As originally observed by J. Cyster and co-workers [26], Sgpl1 is required to maintain an S1P gradient between tissues (low S1P) on the one hand and efferent lymph and blood (high S1P) on the other, which appears to be required for T cell egress from the lymphoid organs. Indeed, reduced numbers of T cells in the circulation are a consistent observation in mice completely or partially deficient in Sgpl1 activity [19], and in rodents treated with Sgpl1 inhibitors such as 2-acetyl-4(5)-tetrahydroxybutyl imidazole (THI) or LX-2931 (= LX3305) [16,27]. The latter compound was also efficacious in reducing peripheral T cell numbers in healthy subjects in a clinical phase I study [16]; a phase II study in RA failed to meet its primary endpoint, apparently due to subtherapeutic dosing [17]. To date, the therapeutic potential of Sgpl1 inhibitors has not been fully explored. Therefore, we sought to establish a genetic model of partial Sgpl1 deficiency without the limitations of constitutive KO mice [19,20]. Here we describe a mouse strain in which Sgpl1 gene deletion is inducible in the adult animal, leading to a partial reduction of enzyme activity. Importantly, these mice feature a pronounced reduction of peripheral T lymphocyte counts and are fully protected in a model of experimental autoimmune encephalomyelitis. This indicates that inhibiting Sgpl1 may represent a new treatment strategy for autoimmune diseases, including multiple sclerosis.

Partially Sgpl1-deficient Mice Survive after Induced Knock-out

We established mouse strains with either constitutive or inducible KO of Sgpl1. First, a mouse line was prepared in which exon 8 of the Sgpl1-encoding gene is flanked by loxP elements (Fig. S1). Crossing of the floxed Sgpl1 mice with a Cre deleter line yielded the constitutive KO mice. To generate inducible Sgpl1 KO mice, the floxed Sgpl1 mice were crossed with a B6.C actb-CreERT2 knock-in mouse line [28]; breeding yielded Sgpl1 Flox/Flox Cre+/− (= inducible KO) and Sgpl1 Flox/Flox Cre−/− (= control) littermates, which were used for experimentation. Sgpl1 Flox/Flox Cre+/− mice were treated with tamoxifen for 5 days to induce Sgpl1 knock-out; two weeks later, deletion of exon 8 was observed at a frequency of 70-90% in the genomic DNA of various tissues, with the notable exception of brain (<40%) (Fig. 1A). Consequently, Sgpl1 mRNA and enzyme activity were reduced by 60 to 90% as compared to control mice, again with the exception of brain, where no decrease occurred (Fig. 1B,C). Thus, while Sgpl1 gene expression was markedly downregulated, significant residual enzyme activity was still present in these mice. As a consequence, the partially Sgpl1-deficient inducible KO mice showed normal weight gain and survival during a 6-month observation period, in contrast to fully Sgpl1-deficient constitutive KO mice, which did not gain weight after birth and died within 3 to 4 weeks. Thus, the inducible KO mice are much better suited for experimentation than the constitutive KO mice.

Partially Sgpl1-deficient Mice show Less Increase in Sphingolipids, None in Brain

We compared the effect of complete and partial Sgpl1 deficiency on the Sgpl1 substrate S1P and its metabolic precursors sphingosine (Sph) and ceramide. While new-born constitutive Sgpl1-deficient mice showed almost normal S1P concentrations (data not shown), S1P determined two weeks after birth was increased by factors ranging from 120-fold (lung) to 4700-fold

[Figure 1. Two weeks after tamoxifen treatment, tissues from both Cre-positive and Cre-negative Sgpl1 Flox/Flox mice (4 males and 4 females each) were analysed. A, Genomic DNA was isolated and analysed by RT-PCR probing for the presence of exon 8; the percentage of gene deletion relative to Sgpl1 Flox/Flox Cre−/− controls is given. B, mRNA was isolated and analysed by RT-PCR; expression levels are given for the Sgpl1 Flox/Flox Cre−/− controls (filled bars; set to 1 for each organ) and for the induced Sgpl1 Flox/Flox Cre+/− mice (open bars). C, Sgpl1 activity in tissue homogenates was measured using 15-NBD-S1P as substrate; activity is reported as product formed per mg of wet tissue per hour.]
Partially Sgpl1-deficient Mice show Less Increase in Sphingolipids, None in Brain We compared the effect of complete and partial Sgpl1deficiency on the Sgpl1 substrate S1P and its metabolic precursors sphingosine (Sph) and ceramide. While new-born constitutive Sgpl1-deficient mice showed almost normal S1P concentrations (data not shown), S1P determined two weeks after birth was increased by factors ranging from 120-fold (lung) to 4700-fold mice. Two weeks after tamoxifen treatment, tissues from both Crepositive and negative Sgpl1 Flox/Flox mice (4 males and females, each) were analysed. A, Genomic DNA was isolated and analysed by RT-PCR probing for precence of exon 8. The percentage of gene deletion as compared to Sgpl1 Flox/Flox Cre 2/2 controls is given. B, mRNA was isolated and analysed by RT-PCR; expression levels in both the Sgpl1 Flox/ Flox Cre 2/2 controls (filled bars) (set to 1 for each organ) and in the induced Sgpl1 Flox/Flox Cre +/2 mice (open bars) are given. C, Sgpl1 activity is tissue homogenates was measured using 15-NBD-S1P as substrate. Activity is reported as product formed per mg of wet tissue per hour. Sgpl1 Flox (Fig. S2A,B). In inducible Sgpl1-deficient mice, analyzed two weeks after induction of gene deletion, S1P in the tissues was elevated to a lesser degree, ranging from 4-fold (heart) to 100-fold (lymph nodes and spleen) ( Fig. 2A,B). In both mouse strains, there was no statistically significant increase of S1P in the brain and spinal cord. Concentrations of S1P in the blood of the inducible Sgpl1-deficient mice showed a moderate elevation (factor of 1.6), while plasma concentrations were not increased at all (Fig. 2A). S1P concentrations in inducible Sgpl1-deficient mice were similar at early time points and 6 months after induction (data not shown). Sph was increased more strongly in constitutive Sgpl1-deficient mice (up to 200-fold; Fig. S2C,D) than in inducible Sgpl1-deficient mice (up to 16-fold; Fig. 2C,D). Furthermore, while C16-ceramide was elevated in the constitutive Sgpl1-deficient mice up to 9-fold in selected tissues (Fig. S2E), there was only a trend towards C16ceramide increase in the inducible Sgpl1-deficient mice (Fig. 2E). In summary, inducible Sgpl1-deficient mice feature less pronounced increase of sphingolipid metabolites as compared to the constitutive KO mice. Reduced Blood T Cell Numbers in Inducible Sgpl1deficient Mice Numbers of neutrophils, monocytes, platelets, and erythrocytes in the blood of inducible Sgpl1-deficient mice were not significantly different from control mice ( Fig. 3A and data not shown). In contrast, blood lymphocytes were reduced by 40% (Fig. 3A). This was due to strongly reduced CD4-and CD8positive T cells by approximately 85%, while CD19-positive B cells were unaffected (Fig. 3B). The decrease of T cell numbers was similar in constitutive KO mice. Tamoxifen treatment of Sgpl1 Flox/Flox Cre 2/2 controls and of unfloxed Cre +/2 and Cre 2/2 mice did not affect T cell numbers, indicating that the effect was strictly dependent on recombination of Sgpl1. As for the time course of reduction of T-cell numbers in Sgpl1 Flox/Flox Cre +/2 mice, five daily doses of tamoxifen (40 mg/kg p.o.) yielded maximal decrease of both CD4-and CD8-positive T cells two weeks later, which lasted for at least 6 months (Fig. 3C). T cells were less strongly reduced after fewer doses of tamoxifen or a shortened waiting period (data not shown). 
Collectively, these data indicate a selective effect of partial Sgpl1 deficiency on peripheral T cell numbers.

Normal Thymic T Cell Development in Partially Sgpl1-deficient Mice, but Retention in Thymus and LN

In view of the strongly diminished T cell numbers in the blood of the inducible Sgpl1-deficient mice, we next studied thymocyte subsets and the composition of lymphocytes in spleen and LN. At 8 weeks following tamoxifen treatment, thymocytes were enriched in mature single positive (SP) CD4 and CD8 cells (Fig. 4A), although the absolute numbers of most thymocyte subsets, including double-negative (DN) cells, were not significantly affected (Fig. S3A). In contrast, in spleen and LN partial Sgpl1 deficiency affected both the size and composition of several lymphocyte subsets. In particular, the splenic cellularity declined strongly, including major reductions in the overall numbers of both CD4- and CD8-positive T cells and subpopulations of B cells (Fig. 4B). While the proportional representation of B cell subsets was not affected by Sgpl1 deficiency (Fig. S3B), there was a sharp decline in the total numbers of follicular and marginal zone B cells, albeit no statistically significant loss of the physiologically small numbers of germinal center B cells was seen (Fig. 4B). The profoundly reduced pools of splenic CD4- and CD8-positive T cells were strongly skewed towards cells of a memory phenotype, in particular CD4 T effector memory and CD8 central memory cells (Fig. 4C). In contrast to spleen, LN cell numbers were significantly increased in Sgpl1 Flox/Flox Cre+/- mice compared to control mice. The larger LN cellularity was associated with a significant increase in CD4- and CD8-positive T cells, as well as total B cell numbers (with the exception of germinal center B cells) (Fig. 4D). While the proportions of LN B cell subsets and CD4 and CD8 T cell subsets were only marginally affected by Sgpl1 deficiency (Fig. S3C), the representation of CD4 effector memory and CD8 central memory cells also increased in LN, albeit to a smaller degree than in spleen (Fig. 4E). In summary, these data indicate normal thymic T cell development in inducible Sgpl1-deficient mice, but retention of mature T cells in the thymus and LN, leading to reduced T cell numbers in spleen and blood, with concomitantly increased proportions of memory T cells.

Partial Sgpl1 Deficiency Promotes CD4+Foxp3+ T Cells in LN and Spleen

CD4 T cells expressing the transcription factor Foxp3 constitute a critically important T regulatory cell subpopulation that dampens inflammatory responses and secures immunological tolerance [29]. Given the modulation of peripheral T cell numbers and composition in inducible Sgpl1-deficient mice, we also assessed the effect on CD4+Foxp3+ T cells. Partial Sgpl1 deficiency resulted in a profound increase in the proportions of Foxp3+ cells in both spleen and LN, to more than twice the physiological levels (Fig. 5A). However, due to the marked loss and gain of overall T cell numbers in spleen and LN, respectively, absolute CD4+Foxp3+ T cell numbers were 3- to 4-fold reduced in spleen, whereas they were increased more than 4-fold in LN (Fig. 5B).

Inducible Sgpl1-deficient Mice are Protected from Delayed-type Hypersensitivity (DTH) Reaction and EAE

The capacity of inducible SPL-deficient mice to mount in vivo T cell dependent immune responses was then investigated using the model of DTH induced by systemic immunization and localized challenge with sheep red blood cells (SRBC).
Sgpl1 Flox/Flox Cre-/- mice developed edema at the site of challenge (footpad), which increased footpad thickness by about 40-50% (Fig. 6; data show one out of two independent studies with similar outcome). Cyclosporin A as a reference compound inhibited the response in these mice by 86%. In induced Sgpl1 Flox/Flox Cre+/- mice, swelling was reduced as well, with similar inhibition as achieved with Cyclosporin A (87% reduction of swelling vs. Sgpl1 Flox/Flox Cre-/- mice), indicating pronounced protection by the partial Sgpl1 deficiency. Similar inhibition of DTH has been observed previously also for the S1P1 agonist FTY720 ([30] and our unpublished data).

Since partial Sgpl1 deficiency appears to phenocopy the effect of FTY720 on T cell distribution, we asked if it would also confer protection in murine MOG-induced EAE, a model of multiple sclerosis that depends on the infiltration of pathogenic T cells into the brain. Tamoxifen-treated Sgpl1 Flox/Flox Cre+/-, Sgpl1 Flox/Flox Cre-/-, Cre+/-, and Cre-/- mice were immunized with MOG in Complete Freund's Adjuvant and were analyzed daily for clinical signs of EAE over a period of 27 days. Notably, while most animals with normal Sgpl1 expression (Sgpl1 Flox/Flox Cre-/-, Cre+/-, and Cre-/-) developed clinical signs of EAE along with a significant loss of body weight (Fig. 7A-C; data from one representative experiment out of three independent studies), Sgpl1-deficient Sgpl1 Flox/Flox Cre+/- mice were almost completely protected from EAE. This was accompanied by a markedly reduced histopathological disease score of the spinal cord tissue from Sgpl1 Flox/Flox Cre+/- mice compared to control mice, and significantly lower numbers of CNS-invading inflammatory cells, including CD3+ T cells (Fig. 8A,B). Furthermore, while substantial destruction of the myelin sheath was evident on day 24 of EAE in spinal cord tissue of control mice, this was almost undetectable in Sgpl1 Flox/Flox Cre+/- mice (Fig. 8C). To evaluate whether MOG-immunized Sgpl1 Flox/Flox Cre+/- mice were fully capable of mounting a MOG-specific recall response, MOG-dependent T cell proliferation was assessed. First, CD4+ T cells were isolated by FACS from MOG-primed Sgpl1 Flox/Flox Cre+/- and Sgpl1 Flox/Flox Cre-/- mice on day 10 (Fig. S4); then, similar numbers of isolated T cells from both strains were stimulated with MOG peptide in the presence of APCs. T cells isolated from Sgpl1 Flox/Flox Cre+/- mice showed significantly reduced proliferation and secreted less IFN-γ than controls (Fig. 9).

Discussion

In the inducible Sgpl1 KO mice, tamoxifen-induced Cre-mediated gene recombination leads to a pronounced, but not complete, downregulation of Sgpl1 activity, typically by 70 to 90% in various tissues. The partial reduction of enzyme activity gives rise to a phenotype that is very different from the short-lived, completely Sgpl1-deficient mouse strains [19][20][21][22]. The inducible knock-out mice develop normally and show no increased mortality over the observation period of 6 months, while featuring reduced numbers of circulating T cells similar to the constitutive KO mice. This is likely due to the fact that the increase of S1P and Sph in the tissues of the inducible KO mice is much less pronounced, namely by factors of about 10 to 300; thus, residual Sgpl1 activity allows turnover of part of the S1P produced in these mice, preventing the vast accumulation of S1P seen in the fully deficient mice and its associated toxicity.
Importantly, there is no difference in S1P levels between animals at 2 weeks or 6 months after induction, indicating that the elevated steady-state concentration of S1P in the tissues is maintained permanently. Likewise, the reduction of lymphocytes in the blood persists over this observation period; apparently, the degree of S1P elevation in the secondary lymphoid organs of the inducible Sgpl1-deficient mice is sufficient to prevent lymphocyte egress. These observations are in line with data on humanized Sgpl1 knock-in mice [19] featuring 10-20% residual enzyme activity but still a pronounced reduction of peripheral T lymphocyte counts. Hence, partially Sgpl1-deficient mice may in general be a more suitable genetic model to predict the effect of pharmacological Sgpl1 inhibition, which is likely to lead to only partial inhibition of the enzyme as well.

The extent of the S1P increase upon partial Sgpl1 deficiency differs considerably between various tissues (Fig. 2B); notably, the highest increase was seen in the spleen and LNs (over 90-fold). This is in line with data from mice treated with the Sgpl1 inhibitors LX-2931 [16] and 2-acetyl-4(5)-(1(R),2(S),3(R),4-tetrahydroxybutyl)-imidazole (THI; our unpublished data), which induce the highest S1P increase in the lymphoid organs as well. The extent of the S1P increase in different tissues of the inducible Sgpl1-deficient mice parallels neither the extent of the reduction of Sgpl1 activity (Fig. 1C) nor the baseline Sgpl1 expression in the tissues (data not shown). Therefore, we assume that the importance of Sgpl1 in controlling intracellular S1P levels varies between tissues, e.g., due to different rates of S1P synthesis. Thus, tissues with high S1P production will attain a higher S1P steady-state level under conditions of partial Sgpl1 deficiency.

In wild-type mice, S1P concentrations are higher in plasma than in the lymphoid organs (Fig. 2A). Importantly, this gradient is inverted in the inducible Sgpl1-deficient mice, i.e., S1P concentrations in the tissues are higher than in the extracellular fluid; thus T cell egress is impaired and their numbers in the circulation are reduced. In the blood of the inducible Sgpl1-deficient mice there is a relatively minor increase in S1P, disproportionate to the one seen in tissues, and no increase in the plasma. This contrasts with the situation in fully deficient mice [19,22], where S1P blood concentrations are highly elevated. By inference, partial Sgpl1 inhibition by a pharmacological inhibitor is not expected to lead to adverse effects via the S1P receptors, e.g., on the heart and on endothelial barriers.

The brain is the only tissue without any increase of S1P, neither in the partially nor the completely Sgpl1-deficient mice. For the inducible Sgpl1-deficient mice this is due to a low recombination frequency in this tissue (Fig. 1A), probably resulting from poor penetration of tamoxifen through the blood-brain barrier [31], and hence no reduction of brain Sgpl1 activity. However, the lack of an S1P increase in the brain of constitutive Sgpl1 knock-out mice indicates that Sgpl1 does not control the concentration of S1P in the brain as it does in other tissues.
This finding may be explained as follows: it has been shown that S1P is taken up by cells from erythrocytes via cell-cell contact and that this S1P is then susceptible to cleavage by Sgpl1 [32]; due to the blood-brain barrier, brain tissue is obviously unable to take up S1P from erythrocytes, hence this source of Sgpl1 substrate is missing in the brain, leaving S1P levels unaffected by the absence of Sgpl1. On another note, it has been reported that neurons isolated from constitutive Sgpl1 knock-out mice show elevated S1P, especially after addition of S1P to the cultures, which leads to neurotoxicity [33]; in view of our findings, this appears to be an experimental setting that does not reflect the in vivo situation.

We observed a similar degree of reduction of T lymphocyte counts in the inducible and constitutive Sgpl1-deficient mice. Using THI as a pharmacological inhibitor, it has been shown that the S1P concentration in the spleen required to induce a 50% reduction of peripheral lymphocytes is about 4.4 µM [27]; concentrations in the spleen of inducible Sgpl1-deficient mice are considerably higher (about 17 µM), suggesting that an even lower degree of Sgpl1 inhibition may suffice to induce a reduction of T cell numbers in the blood. Interestingly, in the inducible deficient mice the effect of Sgpl1 downregulation on blood lymphocyte numbers was confined to the T cells, without any effect on B cells, while in fully Sgpl1-deficient mice B cells in the blood were partially reduced (ref. 19 and data not shown). Although increased B cell numbers in the LN of inducible Sgpl1-deficient mice indicated impaired B cell egress from LN, our data taken together suggest that the migration and distribution of T cells is more strictly controlled by S1P gradients.

As demonstrated before using the S1P modulator drug FTY720, T cells require S1P responsiveness at two major sites: to leave the thymus as mature SP (CD4+ or CD8+) T cells, and to egress from LN to return to the circulation and home to inflamed tissues [34]. Here we show that partial Sgpl1 deficiency results in an overrepresentation of CD4SP and CD8SP thymocytes, indicating interference with thymic egress comparable to what was reported for FTY720 [35]. Interestingly, this retention occurs without significantly increased ceramide in the thymus, which in the case of constitutive KO mice has been proposed to cause abrogation of thymocyte development via apoptosis [36]. The FTY720-induced thymic retention was previously shown to strongly delay the physiological turnover of the peripheral T cell pool and to contribute to an overrepresentation of T cells with a memory phenotype over naïve T cells [37]. Because naïve T cells express the LN homing receptor CD62L and require S1P responsiveness for LN egress, they would be expected to remain more prominently represented in LN than in spleen. This differential effect in LN versus spleen was observed in the present study using partially Sgpl1-deficient mice, similar to previous findings with FTY720-treated mice [38]. In addition to modulating naïve and memory T cell subsets, FTY720 was shown to affect the distribution and accumulation of Foxp3+ Tregs in a viral infection model as well as under normal homeostatic conditions in otherwise unchallenged mice (ref. 44 and our unpublished data). We therefore analyzed how the manipulation of the sphingolipid pathway through partial Sgpl1 deficiency affected the distribution of CD4+Foxp3+ T cells.
In both spleen and LN of inducible Sgpl1-deficient mice, CD4 T cell populations were strongly skewed towards Foxp3+ cells. This resulted in a weaker decline of total Foxp3+ T cells relative to other T cell subsets in spleen, and it led to a profound gain in absolute Foxp3+ T cell numbers in LN. The reasons for these sphingolipid pathway-related specific distribution effects on CD4+Foxp3+ cells remain to be determined; they might include differential expression of S1P receptors and/or differences in sensitivity or signaling pathways in response to S1P.

Partial Sgpl1 inhibition was shown here to confer protection in two T cell dependent in vivo models, namely in DTH as a classical inflammation model and in EAE as a disease model for multiple sclerosis. In EAE, an almost complete prevention of disease was observed; while spinal cord tissue of control mice undergoing EAE contained significant numbers of CNS-invading CD3+ T cells, these cells were undetectable in respective tissue preparations from inducible Sgpl1-deficient mice. The protection observed in these models may be primarily due to the retention of T cells in the LN; however, we cannot exclude that the increased proportion of Foxp3+ regulatory CD4+ T cells in blood and lymphoid organs and the reduced antigen-responsiveness of the T cells (Fig. 7G) contribute to the protection. These possibilities will need to be addressed in future studies. Interestingly, S1P levels were not elevated in brain and spinal cord of the inducible Sgpl1-deficient mice; hence it appears that the protection in EAE is solely due to the peripheral effect on T cell numbers, quality, and CNS immigration rather than to central effects of S1P on cells in the CNS, such as astrocytes, that have been described for FTY720 [39].

(Figure 7 caption: Protection of inducible Sgpl1-deficient mice in EAE. Tamoxifen-induced Sgpl1 Flox/Flox Cre+/-, Sgpl1 Flox/Flox Cre-/-, Cre+/-, and Cre-/- mice (n = 6-10/group) were immunized with MOG emulsified in Complete Freund's Adjuvant. Data from one representative experiment out of three independent studies are shown. A, Incidence of mice with a clinical EAE score ≥1; B, clinical score; C, body weight. For histological analysis, thoracic sections of spinal cord tissue from Sgpl1 Flox/Flox Cre+/- and Sgpl1 Flox/Flox Cre-/- mice undergoing EAE (day 24) were stained (D) with H&E to visualize CNS-invading cells (scale bar 500 µm; arrows highlight areas of inflammation); (E) for CD3+ T cells (scale bar 500 µm; rectangles indicate the area of magnification, where the scale bar represents 100 µm); and (F) with solochrome to assess the integrity of the myelin sheath (scale bar 500 µm; arrows highlight areas of beginning demyelination). doi:10.1371/journal.pone.0059630.g007)

In conclusion, based on the present data, inhibitors of Sgpl1 may present a new therapeutic option for the treatment of multiple sclerosis. Importantly, the studies on the inducible Sgpl1 KO mice show that partial inhibition of Sgpl1 suffices to induce a reduction of peripheral T lymphocyte numbers and to confer protection in T cell dependent models of inflammation, while avoiding the overt toxicity associated with complete KO of the enzyme. Further studies will need to demonstrate whether Sgpl1 inhibitors offer an advantage in terms of efficacy and safety over other agents interfering with S1P-regulated lymphocyte trafficking, in particular S1P1 agonists such as FTY720.
Materials and Methods

Generation of Sgpl1-/- Mice

Procedures involving animals were conducted in conformity with the guidelines and standards of the Novartis Animal Welfare Organization; studies were approved by the ethics committee of the regional governmental authority "Kantonales Veterinäramt der Stadt Basel" (Permit Numbers: 2119 and 2305). All efforts were made to minimize animal suffering (see in particular the details given in the section "EAE Model"). To generate homozygous mice with a floxed Sgpl1 gene (Sgpl1 Flox/Flox), a targeting vector for homologous recombination was designed by cloning 3 kb of genomic DNA containing Sgpl1 intron 7, a 250-bp fragment containing Sgpl1 exon 8, as well as 2 kb of genomic DNA containing Sgpl1 intron 8, exon 9 and part of intron 9 into the vector pRay2loxP2Frt harboring a neomycin expression cassette. Exon 8 was flanked by two loxP elements that facilitate the excision of the exon after breeding with Cre deleter lines. After introduction into C57Bl/6 embryonic stem cells [41], neomycin-resistant clones were screened by polymerase chain reaction (PCR) for homologous recombination. Correct targeting was confirmed by Southern blot using a neomycin-specific probe that allowed the exclusion of random integration events of the targeting vector. Selected targeted embryonic stem cells were injected into Balb/c blastocysts, and chimeric mice were bred with C57Bl/6 females, resulting in an F1 generation of heterozygous inbred C57Bl/6 mice. To eliminate the FRT-flanked neomycin cassette, Sgpl1 gene-targeted mice were crossed with a C57Bl/6 Flp deleter mouse strain and analyzed for the loss of the neomycin cassette. These Sgpl1 Flox/Flox mice were further crossed with a C57Bl/6 Cre deleter line [42] to generate the completely Sgpl1-deficient mice. All animals used in experiments with inducible knock-out mice were aged 5-7 weeks and were treated with tamoxifen (dissolved in a sunflower oil/ethanol (10:1) mixture at 8 mg/mL) dosed at 40 mg per kg body weight per day, given perorally, once daily on 5 consecutive days. Analysis of recombination of the Sgpl1 gene and determination of Sgpl1 mRNA expression levels was done as described in Protocol S1.

Determination of S1P, Sph, and C16-ceramide in Tissues and Blood

Tissue samples were homogenized in water/acetonitrile 1:1. To 100-µl aliquots of tissue homogenate or body fluid, 25 µl of internal standard solution containing 0.4 µg/ml C17-sphingosine, C17-S1P and C17-ceramide were added, followed by 700 µl acetonitrile/methanol/trichloromethane 40:30:30. After ultrasonication and a 5 min centrifugation step at 16,200 × g, the upper layer was evaporated to dryness. To reduce unspecific binding, the extracts were then subjected to acetylation as described [43]. Analyte concentrations were determined by LC/MS using an atmospheric pressure electrospray ionization source on a triple quadrupole mass spectrometer. Instrumentation comprised an HTS PAL autoinjector (CTC Analytics, Ziefen, Switzerland), a Rheos Allegro pump (Flux Instruments) and a TSQ Quantum Ultra triple quadrupole mass spectrometer (Thermo Scientific, Rheinach, Switzerland). For analysis, a 10-µl sample was injected on a Reprosil-Pur C18 2.0 × 50 mm reversed-phase column filled with 2.5 µm particles and held at 40 °C. For loading, the solvent composition was 5% B in A at 100 µl/min for 1 min. For separation, a linear two-step gradient from 5% to 60% B in A within 1 min and 60% to 100% B in A within 5 min was applied, with a total cycle time of 14 min. During separation, the flow rate was held at 200 µl/min.
Solvent A was 5 mM ammonium formate and 0.2% formic acid in water, and solvent B was 5% methanol in acetonitrile. Multiple reaction monitoring was used, based on the di- or triacetylated precursor ions of the compounds and the corresponding internal standards (Table S1). Sph and ceramide derivatives were detected in positive mode; negative ionization was used for S1P. Quantification was performed based on the area ratios of the compound over the internal standard in the extracted ion chromatograms. Recovery was >85% for all analytes. The limit of quantification, as determined by the lowest calibration sample showing a signal-to-noise ratio >5 and accuracy <25%, was 10 ng/ml for Sph, 1 ng/ml for S1P, and 25 ng/ml for C16-ceramide.

Assay of Sgpl1 Activity in Tissues

The method is based on a protocol by Bandhuvula et al. [44] with modifications. Tissues were homogenized in two volumes of lysis buffer (10 mM HEPES, pH 7.4, 100 mM EDTA, 1 mM DTT, 10% (w/v) glycerol, 0.25 M sucrose, protease inhibitor cocktail (Roche)), followed by centrifugation at 500 × g for 5 min. To 5 µl of the lysate, 20 µl assay buffer (100 mM HEPES, pH 7.4, 100 mM EDTA, 0.05% Triton X-100, 10 mM pyridoxal-5'-phosphate) and 25 µl of 20 mM 15-NBD-S1P were added. The reaction mixture was incubated at 37 °C for 30 min, followed by addition of 150 µl of 1.33 M KCl in 2.66% HCl, 200 µl methanol containing 0.25 mM NBD-PA as internal standard, and 300 µl chloroform. After mixing and centrifugation, 200 µl of the organic layer was collected and evaporated in vacuo. The residue was taken up in 20 µl methanol, and a 5-µl aliquot was injected into the HPLC system (Agilent 1100) equipped with a Luna C18 column (100 × 4.6 mm; Phenomenex). The column was eluted at a flow of 1 ml/min with a gradient of (A) water and (B) methanol/5 mM acetic acid in water/1 M tetrabutylammonium dihydrogenphosphate (Fluka) 95:4:1; gradient schedule: 60% B for 1 min; 60 to 100% B for 2.5 min; 100% B for 4.5 min. Fluorescence detection was done at λex 485 nm and λem 530 nm.

Differential Blood Cell Counts and Flow Cytometry

Differential hematology analysis was performed on whole blood using an Advia 120 instrument (Siemens, Germany). For flow cytometry analysis of whole blood, erythrocytes were lysed by hypotonic shock, washed once in FACS wash buffer (PBS containing 1% FCS), blocked with mouse Fc Block (BD Biosciences), and stained for 30 minutes at 4 °C in the dark with the indicated combination of fluorochrome-conjugated mAbs. After staining, the cells were washed twice with wash buffer and resuspended in 200 µl buffer. Samples were analyzed using a FACSCalibur flow cytometer and CellQuest Pro software (BD Biosciences). The following antibodies were used (all from BD Biosciences): anti-CD3-PerCP (clone 145-2C11), anti-CD4-FITC (clone V4), anti-CD8-FITC (clone 53-6.7), and anti-CD19-PerCP (clone 1D3). Absolute cell numbers within each subset were calculated by multiplying their fractional representation determined by FACS by the absolute number of white blood cells measured by hematology analysis. T cells were identified as CD3+ and B cells as CD19+ mononuclear cells, respectively.

DTH Model

Four days after immunization, mice were injected s.c. into the right hind paw with 2×10^8 SRBC suspended in 50 µl of PBS. The same volume of PBS was injected into the left hind paw. Footpad swelling was measured 24 hours after the challenge using a microscopic lens with a superimposed grid, and was confirmed by weighing of footpads.
The degree of the DTH reaction was calculated as the percentage of footpad swelling using the following formula: footpad swelling (%) = (thickness of footpad injected with SRBC − thickness of footpad injected with PBS)/thickness of footpad injected with PBS × 100. Data were evaluated using ANOVA, followed by Dunnett's multiple comparison.

EAE Model

Animals were scored for neurological signs according to the following scale: 0, no symptoms; 1, complete loss of tail tone; 2, clear hind-limb weakness; 3, complete hind-limb paralysis; 4, moribund. Animals were sacrificed immediately if they had a clinical score of 4, or after having grade 3.5 (complete bilateral hind-limb paralysis and partial forelimb paralysis) for more than 3 days; also, animals having a score of 3 for more than 7 days were sacrificed. If animals needed special care, food and water in the form of gel packs were placed in the cages; additionally, the diet was supplemented with a water/nutrient mixture. Animals were sacrificed by inhalation of 5 vol% isoflurane in O2 until death occurred, as indicated by spontaneous urine loss.

Histology of Spinal Cords

Deeply anesthetized animals were perfused with PBS followed by 4% paraformaldehyde. Spinal cord tissue was removed and postfixed in 4% paraformaldehyde for 24 hours, followed by embedding in paraffin. Representative sections were then subjected to H&E and solochrome cyanine staining. H&E was used to detect inflammation and to determine the integrity of tissue. Solochrome cyanine staining was used to stain the myelin. Briefly, following dewaxing and rehydration, sections for H&E were immersed in Mayer's hematoxylin for 5 min, followed by rinsing, immersion in eosin for 2 min, rinsing, clearing, and mounting. Sections for the solochrome stain were submerged in a solution of 0.2% solochrome cyanine, differentiated in 10% iron alum, followed by rinsing, clearing, and mounting. For immunohistochemistry, tissue was placed in OCT (Tissue-Tek) following PBS perfusion and snap frozen in dry-ice-cooled isopentane. Frozen sections were fixed in acetone for 10 min at room temperature. An anti-mouse CD3 antibody (Serotec, MCA 1477) was used. Evaluation of the extent of H&E-positive inflammatory cells and myelin destruction was done using a semi-quantitative scoring system: 1 = mild; 2 = moderate; 3 = strong. The following CD3-positive staining score was used: 0 = 0-5 cells; 1 = 5-20 cells; 2 = 20-50 cells; 3 = 50-100 cells; 4 = >100 cells.

Proliferation Assay and IFN-γ Determination

CD3+CD4+ T cells from spleen and inguinal lymph nodes of MOG-immunized Sgpl1 Flox/Flox Cre-/- and Sgpl1 Flox/Flox Cre+/- mice (day 10) were isolated by FACS sorting. 40,000 cells were incubated for 72 hrs in RPMI-1640 containing 10% FCS in the presence of irradiated APCs (2×10^5) derived from the same donor mice and MOG (20 µg/ml). 0.5 µCi/well of thymidine was added for the last 24 hrs. Incorporated radioactivity was measured as an indicator of proliferation. Supernatants were analyzed for IFN-γ by ELISA (R&D Systems).

Statistics

Bar graphs in the figures represent average values ± standard error of the mean. Statistical significance was calculated using Student's t-test, two-tailed for unequal variance, and is indicated in the graphs as follows: *, P<0.05; **, P<0.01; ***, P<0.001; n.s., not significant.
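To make the statistical procedure above concrete, the following minimal Python sketch applies a two-tailed t-test for unequal variance (i.e., with Welch's correction), as stated in the Statistics paragraph. The group arrays are placeholders, not measured data:

from scipy import stats

# Hypothetical example groups; the actual measurements are not reproduced here.
control = [2.1, 1.9, 2.3, 2.0, 2.2]       # e.g., control blood T cell counts
induced_ko = [0.3, 0.4, 0.25, 0.35, 0.3]  # e.g., inducible Sgpl1 KO counts

# equal_var=False applies Welch's correction for unequal variances;
# scipy returns a two-tailed P value by default.
t_stat, p_value = stats.ttest_ind(control, induced_ko, equal_var=False)
print(f"t = {t_stat:.2f}, two-tailed P = {p_value:.4f}")
# Annotation convention from the text: *, P<0.05; **, P<0.01; ***, P<0.001.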
Shear Testing of Topologically Optimised Web Cover Plates in Splice Connections—Experiment Design and Results

Testing shear-resisting plates in steel connections is one of the most challenging laboratory undertakings in steel construction, as the most common experimental layout design includes simulating the connection with its adjoining members. This significant hindrance gained particular magnitude as the need to test prototypes of topologically optimised shear cover plates became more pressing. Indeed, new code-compliant topology optimisation approaches for steel construction have recently been offered, and physically non-linear analyses have been demonstrated to be vital for assessing these elements. Hence, a rapid and reliable experimental process has become a fundamental necessity. To answer this need, a novel layout is herein proposed, in which topologically optimised and previously numerically examined bolted shear plates of a well-known steel joint were tested. The results allowed for the definition of the material trilinear model for use in subsequent numerical analysis, as well as the validation of the numerical simulation results. The discrepancy between the previously mathematically anticipated and empirically determined ultimate resistance did not exceed 1.7%.

Introduction

Shear testing of steel bolted connection parts has always been challenging for laboratory researchers [1][2][3][4]. The fact that these connections' centre of rotation depends on the number and position of the bolts (changing as these undergo non-linearity) and that cover plate connections are made of several independent parts makes it difficult to design an experimental test, except if specific machinery is available. In the current case, the need to test exceptionally slender shear parts and the availability of a general-purpose tension testing machine led to the design of a novel testing apparatus that can be employed in several other cases.

The experimental programme aims to assess the ultimate and buckling behaviour of topologically optimised web cover plates [5,6] originally designed for the seminal work of Sheikh-Ibrahim on steel girder splice connections [7][8][9]. Experimental validation has been considered paramount for leveraging topology optimisation (TO) for steel construction after recent developments on the proposition and numerical validation of a methodology for code-compliant TO for steel connections [10]. Moreover, the same research initiative also showed that the behaviour of topologically optimised bolted connections cannot be safely modelled in the linear regimen [10], meaning that non-linear analyses shall be conducted, thus imposing a further need for experimental validation. Hence, the current study is framed within a comprehensive research programme deemed to find a much-needed solution to bring TO to practice in steel construction and fill an important gap in relation to other industries where TO is already a daily reality [11].

Prototypes were optimised using a solid isotropic material with penalisation (SIMP)-based approach using the TOSCA software (version 2017), following a Eurocode 3-compliant methodology.

The literature was investigated for state-of-the-art experimental setups for shear testing. The works of Chen et al. for shear tests on cold-formed steel channels [12], Milewska et al. on proposing design rules for shear testing [13], Lu et al. on experimental investigation of the shear behaviour of connectors [14], and Fan et al.
[15] and Cai and Yuan [16] on the shear behaviour of steel connections have been considered. In light of the depicted literature, one can frame the current experimental research within the field as a contribution to a better understanding of the shear behaviour of highly optimised shear parts.

The main aims of the current document are reporting a novel layout for rapid and reliable shear testing of shear cover plates, as well as experimentally determining the ultimate behaviour of topologically optimised shear plates in a way that previous results from numerical non-linear analyses can be validated. The results herein presented to the community are extensive, with a further aim of enabling full reproducibility and sustaining the deduction of critical conclusions from previous numerical calculations.

Materials

2.1. Sheikh-Ibrahim's Splice Connection

2.1.1. Geometry

Sheikh-Ibrahim's work at the University of Texas at Austin in 1995 [7] provided a wide-ranging comprehension of moment connections in steel beams, particularly concerning the effects of bolt row eccentricity on web plates, thus contributing to the development of design recommendations that are broadly used by engineers and researchers [8,9]. This scenario, together with the abundance of experimental data available, made it appropriate to examine one of the examples analysed for implementing a novel TO approach. As a result, Sheikh-Ibrahim's first case (1 of 32) was investigated, as shown in Figure 1, in which two W24 × 7 × 55 segments in A36 steel [17] are coupled with one cover plate in each flange and two cover plates in the web. The web cover plates are 381.2 mm in length, 304.8 mm in width, and 12.7 mm thick. Because the fabrication was performed in Europe, the plate thickness was reduced to 12 mm. Similarly, the original A325 steel bolts with 15.9 mm diameter were replaced with M16 (8.8) bolts.

Connection and Plate Resistance

The European steel construction standards [18][19][20] were used to establish an initial scenario for the predicted failure modes of the non-optimised plate. Within this context, it was determined that the connection's capacity is restricted by its bending moment resistance, which was calculated to be 417 kNm and 514 kNm depending on whether or not the partial safety coefficients relevant to the ultimate limit states were included. Given the characteristic resistance of the connection (417 kNm), the related shear when the connection fails in bending is 151 kN.

Each 12.0 mm thick web cover plate has a bearing resistance of FbRd = 0.00048 fu [kN] (with fu in kPa and no partial safety factor γM2 = 1.25). According to EN 1993-1-8, each M16 class 8.8 bolt has a shear resistance of FvRd = 75.4 kN (without the partial safety factor γM2 = 1.25). As a result, the bolt shear limits the plate capacity rather than its bearing capacity, which equals 192 kN for the standard's minimal threshold for ultimate stress.
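As a quick verification of the figures just quoted (all values taken from the text, with the standard's minimum ultimate stress of 400 MPa expressed as 400,000 kPa), the governing comparison can be written out as:

F_{b,Rd} = 0.00048 \, f_u = 0.00048 \times 400{,}000 \ \mathrm{kPa} = 192 \ \mathrm{kN} \; > \; F_{v,Rd} = 75.4 \ \mathrm{kN}

so bolt shear, rather than bearing, indeed governs the plate capacity, as stated above.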
In addition to the connection's vertical shearing forces, the inherent eccentricity of the bolt rows leads to a bending moment opposed by a torque with horizontal components [7]. For a unitary resultant force (Rvector = 1), this leads to vertical and horizontal components in the bolts equal to Vcomponent = 0.555 and Hcomponent = 0.832, respectively, while each bolt capacity is 75.4 kN (per bolt and shear plane). The joint shear resistance is four times the vertical component; thus, from V/4 = 0.555 × 75.4, the connection's resistance to an externally induced shear force is V = 167 kN.

Topology Optimisation

The web plate of the connection was topologically optimised utilising a density-based technique, the solid isotropic microstructure with penalisation (SIMP) method [11], with a penalty factor of 3. Svanberg's method of moving asymptotes (MMA) solver was utilised [21,22].

Following the methodology proposed in [6], a Eurocode-compliant procedure was implemented, ensuring that the relevant collapse modes are considered by means of geometric constraints. The optimisation computational models employed the finite element method with two-dimensional triangular quadratic elements (STRI65) on free meshes with partitions. The selected objective function was the minimisation of the energy stiffness measure.

To that goal, parametric research was conducted, resulting in optimal topologies with volume fractions varying from 100% to 10% in steps of 10% in the first stage, refined in steps of 2.5% in the second stage, from which physically non-linear studies were able to determine the precise final capacity of each solution. The selected topology corresponds to the smallest volume fraction that demonstrated the ability to ensure the eventual capacity of the original connection [5,6]. A complete explanation of the optimisation methods and practical specificities can be found in [6].

Material Modelling

The optimisation process considered the case study's web plate material, which was set to be A36 steel [17]. The mechanical parameters of this alloy are determined by a minimum yield strength of 250 MPa and a minimum ultimate strength of 400 MPa.

Nevertheless, other factors, such as information on the steel hardening phase and a reasonable level for ultimate elongation, were required to obtain truthful results from non-linear analysis. Consequently, the published literature was studied, including Sheikh-Ibrahim's [7], Sheikh-Ibrahim and Frank's [8], Mayatt's [23], and Rex and Easterling's [24] research, which suggested that it is reasonable to adopt a sensible and code-supported trilinear model with hardening. Such a model was established with an elastic stage with E0 = 199.9 GPa, a plastic stage with E0/265, and a hardening stage with E0/1000.

However, material models had to be changed to reflect actual steel characteristics when the chosen topology progressed to the production stage. The supplier's material sheets confirmed the initial insight. BAMESA ensured a yield strength of 321 MPa, an ultimate strength of 434 MPa, and an elongation of 0.340 (mm/mm) for the steel batch from which the present plates were fabricated through subtractive manufacturing.
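As an illustration only, the trilinear model with hardening described above can be assembled as in the following minimal Python sketch, using the stated moduli (E0 = 199.9 GPa, plastic slope E0/265, hardening slope E0/1000) and the code-based yield strength. The strain at which the hardening branch starts (eps_p below) is not reported in the text and is therefore a placeholder assumption, and the ultimate-stress cap is omitted for brevity; this is not the authors' implementation:

import numpy as np

def trilinear_stress(eps, E0=199.9e3, fy=250.0, plastic_div=265.0,
                     hardening_div=1000.0, eps_p=0.02):
    """Engineering stress [MPa] at strain eps; E0 in MPa (199.9e3 = 199.9 GPa).
    eps_p marks the assumed start of the hardening branch."""
    Ep, Eh = E0 / plastic_div, E0 / hardening_div
    eps_y = fy / E0                          # end of the elastic branch
    if eps <= eps_y:                         # elastic: slope E0
        return E0 * eps
    if eps <= eps_p:                         # plastic: slope E0/265
        return fy + Ep * (eps - eps_y)
    # hardening: slope E0/1000 beyond eps_p
    return fy + Ep * (eps_p - eps_y) + Eh * (eps - eps_p)

# Stress at a few strains for the original, code-based model:
print([round(trilinear_stress(e), 1) for e in np.linspace(0.0, 0.2, 5)])

Refitting this sketch to the coupon tests reported later amounts to replacing the yield value and divisors with the measured quantities (397.9 MPa, plastic slope E0/642, hardening slope E0/10,000).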
Nonetheless, steel coupons obtained from the steel plate have been tested for case-specific quantification. These results are provided in the next section and were utilised to obtain the updated trilinear model with hardening. Material coupons, in the number of five, were obtained from the steel plate from which the prototypes were manufactured. These samples, listed in Table 1 and illustrated in Figure 2, were cut with a rectangular section and afterwards processed to meet the gauges prescribed in the EN 10025-2 [25] and ASTM A36 [17] standards for sample testing.

Optimised Prototype

The numerically optimised solution to a volume fraction of 12.5% (following the procedure depicted in [6]) was manufactured by VALIS, from a steel sheet with reference BAMESA lot 24474172FG and certificate 86.DO.01.22.A2, and was given the code "Part 13". A photograph of the prototype is depicted in Figure 3.

Designing a Novel Testing Layout

Testing shear connections is a challenging laboratory endeavour since shear plates are loaded by bolts (Figure 4), which are not fixed. The plate will be subjected to a movement composed of translations and rotations, changing as the bolts experience different loading, firstly due to the movement and afterwards due to the material non-linearity. Therefore, fixed points within the plate domain are unlikely to exist. Under these circumstances, testing layouts shall be designed to ensure the necessary degrees of freedom (thus avoiding unrealistic fixed points) and to obtain the loads applied by the correct elements with the correct intensities. Therefore, the most reliable way to ensure these conditions is replicating the whole connection layout, including, most of the time, two shear cover plates, as well as the beam segments, a load actuator upon the latter, and fixed and sliding beam supports.
However, several problems arise from this approach. Using case-specific beam segments is expensive, time-consuming, and requires several different segments in each research programme. Moreover, loading control is indirect, as only the bending moment and shear force at the splice can be commanded. The need to use two plates is equally a hindrance in most cases.

A testing layout has been designed to overcome these challenges and ensure a beam-splice-like interface assembled in a tension testing machine.

Made of easily removable and reassembled parts, this layout can be quickly assembled and disassembled. It is made of unique plates at the machine gripping sites, multiplied in the central part to ensure a symmetric loading configuration with double plates, to which a single shear testing plate can be bolted.

Hence, the plate behaviour can be simulated in real connection conditions (including accurate loading by bolts and plate movement) while the testing machine exerts direct control over load and displacement.

Contrary to indirect approaches, as in loading a beam with significant bending moments and shear forces, the machine-applied testing force equals the force to which the plate is subjected. Thus, it allows for the testing of much stronger plates with moderate laboratory resources.

The testing layout scheme can be found in Figure 5 for the sacrificial (non-optimised) plate and in Figure 6 for the topologically optimised plate.

Experimental Protocol

The experimental protocol may be divided into three steps: steel coupon testing for determining material properties, a layout general rehearsal, and prototype testing.

Pertaining to the first step, the coupons' test section is defined as a 12 mm × 6 mm section, and the parallel length is 50 mm, thus guaranteeing that the test results are admissible under both the European (EN 10025-2) and American (ASTM A36) materials standards. To anchor the yielding position, a "dog bone" shape is required. The minimum number of tests required to generate characteristic values for assessing steel qualities has been determined to be three; five specimens were created as a safeguard against unanticipated circumstances, of which four were machined and tested.

The prismatic coupons were used to create four "dog bone" testing specimens. Figure 7 depicts the process used to prepare these specimens, the purpose of which was to ensure 50 mm long testing gauges with a 12 mm × 6 mm section.

The coupons were tested until they failed in order to ascertain the steel's yielding stress, ultimate stress, and ultimate strain. In keeping with the literature on steel bolted connections [26,27], all tests were conducted in displacement control mode using a monotonic ramp with a 0.05 mm/min displacement rate.

A universal testing machine, the "MTS 810 Material Test System, model 318.25", with a load capacity of 250 kN and a clamping peak pressure of 69 MPa, was utilised for the tests. The strain within the specimens' "dog bone" was measured both by crosshead motion, assuming uniform deformation along the 50 mm length of the reduced section ("crosshead def"), and by an integrated extensometer ("MTS 634.12F-24, S/N 10183942E") with a gauge length of 25 mm ("extensometer def"). Figure 8 depicts an image of the testing machine.
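As a small illustration of how the recorded machine signals translate into the stress-strain curves discussed later, the sketch below applies the geometry stated above (12 mm × 6 mm test section; 50 mm parallel length for crosshead-based strain, 25 mm for the extensometer). The force and displacement values are placeholders, not measured data:

# Engineering stress/strain conversion implied by the coupon geometry.
AREA_MM2 = 12.0 * 6.0                      # test section: 12 mm x 6 mm = 72 mm^2

def eng_stress_mpa(force_kn):
    return force_kn * 1000.0 / AREA_MM2    # kN -> N over mm^2 yields MPa

def eng_strain(delta_mm, gauge_mm=50.0):
    # gauge_mm = 50.0 for crosshead-based strain (uniform-deformation
    # assumption over the reduced section); use 25.0 for the extensometer.
    return delta_mm / gauge_mm

print(eng_stress_mpa(28.65))  # ~397.9 MPa, i.e., the mean measured yield stress
print(eng_strain(0.10))       # 0.002 strain for 0.1 mm of crosshead motion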
A universal testing machine, the "MTS 810 Material Test System, model 318.25" with a load capacity of 250 kN and clamping peak pressure of 69 MPa, was utilised for the tests.The strain within the specimens' "dog bone" was measured both by crosshead motion, assuming uniform deformation along the 50 mm length of the reduced section ("crosshead def "), and by an integrated extensometer ("MTS 634.12F-24,S/N 10183942E") with a gauge length of 25 mm ("extensometer def").In a second step, tests using "sacrificial test plates" were performed to evaluate the layout, measurement assembly, and testing conditions; to assess staff preparation; to inspect possible setup errors; and to determine the general conditions for following prototype testing.The "sacrificial test plate", made of S275 steel to EN 10025, is displayed in Figure 9, and the testing configuration is shown in Figure 5.The bolts used to connect the testing plates to the layout were EN 14399's HR-tZn class 8.8.spect possible setup errors; and to determine the general conditions for following pr type testing.The "sacrificial test plate", made of S275 steel to EN 10025, is displaye Figure 9, and the testing configuration is shown in Figure 5.The bolts used to connec testing plates to the layout were EN 14399's HR-tZn class 8.8.Once any adjustments deemed essential after evaluating the sacrificial plate tes results were applied, the prototype layout was gathered, as shown in Figure 6, and tests then performed.According to the literature on steel bolted joints [26,27], the test the optimised shear prototypes were performed in displacement control mode cons ing a monotonic ramp and a 1 mm/min displacement rate. A universal testing machine model, the " Galdabini Quasar 1200 S/N VB47" with kN of nominal load capacity, a stroke of 1300 mm, and a clamping force limit of 1800 was employed for the tests.Figure 10 depicts the testing machine model.Once any adjustments deemed essential after evaluating the sacrificial plate testing results were applied, the prototype layout was gathered, as shown in Figure 6, and the tests then performed.According to the literature on steel bolted joints [26,27], the tests on the optimised shear prototypes were performed in displacement control mode considering a monotonic ramp and a 1 mm/min displacement rate. 
A universal testing machine model, the "Galdabini Quasar 1200 S/N VB47" with 1200 kN of nominal load capacity, a stroke of 1300 mm, and a clamping force limit of 1800 kN, was employed for the tests.Figure 10 depicts the testing machine model.Beyond the vertical motion "crosshead" measurement at the testing frame, the loca deformation of the prototype was computed using two linear variable displacement trans ducers (LVDTs) (model "Monitran MTN/IEUSAL050-10", stroke ±50 mm, placed in a suit Beyond the vertical motion "crosshead" measurement at the testing frame, the local deformation of the prototype was computed using two linear variable displacement transducers (LVDTs) (model "Monitran MTN/IEUSAL050-10", stroke ±50 mm, placed in a suitable configuration as shown in Figure 11 for the shear tests).The latter displacement measurement is referred to as "LVDT" and is the difference between the displacements detected by LVDT 1 and LVDT 2 .The displacement measurement obtained via the two LVDTs is expected to represent the net displacement of the specimen.In contrast, the displacement measurement obtained through the crosshead motion shows the overall test setup's gross displacement, which may be affected by possible slip phenomena at the wedges and further slip caused by bolt-hole clearance. Tests for Material Properties Assessment Figure 12 depicts the attained stress-strain curves for the ASTM A36 steel coupons' tensile tests considering crosshead deformation and extensometer deformation.Figure 13 compares the four stress-strain curves produced for the four specimens considering crosshead deformation.Table 2 shows the yielding and ultimate stress values for the four coupons, as well as the mean values and associated coefficient of variation (CoV).Both LVDT measurements were collected using an acquisition unit (model "NI cDAQ 9189") that was outfitted with a suitable connection type ("NI-9209") for monitoring input voltage signals.In the LabView environment, a personalised "project" was created to transform the voltage data into a displacement signal in real time and record signal data in a ".txt" file for postprocessing.All measurements (both those related to crosshead motion and those obtained via LVDTs) were taken at a sampling frequency of 10 Hz. 
Tests for Material Properties Assessment

Figure 12 depicts the attained stress-strain curves for the ASTM A36 steel coupons' tensile tests considering crosshead deformation and extensometer deformation. Figure 13 compares the four stress-strain curves produced for the four specimens considering crosshead deformation. Table 2 shows the yielding and ultimate stress values for the four coupons, as well as the mean values and the associated coefficient of variation (CoV).

As a result, the yield stress, for which ASTM A36 prescribes a minimum of 250 MPa, has been reported in the material certificate as 321 MPa and was empirically determined as 397.9 MPa (CoV 2.05%). Furthermore, the ultimate stress, for which ASTM A36 prescribes no less than 400 MPa, is reported as 434 MPa in the material certificate and was empirically measured as 440.9 MPa (CoV 4.73%).

The failure of the steel coupons occurred in the middle section of the "dog bone" for all specimens, accompanied by a noticeable necking event, as shown in Figure 14.

The stress-strain diagram was updated based on the experimental data, supported by mathematical correlations and computational techniques [28][29][30][31]. Ultimate and yield thresholds were set to the observed values, E0 was held constant at 199.9 GPa, and E was limited to E0/10,000 during the hardening phase to avoid significantly exceeding the experimentally measured ultimate stress. The Young modulus of the plastic phase was set at E = E0/642. Figure 15 depicts both the original stress-strain diagram used in the non-linear analyses and the one modified in light of the experimental results.

Shear Test on the Sacrificial Plate

This preliminary trial aims to assess the correctness of the layout, geometry, and general assumptions, as well as the non-optimised plate base scenario, including its failure mode, LVDT performance, installation and adequacy, and staff readiness for executing and recording the tests.

Layout preparation was initiated by tightening the 3 + 3 M20 bolts at the extreme fixed plates, preloading them with the required torque (38 kgf·m), followed by manual tightening of the sacrificial test plate's 2 + 2 M16 bolts with a spud wrench, without preloading.
Since the sacrificial plate for the shear test would not be used for subsequent tests, it was decided to perform the test until complete collapse. The load-displacement relation for the sacrificial shear test plate is shown in Figure 16.

The plate failed at a load of 159.5 kN, corresponding to a displacement of 26.19 mm (measured through the crosshead motion) and 27.39 mm (measured through the LVDTs). The machine's integrated crosshead motion measurement and the LVDT measurements (acquired by the external cDAQ 9189 unit) were relatively consistent up to the complete collapse of the specimen. Some photographs of the shear test on the sacrificial plate are shown in Figure 17, while the failure of the specimen is shown in Figures 18 and 19. In particular, the failure is ascribed to the rupture of one of the four bolts, while significant shear deformations are observed along the threaded area of the other bolts (especially in the bolt vertically aligned with the bolt that ruptured). The borders of the two holes at the opposite corners of the sacrificial plate are markedly ovalised along the loading shear direction. The plate is considerably deformed, and a crack develops around the bottom-right hole and propagates horizontally until the specimen edge (Figure 19).

It shall be noted that, although neither the plate nor the bolt material was tested to assess its actual capacity, failure occurring by bolt fracture due to shear is in line with the connection capacity calculations. In fact, failure was expected to occur in this mode when a total shear of 167
Shear Test on the Optimised Prototype

A shear test was performed on specimen S1 (referred to as Part 13 within the manufacturing context). The 2 + 2 M16 bolts were tightened with a spud wrench without any applied preload, while preloading (38 kgf.m) was used on the 3 + 3 M20 bolts on the plates fixed at the loading machine. Figure 20 depicts the load-displacement curves obtained from the tested prototype (S1). The prototype failed at a load of 106.4 kN, corresponding to a crosshead displacement of 43.62 mm and an LVDT displacement of 43.53 mm. The two displacement measurements correspond remarkably well, except for a minor difference in the linear elastic loading ramp of the curve. The prototype showed significant ductility, with slight post-peak softening. At a displacement of roughly 80 mm, the prototype finally failed in shear.

Some photographs of the shear test on the S1 plate are shown in Figure 21, in which it is observed that the specimen exhibited a large shear deformation. The failure of the specimen is illustrated in Figures 22 and 23. The former demonstrates the plate damage caused by shear loading while still assembled in the layout, as well as the slight rotation of the layout plates, which can only be seen after significant vertical displacement. This, however, is a logical and expected phenomenon related to the degrees of freedom the layout allows in order to precisely match the conditions of the cover plate in a real connection. Furthermore, the collapse is ascribed to plate shear failure of the upper-left segment of the X-shaped specimen, accompanied by significant ovalisation of the hole borders at the two opposite corner bolts (top-left and bottom-right), which is consistent with the loading shear direction. No cracks are noted near the hole borders (Figure 23).
The ultimate capacity of the assessed prototype was experimentally found to be 106.4 kN. This outcome can be compared with the optimised plate's adjusted ultimate capacity. By adjusting the plate resistance to account for a thickness of 12.0 mm rather than 12.7 mm, as well as yield and ultimate stresses of 398 MPa and 441 MPa instead of the code-based values of 250 and 400 MPa, respectively, the prototype's expected global ultimate characteristic resistance is 104.6 kN based on numerical analyses. Hence, the difference between the expected and the experimental ultimate capacity values is approximately 1.7%.
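The two prediction-to-test comparisons quoted above can be reproduced with a short script. This is only an illustrative check of the reported arithmetic; the 167 kN and 104.6 kN predictions come from the connection capacity calculations and the adjusted numerical analyses, respectively, and are not derived here.

```python
def relative_deviation(predicted_kn: float, measured_kn: float) -> float:
    """Deviation of the measured capacity from the prediction, in percent."""
    return abs(predicted_kn - measured_kn) / predicted_kn * 100.0

# Sacrificial plate: bolt-shear capacity prediction vs. test result.
print(f"sacrificial plate: {relative_deviation(167.0, 159.5):.1f}%")  # ~4.5%

# Optimised prototype S1: adjusted numerical prediction vs. test result.
print(f"prototype S1:      {relative_deviation(104.6, 106.4):.1f}%")  # ~1.7%
```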
Conclusions

A laboratory programme evaluated the behaviour of topologically optimised joint web plates and offered valuable information on the validity of prior computational physically non-linear analyses of those parts. The contributions can be summarized as follows:
• Tensile experiments were performed on coupons cut from the steel plate used to manufacture the optimised prototypes, matching European and American specifications for testing steel to structural standards for the definition of material properties.
• The experimental results fostered the development of a new trilinear material model, which will be useful for further numerical assessments of the optimised sections.
• A preliminary test to collapse, using the sacrificial test plate, was extremely useful for confirming the joint collapse mode (by bolt rupture) and the expected collapse load.
• The experimentally obtained ultimate capacity of the prototype closely matched the expected value, exceeding the numerical simulation findings by roughly 1.7%. That the experimental value slightly exceeds the numerical one is considered critical for attesting that the TO process is safe-sided for engineering design, and corroborates the numerical results.
• Future developments made possible by the results attained herein include reassessing the numerical simulations considering the experimentally defined steel properties, as well as expanding the research objects to other plates.

Figure 5. Testing layout for the sacrificial test plate.
Figure 8. (a) Testing equipment used for the tensile tests on steel coupons and (b) a coupon under testing.
Figure 10. Testing machine used for the prototype tests.
Figure 11. Configuration of LVDTs used for the tensile tests.
Figure 13. Contrast of the four stress-strain curves of ASTM A36.
Figure 14. Failure of steel coupons made of ASTM A36.
Figure 16. Load-displacement relation of the sacrificial test plate under shear.
Figure 17. Shear test on the sacrificial plate: (a) beginning, (b) end of the test.
Figure 18. Failure of the sacrificial shear plate while in the testing machine.
Figure 19. Failure of sacrificial shear plate extracted from the testing machine.
Figure 21. Shear test on the S1 plate: (a) beginning, (b) end of the test.
Figure 22. Failure of shear prototype S1 in the testing machine.
Figure 23. Failure of shear prototype S1 extracted from the testing machine.
Table 1. Steel coupons parts list.
Table 2. Tensile test results of coupons for yielding and ultimate stress.
2023-11-15T17:30:32.424Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "8d7960b177556eaef7b10e18d33de14a44085a91", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/16/22/7077/pdf?version=1699424874", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1ed7a74191918aaa291fe860a818364f6cad4dba", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
3015428
pes2o/s2orc
v3-fos-license
Vitamin D and Cardiovascular Disease

Vitamin D deficiency, as well as cardiovascular diseases (CVD) and related risk factors, are highly prevalent worldwide and frequently co-occur. Vitamin D has long been known to be an essential part of bone metabolism, although recent evidence suggests that vitamin D plays a key role in the pathophysiology of other diseases, including CVD, as well. In this review, we aim to summarize the most recent data on the involvement of vitamin D deficiency in the development of major cardiovascular risk factors: hypertension, obesity and dyslipidemia, type 2 diabetes, chronic kidney disease and endothelial dysfunction. In addition, we outline the most recent observational, as well as interventional, data on the influence of vitamin D on CVD. Since it is still an unresolved issue whether vitamin D deficiency is causally involved in the pathogenesis of CVD, data from randomized controlled trials (RCTs) designed to assess the impact of vitamin D supplementation on cardiovascular outcomes are awaited with anticipation. At present, we can only conclude that vitamin D deficiency is an independent cardiovascular risk factor, but whether vitamin D supplementation can significantly improve cardiovascular outcomes is still largely unknown.

Introduction

Vitamin D is classically known for its role in bone metabolism, being important for the maintenance of calcium homeostasis by ensuring physiologic calcium absorption by the gut [1][2][3]. The discovery that the vitamin D receptor (VDR) is ubiquitously expressed in almost all body cells, such as immune, vascular or myocardial cells, suggests an involvement of vitamin D-mediated effects in several other systems apart from musculoskeletal tissues [2]. This has led to extensive research on vitamin D as a potential influencing factor in the pathogenesis of several chronic non-skeletal diseases, such as infectious or autoimmune diseases, cancer or cardiovascular diseases (CVD) [4][5][6]. Cardiovascular (CV) risk factors, such as arterial hypertension, obesity, dyslipidemia or diabetes mellitus, as well as CVDs, including myocardial infarction, coronary artery disease or stroke, are the most prevalent diseases and account for the major causes of death worldwide, especially in Western countries [7]. This underlines the importance of clarifying the role of vitamin D in the context of CVD. Already in 1981, Scragg reported on a seasonal variation of CV mortality and suspected a positive, protective effect of UVB radiation on CV risk [8]. An association of vitamin D with different CV risk factors and diseases has been extensively evaluated during the last few years. Numerous observational studies, prospective meta-analyses, as well as some interventional studies have addressed the possible linkage between vitamin D deficiency and the development of CVD and its risk factors [9][10][11][12]. The scope of this review is to provide a brief overview of basic vitamin D metabolism and vitamin D deficiency. We summarize the most recent studies evaluating the relationship between vitamin D and the presence of cardiovascular risk factors, including hypertension, obesity, type 2 diabetes mellitus, chronic kidney disease, dyslipidemia and endothelial dysfunction. In addition, we give an overview of observational data on the association between vitamin D status and incident CV events. Finally, we discuss randomized controlled trials (RCTs) and meta-analyses on vitamin D treatment and its influence on CVD.
Since there has been extensive research, including numerous reviews, on this topic within the last few years, we mainly concentrate on the latest developments within the years 2012 to 2013. We conclude our work by giving an outlook on expectations concerning the large ongoing interventional trials on vitamin D supplementation and CVD and on future developments in this research field.

Basic Vitamin D Metabolism

Vitamin D3 is a steroid pro-hormone, which is mainly derived from UVB-induced synthesis from 7-dehydrocholesterol in the skin. This endogenous synthesis is the main source of vitamin D supply to the body and accounts for approximately 80% of the vitamin D supply [1][2][3]. Vitamin D2 or D3 can also be taken up via nutrition in small amounts, as it is contained in, e.g., eggs, mushrooms and fish. After synthesis in the skin or nutritional uptake, vitamin D is transported to the liver by a specific vitamin D binding protein (VDBP), where it is hydroxylated to 25-hydroxy-vitamin D (25(OH)D) [1][2][3]. This inactive form is the main metabolite circulating in the blood and is also used for the classification of vitamin D status [1][2][3]. Predominantly in the kidneys, 25(OH)D is further hydroxylated to its most active form, 1,25-dihydroxy-vitamin D (1,25(OH)2D), by the enzyme 1-α-hydroxylase. Since 1-α-hydroxylase is also active in extra-renal tissues throughout the body [13], this gives rise to the assumption that vitamin D plays a widespread role in overall health, including, beyond the musculoskeletal system, other tissues, such as the heart and the vessels.

Classification of Vitamin D Deficiency

Vitamin D status is classified according to 25(OH)D levels in the blood; the half-life of 25(OH)D is approximately two to four weeks. There exists no clear consensus on the definition of vitamin D deficiency and vitamin D sufficiency. While the Institute of Medicine (IOM) report classifies vitamin D deficiency as 25(OH)D levels below 12 ng/mL (multiply by 2.496 to convert ng/mL to nmol/L) and levels of 20 ng/mL as sufficient, the Endocrine Society Guidelines suggest that 25(OH)D levels <20 ng/mL are deficient and levels of 30 ng/mL are sufficient [14][15][16]. These classifications are mainly based on bone-related outcomes, since available data are still insufficient to give recommendations related to CVD or other chronic diseases.

Prevalence of Vitamin D Deficiency

Vitamin D insufficiency and deficiency are highly prevalent; this is well reflected by the fact that more than half of the population worldwide has levels below 30 ng/mL [16,17]. Different factors, such as increased age, female sex, darker skin pigmentation, reduced sun exposure, as well as seasonal variation and distance from the equator, are risk factors for vitamin D deficiency and must be considered. The increasing prevalence of low levels of vitamin D is mainly explainable by changes in lifestyle, reduced sun exposure and, to some extent, by air pollution [18]. It should, however, be acknowledged that previous inter-assay and inter-laboratory comparisons of 25(OH)D levels showed significant variability of the reported values. This, in turn, points to the need for standardization of 25(OH)D measurements and warrants caution when comparing 25(OH)D levels and their cut-offs derived from different studies [19][20][21].
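To make the two classification schemes above concrete, the following sketch converts between the common units and applies the stated cut-offs. The 2.496 conversion factor and the 12/20/30 ng/mL thresholds are taken directly from the text; the function names and the intermediate "insufficient" label for levels between the two cut-offs are our own shorthand.

```python
NG_PER_ML_TO_NMOL_PER_L = 2.496  # conversion factor for 25(OH)D

def ng_ml_to_nmol_l(level_ng_ml: float) -> float:
    """Convert a serum 25(OH)D concentration from ng/mL to nmol/L."""
    return level_ng_ml * NG_PER_ML_TO_NMOL_PER_L

def classify_25ohd(level_ng_ml: float, scheme: str = "IOM") -> str:
    """Classify a 25(OH)D level (ng/mL) under the IOM or Endocrine Society cut-offs."""
    if scheme == "IOM":
        deficient, sufficient = 12.0, 20.0
    elif scheme == "EndocrineSociety":
        deficient, sufficient = 20.0, 30.0
    else:
        raise ValueError("unknown scheme")
    if level_ng_ml < deficient:
        return "deficient"
    if level_ng_ml < sufficient:
        return "insufficient"
    return "sufficient"

level = 25.0  # ng/mL
print(f"{level} ng/mL = {ng_ml_to_nmol_l(level):.1f} nmol/L")       # 62.4 nmol/L
print("IOM:", classify_25ohd(level, "IOM"))                          # sufficient
print("Endocrine Society:", classify_25ohd(level, "EndocrineSociety"))  # insufficient
```

The same level can thus be "sufficient" under one guideline and "insufficient" under the other, which is exactly the lack of consensus the text describes.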
Arterial Hypertension

Vitamin D deficiency has been associated with higher blood pressure levels, which was already shown in most, but not all, prospective studies, as well as meta-analyses of observational studies [9,22,23]. While these observational data support an association between vitamin D status and blood pressure, it must also be acknowledged that residual confounding cannot be excluded. In addition, the reported variations in blood pressure explained by differences in vitamin D status were often relatively small and, thus, of questionable clinical relevance. Possible mechanisms for this association of vitamin D and blood pressure include the inverse association of vitamin D levels with renin-angiotensin-aldosterone system (RAAS) activity, the effect of improving endothelial function and the prevention of secondary hyperparathyroidism [24][25][26][27]. In this context, it should be noted that high parathyroid hormone (PTH) levels are a hallmark of vitamin D deficiency and are known to be associated with myocardial hypertrophy and higher blood pressure levels [28]. In addition, increasing evidence suggests that the mutual interplay between vitamin D, parathyroid hormone and aldosterone mediates cardiovascular damage independent of the RAAS [29,30]. A large meta-analysis assessing the association of baseline vitamin D status with the risk of hypertension was performed by Kunutsor et al. They included 11 prospective studies published between 2005 and 2012, which comprised a total of 283,537 participants and 55,816 cases of hypertension with a mean follow-up of nine years [31]. The authors reported a significant inverse association of baseline circulating serum vitamin D levels with the risk of incident hypertension. In detail, the pooled relative risk (RR) was 0.70 (95% confidence interval (CI) 0.57-0.86) when comparing the highest to the lowest tertile of baseline 25(OH)D levels, with no evidence of heterogeneity among the findings. When evaluating dose-response in five studies that reported RRs for vitamin D exposure, the authors found that the risk for hypertension was lowered by 12% per 10 ng/mL increment of 25(OH)D [31]. Although this was the largest meta-analysis performed, giving strong evidence for a relationship of vitamin D and blood pressure (BP), data on causality are still insufficient and warrant further RCTs. Within the last year, several RCTs were performed to evaluate the effect of vitamin D on BP levels in various cohorts, with differing results [32][33][34]. Larsen et al. performed an RCT in 130 hypertensive patients who were supplemented with 3000 IU of vitamin D or placebo over 20 weeks during winter in Denmark. They found a non-significant reduction of BP in 24-h ambulatory blood pressure monitoring (ABPM) (systolic −3 mmHg, p = 0.26; diastolic −1 mmHg, p = 0.18). Interestingly, when only vitamin D-insufficient patients were analyzed, with 25(OH)D levels below 32 ng/mL (n = 92), systolic and diastolic BP levels in 24-h ABPM were significantly lowered (−4 mmHg, p = 0.05; −3 mmHg, p = 0.01) in the therapy group compared to placebo [32]. This effect in hypertensive and vitamin D-deficient patients has also been seen in a study by Forman et al., who performed an RCT in black Americans, who are known to be at a very high risk of both vitamin D deficiency and hypertension [33]. They included 283 participants who were allocated to either 1000, 2000 or 4000 IU of vitamin D or placebo over three months.
They were able to show that supplementation of vitamin D in an unselected population of blacks led to a reduction of systolic BP of 0.2 mmHg for each 1 ng/mL increase in 25(OH)D over three months (p = 0.02) [33]. These results indicate an effect of vitamin D on BP, particularly in hypertensive, vitamin D-insufficient/deficient patients, rather than in a normotensive population with normal serum vitamin D levels. This should be considered for the design of future RCTs.

Obesity

Obesity is closely associated with vitamin D deficiency [35]. It had been hypothesized that this may be due to vitamin D deposition in adipose tissue, resulting in lower circulating 25(OH)D levels in the blood [36]. Others hypothesized a causal relationship of vitamin D deficiency leading to obesity [37]. To address this question, Vimaleswaran et al. performed a bi-directional Mendelian Randomization study and showed a one-directional causal relationship, indicating that obesity leads to lower vitamin D levels and not the other way around [38]. In that investigation, they included 21 cohorts, comprising a total of 42,024 participants. They analyzed 12 established single nucleotide polymorphisms (SNPs) related to body mass index (BMI) and four typical vitamin D-related SNPs to perform this bi-directional Mendelian Randomization study. They showed that each unit increase in BMI was associated with a 1.15% decrease in 25(OH)D after adjustments for typical confounders. The authors concluded that obesity can be regarded as a causal risk factor for vitamin D deficiency, accounting for approximately one third of vitamin D deficiency [38]. On the other hand, genetically determined 25(OH)D levels were not significantly related to BMI. These findings suggest that the link between obesity and vitamin D deficiency is only driven by the fact that a higher BMI lowers 25(OH)D levels. By contrast, there seems to be no significant effect of vitamin D status on obesity. While these findings are important for our understanding of the causality regarding the association between vitamin D and obesity, there are several unanswered questions surrounding this topic related, e.g., to the bioavailability of vitamin D stored in adipose tissue.

Glucose Metabolism and Diabetes Mellitus Type 2

In observational and prospective studies, low vitamin D levels have largely been associated with disturbances in glucose metabolism, as well as a higher risk of developing diabetes in the future, although some authors have reported conflicting results [5,[39][40][41][42]. It should also be kept in mind that vitamin D deficiency in diabetic patients may partly be a consequence of reduced physical activity and consecutive obesity, as well as limited sun exposure. Therefore, residual confounding in observational studies due to the close link of obesity with both vitamin D deficiency and glucose intolerance cannot be ruled out with certainty [35,43]. On the other hand, we must also consider that reverse causality may exist, since there are data suggesting that an inflammatory insult might decrease 25(OH)D levels [44]. There are, however, several possible mechanisms that could explain the association of vitamin D deficiency with disturbances in glucose homeostasis and diabetes mellitus. Both the VDR and 1-α-hydroxylase are expressed in pancreatic beta cells, indicating a potential role of vitamin D in beta cell function [2,45].
It has also been hypothesized that calcium, which is crucial for insulin synthesis and secretion, could play a role, since it is mainly regulated by vitamin D [46]. Another possible pathway could be vitamin D-induced stimulation of osteocalcin, which may improve insulin sensitivity [47]. Randomized trials, on the other hand, have largely failed to show clear beneficial effects of vitamin D supplementation on improving glycaemia or insulin resistance [48,49]. Addressing this issue, Davidson et al. conducted an RCT in individuals with prediabetes and hypovitaminosis D [50]. Study participants were allocated to high-dose vitamin D therapy (mean weekly dose of 88,865 IU) vs. placebo [50]. No difference regarding plasma glucose parameters, insulin secretion and sensitivity or development of diabetes in the therapy group compared to placebo administration was found after one year [50]. Hence, although some preliminary data suggested a relevant effect of vitamin D on glucose homeostasis, the currently available literature on vitamin D does not support the notion that vitamin D supplementation is useful for the prevention and/or treatment of diabetes mellitus. Further RCTs are, however, urgently needed before drawing final conclusions on the relationship between vitamin D and diabetes.

Lipids

Some observational studies indicate an association of vitamin D deficiency with lower high density lipoprotein (HDL) and higher triglycerides, as well as higher apolipoprotein E levels [51,52]. In line with this, a large prospective evaluation of vitamin D levels and blood lipids showed a significant association of lower vitamin D levels with hypercholesterolemia [53]. However, it should be acknowledged that the results on vitamin D and blood lipids are inconsistent and could be confounded by the above-mentioned link of vitamin D and obesity [38]. Recent RCTs evaluating the effect of vitamin D supplementation on blood lipids yielded inconsistent findings, with the majority of studies reporting no significant effect on blood lipids when vitamin D supplementation was compared to placebo [33,[54][55][56]. These results complicate any conclusion about a causal relationship between vitamin D deficiency and an unfavorable lipid profile. Nevertheless, no final conclusion can be drawn since large, well-designed RCTs are still missing in this field. In addition, we should also consider that publication bias, i.e., unpublished results showing no effects of vitamin D, might be a problem.

Chronic Kidney Disease

Vitamin D levels in patients with chronic kidney disease (CKD) are significantly lower compared to the general population. For example, a high prevalence of vitamin D deficiency, with values below 20 ng/mL in more than 70% of patients, was seen in dialysis patients [57]. This may be due to the fact that these patients have reduced sun exposure owing to a higher prevalence of co-morbidities. Moreover, it has also been suggested that CKD patients have an impaired vitamin D synthesis in the skin. Several epidemiological studies have shown that lower 25(OH)D levels were associated with albuminuria and/or progression of renal failure. Moreover, vitamin D deficiency has been identified as an independent risk factor for higher mortality in patients suffering from CKD, which can mainly be attributed to cardiovascular deaths [58][59][60][61].
Apart from low 25(OH)D, low 1,25(OH)2D was also associated with higher mortality rates in most observational studies among CKD patients [62,63]. Particular attention is paid to vitamin D in the field of nephrology, because the classic and broadly known effect of vitamin D supplementation is the reduction of PTH levels. This is of high clinical relevance, since PTH itself is an independent cardiovascular risk factor [64], and secondary hyperparathyroidism is very common in CKD patients. This therapeutic effect of PTH lowering is achieved with both active (1,25(OH)2D) and natural (25(OH)D) vitamin D supplementation, although it is stronger when supplementing with 1,25(OH)2D [65]. Addressing the role of active vitamin D treatment in CKD, Duranton et al. conducted a systematic review and meta-analysis of seven prospective and seven retrospective observational studies in CKD patients treated with 1,25(OH)2D or different active vitamin D analogues. The authors found a significant reduction of all-cause mortality (RR 0.73; 95% CI 0.65-0.82) and even a 37% reduction of cardiovascular mortality (RR 0.63; 95% CI 0.44-0.92) in patients on active vitamin D treatment [66]. While these data suggest beneficial effects of vitamin D treatment in CKD patients, it must also be pointed out that no major vitamin D RCTs have evaluated hard clinical endpoints in CKD patients yet. However, meta-analyses of randomized trials performed in older study populations, a great part of which had impaired kidney function, showed a reduction of fractures and all-cause mortality with vitamin D supplementation [67,68].

Endothelial Dysfunction/Atherosclerosis

Since the VDR is also expressed in the vasculature, it is tempting to hypothesize that vitamin D might also protect against vascular diseases, including atherosclerosis and endothelial dysfunction [27]. According to experimental studies, some putative vasculoprotective actions of vitamin D may be mediated by increasing nitric oxide (NO) production, inhibiting macrophage-to-foam-cell formation or reducing the expression of adhesion molecules in endothelial cells [69][70][71]. This is in line with reports from cross-sectional observational studies, which showed that lower vitamin D levels are associated with endothelial dysfunction, as well as increased arterial stiffness [6,27]. Clinical data from RCTs addressing vitamin D effects on vascular diseases are sparse and revealed inconsistent results. Promising results were, however, published on vitamin D and endothelial function, with some, but not all, RCTs showing that vitamin D may improve endothelial function [72][73][74].

Observational Studies on Vitamin D and Cardiovascular Events

Already in 1981, Scragg found an inverse relationship of cardiovascular mortality and UVB radiation [8]. Since then, several, but not all, observational studies that have been published indicated that low vitamin D levels are associated with a higher incidence of cardiovascular events and mortality [10,66,[75][76][77]. Even asymptomatic coronary artery disease was associated with lower vitamin D levels in high-risk type 2 diabetic patients (adjusted odds ratio (OR) 2.9, 95% CI 1.02-7.66), as observed in a recent observational study [78]. Vitamin D deficiency has been associated with an increased risk of myocardial infarction (MI), and a significant inverse relationship between 25(OH)D levels and matrix-metalloproteinase-9 (MMP-9), a marker for myocardial remodeling after acute MI, has been documented [79,80].
Vitamin D levels also seem to predict the risk of adverse events after acute myocardial events and cardiac surgery, indicating higher risk for patients with lower vitamin D levels, as reported in recent publications [81,82]. Data from prospective observational studies suggest that low vitamin D levels are a risk factor for the occurrence of strokes [83][84][85]. In a meta-analysis of seven studies including 47,809 individuals and 926 cerebrovascular events, Chowdhury et al. showed that, after accounting for established cardiovascular risk factors, the risk for cerebrovascular disease was significantly lower in subjects with high 25(OH)D levels compared to those with insufficient vitamin D status [84]. Another meta-analysis reported similar results when comparing low versus high vitamin D levels, with an RR for strokes of 1.52 (95% CI 1.20-1.85) in the lowest versus the highest 25(OH)D group [83]. In the currently largest meta-analysis on circulating 25(OH)D levels and risk of CVD, Wang et al. showed an adjusted pooled RR of 1.52 (95% CI 1.30-1.77) for total CVD when comparing the lowest to the highest categories of baseline circulating 25(OH)D concentration [12]. The authors investigated 19 studies, including 65,994 patients and 6123 CVD cases. The increment in CVD risk across decreasing 25(OH)D levels was generally linear over the range of 25(OH)D levels from 20 to 60 nmol/L, with a marginally significant pooled RR of 1.03 per 25 nmol/L decrement of 25(OH)D [12]. It should, however, be noted that not all single studies reported a significant association between low 25(OH)D levels and increased risk of CVD [86]. When reviewing these above-mentioned meta-analyses, it has to be kept in mind that these observations could also be influenced by confounding factors, such as reduced mobility and physical activity in chronically ill patients, leading to reduced sunlight exposure and lower vitamin D levels. Other confounders, such as increased age or a higher rate of obesity, as well as PTH, renin, calcium and phosphorus, cannot be ruled out with certainty, although they are included as possible confounders in most trial analyses [25,87].

Vitamin D Supplementation and Cardiovascular Disease

There are only a few RCTs that have evaluated cardiovascular outcomes, and since all large previous RCTs were designed to study vitamin D effects on bone health, most of them were neither primarily designed nor statistically powered to assess vitamin D effects on CVD. Some small RCTs reported mixed results of vitamin D supplementation on cardiovascular events [33,88], although it bears mentioning that some meta-analyses of these RCTs have found non-significant trends for reduced CV events in patients receiving vitamin D supplementation compared to placebo [89,90]. Of note, vitamin D supplementation was very often combined with calcium intake, making it hard to interpret the RCT results, especially since calcium intake may be associated with increased cardiovascular risk, as suggested in a previous meta-analysis [91]. Since CVD is globally the leading cause of death, it is of special interest that, in a Cochrane review and meta-analysis of randomized controlled trials from 2011, Bjelakovic et al. showed that vitamin D supplementation leads to a moderate, but statistically significant, reduction of the total mortality rate (RR 0.94, 95% CI 0.91-0.98) compared to placebo [68]. The authors calculated that 161 individuals would have to be treated to prevent one additional death [68].
This result by Bjelakovic et al. was mainly derived from RCTs in older individuals and is in line with several observational studies suggesting that vitamin D deficiency is a risk factor for mortality, particularly in the aging population [92]. The findings by Bjelakovic should, however, be interpreted with caution, since competing risks in elderly people might have an impact on the results. In addition, it must also be underlined that the study by Sanders et al. showed that high-dose vitamin D supplementation caused an increased risk of falls and fractures [93].

Discussion and Future Outlook

Based on systematic reviews and meta-analyses of the currently available literature, it can be concluded that vitamin D deficiency is an independent cardiovascular risk factor that is associated with an increased risk of cardiovascular events. However, it is largely unclear whether these associations are of a causal nature. While it seems plausible that vitamin D deficiency can be considered a surrogate marker for poorer health status, most notably observed in patients with chronic diseases, including cardiovascular risk factors and CVD, it remains to be proven whether vitamin D itself can directly impact cardiovascular outcomes [94]. Several RCTs in the past failed to prove a causal relationship between vitamin D repletion and reduction of CV risk factors and CVD. This could hypothetically be attributed to small sample sizes or inappropriate study designs, since most trials were initially designed for clinical endpoints other than cardiovascular events [33,[88][89][90]. Of note, most trials have shown that beneficial effects of vitamin D supplementation are frequently identified in patients with very low 25(OH)D levels, and these patients seem to be at the highest risk for CVD [11,12,31,32,73,95]. On the other hand, it is not entirely clear whether vitamin D supplementation has significant beneficial effects in healthy populations as well, or is only meaningful in vitamin D-deficient, chronically ill patients. While existing data are insufficient to draw final conclusions on the effect of vitamin D supplementation on cardiovascular outcomes, several large interventional trials, designed to evaluate the effect of vitamin D supplementation on different CVD as primary endpoints in chronically ill, as well as general, populations have just started. These include the EVITA study by Zittermann et al. in 1000 heart failure patients in Germany, the Vitamin D Assessment Study (ViDA) in New Zealand among over 5000 older individuals conducted by Scragg et al. and the VITamin D and OmegA-3 TriaL (VITAL), evaluating cardiovascular and cancer mortality in over 20,000 older subjects in the USA without cancer or CVD at baseline [96][97][98]. The large sample sizes of these studies and the rather long intervention periods promise more definitive conclusions, especially regarding vitamin D effects on cardiovascular events and mortality in the general population. Results are expected in the next few years (2017-2020), but it remains open whether the findings of these RCTs will give all the final answers on the question of whether vitamin D is useful for the prevention and treatment of CVD.
It has to be kept in mind that patients included in, for example, the VITAL study are not screened for vitamin D deficiency prior to inclusion, and an additional amount of up to 800 IU of vitamin D supplementation is allowed in the placebo group [96]. This could lead to relatively high vitamin D levels in the placebo group, which could mask a possible beneficial effect of supplementation in individuals with very low vitamin D status [77]. Another possible problem with the VITAL study is the vitamin D food fortification in the US. Therefore, we can hope to gain new insights into the association of vitamin D and CVD, but cannot be certain of receiving definitive answers that would allow recommending vitamin D supplementation as a preventive or therapeutic measure in this context.

Conclusions

At present, we can conclude that vitamin D deficiency is an independent cardiovascular risk factor, but whether vitamin D supplementation can significantly improve cardiovascular outcomes is still largely unknown.
2014-10-01T00:00:00.000Z
2010-03-31T00:00:00.000
{ "year": 2013, "sha1": "ac998a2e53844234440a15436a9ac1f338a46c03", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/nu5083005", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ac998a2e53844234440a15436a9ac1f338a46c03", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
218502420
pes2o/s2orc
v3-fos-license
Secure Single-Server Nearly-Identical Image Deduplication

Cloud computing is often utilized for file storage. Clients of cloud storage services want to ensure the privacy of their data, and both clients and servers want to use as little storage as possible. Cross-user deduplication is one method to reduce the amount of storage a server uses. Deduplication and privacy are naturally conflicting goals, especially for nearly-identical ("fuzzy") deduplication, as some information about the data must be used to perform deduplication. Prior solutions thus utilize multiple servers, or only function for exact deduplication. In this paper, we present a single-server protocol for cross-user nearly-identical deduplication based on secure locality-sensitive hashing (SLSH). We formally define our ideal security, and rigorously prove our protocol secure against fully malicious, colluding adversaries with a proof by simulation. We show experimentally that the individual parts of the protocol are computationally feasible, and further discuss practical issues of security and efficiency.

I. INTRODUCTION

Cloud-based storage has become an increasingly popular solution for storing large amounts of data. Both users and providers of these systems have the common incentive to reduce the amount of storage and bandwidth these systems require. Users also have the incentive of privacy: they prefer for the provider and for other users to learn as little about their data as possible. The obvious solution to this problem is encryption: instead of uploading their files to a cloud server, users will instead upload an encryption of their file. Data encryption is necessary to protect against data breaches, which may cost cloud storage providers millions of dollars in damages and lost business [28]. As the amount of data stored by cloud storage providers increases, they will seek to mitigate their increasing costs from the extra storage. One technique to save storage and bandwidth is deduplication, where identical or similar pieces of data are detected, allowing servers to avoid storing redundant data. When identical or nearly-identical files are uploaded, the server will keep pointers to a single copy of data instead of storing redundant copies. There is a natural dissonance between deduplication and privacy. For accurate deduplication, some information about the file must be provided in order to test whether that file is similar to previously uploaded files. However, this provision of information defeats the purpose of encrypting files for data privacy, leading us to consider the question of to what extent a deduplication protocol can be both accurate and secure.

Figure 1. Example of Nearly-Identical Images [11]

Overview of Deduplication: Deduplication is the process of detecting identical or nearly-identical data for the purpose of conserving storage by storing unique data only. Deduplication can take place on entire files, or on individual blocks of files, but it has been noted that the distinction is not important when considering deduplication schemes [27]. Deduplication schemes can be classified as exact or nearly-identical. Exact deduplication works to determine if files are exact copies [26], [27], [29], [32], [33]. Nearly-identical deduplication works to detect highly similar files [8], [14], in addition to exactly identical files. However, this additional functionality requires more computation.
We consider similarity as it relates to human perception of nearly-identical images, as other similar works do [14] (e.g., Fig. 1). In a cloud storage system utilizing deduplication for saving storage, the deduplication can be carried out by the clients or the server. It is often preferable in high-trust scenarios for the server to carry out deduplication, to reduce the computational load on the clients. However, in situations where privacy is a concern, clients may not wish to provide the server with the necessary data to perform deduplication. Deduplication can be performed between data from multiple users or only across data from a single user. Only applying deduplication on a per-user basis is a simple answer to concerns of cross-user privacy, but cannot reduce storage in the event of multiple users storing the same file. Client-based secure nearly-identical deduplication is most useful in a scenario where clients' computation is plentiful, but their storage is limited. For example, ordinary smartphones can perform the computation needed to carry out nearly-identical deduplication when plugged in at night, and this deduplication can reduce the use of smartphones' limited storage. This is also applicable to use cases involving IoT devices. For instance, there has been recent interest in utilizing IoT devices to allow for the affordable deployment of biometric technology, but a major challenge in this scenario is building efficient systems in spite of the space constraints [12]. Secure deduplication can be used to decrease the need to store a large amount of redundant data on a server, which would alleviate practical space constraints when leveraging such technology in the wild.

Summary of contributions: (1) A review of related work in the area of deduplication; (2) Design and implementation of a nearly-identical deduplication scheme for images, with security against fully malicious, colluding adversaries and only utilizing a single untrusted server; (3) A proof of security of our scheme, with a discussion of practical issues; (4) Experiments with real-world datasets showing the feasibility of our protocol and implementations.

A. Exact Two-server Deduplication

In some schemes, hashing is used to protect data privacy. The scheme proposed by Wen et al. uses two servers to construct a system for exact deduplication [32]. A storage server will store both hashes and encryptions of users' images, while a verification server will store only the hashes. The redundancy of having both the storage and verification servers store image hashes protects the user in case one server behaves maliciously. Convergent encryption is used to ensure users can access deduplicated images. A scheme proposed by Yan et al. uses proxy re-encryption to share data between users who have attempted to upload identical data [33]. Similarly to the work of Wen et al., a verification server is used to store information needed for deduplication.

B. Exact One-Server Deduplication

The scheme of Rashid et al. similarly leverages hash values for image privacy, but with only one server [29]. Beyond the storage saved by using deduplication, this scheme achieves even better savings by compressing images. The compression takes a tree-like, hierarchical form, where the original image cannot be reliably reconstructed without the most significant information from the higher levels of the tree.
Thus, by only encrypting the most significant information from the compression of an image, the amount of encrypted data sent and stored can be reduced, saving bandwidth and storage. Liu et al. constructed a system that allows secure deduplication with only one central server [27]. A key building block of this protocol is user-based key sharing, which takes place through a subprotocol known as Password-Authenticated Key Exchange (PAKE) [2]. In this protocol, upon a file upload the server will compare a short hash (e.g., 13 bits) of the file to short hashes of previously uploaded files, and use this to construct a shortlist of users that may have previously uploaded identical files. Data privacy is preserved because many collisions (of different files) are intentionally created in this list. Additional computation by both the clients and the server allows the server to check whether a duplicate file exists. If it does, the server will return an encryption key of the file to the uploader. If not, then the server will accept the file as a unique one. The protocol is provably secure against malicious and colluding adversaries. This protocol also has the advantage of being generalized to any type of data, not just images or text. Many practical attacks are precluded by the use of server- or client-side rate limiting. The protocol does have room for improvement. Its utility is strictly limited to the scenario of exact deduplication, because PAKE requires exact equality of the parties' inputs for identical key exchange. From an efficiency viewpoint, the protocol requires up to six communication rounds per upload.

C. Nearly-identical Two-Server Deduplication

By using two servers, Li et al. are able to construct a system for secure nearly-identical image deduplication [14]. Their protocol uses one server for deduplication, which stores the perceptual hashes of users' images and performs the work of deduplication. The other server stores the users' encrypted images. A perceptual hashing method is used to perform image deduplication by mapping similar images to identical hashes. In this protocol, the deduplication server is only able to see perceptual hashes of the users' images, and the storage server sees only encryptions of those images, making this system effective for protecting users' privacy against other parties or external adversaries. However, the system has users share group keys among themselves, which requires that users know a priori who will be uploading similar images. Thus if two users in different groups upload identical or similar images, the storage server will store both. Later work extended this system with Proof of Ownership and Proof of Retrieval [3].

D. Proof of Ownership/Retrieval

Proof of Ownership/Retrieval (PoW and PoR) schemes aim to provably ensure a client's ownership of a file or their ability to recover a stored file from a server, respectively [5], [24]. Both of these concepts have been applied to deduplication, especially PoW [3], [4], [13], [17], [22], [34], [35]. PoW and PoR have even been applied to secure nearly-identical deduplication, though the scenario is much less adversarial than ours [3]. PoR is orthogonal to our work: though it could be applied with our scheme, it is not the focus of this work. We use PoW in our work for both deduplication and access control.
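Before moving to our system model, the short-hash shortlist idea in Liu et al.'s exact one-server scheme can be made concrete with a small sketch. Truncating a hash to a few bits (13 in their paper) deliberately creates collisions between distinct files, so the server's shortlist does not pinpoint any single file. The hashing details and names below are illustrative assumptions, not the scheme's actual construction.

```python
import hashlib
from collections import defaultdict

SHORT_BITS = 13  # short-hash length used by Liu et al.

def short_hash(data: bytes) -> int:
    """Truncate a cryptographic hash to SHORT_BITS bits: many distinct
    files intentionally collide, hiding which file was uploaded."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest[:2], "big") >> (16 - SHORT_BITS)

# Server-side index: short hash -> users who uploaded a file with that hash.
index = defaultdict(list)
index[short_hash(b"file-A")].append("user1")
index[short_hash(b"file-B")].append("user2")

# On a new upload, the server only learns a 13-bit value and returns a
# shortlist of candidate prior uploaders (with many false positives).
candidates = index[short_hash(b"file-A")]
print(candidates)  # contains "user1", plus any colliding uploads
```

With only 2^13 buckets, a large corpus guarantees many unrelated files per bucket, which is exactly the intentional ambiguity the scheme relies on; the subsequent PAKE interaction then resolves true duplicates without revealing anything about non-duplicates.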
A. System Model

We consider the scenario where the parties consist of an arbitrary number of users and a single cloud storage server. The users wish to use the server to securely store images, but without allowing the server to learn the content of their images, or other users being able to determine the content of their images, unless both users have an identical or nearly-identical image. All parties have the shared goals of wishing to conserve storage while also keeping their own information secure. We thus only consider the case where the server stores encrypted images.

B. Adversary Model and Goals

We consider the (very challenging) case of fully malicious, colluding adversaries. The server, any of the clients, or any collusion thereof may take any action. The adversaries' goal in this scenario is to gain some semantically useful information about an innocent user's data that they do not already possess. They may also choose to take actions that may abort the correct execution of the system (e.g., refusing to reply, sending junk data). This type of behavior is a practical issue, and does not compromise the privacy of innocent users' data.

C. Security Model

We define the ideal functionality δ of secure nearly-identical deduplication over encrypted data in Fig. 2. This functionality is ideal in the sense that an ideal, fully trusted 'system' takes the input and returns the output, without disclosing any information to any participant. This functionality characterizes the views of adversaries in an ideal world where the whole process is delegated to an ideal 'system'. Our protocol (Section IV) will be designed such that the adversaries' views during its execution in the real world are computationally indistinguishable from the adversaries' views in the ideal world.

Figure 2. Ideal Functionality δ
System Inputs:
• The uploading user U'_i has an image I'_i; each preexisting user U'_j has an image I'_j.
• The server S' has encryptions of the images I'_j under symmetric keys k'_j.
System Outputs:
• If I'_i is similar to some I'_j, U'_i gets k'_j as well as the encrypted I'_j, and S', U'_i and U'_j may learn i and j.
• Otherwise, U'_i gets a new symmetric key k'_i, and S' gets an encryption of I'_i under k'_i.

The three types of participants are the storage server S, the user U_i attempting to upload an image, and preexisting users U_j who have already uploaded a file. A protocol implementing δ is considered secure if it implements δ and leaks negligible information about U_i's or U_j's images and keys, and S only knows whether a PAKE transaction has been initiated between two parties or not. An adversary A may compromise any one of S, U_i, or U_j. We highlight that, in δ, U'_j learns nothing about I'_i if I'_i and I'_j are not similar, and the server S' learns only some data relevant to I'_i that cannot be used to reconstruct I'_i except with negligible probability (e.g., hashes of an image or of its feature vector). We follow the approach of [20] to formalize this intuition in the following definition:

Definition 1. Let Γ and δ be the real and ideal functionalities, respectively. Protocol Γ is said to securely compute δ in the presence of fully malicious adversaries with abort if for every non-uniform probabilistic polynomial-time adversary A for the real model there exists a non-uniform probabilistic polynomial-time adversary S for the ideal model such that for every input x, x' ∈ {0, 1}* with |x| = |x'|, security parameter κ, every auxiliary input z ∈ {0, 1}*, and locality-sensitive hashes h_{p_z}(x), h_{p_z}(x'), the views generated by {IDEAL_{δ,S(z)}(x', k, h_{p_z}(x'))} and {REAL_{Γ,A(z)}(x, k, h_{p_z}(x))} are computationally indistinguishable w.r.t. κ. If users fail to respond during the protocol, the protocol aborts.

Here, the inputs x, x' are the images uploaded by users, the security parameter κ is the number of bits of security, the auxiliary input z contains details about the implementation of the protocol (e.g., the cyclic group used in PAKE, which hash functions to use), and the locality-sensitive hashes h_{p_z}(x), h_{p_z}(x') are the hashes of the two images x, x', which indicate the similarity scores of the images.
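For readers less familiar with simulation-based security, the indistinguishability requirement in Definition 1 can be unpacked in the standard way. The following formulation is a textbook restatement, not taken verbatim from the paper:

```latex
% Computational indistinguishability of the real and ideal views:
% for every non-uniform PPT distinguisher D there is a negligible
% function negl such that, for all inputs and auxiliary data as in
% Definition 1,
\[
\Bigl|\,
\Pr\bigl[D\bigl(\mathrm{IDEAL}_{\delta,\mathcal{S}(z)}(x', k, h_{p_z}(x'))\bigr)=1\bigr]
-
\Pr\bigl[D\bigl(\mathrm{REAL}_{\Gamma,\mathcal{A}(z)}(x, k, h_{p_z}(x))\bigr)=1\bigr]
\Bigr|
\le \mathrm{negl}(\kappa).
\]
```

In words: anything an adversary can compute from its view of a real protocol run could also have been computed from the ideal-world view, so the real protocol leaks nothing beyond what δ itself reveals.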
Note that some practical attacks are not prevented by this ideal functionality, most notably that adversaries are able to learn in a quantifiable way how similar their uploaded images are to those of another user. This is a common problem in secure deduplication schemes, and to the best of our knowledge there is no consensus in the community on how to address this concern [8], [16]. We address some of these attacks with practical safeguards discussed in Section VI.

A. Preliminaries

Definition 2. A locality-sensitive hash scheme is a distribution on a family F of hash functions operating on a collection of objects K, such that for two objects x, y ∈ K, Pr_{h_p ∈ F}[h_p(x) = h_p(y)] = sim(x, y) for a hash parameter p, where sim(x, y) ∈ [0, 1] is some similarity function defined on K.

Intuitively, a locality-sensitive hash scheme hashes similar objects to the same value. However, this definition makes no statements about the security of the function. In particular, the definition does not imply preimage resistance, meaning that an adversary may be able to reverse the locality-sensitive hash to find the original input.

Definition 3. A secure locality-sensitive hash (SLSH) is a locality-sensitive hash function h that has the property of preimage resistance: for any input x and a polynomially bounded number of parameters p_1, ..., p_t, it is computationally intractable to find x given only h_{p_1}(x), ..., h_{p_t}(x) and p_1, ..., p_t.

A SLSH can be constructed from the standard assumption of the existence of cryptographic hash functions [25].

Construction 1. A SLSH can be constructed as the composition H ∘ LSH_p of a locality-sensitive hash function LSH_p(x) and a cryptographic hash H(x), i.e., SLSH(x) := H(LSH_p(x)).

Cryptographic hash functions are one-way functions, with the property of preimage resistance. Using a cryptographic hash to construct a SLSH gives it the property of preimage resistance, which is desirable for our application.

Definition 4. A password-authenticated key exchange (PAKE [2]) is a functionality where two parties P_1 and P_2 each input a password pw_1 and pw_2 and receive as respective outputs keys k_1 and k_2. If pw_1 = pw_2, then k_1 = k_2; otherwise P_1 and P_2 cannot distinguish k_1 and k_2, respectively, from a random string of the same length.

B. Protocol Description

Our protocol Γ is shown in Fig. 3, where the parties consist of a single central server S and N users U_1, ..., U_N. The server maintains t hash tables HT_1, ..., HT_t used in deduplication, and makes the parameters of each table public. When a user U_i wishes to upload an image I_i, they will first calculate a feature vector of the image, V_{I_i}, and then find t SLSHes H_1, ..., H_t according to the server's hash parameters. After this client-side calculation, the uploading user U_i will then send the SLSHes for its image I_i to S. The server then constructs a shortlist of possibly similar images by checking the received hash values against the hashes in the tables HT_1, ..., HT_t and noting any collisions.
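A minimal sketch of these building blocks is given below: a random-hyperplane LSH over feature vectors, the SLSH composition of Construction 1 with SHA-256 (the cryptographic hash named later in our implementation notes), and the server-side collision counting across the t tables. The parameter choices (vector dimension, bits per hash, t, and the threshold c) are illustrative, not the values used in the paper.

```python
import hashlib
from collections import Counter, defaultdict

import numpy as np

DIM, BITS, T, C = 512, 16, 8, 3  # illustrative parameters

# One random-hyperplane LSH parameter set per table: a (BITS x DIM) matrix.
rng = np.random.default_rng(0)
params = [rng.standard_normal((BITS, DIM)) for _ in range(T)]

def lsh(planes: np.ndarray, v: np.ndarray) -> str:
    """Random-hyperplane LSH: one bit per plane, set by the sign of the projection."""
    return "".join("1" if d >= 0 else "0" for d in planes @ v)

def slsh(planes: np.ndarray, v: np.ndarray) -> str:
    """Construction 1: compose the LSH with a cryptographic hash (SHA-256)."""
    return hashlib.sha256(lsh(planes, v).encode()).hexdigest()

# Server state: one hash table per parameter set, mapping SLSH -> filenames.
tables = [defaultdict(list) for _ in range(T)]

def server_index(filename: str, hashes: list) -> None:
    for table, h in zip(tables, hashes):
        table[h].append(filename)

def server_find_similar(hashes: list):
    """Shortlist by collision counting; report an image with >= C collisions."""
    counts = Counter()
    for table, h in zip(tables, hashes):
        for name in table.get(h, []):
            counts[name] += 1
    best = counts.most_common(1)
    return best[0][0] if best and best[0][1] >= C else None

# Client side: hash a feature vector under all T parameter sets, then upload.
v1 = rng.standard_normal(DIM)
server_index("img_001", [slsh(p, v1) for p in params])

v2 = v1 + 0.01 * rng.standard_normal(DIM)  # a nearly-identical image's features
print(server_find_similar([slsh(p, v2) for p in params]))  # likely "img_001"
```

Because the cryptographic hash destroys all structure in the LSH output, two SLSHes either collide exactly (the LSH bit strings matched) or look unrelated; the multiple tables compensate for the chance that a few bits flip for genuinely similar vectors.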
Images I_i, I_j whose SLSHes collide will have similar feature vectors (i.e. V_{I_i} ≈ V_{I_j}), and are therefore similar. Thus the server can identify any image with at least c (a scenario-dependent parameter) hash collisions as a similar image. If no other image is found to be similar to the new image I_i, then the server indexes each hash value H_x, x ∈ [1, t], into table HT_x. It then allows the uploading user to upload an encryption ENC_k(I_i) of its image (with the encryption key k being unique to I_i), which S then stores. If the image being uploaded, I_i, is found similar to a stored image I_j (so that V_{I_i} ≈ V_{I_j}), then the server directs the original owner U_j and the new uploader U_i to distribute the image's encryption key to U_i through PAKE, and allows the new uploader to access the (encrypted) original image. (In case of multiple possible similar images, any of the nearly-identical images can be chosen as the similar one, though a salient choice would be the image with the most collisions.)

Feature Extraction: For feature extraction we use the ResNet neural network architecture. Compared to other similar architectures for image feature extraction (e.g. the VGG and AlexNet architectures used by the system of Pinterest [23]), ResNet can achieve higher accuracy with less computation, making it an attractive choice for accuracy and efficiency [21].

Dimensionality Reduction: We use a well-known method of locality-sensitive hashing based on random planes for dimensionality reduction [7], [19]. To construct a SLSH from the LSH, we compose the locality-sensitive hash with a cryptographic hash function (SHA-256 in our implementation).

Nearest-Neighbor Search: The nearest-neighbor search is made easy by the SLSHing carried out previously. We use multiple hash tables to be robust against the small possibility that similar items might differ in parts of their locality-sensitive hash (leading to a potentially wildly different SLSH value). Items hashed to the same hash buckets will be similar, so we can simply choose the item with the most hash collisions (above a minimal threshold) as a similar image.

Access Control: For post-deduplication image sharing, we use the PAKE method [2]. After two users are notified by the server to share keys, they first mutually agree upon a new set of SLSH parameters. They then calculate SLSHes of the feature vectors of their images, and perform PAKE-based key sharing with those hashes as input. When the users' images are similar, the SLSHes used as input will be equal with high probability, and the users will receive identical keys. The key received by the holder of the original image is used to symmetrically encrypt the original image's encryption key. That encryption is then sent to the uploading user. If the keys received from PAKE are identical, then the uploading user will later be able to decrypt the encryption of the original image that the server stores. If the users' images are not similar, then the SLSHes of their feature vectors will be different (with high probability), and decryption of the encrypted encryption key will fail, because the PAKE protocol returns different keys to participants with differing inputs.

C. Advantages of our Protocol

Our system uses a single untrusted server, with the user performing feature extraction and dimensionality reduction before sending hash values to the server. The users also do the work in rehashing and PAKE required for access control (sketched below).
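A minimal sketch of this client-side key-sharing step follows. The `run_pake` callable is a hypothetical stand-in for a full two-party PAKE session (e.g. SPAKE2); the parameter handling is likewise simplified for illustration.

```python
import hashlib
import numpy as np

def slsh_under(params, feature_vec):
    """SLSH of a feature vector under freshly agreed hyperplane parameters,
    as in the Access Control step above."""
    bits = "".join("1" if d > 0 else "0"
                   for d in np.asarray(params) @ np.asarray(feature_vec))
    return hashlib.sha256(bits.encode()).hexdigest()

def derive_shared_key(run_pake, feature_vec, params_i, params_j):
    """Each party feeds SLSHes of its *own* feature vector into two PAKE
    sessions and concatenates the session keys: k = k^i || k^j.  If the two
    parties' images are similar, their SLSHes (and thus keys) agree with
    high probability; otherwise the keys are independent random strings."""
    k1 = run_pake(slsh_under(params_i, feature_vec))  # first PAKE session
    k2 = run_pake(slsh_under(params_j, feature_vec))  # second PAKE session
    return k1 + k2
```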
This arrangement allows for a very high degree of security and utility in a highly adversarial setting. While more computation must be done on the user's side, this is not prohibitively expensive.

V. PROOF OF SECURITY

Theorem 1. Γ securely computes the ideal functionality δ in the presence of fully malicious, colluding adversaries with abort if PAKE is secure against fully malicious adversaries, the encryption scheme used is also secure, and cryptographic hash functions exist. If users fail to respond during the PAKE protocol, the protocol aborts.

Figure 3. Our Protocol Γ

Definitions: Let S be the central server, and U_1, …, U_N be users of the server. The server has t hash tables HT_1, …, HT_t with t sets of public parameters. The algorithms GEN, ENC, DEC are the key-generation, encryption, and decryption algorithms of a symmetric-key encryption scheme. {H_p} is a family of SLSH algorithms, whose elements are parameterized by a set of parameters p.

Client-Side Computation:
1) User U_i wishes to upload an image I_i and calculates a feature vector of the image, V_{I_i}. The user then calculates t SLSHes H_1 = H_{p_1}(V_{I_i}), …, H_t = H_{p_t}(V_{I_i}) of V_{I_i} according to the server's hash parameters p_1, …, p_t.
2) U_i will then send the hashes H_x, x ∈ [1, t], for its image I_i to S.

Server-Side Deduplication:
1) S will then compare H_x with values already in its hash tables. For x ∈ [1, t], the server will check H_x against the values (filenames of previously indexed images) stored in HT_x at H_x, and add any values found to a shortlist, counting how many times each value has been found in the tables. Once this has been completed, the server can conclude that the image I_i is similar to another image I_{j≠i} owned by a user U_j if its hashes have at least c collisions with the hashes from I_j.
2) If no other image is found to be similar to the new image, then the server indexes the filename of I_i in its tables HT_1, …, HT_t at the locations H_1, …, H_t, and allows the uploading user to upload an encryption ENC_k(I_i) of its image, which S then stores.
3) If the image being uploaded is found similar to another stored image (i.e. V_{I_i} ≈ V_{I_j}), then the server directs the original owner U_j and the new uploader U_i to share the encryption key of I_j through PAKE, and allows the new uploader to access the encryption of the original image.

Client-Based Access Control:
1) After being so directed by S, U_i and U_j choose and share fresh sets of SLSH parameters p_i, p_j respectively. They then calculate the hashes H^i_i = H_{p_i}(V_{I_i}) and H^i_j = H_{p_i}(V_{I_j}) of their feature vectors V_{I_i} and V_{I_j} according to p_i, and similarly calculate the hashes H^j_i and H^j_j according to p_j.
2) U_i and U_j perform the PAKE protocol twice, using H^i_i and H^i_j respectively as input to the first session and H^j_i and H^j_j respectively as input to the second session. They receive back keys k^i_i and k^i_j respectively from the first session, and keys k^j_i and k^j_j respectively from the second session. They then concatenate their keys to form k_i = k^i_i ∥ k^j_i and k_j = k^i_j ∥ k^j_j.
3) If the users' images I_i and I_j are similar, then their feature vectors will be similar, and with high probability will be hashed to the same value under a SLSH. Then k_i = k_j, and decryption succeeds, allowing U_i to recover k.
4) If the images I_i and I_j are not similar, then U_i cannot recover k and will not be able to decrypt I_j.
Proof: We will show that the execution of the protocol Γ in the real world is computationally indistinguishable from the execution of the ideal functionality δ. This proof is inspired by that of [27]. The simulator SIM can both access δ in the ideal model and obtain messages that the corrupt parties would send in the real model. SIM generates a message transcript of the ideal model execution δ that is computationally indistinguishable from that of the real model execution Γ. To simplify the proof we assume that the PAKE protocol is implemented as an oracle to which the parties send inputs. Our proof assumes that parties may send dishonestly constructed messages, and does not consider a party choosing not to send a message. Note that if any party refuses to respond or sends junk data, the honest parties can abort the protocol at that point, allowing us to achieve security with abort.

A corrupt uploader CU: We first assume that S and U_j are honest and construct a simulator for CU. The simulator records CU's SLSHes of the form H_p(V_CU). After receiving a message MSG_{CU,U_j} from S indicating that CU and a user U_j have similar images, it records the calls that CU makes to the PAKE protocol with U_j. Conversely, if no existing image stored on S is similar to I_CU for all other users U_j, this implies there will be no further communication between CU and any other user. If CU uses a value H(V_{I_CU}) in that call that appears in a hash table HT_x, the simulator invokes δ with the image I_j that corresponds to the hash H(V_{I_CU}). In this case, CU will receive a key k_{I_CU}. If an image I_j similar to I_CU has been uploaded by any U_j, then k_{I_j} = k_{I_CU} is the key corresponding to that image.

We now show that the views in Γ and δ are identically distributed. If I_CU already exists in the server's storage and CU behaves honestly, then V_{I_CU} ≈ V_{I_{U_j}} and thus k_CU = k_{U_j}. If I_CU does not already exist in the server's storage, then anything encrypted under k_CU will be indistinguishable from random by the security of the symmetric encryption scheme. Thus, ENC_{k_CU}(I_CU) will be indistinguishable by S from a random value.

Now if CU deviates from the protocol, then the only action it can take, apart from changing its input hash, is to replace its encryption of the image corresponding to V_{I_CU} with an encryption of a different image or of random data, which it then sends to S. The result of both types of malicious behavior is that CU sends S hashes H_1, …, H_t that are not correct SLSHes corresponding to the uploaded data ENC_{k_CU}(I_CU). In this case, there are two possibilities: either the server will incorrectly fail to identify I_CU as being similar to a stored image when it should, or the server will incorrectly identify I_CU as being similar to some other image.

In the first case, upon initial upload, as no images similar to I_CU are identified, CU does not exchange keys with any other user prior to upload, and learns nothing about another user's image. However, another user U_j later uploading ENC_{k_{I_j}}(I_j) may then have their image identified by S as being similar to I_CU. In this case, the users will then make calls to the PAKE protocol. CU cannot learn anything more than what is described in the security definition about I_j from either ENC_{k_{I_j}}(I_j) or from SLSHes of V_{I_j}. For CU to learn anything about I_j, it needs to recover k_{I_j}. However, without having I_j or a highly similar image a priori, CU cannot correctly calculate new SLSHes, and thus cannot receive k_{I_j} through PAKE.
Thus in the first case, CU cannot learn anything more than what is described in the security definition about I_j. In the second case, because an image in the server's storage is similar to the new image, CU will begin the PAKE protocol with the user U_j who owns the similar image. CU may or may not have honestly generated H_1, …, H_t from an image I′_CU. If this was not the case, then as above CU cannot recover k_{I_j}, and cannot learn anything more than what is described in the security definition about I_j. On the other hand, if CU generated H_1, …, H_t honestly from I′_CU, then I′_CU ≈ I_j, and CU is able to correctly generate new locality-sensitive hashes for its PAKE sessions with U_j. In this case, CU can recover k_{I_j}, allowing it to download and decrypt ENC_{k_{I_j}}(I_j), recovering I_j. However, because I′_CU ≈ I_j, this does not violate the ideal functionality or the security definition.

We assume that CU sends q messages m_1, …, m_q during its execution (hashes, etc.), and replaces y of these messages. In the real model Γ, the execution will change if there is an index j such that the message m_j in Γ (which corresponds to the same m′_j in δ) is replaced by CU. As a result, CU will change the execution even though it inputs a modified encrypted image or hash. The probability of this event is y/q, but it will be detected with high probability. However, in δ, the same result will occur in the event that a replaced element is chosen by the simulator. The probability of this event occurring is also y/q by the security of PAKE. Thus, we conclude that the views of Γ and δ are identically distributed.

A corrupt previous uploader CP: Here, we say that CP has previously been honest in uploading its hashes and encrypted image to the server. CP will learn from this execution if H_{p′}(V_{I_i}) = H_{p′}(V_{I_CP}) for p′ ∈ {p_i, p_j}. The simulator SIM will receive CP's input H_{p′}(V_{I_CP}), but since CP has previously uploaded ENC_{k_{I_CP}}(I_CP), it only needs to recover the key corresponding to k_CP. The simulator SIM first checks whether the hashes H_1, …, H_t of V_{I_i} match the hashes of I_CP in S's hash tables. If not, CP is not identified as having an image similar to I_i, and will take no action. Otherwise, SIM observes CP's inputs H_{p′}(V_{I_CP}) to the PAKE protocol, the new key k_i that U_i gains from PAKE, and the message ENC_{k_CP}(k_{I_CP}). Then CP and U_i exchange the information necessary to run the PAKE protocol as a black box. The simulator checks whether H(V_{I_CP}) = H(V_{I_i}). If so, it extracts and sends k_CP to U_i.

To show that the simulation is accurate, note that if CP behaves honestly, then δ and Γ are obviously indistinguishable. CP can only deviate from the protocol in two ways. First, it can deviate from the PAKE protocol in a way that forces PAKE to abort, or by providing incorrect input so that the symmetric keys from PAKE do not match. In either case, though the adversary has managed to prevent the successful operation of the protocol, it has not learned any new information about other parties' images, due to the security of PAKE. Second, it can abide honestly by the PAKE protocol, but send an incorrect key that U_i then cannot use to successfully recover I_CP. Again, CP does not learn any new information about another party's image, and we can safely abort if necessary. We conclude that the views of δ and Γ are identically distributed.

A corrupt server CS: The simulator will first act as a user U_i with image I_i, and send hash values H_1, …, H_t to CS.
The server CS will query those values against its tables HT_1, …, HT_t, and either find that there is a user U_j with a similar image, or that no similar image has been stored with the server. If the server behaves honestly in the second case or dishonestly in the first, then the server will accept the upload of ENC_{k_{I_i}}(I_i). By the security of the symmetric encryption and the security of the SLSH, the server cannot learn any new information beyond what is described in the security definition about I_i from H_1, …, H_t and ENC_{k_{I_i}}(I_i). The server can also behave maliciously by telling U_i that they have uploaded an image I_i similar to an image I_j previously uploaded by U_j, and directing them to perform PAKE to share keys. If this happens, then the users U_i and U_j will with overwhelming probability choose different passwords in their PAKE protocol, and will thus be unable to share encryption keys. Thus when U_i and U_j have different images, neither can learn anything more than what is described in the security definition about the other's image, even when CS behaves dishonestly. Suppose the server has m other images that it can choose to identify as similar to I_i. Deduplication fails if the owner U_j and their image I_j are not chosen correctly by CS, which happens in both the real and the ideal model with the same probability r/m, where r is the number of dissimilar images. In both cases, U_i and/or U_j will be able to detect this behaviour with high probability. Thus δ and Γ are identically distributed.

Colluding corrupt server CS and corrupt previous uploader CP: When the honest user U_i uploads a new image, CS can either behave honestly or maliciously. If CS behaves honestly, then this reduces to the above case of a single corrupt previous uploader. If CS does not, then it can take only one action not already enumerated in the above case of a single malicious server. The server can falsely claim that I_i is similar to an image I_CP owned by CP, and direct them to exchange keys. Then this reduces to the case of a single corrupt previous uploader.

Colluding corrupt server CS and corrupt uploader CU: Similarly, the only dishonest action the collusion of CS and CU can take that differs from already-enumerated cases is for CS to falsely tell an innocent previous uploader U_j that CU has attempted to upload an image similar to the image I_j stored by CS. Then this also reduces to the case of a single corrupt uploader.

A. Inference and Anonymity

The protocol Γ does not allow participants to learn anything more than what is described in the security definition about images unless they possess a similar image a priori. However, an inference attack is trivial to mount: a user can easily learn whether another user has uploaded an image by simply requesting to upload that image to the server. This attack can be prevented by making all connections anonymous, which can be accomplished through onion routing [30]. When the server notifies two users to share encryption keys, it then also gives them a one-time-use token pair that the users can use to authenticate themselves to one another without revealing their identities. A common assumption in image deduplication is that the server must be able to know which users own which images in order to identify duplicates between different users, so we follow the precedent set by [14], [16], [27], and assume brute-force server inference attacks to be outside our threat model.
B. Adding Images

The server's hash tables can only hold up to 2^h entries each, where h is the size in bits of the result of the SLSH. When taking into account the desire to avoid collisions due to load, the practical upper bound is even lower. In other applications, the server could rehash its elements into a larger table when the number of elements it stores approaches that threshold. However, because the server cannot generate SLSHes (it does not have the original image or feature vector), it would have to ask the users to generate new hashes. This is computationally costly to the users, and is thus not a desirable approach. Instead, the server can initialize a new set of hash tables HT′_1, …, HT′_t with a new set of parameters p′_1, …, p′_t. Users uploading will henceforth provide the server two sets of hashes of their images' feature vectors: one set for the parameters p_1, …, p_t of the original hash tables HT_1, …, HT_t, and one set for the parameters p′_1, …, p′_t of the new hash tables HT′_1, …, HT′_t. Newly uploaded images are queried against all the hash tables, but only stored (if not deduplicated) in the new set. While this doubles the amount of computation the users must perform, these calculations are still only performed once, at image upload. Further, this strategy allows the server to store images beyond the original capacity of HT_1, …, HT_t without violating user privacy. This scenario should be rare as long as h is chosen to be sufficiently large, so that tables are not filled quickly and adding tables occurs only rarely.

C. Sharing the Load

Our system offers a high degree of privacy and functionality to its users, at the cost of extra computation. One of these costs is the PAKE-based key exchange that users must perform. The original owners of images that are "popular" (frequently selected for deduplication) bear a disproportionate part of this load. A server can attempt to prevent this unfair situation by not always selecting the image's original owner to perform key exchange with new uploaders, but by instead selecting from all users who already have access, thereby distributing the load fairly. In this way, the ability of the server to infer which parties have uploaded similar images actually becomes an advantage for ensuring fairness among users.

D. Brute-Force Attacks

In our protocol, both servers and clients can carry out brute-force attacks by repeatedly querying images against the server's storage to see if another client has stored a similar image with the server. Such an attack from the server cannot be theoretically prevented without introducing more assumptions (e.g. an extra server [31]). The practical approach of rate-limiting user queries can prevent such attacks from users [14].

E. Leveraging Trusted Hardware

In order to prevent an adversary from conducting a brute-force attack, this protocol could be modified to utilize secure hardware to prevent a server from guessing how similar two images are by observing the number of hashes that match. For instance, using Intel SGX [10], we could define a function that computes the similarity score in a secure enclave and only outputs a binary value to indicate whether or not the similarity score is above a threshold (sketched below). This would prevent a malicious user from learning extra information regarding how exactly similar their image is to another user's, but would make the protocol hardware-dependent. Remote attestation can securely verify that the server is running authenticated code that has not been tampered with.
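A minimal sketch of such an enclave-side gate follows. This is illustrative logic only, assuming the hash comparison has been moved inside the enclave; it is not actual SGX enclave code, and the function name is hypothetical.

```python
def enclave_similarity_gate(hashes_a, hashes_b, c):
    """Runs inside the enclave (sketch): reveal only whether the number of
    per-table SLSH matches reaches the threshold c, never the raw count."""
    matches = sum(1 for x, y in zip(hashes_a, hashes_b) if x == y)
    return matches >= c
```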
A. Testing Implementation

We implemented and tested feature extraction, dimensionality reduction, nearest-neighbor searching, and the SPAKE2 subprotocol [2]. Our implementation of SPAKE2 is in C++, and uses GMP for algebraic operations [18]. The other tests are written in Python, and make use of the OpenCV library for image processing [6]. Keras and Tensorflow [1], [9] are used for feature extraction, and a modified version of lshash incorporating the SHA256 cryptographic hash was used for dimensionality reduction and nearest-neighbor search [36]. To measure realistic performance, our tests were run on a server node belonging to a cluster in active use by a university (Intel Xeon CPUs, 128 GB of RAM, and a GTX 1080Ti). The nodes were not used exclusively by us, and our tests were run in an environment similar to servers under high load. This may have introduced extra latency and variance in our results. We used training data from the standard image datasets featured in the Visual Decathlon Challenge [15]. We have omitted results from the Imagenet dataset from our graphs for readability, though those results were also considered in drawing our conclusions. The number of images in each dataset is given in Table I.

B. Efficiency

Feature Extraction: The time to extract features for each database using ResNet50 is shown in Fig. 4. We ran 10 trials on each dataset (with the exception of Imagenet, which was tested 5 times). Our results show that feature extraction on a single image takes about 33 ms on average. This computational overhead for an image upload is a manageable amount for a client.

Dimensionality Reduction: The time to index a database of images is shown in Fig. 5. We performed 10 trials on each dataset (Imagenet was tested only 5 times). Our results show that indexing with 6 hash tables and a locality-sensitive hash size of 24 bits takes about 39 ms per image on average, taking hash calculation into account. These parameters were chosen to strike a balance between efficiency and accuracy. The computation time for a client is even less, as a client only needs to calculate the hashes and does not have to index the values into multiple hash tables. In Tables II and III we show the time needed to calculate a client's hashes when varying the number of tables and the hash size, with data from 4000 trials in each case. In particular, calculating a 24-bit hash for 6 tables takes 0.93 ms on average. As expected, the runtime for hash calculations increases linearly with both the number of tables and the hash size. From our experiments we can thus conclude that both client-side hashing and server-side indexing are feasible and scalable.

Nearest-Neighbor Searching: We tested the time for querying a small constant number (100) of images against each database, using 10 trials. We used the same index specifications as above. The resulting runtimes are shown in Table I, which includes both the average time per image query and the average query time divided by database size. We can conclude that the average time to query per image may be as little as 2.13 ms. The average time per query across all tested datasets was about 55 ms. Interestingly, we note that the average time for a query does not increase as the size of the previously indexed dataset does, and may even decrease. A possible explanation is that cache/memory coherency yields greater benefits for queries over larger databases. From this, we conclude that querying is computationally feasible and also scalable.
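For concreteness, the multi-table index and collision-count query we benchmarked can be sketched as follows. This is a simplified stand-in for our lshash-based implementation (in our tests, six tables of 24-bit hashes); class and method names are illustrative.

```python
from collections import Counter, defaultdict

class MultiTableIndex:
    """t independent SLSH tables; a stored image counts as similar when it
    collides with the query in at least c of the t tables."""
    def __init__(self, slsh_funcs, c):
        self.slsh_funcs = slsh_funcs                        # one SLSH per table
        self.tables = [defaultdict(list) for _ in slsh_funcs]
        self.c = c

    def index(self, name, feature_vec):
        for table, slsh in zip(self.tables, self.slsh_funcs):
            table[slsh(feature_vec)].append(name)

    def query(self, feature_vec):
        counts = Counter()
        for table, slsh in zip(self.tables, self.slsh_funcs):
            counts.update(table.get(slsh(feature_vec), []))
        best = counts.most_common(1)                        # most collisions
        return best[0][0] if best and best[0][1] >= self.c else None
```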
Access Control: Our implementation of PAKE was tested over cyclic groups with prime orders of 1024, 2048, 4096, and 8192 bits. Each group was tested with 1000 trials. Even for group sizes of 8192 bits, the user computation averaged below 170 µs. The time to perform user computation is therefore not a bottleneck.

C. Distortions' Impact on Our Deduplication

We tested the propensity of our nearly-identical deduplication scheme to identify images as similar after small distortions are applied. We randomly chose a subset of Imagenet, and applied gradually increasing distortions to images in that subset. We then ran queries with those images and observed how many hash tables recorded a match with the original image. The results are shown in Figures 6(a)-(h). In these graphs, the proportion of queries with some number of matches is shown as a function of the number of matches and the severity of the distortion. For example, in Figure 6(a), the figure shifts from yellow to blue as query images become more blurry, and there is a visible trend of the number of matches decreasing as the distortion increases. These results show that our system is able to accurately identify (with c = ⌈(t+1)/2⌉, i.e. hash collisions in more than half of the tables) similar images with small changes from blurring, brightening, enlargement, saturation, and sharpening. The system was not able to reliably detect nearly-identical images with distortions of solarization or salt-and-pepper noise, and performed somewhat poorly with Gaussian noise; this is logical, as those types of distortions affect the features more. Shrinking the image also resulted in poor performance, which makes sense, as shrinking an image results in a loss of information. Our system performed extremely well with respect to false positives (i.e. an image other than the original being identified as similar): none of our tests had more than one table indicate a false positive.

D. Quality of Service with Concurrent Requests

First, we examined how well our system could respond to multiple simultaneous queries. We measured the runtimes of each individual request, as well as the total runtime of the whole set of requests. The results are averaged over five trials, and used up to 16384 threads. The average time for only a single request (Figure 7(a)) was higher due to the overhead of initialization. For the rest of the runtimes up to 16384 requests, the average time was on the order of 0.1 ms. The average times increase greatly as the number of requests grows close to 16384, as the overhead from more threads increases. After that point, the average time decreases again, as each thread will then have multiple requests. The total runtime for all of the requests (Figure 7(b)) shows that up to a certain level of saturation (around 8192 simultaneous requests), the overall runtime was small (hundredths of seconds up to 32 requests, and seconds or less for up to 4096 requests). This shows that for client queries, our protocol is efficient for many simultaneous requests. Next, we examined how our implementation handled simultaneous indexing of new images. The average request runtimes are shown in Figure 7(c), and the total time to index the entire set of new images is shown in Figure 7(d). The time for a single request to complete was under 0.65 seconds in all cases, showing that concurrent indexing is efficient, even with locking. The time to fulfill all requests increased linearly with the number of requests. Our implementation used simple locking.
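A sketch of this coarse-grained locking follows; it illustrates the scheme used in our concurrency tests rather than reproducing our exact code.

```python
import threading
from collections import defaultdict

class LockedIndex:
    """'Simple locking' (sketch): a single coarse lock guards all t hash
    tables during indexing, serializing concurrent writers."""
    def __init__(self, num_tables):
        self.tables = [defaultdict(list) for _ in range(num_tables)]
        self.lock = threading.Lock()

    def index(self, name, hashes):
        with self.lock:                      # one writer at a time
            for table, h in zip(self.tables, hashes):
                table[h].append(name)
```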
A more sophisticated database system might be able to allow more efficient indexing, though this is beyond the scope of our work. VIII. CONCLUSION This paper presents the first protocol for nearly-identical image deduplication with only a single untrusted server. Our rigorous proof shows the protocol's security in the highly challenging case of fully malicious and colluding adversaries. We also discuss practical issues widely applicable to deduplication. Finally, our experiments show the efficacy and efficiency of our protocol and its components.
A refinement of the Ozsváth-Szabó large integer surgery formula and knot concordance

We compute the knot Floer filtration induced by a cable of the meridian of a knot in the manifold obtained by large integer surgery along the knot. We give a formula in terms of the original knot Floer complex of the knot in the three-sphere. As an application, we show that a knot concordance invariant of Hom can equivalently be defined in terms of filtered maps on the Heegaard Floer homology groups induced by the two-handle attachment cobordism of surgery along a knot.

Introduction

Let S 3 t (K) denote the manifold constructed as Dehn surgery along K ⊂ S 3 with surgery coefficient t. In [OS04] Ozsváth and Szabó construct a chain homotopy equivalence between certain subquotient complexes of the full knot Floer chain complex CFK ∞ (S 3 , K) and Heegaard Floer chain complexes CF(S 3 t (K), s m ) for sufficiently large integers t for each spin c structure s m . This equivalence is known as the large integer surgery formula. The meridian µ of K naturally lies inside of the knot complement S 3 \ K and the surgered manifold S 3 t (K). The meridian µ induces a filtration on CF(S 3 t (K), s m ) for each spin c structure s m . In [Hed07] Hedden gives a formula for the filtered complex CFK(S 3 t (K), µ, s m ) in terms of CFK ∞ (S 3 , K) for sufficiently large t. As an application of this formula, Hedden computes the knot Floer homology of Whitehead doubles and the Ozsváth-Szabó concordance invariant τ of Whitehead doubles. In [HKL16] Hedden, Kim, and Livingston generalize Hedden's formula by computing the full knot Floer complex CFK ∞ (S 3 t (K), µ, s m ) in terms of CFK ∞ (S 3 , K) for sufficiently large t. As an application to knot concordance, they show that the subgroup of topologically slice knots of the concordance group contains a Z_2^∞ subgroup.

Figure 1. The two-component link µ n and K for n = 5

The author was partially supported by NSF grant DMS-1606451.

We refine the theorems of Ozsváth-Szabó, Hedden and Hedden-Kim-Livingston to determine the filtered chain homotopy type of CFK ∞ (S 3 t (K), µ n ), where µ n denotes the (n, 1)-cable of the meridian of K, viewed as a knot in S 3 t (K). See Figure 1. For each spin c structure s m , we show that the complex CFK ∞ (S 3 t (K), µ n , s m ) is isomorphic to CFK ∞ (S 3 , K), but endowed with a different Z ⊕ Z filtration and an overall shift in the homological grading.

Theorem 1.1. Let K be a knot in S 3 and fix m, n ∈ Z. Then there exists T = T (m, n) > 0 such that for all t > T , the complex CFK ∞ (S 3 t (K), µ n , s m ) is isomorphic to CFK ∞ (S 3 , K)[ ] as an unfiltered complex, where [ ] denotes a grading shift that depends only on m and t. Given a generator [x, i, j] for CFK ∞ (S 3 , K), the Z ⊕ Z filtration level of the same generator, viewed as a chain in CFK ∞ (S 3 t (K), µ n , s m ), is given by (i, max(i, j − m − n)).

As a corollary, the Z-filtered complex CFK(S 3 t (K), µ n , s m ) is isomorphic to a subquotient complex of CFK ∞ (S 3 , K), endowed with an (n + 1)-step filtration F. This filtration is illustrated in Figure 2 in the case n = 3.

Corollary 1.2. Let K ⊂ S 3 be a knot, and fix m, n ∈ Z. Then there exists T = T (m, n) > 0 such that for all t > T , the Z-filtration on CF (S 3 t (K), s m ) induced by µ n ⊂ S 3 t (K) is isomorphic to the filtered chain homotopy type of the (n + 1)-step filtration on C{max(i, j − m) = 0} described above.

Figure 2. C{max(i, j − m) = 0} is the shaded region.
The subregions bounded by the colored dots represent subcomplexes of the filtration F in the case n = 3. As an application, we show that the concordance invariant a 1 (K) of Hom [Hom14b] can equivalently be defined in terms of filtered maps on the Heegaard Floer homology groups induced by the two-handle attachment cobordism of surgery along a knot K in S 3 . The rationally null-homologous knot µ n ⊂ S 3 t (K) induces a Z-filtration of CF(S 3 t (K), s τ ) and CF(S 3 −t (K), s τ ), that is, a sequence of subcomplexes: . Using the knot filtrations, an equivalent definition of a 1 (K) can be formulated in terms of the filtration F and F induced by µ n as a knot inside S 3 t (K) and S 3 −t (K). Theorem 1.3. Let n > 2g(K). For sufficiently large surgery coefficient t, the concordance invariant a 1 (K) is equal to: This interpretation of the invariant a 1 (K) offers a topological perspective that complements the original algebraic definition of a 1 (K). We will also include properties of the invariant a 1 (K) as well as computations of a 1 (K) for homologically thin knots and L-space knots. Acknowledgements. The author thanks her advisors, Peter Ozsváth and Zoltán Szabó, for their guidance. Adam Levine for reading the version of this work which appeared in the author's PhD thesis and for helpful comments. The author would also like to thank Matt Hedden, Jen Hom and Olga Plamenevskaya for helpful conversations. 2. The knot Floer filtration of cables of the meridian in Dehn surgery along a knot In this section we will refine the theorem of Ozsváth-Szabó to determine the filtered chain homotopy type of the knot Floer complex of (S 3 t (K), µ n ). We begin by recalling the large integer surgery formula from Ozsváth and Szabó [OS04]. Let (Σ g , α 1 , . . . , α g , γ 1 , . . . , γ g , w, z) be a doubly-pointed Heegaard diagram for CFK ∞ (S 3 , K), where • the curve γ g = µ is a meridian of the knot K • the curve α g is a longitude for K • there is a single intersection point in α g ∩ γ g = x 0 • the basepoints w and z lie on either side of γ g Let β = {γ 1 , . . . , γ g−1 , λ t } be the set of curves in γ, with γ g replaced by a longitude β g = λ t winding t times around µ. Label the unique intersection point γ g ∩ β g = θ. The Heegaard triple diagram (Σ, α, β, γ, w, z) represents a cobordism between S 3 and S 3 t (K). See Figure 3. Let C{max(i, j − m)} = 0 denote the subquotient complex of CFK ∞ (S 3 , K) generated by triples [x, i, j] with the i and j filtration levels satisfying the specified constraints. induces an isomorphism of chain complexes. Remark 2.2. Here, as usual, the labeling of the spin c structures is determined by the condition that s m can be extended over the cobordism −W t from −S 3 t (K) to −S 3 associated to the two-handle addition along K with framing t, yielding a spin c structure r m satisfying Above, S denotes a surface in W t obtained from closing off a Seifert surface for K in S 3 to produce a surface S of square t. We refine the theorem of Ozsváth-Szabó to determine the filtered chain homotopy type of the knot Floer complex of (S 3 t (K), µ n ). Consider the meridian µ = µ K of a knot K. The meridian µ naturally lies inside of the knot complement S 3 \ K and the surgered manifold S 3 t (K). For n ∈ N, µ n denotes the (n, 1)-cable of µ K , and also lies inside S 3 \ K and the surgered manifold S 3 t K. The knot µ n is homologically equivalent to n · [µ] in H 1 (S 3 t (K)). When n = 1, µ 1 = µ. See Figure 1 for a picture of the two-component link K ∪ µ n . 
For all n ≥ 1 there is a natural (n + 1)-step algebraic filtration F on the subquotient complex C{max(i, j − m) = 0}. This filtration is illustrated in the case n = 3 in Figure 2. Theorem 2.3 says that this algebraic filtration F corresponds to a relative Z-filtration on CF(S 3 t (K), s m ) induced by µ n ⊂ S 3 t (K). This generalizes work of Hedden [Hed07], who studied the n = 1 case of the filtered complex CFK(S 3 t (K), µ, s m ).

Theorem 2.3. Let K ⊂ S 3 be a knot, and fix m, n ∈ Z. Then there exists T = T (m, n) > 0 such that for all t > T , the following holds: The filtered chain homotopy type of the (n + 1)-step filtration F on C{max(i, j − m) = 0} described above is filtered chain homotopy equivalent to that of the filtration on CF (S 3 t (K), s m ) induced by µ n ⊂ S 3 t (K).

Proof. The key observation will be that the triple diagram (Σ, α, β, γ, w, z) used to define Φ m not only specifies a Heegaard diagram for the knot (S 3 , K), but also a Heegaard diagram for the knot (S 3 t (K), µ n ) with the addition of a basepoint z ′. Place an extra basepoint z ′ = z n so that it is n regions away from the basepoint w in the Heegaard triple diagram representing the cobordism between S 3 and S 3 t (K), as in Figure 4. (This can be accomplished if t is sufficiently large, e.g. if t > 2n.) The knot represented by the doubly-pointed Heegaard diagram (Σ, α, β, w, z n ) is µ n in S 3 t (K).

An intersection point x ∈ T α ∩ T β is said to be supported in the winding region if the component of x in α g lies in the local picture of Figure 4. Intersection points in the winding region are in t to 1 correspondence with intersection points x ′ in T α ∩ T γ . Fix a Spin c structure s m where m ∈ Z. For t (the surgery coefficient) sufficiently large, any generator x ∈ T α ∩ T β representing the Spin c structure s m is supported in the winding region. In this case, there is a uniquely determined x ′ ∈ T α ∩ T γ and a canonical small triangle ψ ∈ π 2 (x, θ, x ′).

Suppose ψ ∈ π 2 (x, θ, x ′) is the canonical small triangle and x ∈ T α ∩ T β is a generator representing the Spin c structure s m . If k = n z (ψ) ≥ 0 (so n w (ψ) = 0), then the α g component of x is x k (and lies k units to the left of x 0 ) in Figure 4. In this case, Φ m maps x to C{i = 0, j ≤ m}. On the other hand, if x is a generator with n z (ψ) = 0 and l = n w (ψ) > 0, then the α g component of x is x −l (and lies l steps to the right of x 0 ) in Figure 4. In this case, Φ m maps x to the subcomplex C{i < 0, j = m}.

Figure 4. Local picture of the winding region of the Heegaard triple diagram (Σ, α, β, γ, w, z n ) for the cobordism between S 3 t (K) and S 3 . The basepoint z n is located n regions away from the basepoint w in the Heegaard diagram (Σ, α, β, w, z n ). Here we depict the basepoint z n for n = 3.

The following lemma (which generalizes Lemma 4.2 of [Hed07]) will be used to finish the proof.

Lemma 2.4. Let p ∈ CFK(S 3 t (K), µ n , s m ) be a generator supported in the winding region, and let x i denote the α g component of the corresponding intersection point in T α ∩ T β , where the x i are labeled as in Figure 4. Then F(p) = F top if i ≥ 0, F(p) = F top−|i| if −n ≤ i < 0, and F(p) = F bottom if i ≤ −n. Here, F top (respectively, F bottom ) denotes the top (respectively, bottom) filtration level of CFK(S 3 t (K), µ n , s m ), and F top−i denotes the filtration level that is i lower than F top . In addition, F bottom = F top−n , so this is an (n + 1)-step filtration.

Proof. The Z-filtration F is defined by the relative Alexander grading A n induced by µ n on CF ∞ (S 3 t (K), s m ).
That is, Let p, q ∈ CFK(S 3 t K, µ n , s m ) be generators supported in the winding region, and let x i , x j denote the α g components of the corresponding intersection points T α ∩T β . Assume without loss of generality that i < j (so that x i lies to the right of x j ). We will construct a Whitney disk φ p,q ∈ π 2 (p, q) with the following properties: • If i > 0 and j > 0, (that is, x i , x j both lie on the left of x 0 ), then ∂φ p,q doesn't contain any arc δ k . Therefore, • If i ≤ −n and j ≤ −n, (that is, x i , x j both lie ≥ n steps to the right of x 0 ), then ∂φ p,q doesn't contain any arc δ k . Therefore, • If i < −n and j > 0, (that is, x j lies to the left of x 0 and x i lies i steps to the right of x 0 ), then ∂φ p,q contains the n arcs δ 1 , . . . , δ n , each with multiplicity one. Therefore, F(p) − F(q) = −n. • If −n ≤ i < 0 and j > 0, (that is, x j lies to the left of x 0 and x i lies i steps to the right of x 0 ), then ∂φ p,q contains the i arcs δ 1 , . . . , δ i , each with multiplicity one. Moreover, ∂φ p,q doesn't contain the arcs δ k for k > i. Therefore, F(p) − F(q) = −i. • If −n < j < 0 and i ≤ −n, (that is, x i lies ≥ n steps to the right of x 0 and x j lies j steps to the right of x 0 ), then ∂φ p,q contains the n + j arcs δ |j|+1 , . . . , δ n , each with multiplicity one. Moreover, ∂φ p,q doesn't contain the arcs δ k for k ≤ |j|. Therefore, • If −n < i < 0 and −n < j < 0, (that is, x j lies j steps to the right of x 0 and x i lies i steps to the right of x 0 ), then ∂φ p,q contains the j − i arcs δ |j|+1 , . . . , δ |i| , each with multiplicity one. Therefore, Assuming the existence of such φ p,q , the lemma follows immediately. In [Hed07, Lemma 4.2] Hedden constructs a Whitney disk φ p,q ∈ π 2 (p, q). The above enumerated properties of ∂φ p,q will be immediate from the construction. We restate his construction here. Note first since p, q lie in the winding region, they correspond uniquely to intersection pointsp,q ∈ T α ∩ T γ . These intersection points p,q can be connected by a Whitney disk φ ∈ π 2 (p,q) with n w (φ) = 0 and n z (φ) = k for some k ∈ Z ≥0 . This means that ∂φ contains γ g with multiplicity k, which further implies that the distance between x i and x j is k, that is, i − j = k. The domain of φ p,q can then be obtained from the domain of φ by a simple modification in the winding region as described in [Hed07]. This modification is shown in Figure 5. It replaces the boundary component k · γ g by a simple closed curve from an arc connecting x i and x j along α g followed by an arc connecting x j to x i along β g , and which wraps k times around the neck of the winding region. This completes the description of the knot Floer complex CFK(S 3 t (K), µ n ) in terms of the complex CFK ∞ (S 3 , K). where p, q have α g components x −3 , x −1 . ∂φ p,q contains arcs δ 2 and δ 3 on β drawn in violet. Figure 5. The domain of a disk φ p,q ∈ π 2 (p, q), for p, q ∈ T α ∩ T β in the winding region can be identified with the domain of a disk φ ∈ π 2 (p,q). Theorem 2.3 described the Z-filtered chain homotopy type of knot Floer chain complex CFK(S 3 t (K), µ n , s m ) for t large with respect to m and n. In Theorem 1.1, we describe the Z ⊕ Z-filtered chain homotopy type of CFK ∞ (S 3 t K, µ n , s m ). This generalizes Theorem 4.2 of Hedden-Kim-Livingston [HKL16] which studies the n = 1 case. 
Φ m : CF ∞ (S 3 t (K), s m ) → CFK ∞ (S 3 , K) respects the F[U, U −1 ]-module structure of both complexes, and hence determines one of the Z-filtrations (called the U -filtration) of CFK ∞ (S 3 t (K), µ n , s m ). The knot µ n ⊂ S 3 t (K) induces an additional Z-filtration (the Alexander filtration) on CF(S 3 t (K), s m ) and on CFK ∞ (Y t (K), s m ). The additional Z-filtration on CFK ∞ (Y t (K), µ n , s m ) can be determined in exactly the same way as it was determined for the case of CF(S 3 t (K), s m ). Lemma 2.4 identifies the Z-filtration induced on any given i = constant slice in CF ∞ (S 3 t , s m ) with a (n + 1)-step filtration as above. This yields the statement of the theorem. Alternatively, the additional (Alexander) Z-filtration on CFK ∞ (Y t (K), µ n , s m ) can be obtained from the Alexander filtration on CFK(Y t (K), µ n , s m ) by the fact that the U variable decreases Alexander grading by one, i.e. we have the relation A(U · x) = A(x) − 1. Corollary 2.5. Let K be a knot in S 3 and fix m, n ∈ Z. Then there exists T = T (m, n) > 0 such that for all t > T the following holds: Up to a grading shift, the p th filtration level of CFK ∞ (S 3 t (K), µ n , s m ) is described in terms of the original Z ⊕ Z−filtered knot Floer homology CFK ∞ (S 3 , K) as max(i, j − m − n) = p. That is, each Alexander filtration level p of CFK ∞ (S 3 t (K), µ n , s m ) is a "hook" shaped region in CFK ∞ (S 3 , K). Proof. This follows from Theorem 1.1. Proposition 2.6. Let m ∈ Z with |m| ≤ g(K) and let n > 2g(K). For sufficiently large surgery coefficient t, the Alexander filtration induced by µ n on CF ∞ (S 3 t (K), s m ) coincides with the algebraic i-filtration on CFK ∞ (S 3 , K) under the correspondence given by Φ m . Proof. Since CFK(Y, K) has degree equal to the Seifert genus of the knot, CFK ∞ (Y, K) is supported along a thick diagonal of width 2g(K) + 1. By the hypothesis, we have m + n > g(K). Therefore the corner (p, m + n + p) of the hook region C{max(i, j − m − n) = p} of each constant Alexander filtration level p of CFK ∞ (S 3 t K, µ n , s m ) lies above the thick diagonal along which CFK ∞ (Y, K) is supported. See Figure 6. For spin c structures s m where |m| ≤ g(K), this means that the Alexander filtration induced by µ n on CFK ∞ (S 3 t (K), µ n , s m ) coincides with the algebraic i-filtration on CFK ∞ (S 3 , K) under the correspondence given by Φ m . Because the algebraic i-filtration is used to define concordance invariants (such as a 1 (K), which can be interpreted as an integer lift of the Hom ε invariant [Hom14a]), the filtration induced by µ n on CF ∞ (S 3 t (K), s m ) can be used to study the concordance class of a knot K. We will see that we can extract concordance invariants of K from CFK ∞ (S 3 t (K), µ n , s m ). A knot concordance invariant As an application for the results in the previous section on the Z-filtration induced on CF(S 3 N (K), s m ) by the (n, 1)-cable of the meridian µ n , our main result in this section (Theorem 3.5) shows that the concordance invariant a 1 (K) of Hom [Hom14b], which has an algebraic definition in terms of maps on subquotient complexes of CFK ∞ (K), can be equivalently defined by studying filtered maps on the (hat version of the) Heegaard Floer homology groups induced by the two-handle attachment cobordism of large integer surgery along a knot K in S 3 and the filtration induced by the knot µ n inside of the surgered manifold. 
Our result is analogous to the statement that the concordance invariants ν(K) of Ozsváth-Szabó [OS11] and ε(K) of Hom [Hom14a] can be defined algebraically or in terms of maps on the (hat version of the) Heegaard Floer homology groups induced by the two-handle attachment cobordism of large integer surgery along a knot K in S 3 . Definition 3.1 gives an algebraic definition of ε(K) in terms of certain chain maps on the subquotient complexes of the knot Floer chain complex CFK ∞ (K). Due to the Ozsváth-Szabó large integer surgery formula [OS04], ε(K) can equivalently be defined in terms of maps on the Heegaard Floer chain complexes induced by the two-handle attachment cobordism of (large integer) surgery. We begin by recalling the definition of the concordance invariants ε(K). Let N be a sufficiently large integer relative to the genus of a knot K. Consider the map where |s| ≤ N 2 and F denotes the capped off Seifert surface in the four-manifold. We also consider the map N . The maps F s and G s can be defined algebraically by studying certain natural maps on subquotient complexes of CFK ∞ (K), as in [OS04]. The map F s is induced by the chain map consisting of quotienting by C{i = 0, j < s} followed by the inclusion. Similarly, the map G s is induced by the chain map consisting of quotienting by C{i < 0, j = s} followed by the inclusion. In [Hom14b], Hom defines a concordance invariant a 1 (K) for knots with ε(K) = 1 that is a refinement of ε(K). Definition 3.2 ( [Hom14b]). If ε(K) = 1 (F τ is trivial), define We extend this definition of a 1 (K) to all knots (to include knots with ε(K) = 1). Consider the maps Definition 3.3. Given a knot K inside S 3 , define: Note that a 1 (K) only depends on the doubly-filtered chain homotopy type of the knot Floer chain complex CF K ∞ (K), so it is a knot invariant. Remark 3.4. When ε(K) = 1, the definition of a 1 (K) agrees with the invariant a 1 (K) defined in Lemma 6.1 in [Hom14b]. As remarked in [Hom14b], a 1 (K) measures the "length" of the horizontal differential hitting the special class generating the vertical homology of CF(S 3 ). Similarly, when ε(K) = −1, a 1 (K) measures the "length" of the horizontal differential coming out of the special class generating the vertical homology of CF(S 3 ). Recall that the rationally null-homologous knot µ n ⊂ S 3 t (K) induces a Z-filtration of CF(S 3 t (K), s τ ) and CF(S 3 −t (K), s τ ), that is, a sequence of subcomplexes: . Using Theorem 2.3 and Proposition 2.6, an equivalent definition of a 1 (K) can be formulated in terms of the filtration F and F induced by µ n as a knot inside S 3 t (K) and S 3 −t (K). This interpretation of the invariant a 1 (K) offers a topological perspective that complements the original algebraic definition of a 1 (K). Theorem 3.5. Let n > 2g(K). For sufficiently large surgery coefficient t, the concordance invariant a 1 (K) is equal to Proof. Since |τ | ≤ g 4 (K) ≤ g(K), we can apply Proposition 2.6 which states that in the spin c structure s τ , the algebraic i-filtration on CFK ∞ (S 3 , K) coincides with the filtration induced by µ n on CF(S 3 N (K), s τ ) under the identification of the two filtered chain complexes in Theorem 2.3. Remark 3.6. Recall that a 1 (K) is a concordance invariant (see Proposition 3.7) that fits into a family of concordance invariants studied by Dai, Hom, Stoffregen and the author in [DHST19]. 
It would be interesting to see if an analogue of Theorem 3.5 exists for this entire family of algebraically defined invariants corresponding to the standard local representative (over F[U, V ]/(U V )) of the knot. Proof. Suppose K 1 and K 2 are concordant knots, i.e. K 1 #K 2 is slice. Then ε(K 1 #K 2 ) = 0. By Proposition 3.11 in [Hom15], we may find a basis for CFK ∞ (K 1 #K 2 ) with a distinguished element x that generates the homology HFK ∞ (K 1 #K 2 ) and splits off as a direct summand of CFK ∞ (K 1 #K 2 ). Similarly, we can find a basis for CFK ∞ (K 2 #K 2 ) with a distinguished element y with the same properties. Then to compute a 1 (K 2 #K 1 #K 2 ), by the Kunneth principle [OS04] we can consider either chain complex: Using the special bases from above, the relevant summands to a 1 are Thus, a 1 (K 2 ) = a 1 (K 2 #K 1 #K 2 ) = a 1 (K 1 ). This summand supports H * (CFK ∞ (K)) and thus determines the value of a 1 (K). It is easy to see from the complex that a 1 (K) = sgn(τ (K)). Proof of (4). If a 1 (K) = 0, ε(K) = 0. By Lemma 3.3 from [Hom14a], we may find a basis for CFK ∞ (K) with a distinguished element x which is the generator of both vertical and horizontal homology. Then a 1 (K#K ) can be computed from {x} ⊗ CFK ∞ (K ). In fact, we can extend Proposition 3.9(4) to describe the behavior of a 1 under connect sum in many (but not all) cases. Proof. Note that we use −K to denote the mirror of a knot K. (3) By Lemma 6.2 of [Hom14b], there exists a basis {x i } over F[U, U −1 ] for CFK ∞ (K 1 ) with basis elements x 0 and x 1 with the property that (1) There is a horizontal arrow of length a 1 from x 1 to x 0 . (2) There are no other horizontal arrows or vertical arrows to or from x 0 . (3) There are no other horizontal arrows to or from x 1 . Similarly, we may find a basis {y i } over F[U, U −1 ] for CFK ∞ (K 2 ) with basis elements y 0 and y 1 with the above properties. Without loss of generality, assume that a 1 (K 1 ) ≤ a 1 (K 2 ). Notice x 0 y 0 generates the vertical homology H * (C({i = 0})) of CFK ∞ (K 1 #K 2 ). Let τ = τ (K 1 #K 2 ). Consider the subquotient complex There is a direct summand of A consisting of the generators x 0 y 0 , x 0 y 1 , x 1 y 0 , and x 1 y 1 , and four horizontal arrows as shown in Figure 7. The arrow x 1 y 0 to x 0 y 0 has length a 1 (K 1 ). Clearly, ε(K 1 #K 2 ) = 1 and a 1 (K 1 #K 2 ) = a 1 (K 1 ). x 0 y 0 x 1 y 0 x 0 y 1 x 1 y 1 Figure 7. A direct summand of A = C{min(i, j −τ ) = 0} in Proposition 3.10(3) This is the summand that is relevant for computing a 1 , as it contains the generator x 0 y 0 of vertical homology H * (C{i = 0}). Proposition 3.10 can be rewritten as the following. Example 3.13. The connect sum of any knot K with the reverse of its mirror −K, i.e. the inverse of K in the concordance group C, has vanishing a 1 (K# − K) = 0. We conclude with some computations of the a 1 -invariant. Example 3.16. The Conway knot C 2,1 has a 1 (C 2,1 ) = 0. According to [Pet10], the knot Floer chain complex CFK ∞ (C 2,1 ) is generated as a F[U, U −1 ]−module by a single isolated F at the origin plus a collection of null-homologous "boxes". Example 3.17. The knot Floer chain complex of an L-space knot is a given by Theorem 2.1 in [OSS14]. If K is an L-space knot, with Alexander polynomial where n 0 > n 1 > · · · > n k , then a 1 (K) = n 0 − n 1 by Lemma 6.5 [Hom14b].
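For a worked instance of Example 3.17, consider the torus knot T(3,4); the Alexander polynomial computation below is standard background rather than a computation appearing in this paper:

```latex
% T(3,4) is an L-space knot with symmetrized Alexander polynomial
\Delta_{T(3,4)}(t) = t^{3} - t^{2} + 1 - t^{-2} + t^{-3},
% so n_0 = 3 and n_1 = 2, and Lemma 6.5 of [Hom14b] gives
a_1\bigl(T(3,4)\bigr) = n_0 - n_1 = 3 - 2 = 1.
```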
Tunnel widening prevention with the allo-Achilles tendon graft in anterior cruciate ligament reconstruction: surgical tips and short term followup

Background: Tunnel widening (TW) after anterior cruciate ligament (ACL) reconstruction can be a serious complication, and there is controversy over how to prevent it. This study aimed to suggest surgical approaches to prevent TW using an allo-Achilles tendon graft, and then to evaluate TW after these surgical tips were applied.

Materials and Methods: Sixty-two patients underwent ACL reconstruction with an allo-Achilles tendon graft. Four surgical approaches were used: making a tibial tunnel by bone impaction, intraarticular reamer application, bone portion application for the femoral tunnel, and an additional bone plug application for the tibial tunnel. After more than 1 year, followup radiographs including anteroposterior and lateral views were taken in 29 patients encompassing thirty knees. The diameter of the tunnels at postoperative day 1 (POD1) and at followup was measured and compared.

Results: In 18 knees (60%), there were no visible femoral tunnel margins on the radiographs at POD1 or followup. In the other 12 cases, which had visible femoral tunnel margins on followup radiographs, the mean femoral tunnel diameter was 8.6 mm. In the tibial tunnel, the mean diameters did not increase at any of the three levels (proximal, middle, and distal), and there was no statistically significant difference between the diameters at POD1 and followup.

Conclusion: According to this case series, the suggested tips for surgery involving an allo-Achilles tendon graft can effectively prevent TW after ACL reconstruction.

Introduction

After the first description of tunnel widening (TW) following anterior cruciate ligament (ACL) reconstruction in the 1990s, 1 the condition has been reported in many other studies. [2][3][4][5][6][7][8] The correlation between TW and the clinical outcomes of ACL reconstruction remains unclear; however, many researchers have investigated the causes of TW and methods of preventing it, because TW can be a factor in graft failure after ACL reconstruction 9 and makes revision ACL reconstruction difficult. The etiology of TW is still unclear, as both mechanical and biological factors have been suggested to play roles. 10 The biological factors proposed include an antigenic immune response, 11 a toxic effect, 1 a nonspecific inflammatory response, 12 and cellular necrosis from drilling and graft remodeling, [13][14][15] and the mechanical factors proposed include local stress deprivation of the tunnel wall, 15 graft-tunnel motion, 12 aggressive rehabilitation, 16 and increased graft forces due to improper graft placement. Many recent studies have tried to prevent TW by modifying the surgical techniques used; strategies include the use of a bone plug application with press-fit, 17,18 bone impaction using a dilator, 19 proper tunnel positioning, 8 and a periosteal envelope. 5 It has been reported that these surgical procedures can effectively reduce the extent of TW; however, there is still controversy regarding the most effective method of preventing TW. This study suggests surgical techniques to prevent TW using an allo-Achilles tendon graft and evaluates TW after ACL reconstruction with these techniques. We hypothesized that ACL reconstruction using an allo-Achilles graft with the suggested surgical tips would lead to less TW.
Materials and Methods
Eighty-five patients who underwent ACL reconstruction by a single surgeon (DWS) between September 2011 and June 2013 were included in this retrospective study. Twenty-three patients were excluded based on the following exclusion criteria: age under 18 (n = 5), revision ACL reconstruction (n = 13), and associated bony surgery such as high tibial osteotomy (n = 5). Among the remaining 62 patients, 29 patients with thirty knees (one patient underwent bilateral ACL reconstruction within a 2-month interval) were followed for more than 1 year. Ethical approval for the current study was obtained from the Public Institutional Review Board of the country.
Operative procedure
All patients underwent single-bundle ACL reconstruction by a transtibial technique with careful targeting of the femoral insertion of the native ACL. After assessing the amount of remaining fiber and the tension of the injured ACL, the surgeon chose to perform an ACL reconstruction. Because of its advantages in ligament healing and proprioception, we preferred remnant-preserving ACL reconstruction with internal sutures between the remnant and the reconstructed graft. 20,21 To prevent TW after ACL reconstruction, we modified a few steps of the procedure. First, we preferred bone-to-bone healing to bone-to-tendon healing. 22 Second, we used gradual reaming with bone impaction with a dilator to minimize bone loss during tunnel reaming. Third, to prevent undesired tibial tunnel reaming during femoral reaming, we applied a tibial tunnel-independent guide pin during the femoral reaming procedure. Fourth, to obtain bone-to-bone healing of the tibial tunnel, the tibial tunnel was fixed with a bone plug with a small interference screw. We designed the allo-Achilles graft with a 10 mm diameter that preserved the bone-tendon junction as much as possible [Figure 1]. First, the bone block was cut and prepared into a cylindrical shape, 10 mm in diameter and 20 mm in length, using a bone saw. To preserve the bone-tendon junction as much as possible, the bone block of the allo-Achilles tendon was prepared along the direction of the tendon fibers as shown in Figure 1, not perpendicular to the junction. We also prepared a free bone block from the remaining allograft calcaneal bone to be used as a bone plug for the tibial tunnel, which was 5 mm wide and 25 mm long.
Tibial tunnel reaming
During tunnel reaming, reamers can cause bone debris or thermal injury to the tunnel wall, both of which are known to cause TW. To prevent TW and to increase the compactness of the bone around the tunnel, a previous researcher used a dilator with bone impaction. 19 However, bone impaction using a dilator can result in a cortical bone fracture on the articular side. Therefore, we modified the previous bone impaction technique by gradually reaming from 7 to 9 mm, then carrying out bone impaction with a dilator beneath the articular cortex, and finally reaming the articular cortical wall using a 10 mm reamer.
Femoral tunnel reaming
For femoral tunnel reaming, a guide pin was passed through the tibial tunnel and fixed on the distal femur. Usually, the reamer is applied to this guide pin extraarticularly. However, if the 10 mm diameter reamer passes through the same 10 mm diameter tibial tunnel along the rigid guide pin, the reamer can injure the tibial tunnel if there is a mismatch between the tibial tunnel and the fixed guide pin. To prevent tunnel damage via this mechanism, we recommend intraarticular reamer application [Figure 2].
Before the reamer was used, the guide pin was pulled proximally until its tip was located in the intraarticular space. Then, the reamer was passed through the tibial tunnel freely and attached to the tip. Before reaming, the guide pin was pushed distally about 2 cm to prevent axis mismatch between the guide pin and the reamer. After reaming, the guide pin was pulled again to prevent the injury that can occur while the reamer is passed back through the tibial tunnel for detachment. After these procedures, the guide pin was pushed in again until the tip passed the tibial tunnel.
Bone plug application for tibial tunnel
The bony portion of the allo-Achilles tendon graft is applied to the femoral tunnel, and an 8 mm metal interference screw is used for fixation. With this fixation, graft-tunnel motion may be eliminated; in addition, it provides a strong bone-to-bone union between the graft and the tunnel. On the tibial side, previous studies used a bone plug to reduce TW. 17,18 To obtain strong bone-to-bone healing on the tibial side, we also recommend a bone plug. In our cases, a bone plug 5 mm wide and 25 mm long was prepared from the remaining calcaneal bone of the allo-Achilles tendon graft. For stable fixation, dual fixation was recommended: the graft was fixed by a screw with a spike washer on the extratunnel part, and then intratunnel fixation was accomplished via the prepared bone plug. To prevent motion of the bone plug and to obtain a greater compression force on the graft-tunnel junction, we added a 7 mm bioabsorbable interference screw between the bone plug and the tibial tunnel [Figure 3].
Radiographic evaluation and analysis
On postoperative day 1 (POD1) and during the followup visit, patients underwent simple radiographs in both the anteroposterior (AP) and lateral views.
[Figure 2 caption, panels (e)-(i): to prevent this injury, the guide pin is pulled proximally until the tip is located in the articular space, which allows the reamer to pass the tibial tunnel freely without damaging it; if the distance between the guide pin and the reamer is too short, the direction of the reamer may be off, causing improper and damaging tunnel reaming, so the guide pin is moved distally before reaming the femoral tunnel; by using the 10 mm head reamer with a narrow shaft, femoral tunnel reaming can be completed without injuring the tibial tunnel; to prevent similar tibial tunnel injury during reamer detachment, the guide pin is pulled proximally again and the reamer is separated from the guide pin and knee joint, after which the guide pin is passed through the tibial tunnel using the cannulated guide.]
The diameters of the femoral and tibial tunnels were measured at three levels (proximal, middle, and distal), as described in previous studies. 23,24 The automatic distance measurement tool of the PiView STAR program (Infinitt Healthcare, Seoul, Korea), a type of picture archiving and communication system, was used to export all images and perform all measurements. Between the data obtained on POD1 and during followup, a difference of more than 1 mm was considered to be clinically relevant. Statistically, data on femoral tunnels were compared using the Mann-Whitney U-test, and data on tibial tunnels were analyzed by paired t-tests. All statistical analyses were performed with the Statistical Package for the Social Sciences (SPSS) software version 12.0 (SPSS, Chicago, IL, USA), and a P value under 0.05 was considered statistically significant.
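A minimal sketch of this statistical comparison is given below, assuming hypothetical diameter arrays since the per-knee measurements are not reproduced here; scipy's ttest_rel and mannwhitneyu implement the paired t-test and Mann-Whitney U-test named above.

```python
# A minimal sketch, assuming hypothetical diameter arrays (the per-knee
# measurements are not reproduced here).
import numpy as np
from scipy import stats

# Hypothetical tibial tunnel diameters (mm) at one level in the AP view.
pod1 = np.array([10.2, 10.4, 10.1, 10.3, 10.2, 10.0])       # POD1
follow_up = np.array([9.9, 10.3, 10.0, 10.1, 10.0, 9.8])    # >= 1-year followup

# Paired t-test: the same knees are measured twice, so observations are paired.
t_stat, p_paired = stats.ttest_rel(pod1, follow_up)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.3f}")

# Mann-Whitney U-test for the femoral tunnels, where the POD1 and followup
# groups of visible tunnel margins contain different knees (unpaired data).
u_stat, p_mw = stats.mannwhitneyu([8.4, 8.7, 8.5], [8.6, 8.9, 8.2])
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {p_mw:.3f}")

# A per-knee widening of more than 1 mm is flagged as clinically relevant TW.
widening = follow_up - pod1
print("knees with TW > 1 mm:", int(np.sum(widening > 1.0)))
```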
Results
The mean followup duration until the simple radiographs were taken was 16.2 months (range 12 to 31 months), and patient demographics are shown in Table 1.
Femoral tunnel widening
Because the bone portion of the allo-Achilles graft was placed in the femoral tunnel, we could not identify the margin of the femoral tunnel in most of the simple radiographs. In eighteen of the thirty knees (60%), the femoral tunnel margin was not visible on either the POD1 or the followup radiographs, in either the anteroposterior or the lateral view [Figure 4]. In four cases, the radiographs taken on POD1 had a line showing the femoral tunnel margin, and in only one case was the femoral tunnel margin visible in the AP view alone. That case showed an increase of tunnel diameter in the followup radiographs [Figure 5]; it was also the case with the largest femoral tunnel width in the present study. In eight other cases, the margins of the femoral tunnel could only be identified on the followup radiographs; however, the diameters at the three levels in both the AP and lateral views were smaller than 10 mm, the reamer diameter for the femoral tunnel. The mean diameter of the femoral tunnels that could be identified on followup radiographs was 8.6 mm. According to the results of the Mann-Whitney U-test, there were no differences in tunnel diameter between the POD1 and followup radiographs [Table 2].
Tibial tunnel widening
The mean diameters of the tibial tunnel at POD1 were 10.2, 10.7, and 11.3 mm in the AP view and 10.4, 11.0, and 11.6 mm in the lateral view (proximal, middle, and distal levels, respectively). On the followup radiographs, the mean diameters decreased to 9.9, 10.5, and 11.1 mm in the AP view and 10.3, 10.6, and 11.1 mm in the lateral view (proximal, middle, and distal levels, respectively) [Table 2 and Figure 4]. Of the thirty knees, half (n = 15) showed an increase in mean tibial tunnel diameter, and 4 (13.3%) had more than a 1 mm increment in mean tibial tunnel diameter; the increments were 1.4, 1.5, 2.0, and 2.8 mm [Figure 6]. Between POD1 and followup, the paired t-tests showed no statistically significant differences at any of the three levels in either view.
Discussion
With our surgical tips using an allo-Achilles tendon graft, TW at least 1 year after surgery was effectively prevented in this study. Although the clinical relevance of TW is not clear, the risk of revision after ACL reconstruction and the problems associated with revision ACL reconstruction due to TW could be reduced by our surgical techniques. Many studies have evaluated TW after ACL reconstruction based on the type of graft, fixation methods, or surgical techniques. 2-4,6-8,[25][26][27] The mean extent of TW was 10-30% in those studies. According to the results of a previous study which used the same measurement methods as the present study, the mean amount of TW was about 7 mm. 23 On the other hand, the mean amount of TW did not increase significantly in the present study; furthermore, there were only four cases (13.3%) of TW >1 mm. Given the results of this and other studies, we suggest that our surgical techniques are good options to prevent TW after ACL reconstruction. As various types of grafts can be used for ACL reconstruction, many researchers have focused on which graft has the best clinical outcomes and the lowest morbidity rate. In particular, "autograft versus allograft" has been an important topic in orthopedic research.
[28][29][30] Regarding TW, a previous study reported that a significantly higher amount of TW was observed in the allograft group compared with the autograft group, 26 but some studies have reported that there are no significant differences between autograft and allograft groups. 25,27,31 In the current study, although Achilles tendon allografts were used for ACL reconstruction, the results revealed no increase in tunnel diameter, which is better than the results of previous studies. [25][26][27] These results imply that the allo-Achilles tendon graft is a good option in ACL reconstruction to prevent TW. For the junction between the graft and the tunnel, bone-to-bone healing may be better than bone-to-tendon healing. A previous study, which compared patellar and hamstring tendons, reported that TW occurred less with the patellar tendon, which allows bone-to-bone healing on both the femoral and tibial sides. 32 Another study used a periosteal envelope to overcome the limitation of bone-to-tendon healing and reported minimal TW. 5 To obtain bone-to-bone healing in our ACL reconstructions without complications, we used the allo-Achilles tendon graft, which provides bone-to-bone healing in the femoral tunnel. For the tibial tunnel, we used a bone plug from the remaining calcaneal bone of the allo-Achilles tendon graft. For fixation, we used an 8 mm metal interference screw on the femoral side and dual fixation on the tibial side, composed of the press fit by a bone plug and bioabsorbable interference screw, and the postfixation by a spike washer. The results of the present study showed no change after the first year, meaning that these techniques can be good options for obtaining stability after ACL reconstruction and preventing TW. During tunnel reaming, thermal injury or mechanical injury caused by the reamer and/or bone debris can occur. Such injuries can loosen the tunnel wall and cause TW or fixation failure. A previous study used a dilator to create more compact bone on the tunnel wall and reported less TW than with reaming alone. 19 We also used a bone dilator on the tibial tunnel to prevent TW, and our results showed a successful reduction in TW. However, there is a risk that the dilator will negatively impact the articular cortical bone, possibly resulting in a fracture at the tunnel aperture. Therefore, when performing bone impaction using a dilator, we recommend impacting just beneath the articular cortex and finishing the articular cortex with a 10 mm reamer. Using this technique, no fracture of the aperture occurred in our cases. During transtibial ACL reconstruction, guide pin application and tunnel reaming for the femoral tunnel are performed through the tibial tunnel. Ideally, the guide pin and reamer for the femoral tunnel would be smaller than or of an equal diameter to the tibial tunnel so that no injury would occur when constructing the femoral tunnel. However, the femur and tibia are not fixed to each other, and the axes of the femur and tibia can change during ACL reconstruction. Thus, an axis mismatch between the femoral and tibial tunnels can develop, potentially leading to an injury of the tibial tunnel wall. We recommend the "intraarticular reamer application" method to prevent this injury; it can minimize tibial tunnel injury during tibial tunnel-dependent femoral tunnel reaming in ACL reconstruction.
Previous studies have had success using a bone plug in the tibial tunnel to improve bone-to-tendon healing and to prevent TW in ACL reconstruction. 17,18 Jagodzinski et al. reported that press-fit bone plug fixation decreases the amount of TW. 17 Another study used an autogenous bone plug for tibial press-fit fixation and also reported that autogenous bone plugs reduce tibial TW compared with bioabsorbable interference screws. 18 We also used a bone plug for the tibial tunnel; however, we were concerned about the limited expansive force of the cylindrical bone plug compared with the cone-shaped interference screw. Therefore, we added a smaller bioabsorbable interference screw between the bone plug and the tunnel to increase the compression force between the graft and the bone plug. With this method, there was no fixation failure 1 year postoperatively, and TW was effectively reduced. There is a probable complication inherent in our surgical procedures due to a mismatch between the femoral tunnel and the metal interference screw: after passing the allo-Achilles graft through both tunnels, the metal interference screw cannot be applied through the tibial tunnel. Therefore, we applied the screw through the anteromedial portal after full flexion. With this modification, almost no cases had mismatches; however, among the 63 knees in the present study, the postoperative magnetic resonance images of eight cases (12.7%) showed mismatch [Figure 7]. Although there was no difference in TW or revision rate according to the occurrence of the mismatch, more studies on modified methods, such as transportal tunnel reaming, and the clinical results of long term followup are needed to further bolster the present results. There are certain limitations to the current study. First, there was no untreated control group, and it was not a prospective, randomized study. Second, there were no clinical results, and the followup rate was low. Third, the followup period was relatively short: some patients' followup radiographs were taken only 1 year after the operation, which might be too short to prove the effectiveness of our surgical techniques. Fourth, we cannot prove which of our tips had the greatest effect in preventing TW. More prospective, randomized, multicentric studies are needed to reach firm conclusions.
Conclusion
Our surgical techniques using an allo-Achilles tendon graft effectively prevented TW after ACL reconstruction in our case series. Therefore, these surgical tips are good options to prevent TW.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Software Sensors for Order Tracking Applied to Permanent Magnet Synchronous Generator Diagnostics: A Comparative Study
The paper deals with software sensors which facilitate the diagnosis of electrical machines in non-stationary operating conditions. The technique targeted is order tracking, for which different techniques exist to estimate the speed and angle of rotation. From a methodological point of view, this paper offers a comparison of several methods in order to evaluate their performance from tests on a test bench. In addition, to perform the tests, it is necessary to initialize the different methods so that they work correctly; in particular, an identification technique is proposed, as well as a way to facilitate its initialization. The example treated in this paper is that of a synchronous generator. Angular sampling makes the spectrum stationary and allows the interpretation of a possible defect. The realization of the angular sampling and the first diagnostic elements require the knowledge of two fundamental quantities: the speed of rotation and the angular position of the shaft. The estimation of the rotation speed as well as the estimation of the angular position of the shaft are carried out from the measurement of an electric current (or three electric currents and three voltages). Four methods are proposed and evaluated to realize software sensors: an identification technique, a PLL (phase-locked loop), the Concordia transform and an observer. The four methods are evaluated on measurements carried out on a test bench, and the results are discussed through the diagnosis of a mechanical fault.
Introduction
In many industrial diagnostic applications, operators are faced with cases of non-stationary operating modes. This is, for example, the case in the wind energy sector. In this case, order tracking, carried out from angular sampling, makes it possible to study the diagnosis of an electromechanical drive in non-stationary operation. For cases where the industrial device does not have a speed sensor or a position sensor, order tracking is much more complicated to implement. In this context, the main objective of the paper is the order tracking technique from current measurement alone. The difficulty is then to estimate the speed of rotation in order to deduce the angle which serves as the synchronization signal for the sampling [1,2]. References [1,2] present a recent and complete bibliography on order tracking, for systems used at variable speed, from a vibratory signal; in these same references, order tracking from current measurements is not developed in detail. In the case of applications used at a fixed speed, the problem is simpler, and one can find interesting results and an extensive bibliography in reference [3]. The current context, Industry 4.0, requires the processing of a large amount of data to improve the diagnostics of an industrial installation. The data, for cost reasons, come mainly from software sensors [4]. The signature of faults can be found in different physical quantities: mechanical and acoustic vibrations, temperature, magnetic flux, rotation speed, torque, electric currents, voltages and powers. Most commonly, the diagnosis is performed from the vibration signals [5,6] measured by correctly placed piezoelectric sensors [7] or strain gauges [8].
It is also interesting to be able to make the diagnosis from the measurement of the electric currents which supply a motor. From a physical point of view, many faults are characterized by a harmonic component whose frequency evolves. In this paper, we consider that the speed is not measured directly. If the speed signal is not available from a sensor, order tracking is difficult to perform: this additional difficulty requires finding the speed of rotation from a harmonic analysis of the vibratory signal or of the current signal. As an example, consider the many electric motors which, in industry, are not used at a constant rotational speed. The appearance of a fault generally results in the appearance of a harmonic signal which is a multiple of the speed of rotation. From electrical current signals, a fault can be identified by searching for it in the signal spectrum; an electric current measurement must therefore be processed to extract harmonic components. With future industrial use in mind, simplicity is important, which is why the software sensors are built from current measurements. The simplest case is the measurement of a single electric current; this technique is compared to those using the measurement of the three currents. In a non-stationary context, we focus this paper on the extraction of the instantaneous angular speed of rotation and the angular position of the shaft. This information can be used to perform an angular sampling of the signals in order to stationarize the signals and therefore the spectrum. The angular sampling step itself is not developed in this paper. Many papers deal with the frequency estimation of a signal, with methods presenting advantages for particular cases; for an industrial application, these methods present several difficulties: initialization and the possibility of using them in real time [9,10]. The paper [1] presents a survey of techniques used to estimate the rotational speed in the case where no speed sensor is available and the speed is variable. Many methods are based on vibration measurement; however, it is possible to obtain an estimate of the speed of rotation from the measurement of an electric current (motor power supply) or of the three electric currents (in the case of a three-phase power supply). This paper compares several techniques to support the development of software sensors for order tracking. The context of the research work presented in this paper is that of a diagnostic tool for electrical machines without a speed sensor or angular position sensor; the only measurements used are the measurements of electric currents (possibly with measurements of electrical voltages), and the electric machine does not operate at constant speed (non-stationary use). Different methods exist in the literature; however, a methodology for adjusting their parameters remains to be specified. The different methods presented in the paper have already been the subject of previous work; here, the methods for initializing the parameters are specified and the results obtained are compared. This work makes it possible to choose a method to develop sensors with angular synchronization on the same benchmark. The spectra of the signals x(θ) can then be calculated (x can represent, for example, a current or another physical quantity).
This angular synchronization makes it possible to process signals for non-stationary operation. The different techniques presented in this paper can be used either offline or online. The paper also provides methodological elements and an improved initialization phase to help choose one technique over another. Section 2 details the different methods that are evaluated on a test bench. The tests evaluate the estimation of the rotation speed of a machine according to the following techniques: techniques based on the measurement of a single current (1), on the measurement of the three currents of a three-phase machine (2), or on the measurement of the three currents and three voltages (3). For method (1), we first present a signal model identification approach (Section 2.1) and second a demodulation approach (Section 2.2). For methods (2) and (3), we detail respectively a technique based on the Concordia transform (Section 2.3) and an observer-based technique (Section 2.4). The paper deals with tools allowing the design of software sensors for monitoring particular frequencies. This is a physical approach: the frequencies tracked correspond to physically determined fault frequencies. Other approaches, such as machine learning [11], are exploited in this area but are not discussed in this paper.
Frequency Tracking: Different Tools
The different methods exposed can be used from electrical measurements (currents, voltages) to find a harmonic component. The detected frequency is the frequency corresponding to the instantaneous angular speed of rotation.
Signal Model Identification
The algorithm used is a non-linear algorithm based on an adaptive selective filter (notch filter); it was initially proposed in [12] and has been used in several applications: estimation of symmetrical components in three-phase electrical networks [13], "in situ" efficiency estimation of asynchronous machines [14] and harmonic measurement in power electronics [15]. The algorithm is detailed in [14], where Equations (1)-(8) are given. This paper takes up the structure proposed in [14] and proposes a new technique for initializing the parameters of the algorithm (detailed below). We recall that the objective of this paper is to compare the results obtained by this algorithm with three other methods. Let us consider a sinusoidal signal
$$y_1(t) = A(t)\,\sin\Big(\int_0^{t}\omega(\tau)\,\mathrm{d}\tau + \alpha(t)\Big).$$
To this signal, we add other sinusoidal components and noise $n(t)$:
$$u(t) = y_1(t) + \sum_{k} y_k(t) + n(t),$$
with $A(t)$ the signal magnitude, $\omega(t)$ the pulsation, $\alpha(t)$ the phase angle and $n(t)$ the noise component. The algorithm uses the gradient method and minimizes the square of the error function between the measured signal $u(t)$ and the estimated sinusoid $y(t)$:
$$J(t) = e^2(t), \qquad e(t) = u(t) - y(t).$$
The algorithm is based on the gradient update equations given in [14]. The functional diagram of the algorithm (Figure 1) shows the use of three parameters, $m_1$, $m_2$ and $m_3$. No methodology is specified in [14] to initialize these three parameters. The speed of convergence depends on the value of $m_1$, while $m_2$ and $m_3$ have an impact on frequency and phase tracking. We notice the use of integrals in the block diagram; it is therefore very important to correctly initialize the parameters $m_i$ to ensure convergence of the algorithm. This step is developed below.
Figure 1. Block diagram of the non-linear identification algorithm (diagram from [14]).
Figure 1 shows an algorithm of interest for applications where the instantaneous frequency must be estimated in real time. Indeed, its implementation requires only elementary operations (addition, subtraction, multiplication, integration).
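As an illustration of this class of tracker, the sketch below implements a gradient-based adaptive estimator of the magnitude, pulsation and phase of a noisy sinusoid. The exact update laws are Equations (1)-(8) of [14] and are not reproduced above, so the forms used here (a common Ziarani-Konrad-style structure) and the gain values are assumptions; the initialization method proposed in this paper is developed in the next subsection.

```python
# A minimal sketch of a gradient-based adaptive sinusoid tracker. The update
# laws and the gains m1, m2, m3 are assumed, not taken from [14].
import numpy as np

fs = 10_000.0                      # sampling frequency (Hz), assumed
dt = 1.0 / fs
t = np.arange(0.0, 2.0, dt)
f_true = 40.0 + 5.0 * t            # slowly drifting frequency, for illustration
u = 2.0 * np.sin(2 * np.pi * np.cumsum(f_true) * dt) + 0.1 * np.random.randn(t.size)

m1, m2, m3 = 200.0, 6_000.0, 20.0            # adaptation gains (assumed values)
A, omega, phi = 1.0, 2 * np.pi * 35.0, 0.0   # rough initial guesses

f_est = np.empty(t.size)
for n in range(t.size):
    e = u[n] - A * np.sin(phi)                       # tracking error
    A += m1 * e * np.sin(phi) * dt                   # gradient step on magnitude
    omega += m2 * e * A * np.cos(phi) * dt           # gradient step on pulsation
    phi += (omega + m3 * e * A * np.cos(phi)) * dt   # phase integrates omega
    f_est[n] = omega / (2 * np.pi)

print(f"final frequency estimate: {f_est[-1]:.1f} Hz (true: {f_true[-1]:.1f} Hz)")
```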
The identification algorithm makes it possible to estimate both the frequency and the magnitude of a sinusoidal signal, and its realization is attractive for a "real time" application. However, the initialization of the three parameters $m_1$, $m_2$ and $m_3$ remains difficult due to the coupling between the magnitude estimation and frequency estimation loops. In addition, the loops are non-linear, which makes setting the parameters a little more complex. In this paper, we propose to linearize the algorithm (Figure 2) in order to help initialize the parameters. Note that linearization degrades the accuracy of the results; it is only the technique we propose for the initialization, and once the parameters are initialized, the algorithm is used in its initial version. To carry out the linearization, Figure 2 separates the variations of the signal into two parts: the amplitude part and the frequency part. The input signal $u(t)$ and the output signal $y(t)$ are considered as small variations of the amplitude and of the phase around an operating point $(A_0, \omega_0)$, and $\varphi_0$ is the phase of the signals $u(t)$ and $y(t)$ at $t = 0$; it is set to an arbitrary value. As shown in Figure 2, after linearization, two transfer functions $H_1$ and $H_2$ are obtained. The linearization results in a decoupling of the estimate of the amplitude from the estimate of the frequency; thus, during the initialization phase, the estimation of the amplitude can be managed independently of the frequency estimation. The calculations are not detailed in this paper; the linearization leads to the simplified second-order models $H_1$ and $H_2$ of Equations (10) and (11). From these simplified models, it is possible to adjust $m_1$, $m_2$ and $m_3$ to fix the dynamic performance of the estimator (magnitude and frequency). For example, from Equation (10), it is necessary to fix a response time $t_r$ as well as a damping coefficient $m$ for the system $H_1$ (a second-order model). By identification, the values of the parameters $m_2$ and $m_3$ can be deduced:
$$m_2 = \frac{9}{m^2 A_0^2 \pi t_r^2}, \qquad m_3 = \frac{2\, m_2\, t_r}{3}.$$
Demodulation Approach
Phase-locked loops (PLLs) are widely used in different sectors such as communications or electrical networks. The main function sought is phase or frequency demodulation, for which PLLs are efficient in terms of speed and precision [16]. In general, a PLL is made up of three parts: the phase detector (PD), the loop filter (LF) and the voltage-controlled oscillator (VCO). Many improvements have been made, motivated by the need to design digital PLLs [17,18]. The solution most often encountered is the use of a quadrature phase detector (QPD) [19]. The QPD uses an orthogonal signal generator (OSG) to create 90-degree phase-shifted signals; the harmonic produced by the multiplication is canceled due to orthogonality. The OSG solution appears today as a topology robust against noise and rapid changes. The OSG proposed by [20] uses two derivative elements (DE) to calculate the phase error (Figure 3). Each block $DE_x$ is composed of two filters whose phase difference is constant and equal to $\pi/2$ whatever the frequency of the input signal, as shown in Figure 4. The transfer functions of the two filters, $G_x$ and $G'_x$, are given by Equations (12) and (13), with $G_x$ a band-pass filter with central frequency $\omega_R$ and $G'_x$ a low-pass filter with cut-off frequency $\omega_R$. More details are given in [21].
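Since Equations (12) and (13) are not reproduced above, the following sketch uses a standard second-order generalized integrator (SOGI) as the orthogonal signal generator; its two outputs play the same band-pass/low-pass roles, 90 degrees apart, that the text assigns to $G_x$ and $G'_x$, but it is not necessarily the DE-based OSG of [20]. The central frequency and the gain are assumed values.

```python
# A minimal sketch of an OSG, realized here as a standard SOGI; this is an
# assumption standing in for Equations (12)-(13).
import numpy as np

fs = 10_000.0
dt = 1.0 / fs
w_r = 2 * np.pi * 50.0             # central pulsation omega_R (assumed 50 Hz)
k = 1.4                            # SOGI damping gain (assumed)

t = np.arange(0.0, 0.2, dt)
u = np.sin(w_r * t) + 0.3 * np.sin(3 * w_r * t)   # input with a 3rd harmonic

v, qv = 0.0, 0.0                   # band-pass output v, quadrature output qv
v_a = np.empty(t.size)
v_b = np.empty(t.size)
for n in range(t.size):
    dv = k * w_r * (u[n] - v) - w_r * qv   # band-pass state
    dqv = w_r * v                          # integration gives the 90-deg lag
    v += dv * dt
    qv += dqv * dt
    v_a[n], v_b[n] = v, qv

# After the transient, v_a tracks the 50 Hz component and v_b lags it by 90
# degrees; the amplitude used for the normalization of Figure 5 is then:
amp = np.sqrt(v_a[-1] ** 2 + v_b[-1] ** 2)
print(f"approximate amplitude of the tracked component: {amp:.2f}")
```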
This version of the PLL does not work correctly for signals having both a variation of the frequency and a variation of the magnitude. To solve this problem, a normalization of the magnitude of the input signal is proposed. The modified structure is presented in Figure 5. $v_\alpha$ and $v_\beta$ are the outputs of the OSG and are normalized using the amplitude $\sqrt{v_\alpha^2 + v_\beta^2}$. It is noted that, following this normalization, the PLL works properly. Considering the expressions of the two OSG filters (12) and (13), we note that these filters are centered on their central pulsation $\omega_R$, which is fixed here at $\omega_R = 2\pi F_c$. In order to improve the PLL, we use adaptive filters so that the OSG filters can follow the input frequency variations. The adaptation is carried out according to the estimated pulsation, noted $\hat\omega$, as indicated in Figure 6.
Figure 6. The PLL input normalization.
In order to realize the adaptive OSG filters, we use a state-variable structure (Figure 7, with band-pass and low-pass outputs). The adaptation of the input filters, associated with the normalization, now makes it possible to follow the variations in frequency of the input signal while remaining insensitive to variations in magnitude.
Concordia Transform Method
When three current measurements are available, the simplest method for estimating the magnitude and the instantaneous phase is the Concordia transform. After transforming the three currents into the two quadrature components $i_\alpha(t)$ and $i_\beta(t)$, the instantaneous magnitude $IA(t)$ and the instantaneous phase $IP(t)$ can be estimated by the following relations:
$$IA(t) = \sqrt{i_\alpha^2(t) + i_\beta^2(t)}, \qquad IP(t) = \arctan\!\big(i_\beta(t)/i_\alpha(t)\big).$$
These estimates can be obtained from two currents under the assumption of load balance (Figure 8). The instantaneous pulsation $IW(t)$ is the derivative of the instantaneous phase $IP(t)$. In practice, we avoid directly computing this derivative numerically, since the current measurements are noisy. An alternative is to track the phase in a closed-loop structure (Figure 9). The system is first order in closed loop: the transfer between the instantaneous phase $\theta_e(t)$ reconstructed by Concordia and the phase $\theta_s(t)$ estimated at the output of the closed loop is a low-pass filter,
$$\frac{\theta_s(s)}{\theta_e(s)} = \frac{k}{s + k}.$$
The dynamics are directly fixed by the parameter $k$. In the case of a phase variation in the form of a ramp (fixed speed, for example), there is a steady-state speed error. The transfer between the instantaneous phase $\theta_e(t)$ and the instantaneous pulsation $IW_{est}(t)$ is a high-pass filter: for $k > 1$, this transfer is of high-pass type (a derivative filtered at high frequencies) with a constant static gain for pulsations such that $\omega > k$. In theory, the transfer between the real instantaneous pulsation $IW(t)$ and the pulsation $IW_{est}(t)$ estimated at the input of the integrator is governed by the system (15). However, this is only correct if the relation $IW(t) = \frac{\mathrm{d}}{\mathrm{d}t} IP(t)$ holds. In reality, $\theta_e(t)$ is deduced from the measurement of the currents and from the computation of the ATAN function. The imperfections introduced by these calculations mean that the transfer between $IW(t)$ and $IW_{est}(t)$ does not really behave like a first-order low-pass filter; this is discussed in Section 4. A priori, the Concordia transform appears to be an easy-to-use solution to estimate both the mechanical angle and the speed of the electric generator. However, after a series of tests on the experimental setup, its use turns out to be tricky: this technique is very sensitive to measurement noise.
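The sketch below illustrates the Concordia-based estimator: the three currents are projected onto the (α, β) frame, the magnitude and phase are computed, and the closed-loop structure produces the filtered derivative of the phase as the speed estimate. The simulated currents and speed profile are illustrative assumptions; the gain k = 100 matches the value reported in Table 3.

```python
# A minimal sketch of the Concordia-based speed estimator; signals are
# synthetic assumptions.
import numpy as np

fs = 10_000.0
dt = 1.0 / fs
t = np.arange(0.0, 0.5, dt)
w = 2 * np.pi * (10.0 + 4.0 * t)            # drifting pulsation (rad/s)
theta = np.cumsum(w) * dt
i_a = np.cos(theta)
i_b = np.cos(theta - 2 * np.pi / 3)
i_c = np.cos(theta + 2 * np.pi / 3)

# Concordia (power-invariant Clarke) transform of a balanced three-phase set.
i_alpha = np.sqrt(2.0 / 3.0) * (i_a - 0.5 * i_b - 0.5 * i_c)
i_beta = (i_b - i_c) / np.sqrt(2.0)

IA = np.hypot(i_alpha, i_beta)               # instantaneous magnitude
IP = np.unwrap(np.arctan2(i_beta, i_alpha))  # instantaneous phase

# Closed-loop phase tracking: theta_s/theta_e = k/(s + k); the integrator
# input k*(theta_e - theta_s) is the filtered derivative IW_est.
k = 100.0
theta_s = 0.0
IW_est = np.empty(t.size)
for n in range(t.size):
    IW_est[n] = k * (IP[n] - theta_s)
    theta_s += IW_est[n] * dt

print(f"estimated pulsation at the end: {IW_est[-1]:.1f} rad/s "
      f"(true: {w[-1]:.1f} rad/s)")
```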
Observer-Based Technique
The adaptive observer used in this subsection is defined in direct and quadrature (d/q) components [22,23]. The rotor speed and the angular position are estimated from the error between the stator currents measured and those estimated by the model. An adaptation mechanism is designed using the estimation error of the stator currents in order to estimate the speed of the rotor. The adaptive model takes as feedback the output of the adaptation mechanism (i.e., the speed of the rotor). The adaptive observer structure is shown in Figure 10, where the stator currents are chosen as state variables in the adaptive model. Regarding the stability study, the elements that characterize the robustness of the observer are specified from experimental tests. In the observer equations:
- $u_s = [u_d\ u_q]^T$ represents the vector of stator voltages;
- $\hat i_s = [\hat i_d\ \hat i_q]^T$ represents the vector of estimated stator currents;
- $R_s$ represents the stator resistance;
- $p\hat\omega_m = \dot{\hat\theta}_m$ represents the estimated electrical speed, with $\hat\theta_m$ the estimated position of the rotor (rotor angle);
- $J$ is a square matrix of order 2, $J = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$;
- $\tilde i_s = i_s - \hat i_s$ represents the estimation error of the stator currents.
The flux equation is defined by $\hat\psi_s = L\,\hat i_s + \psi_{pm}$, where $\psi_{pm} = [\psi_{pm}\ 0]^T$ represents the permanent-magnet flux and $L$ represents the matrix of inductances, which depends on the inductances along the direct axis and the quadrature axis, $L = \mathrm{diag}(L_d, L_q)$. $\lambda$ in Equation (17) represents the feedback gain matrix. In order to place the two poles of the observer at a specific position in the complex plane, $\lambda$ must include a symmetrical part and an antisymmetric part, $\lambda = \lambda_1 I + \lambda_2 J$, with $I$ the identity matrix of order 2 and $\lambda_1$, $\lambda_2$ scalar gains. To estimate the electrical angular speed of the rotor, an adaptation based on a proportional-integral (PI) regulator applied to the current estimation error is performed, with $k_p$ and $k_i$ the coefficients of the PI regulator. The estimated rotor position $\hat\theta_m(t)$ is obtained by integrating the estimated angular speed of the rotor $\hat\omega_m(t)$. Usually this type of observer is used for machine control. The observer parameters $\lambda_1$, $\lambda_2$, $k_p$ and $k_i$ are determined to tune the dynamics of estimation of the currents ($\lambda_1$, $\lambda_2$) and of the speed ($k_p$ and $k_i$). In [24], this observer was adjusted specifically for mechanical diagnosis at fixed speed; the study carried out in [24] shows the behavior of the observer and thus makes it possible to adjust the parameters $\lambda_1$, $\lambda_2$, $k_p$ and $k_i$. By linearization, a study of the dynamic behaviour of the observer makes it possible to calculate the transfer functions used to compute the speed.
Experimental Setup
The tests are carried out on the test bench presented in this section; however, the tools presented in this paper can be transposed to other synchronous machine technologies (only the observer-based technique requires the model to be modified). The test bench is presented in Figure 12. The test machines are synchronous machines from Leroy Somer with an approximate power of 8 kW. They are connected to the electrical network through two variable-speed drives. The generator operates in regeneration mode, returning energy to the electrical network. A speed control strategy or a torque control strategy can be chosen for both machines. They are linked by a COMPABLOC multiplier (LEROY SOMER) with speed ratio N = 4.57 located on the motor side. Figure 13 recalls the structure of the software sensor proposed in this paper: the angle estimate Θ(t) is used for angular sampling of the signals. This sampling mode makes the spectrum stationary in the case of a non-stationary signal. Section 4 evaluates the different techniques proposed in this paper by comparing the spectra obtained to a theoretical spectrum.
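As a sketch of the angular sampling that this software sensor enables, the code below resamples a synthetic time signal at uniform increments of the estimated angle Θ(t) and computes its order spectrum in events per revolution; np.interp stands in for a proper resampling filter, and all signal parameters are assumptions.

```python
# A minimal sketch of angular resampling and order spectrum computation; all
# signals are synthetic assumptions.
import numpy as np

fs = 10_000.0
dt = 1.0 / fs
t = np.arange(0.0, 5.0, dt)
f_rot = 5.0 + 1.5 * np.sin(2 * np.pi * 0.3 * t)   # non-stationary speed (Hz)
theta = 2 * np.pi * np.cumsum(f_rot) * dt         # shaft angle (rad)

g_d = 1.97                                        # fault order (events/rev)
x = np.sin(g_d * theta) + 0.2 * np.random.randn(t.size)

# Resample x at N uniform angle steps per revolution.
N = 64
theta_uniform = np.arange(theta[0], theta[-1], 2 * np.pi / N)
x_theta = np.interp(theta_uniform, theta, x)

# Order spectrum: the FFT abscissa is now in events per revolution.
X = np.abs(np.fft.rfft(x_theta)) / x_theta.size
orders = np.fft.rfftfreq(x_theta.size, d=1.0 / N)
peak = orders[np.argmax(X[1:]) + 1]
print(f"dominant order: {peak:.2f} ev/rev (expected {g_d})")
```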
To evaluate and compare the four proposed techniques, we work on a current signal (or current signals) in a case without fault and in a case with a fault. The fault is created with the device described in Figure 14, which disrupts operation with nine impacts per revolution. The test bench multiplier has a ratio of 4.57, which therefore corresponds to 1.97 impacts per revolution on the generator side. The different algorithms compute both the speed of rotation and the angle used for an angular sampling of the signal. Therefore, the measured signal no longer depends on time but on the angular position; the signal is denoted x(Θ). With Fourier theory, we calculate the "spectrum" of x(Θ), expressed in events per revolution (ev/rev). Thus, the spectrum is fixed despite the non-stationary operation of the electric machine, and the quality of the estimation of the speed and of the angle can be evaluated from the quality of the spectrum. On the spectrum obtained with the signal x(Θ), the information to be observed is located at g = g_d = 1.97 events per revolution: the multiplier has a speed ratio N = 4.57 on the motor side, so a defect of 9 impacts per revolution gives g = 9/4.57 ≈ 1.97 events per revolution.
Experimental Results and Discussion of Numerical Results
From the description of the experimental setup in Section 3, each technique requires the initialization of parameters, as indicated in Table 3: the identification algorithm is tuned through m1, m2 and m3; the phase control of the Concordia method through k and τ, set here to k = 100 and τ = 0.003; and the observer through λ1, λ2, kp and ki. The parameters were initialized for each method by considering the dynamics of the magnitude estimation and of the frequency estimation. In fact, the frequency of rotation evolves over time with a certain dynamic that must be considered when initializing the different parameters. The estimation of the rotational speed and of the angular position must be made with sufficient dynamics and precision for the angle to be usable for angular resampling: the angular position must be extracted quickly with respect to the rotation of the machine, so the bandwidth of the software sensor thus developed is greater than the rotation frequency of the electric machine. Several experimental tests were carried out to initialize the algorithms proposed in this paper.
Table 3. Parameters for setting the estimation methods.
For Figures 15 and 16, the first plot, named "measure", is obtained from the sensors available on the test bench. These plots are used to check and compare the efficiency of the four algorithms proposed in this paper. The first results (Figure 15) show the influence of noise on the electric current measurements. In this test, the noisy currents are directly transmitted to the algorithms; their ability to naturally filter out measurement noise is therefore tested. The recording of 30 cycles has a duration of 3 min and 45 s. First, the algorithms are applied to a faultless data set, then to a data set with a defect of 9 impacts per revolution on the motor side (g = g_d = 1.97 events per revolution). We define a ratio of the amplitudes found at g = 1.97 events per revolution with and without fault. This ratio is computed by dividing the magnitude at the defect order (with defect) by the average magnitude around the defect order (without defect).
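A sketch of this indicator is given below, assuming placeholder order spectra; in practice the two spectra come from the angularly resampled currents with and without the fault, and the averaging band is the [1.95, 2] ev/rev interval specified just below.

```python
# A minimal sketch of the fault indicator; the spectra are placeholders.
import numpy as np

def fault_ratio(orders, spec_fault, spec_healthy, g=1.97, band=(1.95, 2.0)):
    """Defect-line magnitude divided by the healthy average over the band."""
    i_g = np.argmin(np.abs(orders - g))
    in_band = (orders >= band[0]) & (orders <= band[1])
    return spec_fault[i_g] / spec_healthy[in_band].mean()

# Placeholder spectra: a flat noise floor, plus a defect line when faulty.
orders = np.linspace(0.0, 5.0, 2001)
rng = np.random.default_rng(0)
spec_healthy = 0.01 + 0.002 * rng.random(orders.size)
spec_fault = spec_healthy.copy()
spec_fault[np.argmin(np.abs(orders - 1.97))] += 0.08

print(f"fault ratio: {fault_ratio(orders, spec_fault, spec_healthy):.1f}")
```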
For the test considered, the average magnitude is computed in the band [1.95; 2] events per revolution. The results are summarized in Table 4. Note that all the methods give noisy speed estimates, except the PLL: the filters used to normalize the current signal at the PLL input have filtered out the noise. When the estimate is too noisy, there is a risk that the information about a defect is drowned in the noise. An additional filter could be added; however, its adjustment is delicate, since the information concerning a sought fault must not be filtered out. Despite the noise on the estimated speeds, the identification algorithm and the observer allow the location of the fault thanks to their natural filtering: an integrator (low-pass filtering) is needed to obtain the angular position used for the resampling. Note that the least effective method without filtering seems to be the Concordia transform. Table 4 shows that the ratio calculated for the PLL is the best, which is predictable because this method is the only one which offers band-pass filtering of the signal through its OSG filters. In a second test, the currents are filtered by a low-pass filter before estimation. From an electrical point of view, the frequency varies between 2.5 and 12.5 Hz; for the electric machine we are using, these frequencies correspond to 75 rpm and 375 rpm respectively. The currents are filtered by a first-order low-pass filter with a cut-off frequency equal to 100 Hz. The results are given in Figure 16 and Table 5. In this test, the estimated speeds are less noisy for all the methods. The speed estimated by Concordia is still noisy, especially at low frequencies: in this method, the position is first estimated by the ATAN function and then differentiated to obtain the instantaneous pulsation. Even if the phase control provides a derivative filtered at high frequencies, the amplification of the noise remains. The speed estimate has improved for the identification algorithm and for the observer. To understand the interest of comparing these results, it is important to remember the context of the work: tools for the development of a sensor synchronized on the angular position. The signals used are electrical currents and voltages, and the electrical machine is used in a non-stationary context. The quality of the estimate with respect to noise is important because the signal obtained by angular synchronization is to be used for a future diagnostic step (not treated in this paper). Each method behaves like a filter, and the effects of noise are visible in the tests carried out on the test bench. From Table 5, by comparing the magnitude ratios, the Concordia method seems the weakest at detecting the defect compared to the other algorithms. Identification and PLL have a small advantage over the observer. It can be noted that the magnitude of the fault estimated by the observer is the greatest, but the noise is also amplified.
Conclusions
In this paper, several methods have been tested that estimate simultaneously the frequency of rotation and the mechanical position of the generator shaft in order to carry out an angular sampling. The comparative tests show good results for the PLL and for the monitoring by the observer, with the Concordia transform giving the least good results. This is an interesting result because it is not intuitive: it appears here that a single current may be sufficient to achieve the isolation of the component sought.
The use of three currents does not seem to be a relevant criterion here. The observer also gives good results, but it requires voltage measurements and, above all, a model of the machine, which complicates its use in an industrial environment. For the two remaining methods, PLL and identification, the second seems to remain the most robust if the interpretation of the parameters m1, m2 and m3 is considered valid.
Optimization design for countersink depth error prediction and compensation of CFRP/Al stacks
The countersink depth accuracy is one of the most important quality indexes of the rivet holes in the modern aerospace industry, especially in the drilling of thin-walled carbon fiber reinforced plastic (CFRP) and aluminum (Al) stacks. However, the carbon fiber of CFRP is a typical difficult-to-machine material, which results in severe wear of the cutting tool and then a continuous increase of the thrust force in the practical machining process, leading to significant stack deformation and making it difficult to achieve the required countersink depth tolerance. Focusing on countersink depth control in the drilling of the thin-walled CFRP/Al stack, this paper attempts to optimize the integrated methodology of [10] in order to predict and compensate for the stack deformations more accurately. In this paper, an analytical flexible countersinking thrust force model that considers both tool wear and stack deformation is first developed with detailed theoretical analysis. Then a finite element model is established to identify the key features of the stack deformation in the countersinking process. An optimized iterative algorithm, which consists of three nested iterative loops, is designed to calculate the feasible stack deformation and to decide the error compensation. Finally, the optimized methodology is verified by groups of cutting experiments, and the results show that the methodology can effectively guarantee the countersink depth accuracy. The work in this paper helps to understand the generating mechanisms of the countersink depth error in the drilling of the thin-walled CFRP/Al stack and provides a novel approach to improve the countersink depth accuracy.
Introduction
Carbon fiber reinforced plastic (CFRP) and aluminum (Al) stacks have been widely used in the modern aerospace industry because of excellent attributes such as low specific weight and high structural efficiency [1][2]. In order to satisfy the assembly purposes of this special structure, millions of rivet holes need to be drilled, and most of the holes are made by the drilling-countersinking operation [1,[3][4]. The countersink depth accuracy is one of the crucial quality indexes to guarantee the subsequent assembly quality of the aircraft [5]. However, due to the different mechanical properties of the two materials and the low stiffness of this structure, the varying cutting and clamping forces excite the stack, leading to significant deformation and therefore lower countersink depth accuracy; thus, methods are required to predict and compensate for such depth errors. Different from general machine tools, the machining end effector has a pressure foot that clamps the stack during the whole drilling-countersinking process, which helps to reduce the cutting vibration of the thin-walled stack and to minimize the interlayer gap and burrs [6]. This machine tool structure has been widely recognized and applied in the aerospace industry. For such machine operations, there are two categories of methods to control the countersink depth.
The first is the real-time compensation strategy, in which the stack deformation is measured through force or displacement sensors on the pressure foot and compensated in real time by the feed motion control system [7][8][9]. The second is the non-real-time compensation strategy, in which the stack deformation is measured and compensated after the spindle feed movement; this method reduces machining vibration and guarantees the stability of the compensation process [4,6]. In spite of these deformation measurement and compensation methods, the properties of the varying thrust force, especially when countersinking the extremely abrasive CFRP, and the deformation mechanism of the stack are still scarcely reported, which limits further improvement of the countersink depth accuracy. Moreover, the widespread application of CAE/CAM and advanced computational methods provides new opportunities to predict and compensate for the countersink depth error. In our previous research [10], the study of countersink depth control in the drilling of a thin-walled square Al plate was carried out with theoretical and experimental analysis; the cutting force and deformation mechanism was presented first, and then an integrated iterative algorithm was established to predict the plate deformation and decide the compensation of the countersink depth error. However, for the special CFRP/Al stack with low stiffness, the iterative algorithm proposed in [10] needs to be optimized and modified, because the increase of tool wear and thrust force cannot be neglected in the countersinking process due to the poor machinability and abrasiveness of CFRP, which results in a more complex stack deformation than when drilling the single Al plate. What is more, the prediction and compensation of the countersink depth error in the drilling of the CFRP/Al stack is much more difficult, because the coupling effect of cutting force, tool wear and stack deformation has not been well understood. Given all these considerations, this paper attempts to develop a new approach to optimize the prediction and compensation of the stack deformation for higher countersink depth accuracy. In this paper, an optimized theoretical flexible countersinking thrust force model that considers both tool wear and stack deformation is first developed to help analyze the complex coupling effect. Then a finite element model is established to investigate the CFRP/Al stack deformation with the key input of the thrust force based on the practical machining process, and an optimized iterative decoupling algorithm is established to predict the countersink depth error and calculate the compensation value. Finally, the proposed model and the optimized approach are verified by groups of thin-walled CFRP/Al countersinking experiments.
Analysis of flexible thrust force
The whole CFRP/Al stack drilling-countersinking process can be simplified into two machine operations: the first is the clamping operation and the second is the drilling and countersinking operation. More details about the cutting process and the cutting tool can be found in [10,11].
In our previous work [11], the effect of cutting tool wear on the CFRP/Al stack countersinking thrust force was analyzed by developing a theoretical model: there are three distinct abrasion regions in the flank wear land of the cutting edge, and the cutting force that arises under the flank wear land is a continuously increasing component that contributes a lot to the variation of the total countersinking thrust force. Therefore, in order to predict and compensate for the countersink depth error, it is significant to analyze the coupling relation between the stack deformation, the tool wear and the thrust force. As shown in figure 1, let $l_w$ and $l_1$, $l_2$, $l_3$ respectively represent the lengths of the projections of the whole flank wear land and of abrasion regions 1, 2, 3 in the axial direction of the cutting tool, and let the nominal countersink depth be $d_n$, which, based on the practical machining process, is also equal to the countersinking feed depth of the cutting tool. According to the above definitions and assumption, the relationship between the real countersink depth $d_r$ and the stack deformation $w$ can be expressed as
$$d_r = d_n - w. \qquad (1)$$
Based on our previous research [11], an approach of infinitesimal elements is used to model the influence of tool wear on the countersinking thrust force: the countersinking edge is divided into finite elements, the differential cutting forces on the engaged infinitesimal elements are modeled first and then integrated into the resultant thrust force. The expression of the CFRP/Al stack countersinking thrust force is then the integral of the differential cutting forces over the engaged portion of the edge, from 0 to an upper limit $e_{\max}$ (Equation (2) of [11]); the cutting parameter coefficients, their detailed definitions and the theoretical expressions can be found in [11]. A more concise form has also been formulated,
$$F_t = \Delta F_w + F_0,$$
which means that the total thrust force can be expressed as the sum of the force increment caused by tool wear, $\Delta F_w$, and the initial thrust force $F_0$ [11]. According to Equation (1), the stack deformation affects the real countersink depth and changes the upper limit of integration $e_{\max}$ in Equation (2), which can be expressed as
$$e_{\max} = \frac{d_n - w}{\cos\theta},$$
where $\theta$ is the half point angle of the countersink. Therefore, the classified modeling needs to be carried out based on the stack deformation value and the lengths of the three abrasion regions.
Case one. As shown in figure 1, in this case abrasion regions 1 and 2 of the cutting tool are in complete contact with the workpiece during the countersinking process, while abrasion region 3 is in partial contact. Therefore, the following inequality holds for the upper limit:
$$\frac{l_1 + l_2}{\cos\theta} \le e_{\max} < \frac{l_1 + l_2 + l_3}{\cos\theta}. \qquad (3)$$
Based on the analysis of the differential forces on the engaged infinitesimal cutters of the countersinking edge and the modeling approach of integral operation [11], the comprehensive influence of tool wear and stack deformation on the thrust force can be formulated with the classified integrals (3) to (5). Because there are three cases of the upper integral limit in the theoretical model, the final countersinking thrust force is also classified into three cases. In case one, based on inequality (3), the whole integral limit needs to be divided into three parts, because the integration is required separately for abrasion regions 1, 2 and 3: first from 0 to $l_1/\cos\theta$, next from $l_1/\cos\theta$ to $(l_1 + l_2)/\cos\theta$, and then from $(l_1 + l_2)/\cos\theta$ to $e_{\max}$. The wear-dependent increment of the countersinking thrust force can then be expressed as
$$\Delta F_w = \Delta F_{w1} + \Delta F_{w2} + \Delta F_{w3},$$
where $\Delta F_{w1}$, $\Delta F_{w2}$ and $\Delta F_{w3}$ respectively represent the increments of the thrust force in the three abrasion regions based on the integral limit partition; the definition and evaluation of the other parameters in the equations can be found in [11].
Case two. In this case, abrasion region 1 is in complete contact and region 2 in partial contact, so that
$$\frac{l_1}{\cos\theta} \le e_{\max} < \frac{l_1 + l_2}{\cos\theta}. \qquad (4)$$
Similarly, based on inequality (4), the whole integral limit needs to be divided into two parts: first from 0 to $l_1/\cos\theta$ and then from $l_1/\cos\theta$ to $e_{\max}$. The force increment can be expressed as
$$\Delta F_w = \Delta F_{w1} + \Delta F_{w2}.$$
Case three. In this case, only abrasion region 1 is in (partial) contact, so that
$$0 \le e_{\max} < \frac{l_1}{\cos\theta}, \qquad (5)$$
the whole integral limit is from 0 to $e_{\max}$, and the force increment is
$$\Delta F_w = \Delta F_{w1}.$$
As for $F_0$, it is equal to the countersinking thrust force obtained with fresh edges [11]; it can be calculated based on the model developed in [1,12], and the relationship between this force and the stack deformation has been analyzed in [4]. Thus, the initial thrust force is not discussed further in this section.
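The sketch below illustrates the case classification and the integral limit partition described in this section. The differential force law of [11] is not reproduced above, so constant placeholder force densities per unit edge length are assumed for the three abrasion regions; only the clipping of the upper limit $e_{\max} = (d_n - w)/\cos\theta$ reflects the three cases.

```python
# A minimal sketch of the case classification; q1, q2, q3 are placeholder
# (constant) force densities standing in for the differential law of [11].
import numpy as np

def wear_force_increment(d_n, w, l1, l2, l3, half_point_angle_rad,
                         q1=30.0, q2=20.0, q3=10.0):
    """Wear-induced thrust increment dF_w (N) for stack deformation w (mm)."""
    c = np.cos(half_point_angle_rad)
    e_max = (d_n - w) / c                 # engaged edge length
    b1, b2, b3 = l1 / c, (l1 + l2) / c, (l1 + l2 + l3) / c  # region bounds
    # Integrate each (constant) density over the engaged part of its region;
    # min/max clip the upper limit exactly as the three cases do.
    f1 = q1 * max(0.0, min(e_max, b1))
    f2 = q2 * max(0.0, min(e_max, b2) - b1)
    f3 = q3 * max(0.0, min(e_max, b3) - b2)
    return f1 + f2 + f3

# Example: nominal depth 2.5 mm, half point angle 50 degrees, axial region
# projections in mm; the increment shrinks as the stack deflects.
for w in (0.0, 0.1, 0.3):
    dF = wear_force_increment(2.5, w, 1.2, 0.8, 0.6, np.radians(50.0))
    print(f"w = {w:.1f} mm -> dF_w = {dF:.1f} N")
```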
Optimization of the iterative algorithm

Based on our previous work [10], an integrated flexible iterative algorithm was proposed to search for the equilibrium state of force and deformation in order to predict the countersink depth error. However, when it comes to a thin-walled CFRP/Al stack machined with a worn tool, the algorithm proposed in the previous research [10] needs to be optimized, especially the calculation method of the compensation value. According to the above analysis of the thrust force and deformation in the drilling of the cantilevered CFRP/Al stack, the optimized iterative algorithm is designed and its computational procedure is described in the flowchart shown in figure 4: after the initialization of the parameters and the modelling of the thrust force and deformation, three nested iterative loops constitute the core of this algorithm. The inner loop, which can be referred to as the force-deformation loop, is used to calculate the deformation value, and its detailed iteration process can be found in [10]. Built on the inner loop, the middle loop, referred to as the compensation loop, is designed to compute the compensation value. The outer loop, referred to as the position loop, is used to step through the drilling positions. In this iterative algorithm, $w_{i,j}$ and $F_{i,j}$ respectively represent the calculated stack deformation and cutting force at iteration $i$ for drilling position $j$, $d_{i,j}$ is the computed feed depth of the cutting tool, $t_{i,j}$ is the computed compensation value, and $h$ is the nominal countersink depth. There are two functions in the flowchart. The first is the deformation function, $w = f_w(F)$, whose essential attribute is that the stack deformation increases with the increase of the cutting force in the machining process; the second is the cutting force function, $F = f_F(d - w)$, whose essential attribute is that the cutting force increases with the increase of the real cutting depth $d - w$. The core idea of the optimized iterative algorithm is as follows. After parameter initialization, the feed depth of the tool is $d_{0,j} = h$; however, due to the low stiffness of the stack, the equilibrium deformation $w_{0,j}$ can be calculated by the inner loop, and it is the countersink depth error without any compensation. Then, with the assignment of $w_{0,j}$ to the first compensation value $t_{0,j}$, the feed depth is modified to $h + t_{0,j}$; another equilibrium stack deformation $t_{1,j}$ is then obtained, and the difference $t_{1,j} - t_{0,j}$ is construed as the deformation increment for the compensated feed depth $h + t_{0,j}$. Through the middle loop, the feed depth is continuously revised until the compensation value reaches the convergence precision, and the final compensation value is the one computed to achieve the required countersink depth. Finally, once the countersink depth error and the compensation value at this drilling position have been figured out, the calculation of the next hole starts according to the outer loop of this iterative algorithm.

Experiment and discussion

Groups of cutting experiments were carried out in order to verify the optimized methodology proposed in this paper. The experimental apparatus and the CFRP/Al stack are shown in figure 2, and a Trulok SR903 countersink depth gage is used to measure the countersink depths. The nominal countersink depth is 2.50 mm. The clamping forces and the drilling positions have been described in section 3.
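A compact sketch of the inner and middle loops is given below, with toy stand-ins for the FEA deformation function and the thrust-force model (a linear compliance $k_s$ and a linear cutting coefficient $k_c$ are assumptions made for illustration only); the fixed point of the middle loop satisfies $t = w(h + t)$, so the real depth $(h + t) - w$ equals the nominal depth $h$.

```python
def equilibrium_deformation(d, force_of_depth, deflection_of_force,
                            tol=1e-8, max_iter=200):
    """Inner (force-deformation) loop: fixed-point iteration between the
    cutting force and the stack deformation for a given feed depth d."""
    w = 0.0
    for _ in range(max_iter):
        F = force_of_depth(d - w)        # force grows with the real cutting depth
        w_new = deflection_of_force(F)   # deformation grows with the force
        if abs(w_new - w) < tol:
            break
        w = w_new
    return w

def compensation(h, force_of_depth, deflection_of_force, tol=1e-6):
    """Middle (compensation) loop: revise the feed depth h + t until the
    equilibrium deformation matches the compensation value t."""
    # first estimate = the uncompensated countersink depth error
    t = equilibrium_deformation(h, force_of_depth, deflection_of_force)
    while True:
        w = equilibrium_deformation(h + t, force_of_depth, deflection_of_force)
        if abs(w - t) < tol:
            return t
        t = w

# toy models: F = k_c * depth and w = F / k_s, with hypothetical coefficients
k_c, k_s = 400.0, 2000.0
t = compensation(2.5, lambda d: k_c * max(d, 0.0), lambda F: F / k_s)
print(round(t, 4))   # -> 0.5 for this linear toy model, since t = h * k_c / k_s
```

The outer (position) loop would simply wrap the `compensation` call over the hole positions, updating the force model for each position's local stiffness.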
A worn cutting tool, with a drill bit diameter of 7.928 mm and a countersinking edge point angle of 100°, is used in this study, and a micrograph of the wear condition of the countersinking edge of the tool is shown in figure 5. Figure 6 shows the theoretical and experimental thrust forces before the countersinking experiments, confirming that the cutting force model works well. Based on the optimized methodology proposed in this study, the theoretically calculated deformation and compensation values are shown in figure 7. It can be seen from this figure that the compensation value is larger than the deformation value at every drilling position, and, for the cantilevered stack, both values of group B are much larger than those of group A. What's more, the increase rate of the countersinking thrust force is about 16 N per 10 holes based on the calibration, and the designed experiment order is group A/230 N first, then B/230 N, thirdly B/346 N, and finally A/346 N; thus, the calculated thrust force needs to be modified accordingly in these four groups. The experimental results of countersinking with and without compensation are shown in figure 8. In order to eliminate the effect of the cutting tool setting error and the stack installation error, a trial cut is performed before the experiment. It can be seen from this figure that the countersink depths drilled without compensation are more sensitive to changes of clamping force and stack stiffness: for example, with a clamping force of 230 N, the countersink depths of group A drilled without compensation range from 2.386 mm to 2.432 mm, with an average value of 2.415 mm, while those of the holes drilled with compensation under the same conditions range from 2.502 mm to 2.52 mm, with an average value of 2.510 mm. Therefore, the optimized iterative algorithm proposed in this paper can guarantee the countersink depth accuracy. Although the effectiveness of the methodology developed in this paper, including the thrust force model and the optimized iterative algorithm, has been validated by experiments, there still exist some deviations between the required countersink depth and the average values of the countersink depths of the holes drilled with compensation. These mainly result from two categories of influential factors: the first is the random and systematic errors of the experimental equipment, such as the feed motion error of the machine tool and the calibration error of the clamping forces; the second, and the more important, is the systematic error of the methodology itself, such as the simplification errors of the thrust force model and the calculation error of the stack deformation FEA model.

Conclusion

In this paper, an optimized methodology that focuses on the problem of accurate control of the countersink depth in the drilling of thin-walled CFRP/Al stacks has been developed, with detailed theoretical analysis and experimental verification. The proposed approach is based on modeling of the cutting force that considers the effect of both tool wear and stack deformation, identifying the key features of this deformation in the CFRP/Al stack countersinking process; the modeling of these two factors provides the input for the downstream optimized iterative algorithm to calculate the feasible deformation value and the compensation value.
Groups of countersinking experiments have been carried out, and the results show that the optimized integrated methodology proposed in this paper can effectively guarantee the countersink depth accuracy. What's more, this optimized algorithm is also a programmable alternative for designing new countersinking approaches that take tool wear and stack deformation into consideration.
Validation of Rapid and Economic Colorimetric Nanoparticle Assay for SARS-CoV-2 RNA Detection in Saliva and Nasopharyngeal Swabs Even with the widespread uptake of vaccines, the SARS-CoV-2-induced COVID-19 pandemic continues to overwhelm many healthcare systems worldwide. Consequently, massive scale molecular diagnostic testing remains a key strategy to control the ongoing pandemic, and the need for instrument-free, economic and easy-to-use molecular diagnostic alternatives to PCR remains a goal of many healthcare providers, including WHO. We developed a test (Repvit) based on gold nanoparticles that can detect SARS-CoV-2 RNA directly from nasopharyngeal swab or saliva samples with a limit of detection (LOD) of 2.1 × 105 copies mL−1 by the naked eye (or 8 × 104 copies mL−1 by spectrophotometer) in less than 20 min, without the need for any instrumentation, and with a manufacturing price of <$1. We tested this technology on 1143 clinical samples from RNA extracted from nasopharyngeal swabs (n = 188), directly from saliva samples (n = 635; assayed by spectrophotometer) and nasopharyngeal swabs (n = 320) from multiple centers and obtained sensitivity values of 92.86%, 93.75% and 94.57% and specificities of 93.22%, 97.96% and 94.76%, respectively. To our knowledge, this is the first description of a colloidal nanoparticle assay that allows for rapid nucleic acid detection at clinically relevant sensitivity without the need for external instrumentation that could be used in resource-limited settings or for self-testing. Introduction Since its identification in China in late 2019, the SARS-CoV-2-induced COVID-19 pandemic continues to overwhelm many healthcare systems and has generated a major economic burden on the world's economies. Even with the development and widespread uptake of COVID-19 vaccination programs, most countries are still struggling to control the spread of the disease and the incidence of SARS-CoV-2 infection, and mortality continues to accrue with more than a doubling of reported cases (31 December 2021 (287,115,877 cases)- 19 October 2022 (623,161,924 cases)) and more than 1 million attributed mortalities during the last year alone (https://covid19.who.int/ (accessed on 21 October 2022)). Consequently, the demand for massive scale molecular diagnostic testing of SARS-CoV-2 remains a key strategy to control the ongoing pandemic. The global gold standard for COVID-19 clinical diagnosis remains the quantitative RT-PCR detection of SARS-CoV-2 in nasopharyngeal swabs. The vast majority of PCR testing is carried out on samples collected at remote testing centers that are transported and then processed by centralized diagnostic laboratories. Typical times from patient sampling to testing by RT-PCR are greater than 6 h, and limitations in capacity frequently occur due to the lack of suitable infrastructure and trained personnel when demand is greatest. Moreover, many low-and middle-income countries lack the necessary facilities for laboratory testing, causing a clear disparity in COVID-19 testing capabilities (average daily tests per 1000 population) of greater than 100-fold between high-income and low-income countries (https://apps.who.int/gb/COVID-19/pdf_files/2022/17_02/Item2.pdf (accessed on 21 October 2022))). Indeed, there have been >3 billion COVID-19 tests worldwide, of which only 0.4% were carried out in low-income countries, despite these countries representing 9% of the global population (source: WHO_press_release_ (28 October 2021)). 
Although rapid antigen tests can help fill this gap to some extent by identifying high viral titer symptomatic patients, concerns over poor performance hinder their widespread usage as a front-line diagnostic tool [1,2]. Therefore, there is a clear and immediate need to develop instrument-free, economic and easy-to-use molecular diagnostic alternatives to the PCR and antigen tests to detect the SARS-CoV-2 virus and other infectious pathogens. Reflecting this requirement, the World Health Organization (WHO) has developed the ASSURED criteria to which tests should conform (Affordable, Sensitive, Specific, User-friendly, Rapid and robust, Equipment-free and Deliverable to end-users) [3]. Moreover, they have made point-of-care (POC) molecular COVID-19 tests the highest priority category for Emergency Use Listing (EUL) (https://extranet.who.int/pqweb/vitro-diagnostics/coronavirus-diseasecovid-19-pandemic-%E2%80%94-emergency-use-listing-procedure-eul-open (accessed on 21 October 2022)). Consequently, in recent years, there has been renewed interest in alternative technologies to PCR for nucleic acid detection. For example, loop-mediated isothermal amplification (LAMP) is the most widely used alternative to PCR for SARS-CoV-2 detection [4], although poor clinical sensitivity in low viral load samples [5], the need for a separate heating device and the relatively high price point of LAMP (compared to antigen tests) have prevented the widespread uptake of this technology. Other researchers and ourselves have shown that nanoparticle-based colloidal biosensors can be used to detect nucleic acid sequences with high specificity, although, until now, this technology has suffered from poor sensitivity [6][7][8] or has required additional technologies or equipment to reach clinically relevant sensitivity [9]. Other technologies that have been developed to detect SARS-CoV-2 RNA include the use of aptamers [10] and Cas13a bound to nanoparticles [11], as well as those that utilize the CRISPR/CAS system [12]. In addition to nucleic acid tests, there have been many attempts to improve protein immunoassay-based detection, including the use of magnetic beads [13,14], nanoparticle enhanced surface plasmon resonance (SPR) [15], selenium nanoparticles [16], electrochemical [17] and localized surface plasmon resonance (LSPR) nanostructures [18]. Below, we describe the development and validation of a novel colloidal nanoparticle assay technology, Repvit (Rapid Economic Personal VIrus Test), a molecular diagnostic test that can detect SARS-CoV-2 RNA directly from either nasopharyngeal swab or saliva clinical samples in less than 20 min, detectable by the naked eye without the need for any instrumentation. This test can be used by untrained persons with a simple four-step workflow (Figure 1), needs no complicated sample treatment such as RNA extraction and can be massively scaled with a manufacturing price of <$1.
Test Development and Characterization

Spherical 60 nm gold nanoparticles (NanoXact, in 0.05 mg/mL citrate buffer (1.5 OD)) were purchased from NanoComposix (San Diego, CA, USA), with a diameter of 61 ± 6 nm (characterized by transmission electron microscopy (JEOL 1010)), a surface charge of −60 mV and a plasmon resonance wavelength of 532 nm (characterized by dynamic light scattering (Malvern Nano ZS)). The nanoparticles were functionalized with thiolated oligonucleotides (Biomers, Ulm, Germany) as we previously described [7,20]. In brief, nanoparticles were resuspended in 0.005% sodium dodecyl sulfate (SDS) and 0.05 M phosphate buffer pH 7.8 (PB) before adding 2 µM oligonucleotides and gradually increasing the concentration of NaCl (in SDS/PB buffer) to 0.1 M NaCl, following the method of Hurst et al. [20]. After functionalization, the nanoparticles displayed an increase in hydrodynamic diameter from 69.4 nm to 73.3 nm (measured by dynamic light scattering). Oligonucleotide sequences against the E and N genes of SARS-CoV-2 were designed and selected based on thermodynamic stability, sequence homology and RNA secondary structure prediction using the ViennaRNA package [21]. Candidate sequences were screened using complementary RNA target sequences, then prioritized and optimized in a semi-empirical manner until selection of the final probe sequences. Final nanoparticle configurations were subsequently optimized for performance using a range of detergents and proteases with negative control saliva and nasal swab samples spiked with full-length SARS-CoV-2 RNA. Purified full-length RNA was obtained from the SARS-CoV-2 Slovakia/SK-BMC5 strain prepared from Vero E6 cell culture (ECACC Catalogue No. 85020206), supplied at a concentration of 25 ng/µL (equivalent to 1.48 × 10⁹ copies µL⁻¹) and accessed through the European Virus Archive (EVAg-access code 006N-03938). This RNA was used for analytical sensitivity (limit of detection (LOD)) and analytical specificity experiments, as well as a positive control during clinical validation. In addition, we carried out LOD calculations for visualization of the assay by taking photos of a series of dilutions (prepared as above), randomizing the photos and asking five independent individuals to score the photos as positive (color change) or negative (no color change). The highest dilution where the majority of observers (i.e., greater than 3/5) scored as positive was taken as the visual LOD.

Nasopharyngeal Sample RNA Testing

Purified RNA prepared from 188 nasopharyngeal swab samples of suspected COVID-19 patients (positive n = 70; negative n = 118) using the Allplex SARS-CoV-2 RT-qPCR assay (Seegene, Seoul, South Korea) was retrospectively obtained from the Microbiology Department of HUD. Individual patients' details can be found in Supplementary Table S2. RNA samples were blinded to researchers before being evaluated using the Repvit assay. After 30 min, photos of the corresponding tubes were taken, and five independent observers marked the tests as positive or negative visually. The consensus opinion was recorded for each sample before unblinding the samples and comparing them to PCR results. All statistical analyses were carried out using MedCalc software (v.14.8.1).
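The five-observer consensus used both for the visual LOD and for scoring the nasopharyngeal RNA tubes reduces to a simple majority vote; below is a minimal sketch with invented observer calls (1 = color change seen, 0 = not seen):

```python
def visual_lod(scores_by_dilution):
    """Return the lowest concentration (i.e. the highest dilution) at which
    a majority of observers scored the tube as positive, or None."""
    detected = [conc for conc, calls in scores_by_dilution.items()
                if sum(calls) > len(calls) / 2]
    return min(detected) if detected else None

# hypothetical calls from five independent observers per concentration (copies/mL)
scores = {2.1e5: [1, 1, 1, 0, 1],   # 4/5 positive -> majority
          8.0e4: [1, 0, 1, 0, 0],   # 2/5 positive -> no majority
          1.0e4: [0, 0, 0, 0, 1]}
print(visual_lod(scores))           # -> 210000.0
```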
Asymptomatic Saliva Testing

Saliva samples were collected from asymptomatic individuals as part of the local (Gipuzkoa, Basque Country, Spain) government screening program to identify individuals infected with SARS-CoV-2 in residential homes (n = 473) and in a healthcare institute (Biodonostia; n = 109). A further 53 saliva samples were obtained from symptomatic individuals, for a total of 635 samples. Individual patients' details are described in Supplementary Table S3. Samples were collected in universal viral transport media in dedicated containers containing a funnel with which to collect saliva. Collected samples were then transported to the Microbiology Department of HUD for PCR testing (Allplex SARS-CoV-2 assay, Seegene). Samples with a Ct value higher than 35 were considered negative, in line with local policies. Samples were collected between September 2021 and March 2022. The samples were subsequently tested in a blind fashion using the Repvit test. Twenty µL of saliva was taken from each sample and added to a tube containing 17 µL of nanoparticles and 93 µL of lysis buffer. After 20 min of incubation, the samples were spectrophotometrically measured with an Agilent BioTek Synergy 2 plate reader (Agilent, Santa Clara, CA, USA). For each 96-well plate used, the cutoff value (Abs(540 nm)/Abs(750 nm)) was calculated by ROC analysis from negative and positive control samples (negative saliva with a spike-in of 1 × 10⁶ copies mL⁻¹).

Nasopharyngeal Swab Testing

Three nasopharyngeal swab samples were prospectively collected from each of 320 individuals of PCR-confirmed COVID-19 status (129 positive, 191 negative). Individual patient details are given in Supplementary Table S4. One swab was used for the clinical standard test (i.e., qRT-PCR), one for the antigen test and one for the Repvit test. Swabs were assigned to each test in a random manner. Testing was carried out in a blinded fashion by researchers, with tubes anonymized and numbered. Tests were unblinded after deposition of the results with an independent researcher. qRT-PCR was carried out in Tongren Hospital, Shanghai, according to established protocols (using the Novel Coronavirus (2019-nCoV) Nucleic Acid Detection Kit from Shanghai Biogem (Shanghai, China)). In accordance with local protocols, samples with Ct values less than 40 were considered positive. The antigen test (Novel Coronavirus (2019-nCoV) Antigen Rapid Detection Kit) was obtained from Tianjing Bioscience Diagnostic Technology (Tianjing, China). For the Repvit test, nasopharyngeal swab samples were tested according to protocol, and after 20 min the color of the solution was recorded for each sample by visual inspection.

Development and Characterization of the Test

We used gold nanoparticles functionalized with specific probes targeting conserved areas of the E and N genes of SARS-CoV-2 in order to develop the Repvit test [22]. As can be seen from Figure 2A,B, when the functionalized gold nanoparticles were dispersed, a red color corresponded to an absorption peak of ~540 nm. Once the nanoparticles bound to the RNA, they formed agglomerates, which caused a red-shift and broadening of the absorbance spectra due to plasmonic coupling, thereby changing the solution to a transparent/blueish color (Figure 2A,B) [23]. Probe sequences were selected on the basis of a combination of thermodynamic criteria, the modelled secondary structure of the RNA sequence and homology searching.
Using the selected probes, we were able to detect a visible color change in 2 min in the presence of synthetic RNA fragments corresponding to the E and N genes of SARS-CoV-2 (Figure 2C). We further went on to develop a buffer system containing detergents, salts and proteinases that not only acted to stabilize the functionalized nanoparticles, but also degraded proteins in the saliva or nasopharyngeal sample matrices to release the viral RNA (and inactivate the virus), allowing for direct detection of SARS-CoV-2 RNA in a single solution without the need for a separate purification step. The test was further refined and optimized to detect whole length genomic SARS-CoV-2 RNA in COVID-19-infected clinical samples. Optimization was first carried out using pooled clinical samples and then individual samples, before arriving at the final formulation for the assay that was used for subsequent assay characterization and clinical validation (Figure 2D). Using this final formulation, we measured the analytical sensitivity (limit of detection (LOD)) with three different batches of the assay using a dilution series of SARS-CoV-2 RNA spiked into negative saliva, carried out in five replicate experiments. Based on a cut-off of 95% reproducibility between tests, we determined the LOD to be 8 × 10⁴ copies/mL, although it should be noted that we could detect down to 1 × 10⁴ copies/mL in up to 40% of the replicates. In addition, we carried out an LOD assay to determine the lowest level of virus that was detectable by the naked eye, which was calculated to be 2.1 × 10⁵ copies/mL. For analytical specificity, we tested both potentially interfering substances and related pathogens (bacterial and viral). A full list of the compounds tested can be found in the Materials and Methods section. We found no evidence of cross-reactivity or interference with the Repvit test (Supplementary Figures S1 and S2).

Detection of Extracted RNA from Nasopharyngeal Swabs

Purified RNA from 188 nasopharyngeal swab samples (70 positive and 118 negative) was retrospectively obtained from the Microbiology Department of HUD; the samples were collected from persons suspected of having COVID-19 infection between July and December 2020. Samples were anonymized before being added to the Repvit test. After 20 min of incubation at room temperature, the test solutions were photographed and sent to five independent observers for visual scoring as positive (solution color change) or negative (no color change). An example of the photo used for scoring is shown in Figure 3. Once the scoring data from the observers was collated, the consensus decision for each sample was recorded before unblinding the tests. In total, there were eight false positive and five false negative cases compared to qRT-PCR results (Supplementary Tables S1 and S2). This corresponds to a sensitivity of 92.86% (84.11-97.64%; 95% CI), a specificity of 93.22% (87.08-97.03%; 95% CI) and an accuracy of 93.09% (88.47-96.27%; 95% CI). The average Ct value of the positive samples used in this test was 15.8 (range 11.12-32.66).
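These figures follow directly from the underlying confusion counts (70 PCR positives with five missed, 118 PCR negatives with eight false calls); a short sketch that reproduces them, using the Clopper-Pearson exact intervals that diagnostic-test calculators such as MedCalc typically report:

```python
from scipy.stats import beta

def exact_ci(k, n, alpha=0.05):
    """Clopper-Pearson exact two-sided CI for a binomial proportion k/n."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# nasopharyngeal RNA cohort: 70 PCR-positive and 118 PCR-negative samples
tp, fn, fp, tn = 65, 5, 8, 110
print(tp / (tp + fn), exact_ci(tp, tp + fn))   # sensitivity 0.9286, ~ (0.841, 0.976)
print(tn / (tn + fp), exact_ci(tn, tn + fp))   # specificity 0.9322, ~ (0.871, 0.970)
print((tp + tn) / (tp + fn + fp + tn))         # accuracy 0.9309
```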
Asymptomatic Saliva Testing

Saliva samples were collected from 582 asymptomatic persons as part of a COVID-19 screening program, together with a further 53 saliva samples from symptomatic patients: a total cohort of 635 saliva samples (91.7% asymptomatic, 8.3% symptomatic), with 48 samples positive (7.6%) for SARS-CoV-2 and 587 negative (92.4%) by qRT-PCR. The average Ct value of positive samples was 28.03 (range 15.63-34.86). Saliva samples were retrospectively obtained from the Microbiology Department of HUD and anonymized before adding 20 µL to 93 µL of lysis buffer containing detergents and proteases to inactivate the virus and dissociate the saliva matrix, and then placing in the Repvit solution for 20 min. After 20 min of incubation at room temperature, the absorbance was measured, and samples were scored as positive if the Abs(540 nm)/Abs(750 nm) ratio was less than the cut-off value (calculated from negative and positive control samples), and negative if higher than that value. In total, there were twelve false positive and three false negative cases compared to qRT-PCR results (Supplementary Tables S1 and S3). This corresponds to a sensitivity of 93.75% (82.80% to 98.69%; 95% CI), a specificity of 97.96% (96.46% to 98.94%; 95% CI) and an accuracy of 97.64% (96.13% to 98.67%; 95% CI). Saliva samples from the 53 symptomatic patients were also visualized by eye and found to be concordant with the spectrophotometric results.

Nasopharyngeal Swab Testing

For the antigen test, the results compared to qRT-PCR (Supplementary Tables S1 and S4) correspond to a sensitivity of 13.18% (7.87-20.26%; 95% CI), a specificity of 100% (98.07-100%; 95% CI) and an accuracy of 64.78% (59.25-70.03%; 95% CI). For the Repvit assay, there were ten false positive and seven false negative cases compared to qRT-PCR (Supplementary Tables S1 and S4), corresponding to a sensitivity of 94.57% (89.14-97.79%; 95% CI), a specificity of 94.76% (90.58-97.46%; 95% CI) and an accuracy of 94.69% (91.63-96.88%; 95% CI).

Discussion

In this work, we describe the development and validation of an economic, rapid and easy-to-use molecular test to diagnose COVID-19 infection by non-trained persons that can be used with either saliva or nasopharyngeal swabs.
Due to the easy-to-use workflow (Figure 1), which is similar in concept to existing antigen tests, this could easily be adapted to self-test usage, and as it requires no specialized equipment or cold-chain transport or storage conditions, the Repvit test lends itself perfectly to resource-limited situations. Moreover, as it is a chemical test based on industrially available reagents without the need for the biotechnologically produced materials required for other molecular tests (i.e., enzymes in LAMP and PCR tests and antibodies in antigen tests), it is rapidly and highly scalable within existing industrial infrastructure. Furthermore, unlike antigen immunoassay tests that are reliant on the development and scaling of novel antibodies, as a nucleic acid test the Repvit technology is rapidly adaptable to novel emerging infectious disease outbreaks or strain adaptations that can render existing antibody-based technologies inoperable. However, it should be pointed out that a recent study found that the major SARS-CoV-2 variants, to date, were detected effectively by current antigen tests based on N protein binding [24]. As the Repvit technology is based on probes that target multiple regions of the N and E genes of SARS-CoV-2, there is a lower likelihood of newly acquired mutations affecting the diagnostic ability of this assay compared with PCR or similar technologies that rely on two primer sequences per amplicon. Indeed, we did not see any difference in the detection characteristics between the Wuhan-Hu-1 reference strain and subsequent variants. We validated the Repvit technology in a multi-center setting with both nasal swabs and saliva samples and demonstrated a performance better than that reported for antigen tests (Table 1), particularly with regard to asymptomatic samples [2]. The performance of the test between the different validation trials was similar, with sensitivities of 92.86%, 93.75% and 94.57% and specificities of 93.22%, 97.96% and 94.76% in RNA samples extracted from nasopharyngeal swabs, saliva and nasopharyngeal swabs, respectively (Table 1), despite different average Ct values (15.8, 25.67 and 34.35, respectively). Presumably, this reflects chronological performance improvements in the development of the assay, although it cannot be ruled out that differences in either the sample matrix type or between the origins of the cohorts might account for this variability. The latter explanation might also explain differences between the apparently high LOD of 8 × 10⁴ copies mL⁻¹ for the Repvit assay (equivalent to Ct ~25-33 [25]), compared to 10²-10⁴ copies mL⁻¹ for PCR and 5 × 10⁶ copies mL⁻¹ for antigen tests [26], and the ability of our assay to detect multiple clinical samples with higher Ct values. Indeed, differences in the proteomic composition of saliva in individuals infected with COVID-19 have been documented [27]. It should also be noted that the LOD determined by naked eye visualization was 2.6× higher than that obtained by spectrophotometric measurement. The use of nanoparticles as colloidal nucleic acid tests (NATs) has been described previously; however, until now such tests have relied on electronic, optical or electrochemical signal amplification to achieve clinically relevant sensitivity [28]. Moitra et al. used gold nanoparticles attached to DNA probes targeting the N gene that resulted in a visual color change in the presence of purified SARS-CoV-2 RNA when the mixture was further treated with RNase H at 65 °C [29].
However, the authors concluded that the LOD (0.18 ng µL⁻¹) obtained was not sufficiently sensitive to detect clinical samples [30]. The same authors therefore added a LAMP amplification step to their protocol and achieved a sensitivity of 96.6% and a specificity of 100% with 61 nasopharyngeal samples, although details of these experiments are not provided in the publication [30]. Rodriguez-Díaz et al. recently described gold nanoparticles attached to molecular beacon-like cholesterol-containing hairpin structures to detect synthetic SARS-CoV-2 RNA transcripts, but due to sensitivity issues, a PCR-based amplification step was added in order to detect clinical samples [31]. Kumar et al. used specific oligos against the RdRp gene of SARS-CoV-2 mixed with pre-extracted viral RNA and salt before heating to 95 °C and then 60 °C [32]. This system was able to detect SARS-CoV-2 RNA with a sensitivity of 85% and a specificity of 94% in a cohort of 136 nasopharyngeal samples. The Repvit technology offers nucleic acid detection at clinical sensitivity without the need for heating, a dedicated detection device or a separate RNA extraction step. Multi-step RNA extraction is a bottleneck in molecular testing that has been addressed by several groups recently in order to reduce assay turnaround times and consumable usage and to improve the usability of tests [33][34][35]. We developed a single-buffer detection system containing a combination of proteases and detergents that not only disrupts the viral membrane, allowing for the detection of SARS-CoV-2 viral RNA, but also reduces the high protein content of the sample matrices (i.e., saliva and nasopharyngeal swabs), thereby preventing non-specific fouling of the nanoparticles, a well-known problem of nanoparticle detection systems [36]. The inclusion of SDS and Triton X-100 (amongst other detergents) in the buffer system acts to inactivate the SARS-CoV-2 virus, thereby reducing the biohazard risk of the test [37]. Moreover, the addition of detergents enhances protease activity and also acts to stabilize the functionalized gold nanoparticles [38]. Nevertheless, the current research has several limitations, including that it is a colorimetric test and therefore gives a qualitative output relying on subjective visual interpretation, which could lead to user errors, particularly when used by non-trained personnel or by persons with visual impairments. Indeed, it should be noted that the best performance for specificity, although not sensitivity, was obtained when we used spectrophotometric means to determine whether samples were positive or not (i.e., asymptomatic saliva samples). Consequently, we are currently developing a simple and economic spectrophotometric reader that could be used to remove the subjectivity of the result and thereby enhance the performance of the assay. Furthermore, although the test compares very favorably against the analytical sensitivity of most antigen tests, it is still several orders of magnitude away from the sensitivity of PCR. This is a limitation of the chemical amplification system used by the test compared to the enzymatic amplification of PCR and similar systems (e.g., LAMP). Nevertheless, we believe the Repvit test has a great deal of potential for mass screening and triage as well as for self-testing and use within resource-limited settings.
In summary, we have developed a new rapid molecular diagnostic POC assay with better performance characteristics than the antigen test used in this study, one that is readily adaptable to detect many infectious diseases, both current and emerging, without the infrastructure requirements or high costs of other molecular diagnostic techniques. This will allow for its global use as a tool in the ongoing battle between mankind and infectious pathogens. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: All data associated with this study are available in the Supplementary Materials.
A New Efficient Classifier for Bird Classification Based on Transfer Learning

Introduction

Classification of different bird species in images is important for several areas [1,2]: (1) Environmental studies: identifying bird species in images can assist in monitoring and analyzing birds for ecosystem research, determining changes in populations and studying the effects of climate change and other factors on birds. (2) Conservation and biodiversity study: recognition of bird species in images can serve to study and protect bird diversity by helping to identify different species and their migratory paths. (3) Agricultural research: automatic bird classification can be used to identify species that may affect agriculture, for example, species that can harm agricultural crops or participate in natural processes that are useful for agriculture. (4) Studying bird behavior: image analysis can help study the behavior of different bird species, such as their migratory pathways, nesting, feeding, and other aspects of behavior. (5) Biological research: the classification of birds in images may be important for biological research aimed at understanding the evolutionary and genetic aspects of different bird species. (6) Environmental pollution monitoring: changes in bird populations can serve as indicators of environmental pollution, and classification of birds in images can help in assessing the impact of pollutants on the bird world. The use of deep learning and computer vision methods for automatic classification of birds makes it much easier to process a large amount of data and provides fast and accurate analysis [3,4]. As is known [5], classification is a type of problem in the area of artificial intelligence, the essence of which is to assign each object a certain class based on its characteristics. The main purpose of classification is to teach a model to recognize regularities or patterns in data and determine which class a new, previously unseen object belongs to. The task of classifying different species of birds in images belongs to the area of computer vision. Computer vision is a field of science and technology that studies and develops systems that provide computers with the ability to "see" and interpret images and videos in the same way that human vision does [6,7]. The main purpose of computer vision is to give computers the ability to recognize objects, determine their characteristics, and interact with the environment based on visual data. Problems in this area are best solved by applying deep learning methods. Deep learning is a sub-branch of machine learning that uses neural networks with a significant number of layers (deep architectures) to automatically identify and learn high-level data representations. Deep learning becomes especially powerful when using neural networks with many layers, as it allows models to automatically identify complex dependencies and abstractions in the input data [8]. Finally, we analyzed the current state of the problem studied in this research work, the classification of different species of birds in images, highlighting the main advantages and disadvantages of each of the considered works. As a result, in Section 3 we provide a comparison table of the best performance indicators of the existing models on the test data against the model proposed by us. The preprocessing of images, namely increasing their resolution, is described in detail in [9].
The authors introduced an investigative framework in [10] that explores the classification of bird species by integrating deep neural features from visual and audio data through a kernel-based fusion technique. Specifically, the deep neural features are derived from the activation values of an inner layer of a convolutional neural network (CNN). The authors employed multiple kernel learning (MKL) to fuse these features for the final classification. In experimental trials, the proposed CNN + MKL method, which incorporates both types of data, demonstrated superior performance compared to single-modal approaches, certain basic kernel combination methods, and the traditional early fusion approach. A method for identifying birds based on visual features using convolutional neural networks was proposed by the authors in [11]. Two methods are proposed in that paper: the first is attention-driven data enhancement, and the second is a compression model for distilling disjointed knowledge. As a result, a fine-grained bird classification model was created and an accuracy of 87.63% was achieved. In [12], the authors compared three approaches to solving the problem of classifying different bird species in images. The first approach uses a traditional machine learning algorithm (SVM), the second a deep learning algorithm (the ResNet50 model), and the third a deep learning algorithm in combination with transfer learning (a pretrained ResNet50 model). It was concluded that the classifier based on deep learning in combination with transfer learning achieved the best results. This experiment clearly demonstrated the ineffectiveness of using traditional machine learning methods to classify different species of birds in images, and showed that the use of deep learning methods in combination with transfer learning gives a significant advantage in the context of solving this problem. In [13], the authors investigated the use of convolutional neural networks to classify different bird species in images. Four models with different architectures based on transfer learning were used: ResNet152V2, Inception V3, DenseNet201, and MobileNetV2. The models were trained on the BIRDS 400 SPECIES dataset, containing about 50 thousand images of 400 different species of birds. The ResNet152V2 and DenseNet201 models had the best results: for ResNet152V2, accuracy = 95.45% and loss (categorical cross-entropy) = 0.8835, while DenseNet201 achieved a worse accuracy = 95.05% but a better loss = 0.6854. In the article [14], the author researched the application of deep learning methods to classify different species of birds in images. For this research, the BIRDS 400 SPECIES dataset was selected. A relatively large number of deep learning-based models, both simple and complex, were trained on the selected dataset. As the experiments showed, the best solution to this problem was achieved by models based on transfer learning. In addition, the technique of image augmentation was applied, which had a very positive impact on the final effectiveness of the models. In conclusion, the best performance was shown by the pretrained VGG19 model (the first 17 layers were frozen), trained on augmented training data, which achieved loss (categorical cross-entropy) = 0.1426.
In [15], the authors applied deep learning techniques combined with transfer learning to solve the problem of classifying different bird species in images. A number of models pretrained on the ImageNet dataset were created, based on the following deep architectures: EfficientNetB0, DenseNet201, MobileNetV2, MobileNet, ResNet152V2, VGG16, and VGG19. These models were then customized to perform the task of classifying different bird species in images using the BIRDS 400 SPECIES dataset. Experiments showed that the EfficientNetB0 model coped best with this problem. After that, all models of the EfficientNet (B0-B7) family were compared in detail. As a result, it was concluded that the EfficientNetB0 model showed the best results on the test data: accuracy = 98.60%. The research paper [16] conducts an extensive examination of bird detection and species classification, employing the YOLOv5 object detection algorithm and the EfficientNetB3 deep learning model with retraining. The dataset utilized by the authors corresponds to the one employed in our study; consequently, in Section 3 we perform a comparative analysis of the achieved outcomes. The aim of this work is to develop a new classifier using deep learning methods that would allow for high accuracy and effectively classify as many different bird species as possible in images. The novelty of this paper is summarized as follows: (1) a dataset including a large number of bird species was analyzed and preprocessed in detail; (2) the optimal architecture of the model is proposed, using the transfer learning approach; (3) in addition, a new efficient algorithm has been developed to classify different bird species in images based on deep learning; (4) through large-scale testing, it was established that the proposed algorithm significantly increases the performance indicators of the model: loss, accuracy, precision, recall, and F1 score. The rest of the text is organized as follows: in Section 2, data preprocessing is described, the optimal architecture of the classification model based on deep learning is built, and model training is discussed in detail. In Section 3, the model training process is carried out in two phases, and the test results are given. The last section contains conclusions and prospects for further research.

Dataset. To train the model in this research work, the BIRDS 525 SPECIES dataset [17] was chosen, which contains about 90 thousand images of 525 different species of birds. Each image is in color, has a size of 224 × 224 pixels, and is stored in jpg format. It is a very high-quality dataset in which there is only one bird in each image, usually occupying at least 50% of its pixels. Likewise, it should be noted that all images are original and not created by applying augmentation techniques. The dataset is pre-divided into three samples: training, validation, and test. The training set contains 84,635 images (94%), while the validation and test sets each contain 2,625 images (3%). Next, we analyze how well the dataset is balanced. For this purpose, graphs visualizing the number of images for each species of bird are built, separately for each sample of the dataset.
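A sketch of how such balance graphs can be produced is shown below, assuming the dataset's usual directory layout (one sub-folder of jpg files per species inside each split; the folder names here are assumptions):

```python
from pathlib import Path
import matplotlib.pyplot as plt

def plot_class_counts(split_dir):
    """Bar chart of the number of images per species for one dataset split."""
    counts = {d.name: len(list(d.glob("*.jpg")))
              for d in sorted(Path(split_dir).iterdir()) if d.is_dir()}
    plt.figure(figsize=(16, 4))
    plt.bar(range(len(counts)), list(counts.values()))
    plt.xlabel("species index")
    plt.ylabel("number of images")
    plt.title(f"{Path(split_dir).name} split")
    plt.tight_layout()
    plt.show()

for split in ("train", "valid", "test"):   # assumed folder names
    plot_class_counts(f"birds525/{split}")
```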
As can be seen in Figure 1, the training sample is quite imbalanced, because the number of images for different species of birds ranges from 140 to 260. However, this imbalance is not critical and can simply be ignored. At the same time, Figures 2 and 3 demonstrate that the validation and test datasets are perfectly balanced, as each species of bird in these datasets has exactly five images. Then it was decided to apply the technique of image augmentation [18]. It involves applying various transformations to the original images to create new, slightly modified versions. This increases the invariance of the model, making it more stable and able to classify images in more difficult circumstances, and contributes additional regularization by introducing randomness and diversity into the training data, thereby preventing overfitting. The following transformations were applied: (1) RandomFlip("horizontal") horizontally flips the image with a probability of 50%; (2) RandomTranslation(0.05, 0.05, fill_mode = "nearest") shifts the image vertically and horizontally by a random amount in the range of [−5%, +5%], filling the resulting empty pixels with the value of the nearest pixel from the original image; (3) RandomRotation(0.05, fill_mode = "nearest") rotates the image by a random amount in the range of [−18°, +18°], filling the resulting empty pixels with the value of the nearest pixel from the original image; (4) RandomZoom(0.05, fill_mode = "nearest") scales the image by a random amount in the range of [−5%, +5%], filling the resulting empty pixels with the value of the nearest pixel from the original image; (5) RandomContrast(0.2) adjusts the contrast of the image randomly according to the formula (x − mean) * factor + mean, where the factor is in the range of [0.8, 1.2]. The result of applying the augmentation technique is shown in Figure 4. The dataset for model training was also optimized by using the prefetch method [19], which allows data batches to be loaded asynchronously into memory even before the model needs them. This approach helps improve training performance, especially when working with large amounts of data, which is relevant for this research work; buffer_size = AUTOTUNE automatically determines the optimal buffer size for maximum performance. In order to apply transfer learning, we also used the ImageNet dataset [20]. The latter is one of the largest and most famous datasets in the field of computer vision (see Figure 5). It includes more than 14 million images belonging to more than 21 thousand classes of completely different kinds. ImageNet has become popular due to its wide variety, because it covers a wide range of different objects, from animals and plants to household items and vehicles. This dataset is publicly available and is provided to researchers free of charge for noncommercial use.

Model Architecture. Next, it was necessary to build the optimal architecture of the classification model based on deep learning. This is an essential task, as it has a significant impact on the final efficiency of the model. It is also laborious, since it requires a large number of experiments. It was decided to use a convolutional neural network architecture [21], because this deep learning method is ideal for solving image classification problems. The main characteristic of convolutional neural networks is the use of convolutional layers. They apply convolution operations to input images, thereby detecting local patterns (features) on the basis of which classification takes place.
Initially, we used the Inception V3 and VGG19 convolutional neural network architectures. With these, the highest accuracy achieved on the test sample was 92% and 95%, respectively, when training with the Adam optimizer [22]. Eventually, the experiments led to the conclusion that the most optimal solution would be to use transfer learning [23]. This is a popular approach in deep learning, the essence of which is that knowledge obtained while solving one problem is reused to solve another; that is, a model previously trained to solve one problem is reused to solve a new one. To apply this approach in this work, a ready-made convolutional neural network model, pretrained on a suitable dataset, had to be chosen. As a result of an analysis of the modern literature [16], it was concluded that the most effective choice would be the EfficientNetB5 model [24], pretrained on the ImageNet dataset described in Section 2.1. The EfficientNetB5 model belongs to the EfficientNet family, a family of models designed for computer vision problems, including image classification. These models are characterized by high efficiency with a small number of parameters. The EfficientNet family includes different versions of models, designated B0 to B7, where B0 is the least powerful model and B7 is the most powerful. EfficientNetB5 differs from smaller versions, such as EfficientNetB0, in having more parameters and a deeper architecture. Typically, a deeper architecture allows the model to adapt better to solving complex problems that require a large amount of training data. In order to load the pretrained model, the keras.applications.EfficientNetB5 function was used. The following parameters were passed to it: input_shape = (224, 224, 3) determines the size of the input data of the model; include_top = False indicates that the top layer of the model (the one directly responsible for classification based on extracted features) will not be included in the loaded model; weights = "imagenet" indicates that the model will be loaded with the weights it obtained during training on the ImageNet dataset; pooling = "max" indicates that the last pooling layer in the loaded model architecture will use the maximum pooling operation. As a result, a model containing 578 layers was loaded. After the pretrained model was loaded, it was necessary to build the architecture of the final model. To do this, the pretrained model was integrated into it and top layers were added, which directly perform the classification. Data augmentation: applies the augmentation technique to the images the model takes as input. This layer is active only during the training of the model, because when the model is used to solve real problems, augmentation would be completely redundant. Dense (fully connected layer): used to perform classification based on features extracted by the convolutional layers of the pretrained model. ReLU: an activation function for introducing nonlinearity into the convolutional neural network. BatchNormalization: used to normalize the input data by applying a transformation that makes the mean value close to 0 and the standard deviation close to 1. Dropout: used to prevent overfitting of the model by deliberately dropping a certain fraction of randomly chosen neurons. The last dense layer has the same number of neurons as the number of classes in the selected dataset and uses the Softmax activation function to output a vector of probabilities of the image belonging to each class.
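Assembled in Keras, the architecture just described could look as follows; the width of the Dense layer and the dropout rate are illustrative assumptions (the paper does not state them), while the EfficientNetB5 arguments, the augmentation block and the 525-way softmax head follow the text:

```python
from tensorflow import keras
from tensorflow.keras import layers

# augmentation block from Section 2.1 (active only in training mode)
data_augmentation = keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomTranslation(0.05, 0.05, fill_mode="nearest"),
    layers.RandomRotation(0.05, fill_mode="nearest"),
    layers.RandomZoom(0.05, fill_mode="nearest"),
    layers.RandomContrast(0.2),
])

# pretrained backbone, loaded with the parameters described in the text
base = keras.applications.EfficientNetB5(
    input_shape=(224, 224, 3), include_top=False,
    weights="imagenet", pooling="max")
base.trainable = False                 # phase 1: freeze all pretrained layers

inputs = keras.Input(shape=(224, 224, 3))
x = data_augmentation(inputs)
x = base(x)
x = layers.Dense(512, activation="relu")(x)   # assumed width of the top block
x = layers.BatchNormalization()(x)
x = layers.Dropout(0.3)(x)                    # assumed dropout rate
outputs = layers.Dense(525, activation="softmax")(x)
model = keras.Model(inputs, outputs)
```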
Model Training. After the architecture of the final model was built, it was necessary to proceed to the process of training it. This is required in order to fill the model with weights that are optimal for solving the problem of this work. This stage is key, since the optimality of the weights determines the effectiveness of the model to a very large extent. In order to start training the model, it was necessary to determine the optimization algorithm, the loss function, and the training metrics. As the optimization algorithm, it was decided to use Adam. This optimization algorithm is widely used in deep learning to train convolutional neural network models. It combines ideas from other optimization algorithms, such as stochastic gradient descent (SGD) and RMSprop, to enable efficient and adaptive updating of model weights during training. Adam uses stochastic gradient descent to update the model parameters, and it also adjusts the learning rate for each parameter separately using the previous gradients and their squares. This adaptivity helps the model converge faster and prevents it from getting stuck in local minima. Adam also uses the concept of momentum: its essence is to accumulate the history of gradients for each parameter of the model during optimization. This history helps stabilize the optimization process, especially under conditions of noise in the data or unstable gradients. As the loss function, it was decided to use sparse categorical cross-entropy [25]. This function is used in deep learning to measure the differences between the probability distribution that the model predicts and the true class distribution. This loss function is especially useful in multiclass classification problems, where each object can belong to one of the possible classes. Sparse categorical cross-entropy is calculated by comparing the true class distribution with the probability distribution predicted by the model. If the predicted probabilities correspond exactly to the true distribution, the loss function equals zero; in other cases, the loss increases, indicating differences between predictions and true values. In order to clearly track the progress of model training, it was decided to use the accuracy metric. It is worth noting that the size of the batches of images the model takes as input for training = 32. Callbacks for model training were also defined as follows: EarlyStopping(monitor = "val_loss", patience = 12) and ModelCheckpoint("model1.h5", monitor = "val_loss", save_best_only = True). EarlyStopping is used to prematurely stop the model's training process if it has stopped improving [26]. It avoids the unreasonable waste of expensive training resources when the value of the specified model performance metric ceases to improve over a certain number of epochs. monitor = "val_loss" indicates that the EarlyStopping callback will observe the value of the loss function on the validation sample of the dataset in order to understand whether the model has stopped improving; patience = 12 indicates that prematurely stopping model training requires a lack of improvement in the value of the selected metric over the 12 epochs following the best recorded value.
The ModelCheckpoint callback is used to automatically store the state of the model (including weights and architecture) during training at certain moments [27]. It can be very useful, because training on a large amount of data is expensive, and if for some reason it does not end successfully, all the previously gained weights can be lost, which would be a very unpleasant situation. This callback is also useful because it allows saving exactly the best version of the model, overwriting the old version and discarding all the others with worse values of the performance metric. filepath = "model1.h5" specifies the file name in which the model will be saved; monitor = "val_loss" indicates that the ModelCheckpoint callback will observe the value of the loss function on the validation sample of the dataset, and when this value improves, ModelCheckpoint saves the model; save_best_only = True indicates that only the best version of the model will be saved, that is, the one for which the value of the selected metric is the best. Then it was necessary to proceed directly to the process of training the model. Since trainable = False is set for the pretrained EfficientNetB5 model, its layers are frozen during training (their weights do not change); the weights change only in the top layers that were added to the final model after the pretrained one. The following indicators are used to evaluate the model's performance: loss, accuracy, recall, precision, and F1 score. The following values are used in the formulas for these indicators. True Positives (TP): the number of images that belong to a certain class and were correctly identified as this class. False Positives (FP): the number of images that do not belong to a particular class and were mistakenly identified as this class. True Negatives (TN): the number of images that do not belong to a particular class and were correctly identified as not being this class. False Negatives (FN): the number of images that belong to a certain class and were mistakenly identified as not belonging to this class. Accuracy measures the overall correctness of the model's classification. It is defined as the ratio of the number of correctly classified images to the total number of images, and can be useful in the case of a well-balanced dataset:

$$\text{accuracy} = \frac{TP + TN}{TP + FP + TN + FN}.$$

Precision measures how many of all images identified by the model as a specific class really belong to this class. It is useful in situations where it is important to maximize the accuracy of the identification of a particular class and avoid erroneous identifications of that class:

$$\text{precision} = \frac{TP}{TP + FP}.$$

Phase 1. The model was compiled with the chosen optimization algorithm, loss function, and training metrics. After that, the training process was started using the model.fit function, to which the training and validation samples of the dataset and the previously defined callbacks were passed. It was also determined that the maximum number of training epochs = 100. The training took place in the Google Colab environment, using premium resources (A100 GPU and Colab Pro+), and took several hours. As a result, the model trained for the maximum number of epochs (100), and the achieved results are given in Table 1. The obtained results are additionally visualized in Figures 7 and 8: in particular, the change in the loss indicator is shown in Figure 7, and the change in accuracy across the epochs is shown in Figure 8. As can be seen, the accuracy of the model on the validation sample of the dataset was quite high (val_accuracy = 0.9463). This result is satisfactory, but it was decided to conduct additional experiments in order to increase the effectiveness of the model.
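Continuing from the architecture sketch above, the phase-1 run with the stated settings (batch size 32, Adam, sparse categorical cross-entropy, the two callbacks, at most 100 epochs) could be sketched as follows; the dataset paths and the construction of train_ds/val_ds, including the prefetch optimization from Section 2.1, are assumptions:

```python
import tensorflow as tf
from tensorflow import keras

def make_ds(path, shuffle):
    ds = keras.utils.image_dataset_from_directory(
        path, image_size=(224, 224), batch_size=32, shuffle=shuffle)
    return ds.prefetch(buffer_size=tf.data.AUTOTUNE)   # asynchronous prefetching

train_ds = make_ds("birds525/train", shuffle=True)     # assumed paths
val_ds = make_ds("birds525/valid", shuffle=False)

model.compile(optimizer=keras.optimizers.Adam(),
              loss="sparse_categorical_crossentropy",  # integer labels
              metrics=["accuracy"])

callbacks = [
    keras.callbacks.EarlyStopping(monitor="val_loss", patience=12),
    keras.callbacks.ModelCheckpoint("model1.h5", monitor="val_loss",
                                    save_best_only=True),
]
history = model.fit(train_ds, validation_data=val_ds,
                    epochs=100, callbacks=callbacks)
```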
Phase 2. To improve the efficiency of the model, a certain number of layers of the pretrained EfficientNetB5 model were unfrozen (not including BatchNormalization layers, because changing their weights would negatively affect the effectiveness of the model) in order to continue training the model (start the second phase of training). In total, the pretrained model contains 578 layers, and the deeper the layers are, the more complex the features they form. That is, it makes sense to unfreeze the last layers, since they are responsible for the formation of high-level features that can be adapted to our task. At the same time, the initial layers form low-level features, and since these are already suitable for our task, modifying them (changing the weights of the initial layers) would not make sense.

The experiments performed demonstrated that the best results were obtained by unfreezing the last 92 layers (not including BatchNormalization layers) of the pretrained model, so this is the number of layers unfrozen for the second phase of training. In this phase, the learning rate for the Adam optimization algorithm was set to 1e-5. The callbacks for model training were also changed: EarlyStopping (monitor = "val_loss", patience = 13); ModelCheckpoint ("model2.h5", monitor = "val_loss", save_best_only = True); ReduceLROnPlateau (monitor = "val_loss", factor = 0.2, patience = 3). For the EarlyStopping callback, the patience parameter was changed; it is now 13. For ModelCheckpoint, the filepath parameter was changed; it is now "model2.h5". For ReduceLROnPlateau, factor = 0.2 indicates that the learning rate will be reduced fivefold (multiplied by 0.2) if there is no improvement in the value of the monitored metric, and patience = 3 indicates that for the learning rate to decrease, the monitored metric must fail to improve over 3 consecutive epochs.

The second phase of training once again took place in the Google Colab environment, using premium resources (A100 GPU and Colab Pro+), and also took several hours. As a result, the model was trained for 48 epochs, and the results are given in Table 2. A visualization of the results obtained in the second phase of model training is additionally presented in Figure 8. It was decided to stop the model training process, since the accuracy of the model on the validation sample of the dataset was extremely high (val_accuracy = 0.9745), and there were no further ideas for its improvement.

After the process of training the model was fully completed, it was necessary to evaluate the effectiveness of its work. The efficiency results after the model was run on the test sample of the dataset are given in Table 3. The model error matrix on the test sample of the dataset was also visualized (see Figure 9). In Figure 10, the operation of the model on specific images is demonstrated.
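For completeness, the second-phase fine-tuning configuration described above can be sketched in Keras as follows (our illustrative code, not the authors'; `base_model`, `train_ds`, and `val_ds` are assumed names):

```python
# Phase 2: unfreeze the last 92 layers of the EfficientNetB5 base,
# keeping BatchNormalization layers frozen, and fine-tune at a low learning rate.
base_model.trainable = True
for layer in base_model.layers[:-92]:
    layer.trainable = False
for layer in base_model.layers[-92:]:
    if isinstance(layer, tf.keras.layers.BatchNormalization):
        layer.trainable = False       # changing BN weights would hurt performance

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=13),
    tf.keras.callbacks.ModelCheckpoint("model2.h5", monitor="val_loss",
                                       save_best_only=True),
    # Multiply the learning rate by 0.2 after 3 epochs without improvement.
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.2, patience=3),
]
history2 = model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=callbacks)
```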
Comparing the model proposed in this research work with those proposed in the analyzed literature (see Tables 4 and 5), we can conclude that there are two significant advantages of our model: (1) high efficiency of the model for a number of bird species equal to 400: the values of the performance indicators reported in the literature on the test data were lower than those achieved in this work (loss = 0.0224 and accuracy = 99.86%).

Conclusions

The BIRDS 525 SPECIES dataset, which contains 525 species of birds, was selected, analyzed, and preprocessed. It was eventually used to train the proposed model. The optimal architecture of the model was also built using the transfer learning approach. For this purpose, the EfficientNetB5 model, which had been previously trained on the ImageNet dataset, was integrated into it.

A new optimization algorithm for the classification of different species of birds in images was developed. The model training process was carried out in two phases: (1) only the top layers were trained, and (2) the last 92 layers of the pretrained EfficientNetB5 model (not including BatchNormalization layers) were unfrozen and trained together with the top layers. The efficiency of the model was evaluated using the following indicators: loss, accuracy, precision, recall, and F1 score. An error matrix was visualized so that the model could be analyzed across the different classes. As a result, the performance indicators accuracy = 98.86%, precision = 0.99, recall = 0.99, and F1 score = 0.99 were obtained. A comparative analysis of the obtained indicators with the corresponding indicators obtained by other authors was also carried out. From the analysis of the results obtained, it can be argued that the task of classifying various species of birds was managed effectively while ensuring high accuracy.

As future research, it would be important to optimize the proposed algorithm in order to accelerate model training and enable a real-time solution [29, 30].

(3) RandomRotation (0.05, fill_mode = "nearest") rotates the image by a random amount in the range [−18°, +18°], filling the resulting empty pixels with the value of the nearest pixel from the original image. (4) RandomZoom (0.05, fill_mode = "nearest") scales the image by a random amount in the range [−5%, +5%], filling the resulting empty pixels with the value of the nearest pixel from the original image. (5) RandomContrast (0.2) adjusts the contrast of the image randomly by the formula (x − mean) * factor + mean, where the factor is in the range [0.8, 1.2].

Figure 2: Visualization of the number of images for each bird species in the validation sample of the dataset.

Figure 4: (a) An example from the selected dataset and (b) the same image after applying the augmentation technique.

Figure 5: Example of images from the ImageNet dataset.

Figure 9: Model error matrix on the test set of data.

Figure 10: Demonstration of the operation of the model on specific images; the true and predicted labels coincide for all examples shown (e.g., GREEN MAGPIE, AMERICAN AVOCET, GOLDEN EAGLE, FAIRY PENGUIN).

Precision measures how many of all the images assigned by the model to a specific class really belong to this class. It is useful in situations where it is important to maximize the accuracy of identifying a particular class and avoid erroneous assignments to that class.

Table 3: Efficiency indicators of the model run on the test sample.
Table 4: Comparison of models from the relevant literature with the model proposed in this research paper for a number of bird species equal to 400.

Table 5: Comparison of our model with the models from [16] for the BIRDS 525 SPECIES dataset in terms of precision, recall, and F1 score.

(2) High efficiency of the model for a large number of different bird species (525) that the model can classify: accuracy = 98.86%, precision = 0.99, recall = 0.99, and F1 score = 0.99.
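The per-class indicators summarized in Tables 3-5 can be reproduced with standard tooling; a minimal sketch (ours, with assumed dataset objects) is:

```python
# Illustrative evaluation on the test sample (not the authors' code): per-class
# precision/recall/F1 and the error (confusion) matrix reported above.
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

y_prob = model.predict(test_ds)            # softmax probabilities, shape (N, 525)
y_pred = np.argmax(y_prob, axis=1)         # predicted class indices
y_true = np.concatenate([y for _, y in test_ds], axis=0)  # assumes (image, label) batches

print(classification_report(y_true, y_pred))   # precision, recall, F1 per class
cm = confusion_matrix(y_true, y_pred)          # 525 x 525 error matrix (cf. Figure 9)
```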
Particle detectors under chronological hazard

We analyze how the presence of closed timelike curves (CTCs) characterizing a time machine can be discerned by placing a local particle detector in a region of spacetime which is causally disconnected from the CTCs. Our study shows that not only can the detector tell if there are CTCs, but also that the detector can separate topological from geometrical information and distinguish periodic spacetimes without CTCs (like the Einstein cylinder), curvature, and spacetimes with topological identifications that enable time machines.

Motivation

The theoretical underpinnings of time machines are grounded in general relativity, which permits solutions with intricate causal structures like Gödel's rotating universe or wormhole spacetimes. Traversable wormholes, if they exist, offer theoretical pathways to constructing time machines, where closed timelike curves (CTCs) occur within certain regions of spacetime. However, the presence of closed timelike curves typically leads to Cauchy horizons in these spacetimes, with the stability of viable time machines linked to the stability of these horizons.

Within the framework of quantum field theory (QFT) in curved spacetimes, the instability of Cauchy horizons can be assessed by analyzing the divergences in the renormalized stress-energy tensor near the Cauchy horizons. Since the background geometry containing a time machine is necessarily multiply connected and not globally hyperbolic, defining a QFT on such a background spacetime requires the introduction of new tools. Extensive research has been devoted to the general construction of QFT on multiply connected manifolds, often employing the framework of automorphic fields. The approach involves studying the quantum fields on the universal covering space, which has a trivial topology, while imposing specific automorphic conditions on them [1-3].

A classic illustration of the automorphic construction involves a scalar field on the Einstein cylinder. In this scenario, a scalar field on the cylinder is determined by the field configuration in Minkowski space (the universal covering of the cylinder), subject to (anti-)periodic boundary conditions along the spatial direction. For massless scalar fields in flat space subject to periodic or Neumann boundary conditions, or when the spatial sections of curved spacetime are compact, it is known that canonical quantization can be somewhat pathological due to the appearance of zero modes [4-8]. Since the zero mode is dynamically equivalent to a free particle, it admits no Fock representation and hence the vacuum state of the full theory is ambiguous. A more modern interpretation of this phenomenon is that the zero-mode Hilbert space does not have any preferred vacuum state, hence the full theory has a continuous family of unitarily inequivalent vacuum states [9].

Another example, which will be our main object of study in this work, is the time machine spacetimes obtained by suitable topological identification of Anti-de Sitter (AdS) spacetime, which is not globally hyperbolic itself. QFT on AdS geometry has been studied in great detail by many authors in very different contexts [10-14], often due to its current importance for the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence and holography (see, e.g., [15-17]).
In a previous paper [18], it was proposed that one can single out a "regularized" vacuum state for the zero mode of the Einstein cylinder by considering a scalar field living on a suitable topological identification of the Poincaré patch of two-dimensional Anti-de Sitter (AdS2) spacetime. In this so-called Poincaré time machine background, a fixed (large) AdS length serves as a natural regulator for the zero mode.

In this work we will extend the analysis in [18] to study the possible extraction of topological information associated with the time-machine model by an observer carrying a localized quantum-mechanical probe. In particular, we will see that a localized particle detector will be able to tell if there are closed timelike curves in some region of the spacetime far from the detector's trajectory (even though this region is hidden by a horizon). A natural framework for this is the so-called Unruh-DeWitt (UDW) particle detector model [20, 21], which has been argued to be a good model for the light-matter interaction, consisting of a localized two-level quantum system interacting with the quantum field via a dipole-type interaction as in quantum optics [22]. The localized nature of the interaction enables us to obtain useful local information about the field that is consistent with what local observers can measure. Crucially, in the case of the Einstein cylinder, the choice of zero mode will have phenomenological impact, since the UDW detector couples to all the modes of the field, including the zero mode [4-6]. For our purposes, we will use a variation of the UDW detector model called the derivative coupling model [23-25], since it is less sensitive to the infrared (IR) ambiguities that appear in two-dimensional massless QFT.

Note that the goal of this paper is not simply to study the dynamics of a UDW detector applied to an AdS background geometry and its topological identifications (this has been studied in, e.g., [26-30]). Furthermore, it is known that measurements made with localized probes can indeed be sensitive to the topology of spacetime and not only its geometry [31]. Armed with this knowledge, we also want to understand under what conditions a measurement carried out by a localized probe can distinguish between a spacetime with a time machine and spacetimes with similar geometries where there are no time machines, even when the detector does not travel on a CTC itself.
We begin by observing that the Ricci scalar of the time machine geometry considered in [3, 18] is constant and determined by the two parameters characterizing the periodic identification that leads to the time machine: the spatial period of the time-machine construction and the strength of the time machine, that is, the time warp in the identification. Note that already in the zero-curvature limiting case, taking the limit of vanishing time warp is not equivalent to taking the limit of large spatial periods: while both limits recover (locally) flat spacetime, the former leads to the Einstein cylinder and the latter to Minkowski spacetime. This suggests that for fixed curvature, keeping the spatial period fixed and varying the time-machine strength is not physically equivalent to keeping the latter fixed while varying the former. We can thus isolate the global topological and chronological information associated with the existence of the time machine from the geometrical information by fixing the local curvature in two distinct ways. In particular, we will concentrate on two regimes, which we call "slow" and "fast" time machines with the same AdS curvature, where only in the second regime is the Poincaré-AdS2 limit reached.

In this paper we will first show that the detector is sensitive to topological information of the background geometry as well as to the presence of CTCs beyond the bifurcate Cauchy horizon. We then show that for the Poincaré time machine geometry the two time-machine configurations are physically inequivalent, and that this is operationally manifest in terms of very different detector responses in each regime. The fact that local measurements of particle detectors can be used to track non-local features of a QFT in regions that are causally disconnected from the detector is a well-studied phenomenon, and can ultimately be traced back to the fact that equilibrium states of quantum fields store global information locally [31-37].

Our paper is organized as follows. In Section 2 we briefly review the scalar field theory in two-dimensional geometries, focusing on the Einstein cylinder and the Poincaré time machine geometry. In Section 3 we review the derivative coupling Unruh-DeWitt model, calculate the relevant derivative-coupling Wightman functions, and show how they are related in appropriate limits. In Section 4 we calculate the detector response and show how it can be used to understand the slow and fast time machine regimes. We adopt the convention that c = ℏ = 1, and the metric signature is such that for a timelike vector V we have g(V, V) = g_{μν}V^μV^ν < 0.
Scalar QFT in multiply connected spacetimes

For the purpose of this work, we will consider quantum field theory for a massless scalar field in three different two-dimensional spacetimes: (1) the time machine geometry constructed by topological identification of the Poincaré patch of anti-de Sitter (AdS2) spacetime, which we shall call the "Poincaré time machine" for brevity; (2) the universal covering space of the time machine, i.e., the full Poincaré patch itself; and (3) the Einstein cylinder, a flat spacetime with topology R×S¹. Following [18], we will label quantities in the time-machine spacetime with a bar: for example, the field living in Poincaré AdS2 is written as ϕ, while the field on the time machine geometry is written as ϕ̄. Quantities associated with the Einstein cylinder will be written in a manner that clearly distinguishes them from the ones for the Poincaré patch and the time machine, by using capital letters.

A real massless scalar field ϕ in (1+1)-dimensional curved spacetime conformally coupled to gravity obeys the Klein-Gordon equation

∇_μ∇^μ ϕ = 0 .  (2.1)

Recall that canonical quantization of a real scalar field gives us an operator-valued distribution φ̂(x) whose mode decomposition is given by

φ̂(x) = Σ_j [ â_j u_j(x) + â†_j u*_j(x) ] ,

where the u_j are the eigenmodes of Eq. (2.1) and the sum is understood to be over a continuous or discrete set of modes depending on the spectrum. In this work we will consider both the time machine geometry (where the basis that we will use, and hence j, is discrete) and its universal cover, namely the full Poincaré patch of AdS2 (where they will be continuous). The operators â_j, â†_j are the annihilation and creation operators satisfying the canonical commutation relation

[â_j, â†_{j′}] = δ_{jj′} ,

where δ_{jj′} represents the Kronecker delta if j is discrete or the Dirac delta function if j is continuous.

QFT in the time machine geometry and its universal cover

In this section we summarize the main results obtained in a previous paper by the authors [18] on the computation of the Wightman function for fields on a (1+1) canonical time machine M̄. The metric on the time machine geometry is given by

ds² = −e^{−2Wy} dt² + dy² ,  W := log(A)/L ,  (2.4)

where A ≥ 1 is the warp parameter, giving an estimate of the "strength" of the time machine, and L > 0 is the proper length. In order to have a time machine, points are identified via the equivalence relation (t, y) ∼ (t′, y′) if and only if t′/t = A and y′ − y = L, giving rise to a multiply connected spacetime. One can directly see that this model is fully characterized by the two parameters A and L. The universal covering space M of this multiply-connected spacetime is the Poincaré patch of AdS2 spacetime with the same line element (2.4) (one can see that the canonical time-machine model M̄ is the quotient space obtained by the previous identification of the Poincaré patch). We use universal covering techniques [2, 3, 38, 39] to construct a field theory on the multiply-connected spacetime. In simple spacetimes such as the Einstein cylinder, characterized by A = 1, these techniques reduce to just imposing boundary conditions on the field, and in our (1+1) time-machine model the procedure involves constructing the automorphic field ϕ̄ from the corresponding field ϕ living on the Poincaré patch of AdS2 (for more details on the construction of the model, see [18]).

Let us now describe a change of variables to the null coordinates that we will use in this paper. We start by writing the metric in the more standard Poincaré-patch coordinates (η, ξ) ∈ R × R⁺ by the change of coordinates

η = t ,  ξ = e^{Wy}/W .  (2.5)

The metric takes the form

ds² = (−dη² + dξ²)/(Wξ)² ,

where the AdS2 length scale is given by W⁻¹.
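Although this computation does not appear in the text, a short symbolic sketch (ours) can verify that the line element (2.4) indeed has the constant negative curvature R = −2W² quoted in Section 3:

```python
import sympy as sp

t, y = sp.symbols('t y', real=True)
W = sp.symbols('W', positive=True)
coords = [t, y]
g = sp.Matrix([[-sp.exp(-2 * W * y), 0], [0, 1]])   # line element (2.4)
ginv = g.inv()

def Gamma(a, b, c):  # Christoffel symbols Γ^a_{bc}
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b])
                      - sp.diff(g[b, c], coords[d])) for d in range(2))

def Ricci(b, c):  # R_{bc} = ∂_a Γ^a_{bc} - ∂_c Γ^a_{ba} + Γ^a_{ad}Γ^d_{bc} - Γ^a_{cd}Γ^d_{ba}
    expr = 0
    for a in range(2):
        expr += sp.diff(Gamma(a, b, c), coords[a]) - sp.diff(Gamma(a, b, a), coords[c])
        for d in range(2):
            expr += Gamma(a, a, d) * Gamma(d, b, c) - Gamma(a, c, d) * Gamma(d, b, a)
    return sp.simplify(expr)

R = sp.simplify(sum(ginv[b, c] * Ricci(b, c) for b in range(2) for c in range(2)))
print(R)  # -2*W**2: constant negative curvature, i.e. locally AdS2 with radius 1/W
```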
The null coordinates are then defined by

ζ± := ξ ± η ,

in terms of which the topological identification that gives rise to the time-machine model reads ζ± ∼ Aζ±. Finally, in order to clarify the structure of regions in the model, we display the Penrose diagram of the maximal analytic extension of the Poincaré patch. For that purpose we first transform to yet another new set of coordinates (τ, ρ) defined by

tan(ρ ± τ) = 2Wζ± .  (2.8)

Thus, the Penrose diagram of the maximal analytic extension (see Figure 1) is defined by the range ρ ∈ (0, π), τ ∈ R, with a conformal boundary I consisting of the disconnected components I_L at ρ = 0 and I_R at ρ = π.

In order to construct our field theory, we now perform the quantization via the universal covering approach on the standard Poincaré patch. There, the massless Klein-Gordon equation is (−∂²_η + ∂²_ξ)ϕ = 0 with the Dirichlet boundary condition ϕ|_{ξ=0} = 0 [18] at the conformal boundary ξ = 0, as AdS2 is not globally hyperbolic.

We obtain a canonical quantization of the field ϕ that, in terms of the ζ± coordinates, can be written as

ϕ = ∫₀^∞ dω [ a(ω) u_ω + a†(ω) u*_ω ] ,  (2.9)

where the positive-frequency eigenfunctions are given by

u_ω(η, ξ) = sin(ωξ) e^{−iωη} / √(πω) ,

and a(ω) and a†(ω) are the annihilation and creation operators acting on the Fock space defined in terms of the positive-frequency modes, with commutation relation [a(ω), a†(ω′)] = δ(ω − ω′).

We can obtain the corresponding QFT defined on the time-machine spacetime M̄ from the QFT defined on the covering space M. We need to construct the automorphic field ϕ̄, which is defined in the universal cover. The topological identification implies that the values of the scalar field in the time-machine spacetime have to coincide at both identified points,

ϕ̄(ζ+, ζ−) = ϕ̄(Aζ+, Aζ−) .

That is, the field ϕ̄ has to be automorphic under the action of the fundamental group, and this is imposed by means of a corresponding requirement on the "annihilation variable" a(ω). This requirement can be satisfied if a(ω) takes the form given in [18], parametrized by constants c_n that are fully determined by a(ω) and vice versa. We can write the decomposition of the automorphic solutions ϕ̄(ζ+, ζ−) as

ϕ̄ = Σ_{n∈Z} [ c_n u_n + c†_n u*_n ] ,

where c_n, c†_n are annihilation and creation operators acting on the Fock space defined in terms of the positive-frequency modes u_n, with canonical commutation relations [c_n, c†_{n′}] = δ_{nn′}. The normalized modes u_n under the induced Klein-Gordon inner product in the fundamental domain are given explicitly in [18] in terms of s± = sign(ζ±), and the positive-frequency modes u_n form an orthonormal basis, (u_n, u_{n′}) = δ_{nn′}. Let us remark that for the rest of the paper we perform the computations in the region where there are no CTCs in the time-machine spacetime. As we mentioned before, this corresponds to the diamond-shaped region ζ± > 0 of the spacetime diagram (for more details, see the discussion in [18]).

(1+1) Einstein cylinder

The Einstein cylinder is obtained from two-dimensional Minkowski spacetime with flat line element

ds² = −dt² + dy²  (2.16)

by a topological identification (t, y) ∼ (t, y + L), where L is the circumference of the cylinder and t ∈ R.
The Klein-Gordon equation (2.1) for the scalar field Φ is the flat-space wave equation with the periodic boundary condition

Φ(t, y + L) = Φ(t, y) .  (2.17)

The resulting massless quantum scalar field has the Fourier mode decomposition [4, 5]

Φ(t, y) = Q_zm(t) + Φ_osc(t, y) ,  (2.18)

Φ_osc(t, y) = Σ_{n≠0} (1/√(4π|n|)) [ b_n e^{−i|k_n|t + i k_n y} + h.c. ] ,  (2.19)

where k_n = 2πn/L and n ∈ Z \ {0}. The ladder operators b_n, b†_n satisfy the canonical commutation relation [b_m, b†_n] = δ_{mn} for all n, m ≠ 0. We call Φ_osc the oscillator modes and the spatially constant piece Q_zm(t) the zero mode, because it corresponds to a zero-frequency oscillator. A massless scalar field on the Einstein cylinder has a non-unique ground state due to the existence of the zero mode [4, 5]. We can define the Fock vacuum |0_osc⟩ for Φ_osc as the state satisfying b_n|0_osc⟩ = 0 for all n ≠ 0, but the zero mode is dynamically equivalent to a free particle and hence does not have a Fock ground state. The zero mode is naturally associated with position and momentum operators Q^S_zm, P^S_zm respectively (the superscript "S" denotes the Schrödinger picture), which obey the equal-time canonical commutation relation [Q^S_zm, P^S_zm] = i1. We can write the zero mode Q_zm(t) as

Q_zm(t) = Q^S_zm + (t/L) P^S_zm .

In [18], a way to prescribe a (regularized) zero-mode ground state was given by taking the zero-curvature expansion of the quantum scalar field on the time machine geometry.

Particle detector model

In this section we calculate, within second-order perturbation theory, the detector response in the time machine geometry, covering the "slow time machine" and "fast time machine" regimes. We will then compare the results with the detector responses in the Einstein cylinder and Poincaré-AdS2 geometries. In order to avoid certain IR issues due to the zero mode, we will use a variant of the Unruh-DeWitt detector model known as the derivative coupling model (see, e.g., [23-25]).

The setup

The derivative coupling variant of the Unruh-DeWitt model is defined by the interaction Hamiltonian [23-25]

Ĥ_I(τ) = λ χ(τ) μ̂(τ) ∂_τ ϕ̄(x(τ)) ,  (3.1)

where λ is the coupling constant, χ(τ) is the switching function, μ̂(τ) is the detector's monopole moment, and x(τ) is the detector's worldline parametrized by its proper time τ. The time evolution operator is given by the unitary

Û = T exp( −i ∫ dτ Ĥ_I(τ) ) ,

where T is the time-ordering operator. Working to second order in perturbation theory, the time evolution can be expanded in a Dyson series,

Û = 1 − i ∫ dτ Ĥ_I(τ) − ∫ dτ ∫ dτ′ Θ(τ − τ′) Ĥ_I(τ) Ĥ_I(τ′) + O(λ³) ,

where Θ(z) is the Heaviside function. Suppose that the initial state of the joint detector-field system is uncorrelated, i.e.,

ρ̂₀ = ρ̂_{d,0} ⊗ ρ̂_{ϕ,0} .  (3.4)

The final state of the detector can then be computed perturbatively as

ρ̂_d = ρ̂_{d,0} + ρ̂^{(1)}_d + ρ̂^{(2)}_d + O(λ³) ,

where the correction term ρ̂^{(j)}_d is of order λ^j. For our purposes, we are interested in the case where the field is in some vacuum state, namely ρ̂_{ϕ,0} = |0⟩⟨0|. This state has vanishing odd-point functions, hence ρ̂^{(1)}_d = 0 and the leading-order correction to the detector's density matrix is O(λ²).

For simplicity, let us assume that the detector's initial state is the ground state ρ̂_{d,0} = |g⟩⟨g|. Then in the {|g⟩, |e⟩} basis, we have

ρ̂_d = diag(1 − P, P) + O(λ⁴) ,  (3.8)

where P is the excitation probability, given by

P(Ω) = λ² ∫ dτ ∫ dτ′ χ(τ) χ(τ′) e^{−iΩ(τ−τ′)} A(τ, τ′) .  (3.9)

The bi-distribution A(τ, τ′) is the proper-time derivative of the Wightman two-point function W̄(x, x′) = ⟨0|ϕ̄(x)ϕ̄(x′)|0⟩ pulled back along the detector's trajectory x(τ), i.e.,

A(τ, τ′) = ∂_τ ∂_{τ′} W̄(x(τ), x(τ′)) .

Therefore, the detector response P(Ω) depends on the pullback of the Wightman function W̄(x(τ), x(τ′)) along the detector's trajectory. Finally, we need to specify the detector's trajectory. Since we will be comparing the Poincaré time machine with its universal cover, we need to restrict the detector's motion to be confined within the diamond-shaped region of the Poincaré patch with ζ± > 0.
This is to ensure that the detector encounters no CTC anywhere during its interaction with the quantum field. Let us restrict our attention to the simple case of a stationary trajectory ξ = constant, namely

x(τ) = (η(τ), ξ) = (Wξτ, ξ) ,  (3.11)

or, in null coordinates, ζ±(τ) = ξ ± Wξτ. This trajectory has constant two-acceleration of magnitude |a| = W, which is independent of ξ and reflects the maximally symmetric nature of AdS2. For the case of a time-machine geometry, in order to stay within the region without CTCs we require that the detector-field interaction be confined to the region ζ± > 0, i.e., the detector's interaction must be constrained to be within |η(τ)| ≤ ξ. More concretely, the requirement translates to

|τ| ≤ W⁻¹ .  (3.13)

Thus the detector's interaction duration can be longer when the spacetime curvature is weaker.

Calculation of the derivative two-point functions

Recall that our goal is to analyse the detector response in the time machine geometry and understand the different regimes related to the strength of the time machine. For this, it will be very useful to evaluate the derivative two-point functions A(τ, τ′) for Minkowski space, the Einstein cylinder, and Poincaré-AdS2, as they correspond to certain limits of the two-point function for the Poincaré time machine. Furthermore, these can be straightforwardly obtained from their well-known Wightman two-point functions, which in turn follow directly from standard mode-sum calculations and from exploiting the conformal flatness of two-dimensional geometries in the curved cases (see, e.g., [40]).

For convenience, let us first define the double-null coordinates u = x − t, v = x + t, so that by writing Δt = t − t′, Δx = x − x′ we have Δu = Δx − Δt and Δv = Δx + Δt. The Wightman two-point function for a two-dimensional massless scalar field in Minkowski spacetime is well known and is given by

W_M(x, x′) = −(1/4π) log[ Λ² (Δu + iϵ)(Δv − iϵ) ] ,

where Λ > 0 is some IR cutoff. The dependence on the IR cutoff is the origin of the IR ambiguity for massless fields in two-dimensional Minkowski spacetime. The Wightman two-point function for the massless field in the Einstein cylinder is given in [4]; it consists of an oscillator piece, obtained from the Minkowski two-point function by an image sum with period L, together with a zero-mode piece whose regularization is parametrized by γ below. Here L is the perimeter of the Einstein cylinder. Notice that the field is effectively given a periodic boundary condition, so that L serves as an IR cutoff. Hence, the IR ambiguity from Minkowski spacetime no longer appears in the cylindrical spacetime. The Wightman two-point function for the massless field in the Poincaré-AdS2 spacetime can be readily obtained from the mode functions in the Poincaré patch. Using the corresponding null coordinates in the Poincaré-AdS2 patch, ζ± := ξ ± η, the Wightman two-point function for the field living in the Poincaré-AdS2 patch is given by

W_AdS2(x, x′) = −(1/4π) log[ (Δζ+ Δζ−) / ((ζ+ + ζ′−)(ζ− + ζ′+)) ] ,

with the usual Δη → Δη − iϵ prescription understood. Note that this is functionally similar to the case of a static mirror geometry in flat spacetime, since ζ± take the role of the v, u coordinates respectively [41]. This follows directly from the fact that we needed to put a boundary condition at the conformal boundary of the patch, effectively placing a mirror at the boundary. Therefore, the Wightman function also does not have any IR ambiguity. The derivative two-point functions A(τ, τ′) for the different background geometries can now be readily obtained by taking proper-time derivatives with respect to the static trajectory (3.11):

A_M(τ, τ′) = −(1/2π) (Δτ − iϵ)^{−2} ,  (3.17a)

A_EC(τ, τ′) = −(1/2π) Σ_{n∈Z} (Δτ − nL − iϵ)^{−2} + A^{zm}_EC ,  (3.17b)

A_AdS2(τ, τ′) = −(1/4π) [ 2(Δτ − iϵ)^{−2} − (Δτ − 2W⁻¹ − iϵ)^{−2} − (Δτ + 2W⁻¹ − iϵ)^{−2} ] ,  (3.17c)

where Δτ = τ − τ′ and the zero-mode term A^{zm}_EC is a time-independent constant that vanishes linearly with the regulator; the parameter γ defines the zero-mode regularization in the Einstein cylinder [4, 5]. Recall that the parameter W ≥ 0 is the inverse AdS radius, corresponding to the constant Ricci scalar R = −2W².
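As a quick numerical sanity check of these expressions (our sketch, not the authors' code; a small constant iϵ stands in for the distributional prescription), one can verify that A_AdS2 in Eq. (3.17c) reduces to the Minkowski result (3.17a) as the curvature W is switched off:

```python
# Numerical check that A_AdS2 -> A_M as W -> 0 on the static trajectory,
# using the double-pole structure at Δτ = ±2/W quoted above.
import numpy as np

eps = 1e-6
dtau = np.linspace(-3.0, 3.0, 7) + 0.1   # sample separations, avoiding Δτ = 0

def A_M(dt):
    return -1.0 / (2 * np.pi * (dt - 1j * eps) ** 2)

def A_AdS2(dt, W):
    return -(1.0 / (4 * np.pi)) * (2 / (dt - 1j * eps) ** 2
                                   - 1 / (dt - 2 / W - 1j * eps) ** 2
                                   - 1 / (dt + 2 / W - 1j * eps) ** 2)

for W in (1.0, 0.1, 0.01):
    err = np.max(np.abs(A_AdS2(dtau, W) - A_M(dtau)))
    print(W, err)   # the deviation shrinks roughly like W**2 for small W
```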
Clearly, from Eq. (3.17b), the Einstein cylinder two-point function can be decomposed into the oscillator and zero-mode contributions:

A_EC = A^osc_EC + A^zm_EC .

Note that all the time dependence is carried by the oscillator contribution. Finally, we are ready to calculate the derivative of the two-point function for the time-machine geometry. From the automorphic construction, it can be shown that the Wightman two-point function of the field on a multiply-connected spacetime is related to the corresponding two-point function on its universal covering space via an image sum [1, 39] (see also [18]). This gives

A_TM(τ, τ′) = Σ_{n∈Z} ∂_τ ∂_{τ′} W_AdS2(x(τ), Λⁿ x(τ′)) ,  (3.19)

where Λⁿ denotes the action of the identification, ζ± ↦ Aⁿζ±. Numerically, it is convenient to evaluate this sum via finite truncations |n| ≤ N, keeping a small but finite iϵ, which serves as an ultraviolet (UV) cutoff. Before we proceed with the detector response calculations, it is worth pointing out certain useful limits. First, we can check that the derivative Wightman two-point functions have good IR behaviour:

lim_{L→∞} A_EC(τ, τ′) = A_M(τ, τ′) ,  lim_{W→0} A_AdS2(τ, τ′) = A_M(τ, τ′) .

This tells us that the zero-mode contribution in the Einstein cylinder scenario vanishes in the large-L limit and we recover the Minkowski spacetime. Similarly, we recover the Minkowski spacetime case when we send the curvature W to zero, i.e., the AdS radius of curvature W⁻¹ to infinity. Observe that the oscillator component A^osc_EC of the Einstein cylinder function is precisely the image sum appearing in Eq. (3.17b), whose n ≠ 0 terms scale like L⁻²; hence the image sum provides the required oscillatory corrections to the flat-space Wightman function. Second, for the derivative Wightman function in the Einstein cylinder, it is possible to actually set the zero-mode regulator γ to zero: the original Wightman function W_EC has a constant zero-mode divergence originating from the zero-mode variance ⟨Q̂_zm(t)Q̂_zm(t′)⟩ ∼ γ⁻¹ + O(γ), and the derivative coupling removes the problematic γ⁻¹ term.

One of the main goals of this work is to understand the limiting behaviour of the time machine geometry. To illustrate this problem, suppose we would like to probe the weak-curvature regime W ≈ 0⁺. Because W = log(A)/L, there are actually two ways to take the limit: (i) fix L and let A → 1, or (ii) fix A and let L → ∞. Intuitively, we may expect that Case (i) approaches the Einstein cylinder in the limit, which has zero curvature and finite L, while Case (ii) approaches Poincaré-AdS2 in the limit, since it is an open spacetime with nonzero curvature. We will see that this is indeed the case in the next section.

Detector response

For simplicity, in this work we consider a Gaussian switching

χ(τ) = e^{−τ²/(2T²)} ,

where the switching width T prescribes the effective duration of the interaction. This allows for exact computations in some cases (e.g., Minkowski spacetime and the Einstein cylinder) and is numerically favourable. However, there is a small technical detail we need to deal with: recall from Eq. (3.13) that we require |τ| ≤ W⁻¹ in order for the interaction to be confined within the no-CTC region. From a numerical viewpoint, this seems to be at odds with our choice of non-compact Gaussian switching, but we will argue later that this choice is a matter of practical convenience. For Gaussian switching, the requirement in Eq. (3.13) amounts to constraining the Gaussian width T to be such that

w := WT ≪ 1 .

This sets the scale for the size of W in units of T. We will also use the dimensionless energy gap ω := ΩT and the dimensionless spatial period ℓ := L/T. Furthermore, it is convenient to write A = 1 + δ for some δ > 0, so that

w = (T/L) log(1 + δ) ≈ δ/ℓ for δ ≪ 1 .

Note that for the derivative coupling model, the coupling constant λ is dimensionless in Eq. (3.1), since ϕ̄ has dimension zero in (1+1) dimensions.
The requirement that w ≪ 1 can be achieved in two ways, borrowing the terminology from [19]: (i) Slow time machine regime: for fixed ℓ, choose δ ≪ 1 such that w ≈ δ/ℓ ≪ 1. (ii) Fast time machine regime: for fixed δ, choose ℓ ≫ 1 such that w = log(1 + δ)/ℓ ≪ 1. The terminology is motivated by the fact that w measures the amount of redshift in the identification ζ± ∼ Aζ± ≡ (1 + wℓ)ζ± for a fixed dimensionless circumference ℓ. Because the dimensionless curvature scale w is required to be small, we can think of both regimes as being weakly curved over the duration of interaction T. Furthermore, it is possible to set w to be equal for Cases (i) and (ii); hence the differences in the detector response are purely topological in nature. This generalizes the result in [31].

For Minkowski spacetime, we see by inspection that A_M is proportional to the pullback of the (3+1)-dimensional Wightman function in Minkowski spacetime to an inertial observer at rest:

A_M(τ, τ′) = 2π W^{(3+1)}_M(τ, τ′) .

Since the right-hand side can be computed using the well-known plane-wave expansion of the (3+1)-dimensional vacuum Wightman function, the detector response (3.9) can be written in terms of the Fourier transform of the switching function:

P_M(Ω) = (λ²/2π) ∫₀^∞ dk k |χ̂(Ω + k)|² ,  χ̂(ω) := ∫ dτ χ(τ) e^{−iωτ} ,

where k = |k|. For the Einstein cylinder, we can use the fact that we can split

A_EC = A_M + A^reg_EC ,

where A^reg_EC = A_EC − A_M is a regular, well-behaved function. This splitting leads to simpler numerical calculations, as it reduces the computation of the detector response to a one-dimensional (numerical) integration:

P_EC = P_M + P^reg_EC ,  (4.8a)

P^reg_EC = λ² √π T ∫ du e^{−u²/(4T²)} e^{−iΩu} A^reg_EC(u) .  (4.8b)

For the AdS2 spacetime, we can also do this:

P_AdS2 = P_M + λ² √π T ∫_{C(ϵ)} du e^{−u²/(4T²)} e^{−iΩu} A^reg_AdS2(u) ,

where the contour C(ϵ) runs over the real axis R but is deformed into the lower complex plane near the poles u = ±2W⁻¹ of A^reg_AdS2. Finally, the detector response for the time machine geometry is not easy to calculate, because we need the image sum (3.19) and we no longer have the stationarity property A(τ, τ′) = A(τ − τ′, 0) that the previous three geometries possess. However, we can still evaluate the integral directly by breaking (3.19) up term-wise and calculating the integrals separately. That is, we write

P_TM = λ² Σ_{n∈Z} ∫ dτ ∫ dτ′ χ(τ) χ(τ′) e^{−iΩ(τ−τ′)} ∂_τ ∂_{τ′} W_AdS2(x(τ), Λⁿ x(τ′)) .

In practice, we will need to truncate the summation over n from −N to N for sufficiently large N, and one can verify the convergence numerically. We also compared this with another expression for the time machine Wightman function, in which the real part is written in terms of the Jacobi elliptic theta function [18]; in the parameter regimes where both are numerically stable, they give the same results.

Before we present our results, let us address the issue mentioned earlier that the requirement (3.13) seems to be at odds with the choice of a non-compact Gaussian switching function. In practice, due to the strong support of the Gaussian, we numerically truncate the Gaussian to a finite duration while performing an appropriate contour integration for the calculation of the excitation probability. This truncation may appear unsatisfactory from a mathematical standpoint, since a sharp truncation can lead to UV divergences. Suppose we cut the Gaussian so that it is only supported on J := [−5T/2, 5T/2], with effective width 5T. The truncated Gaussian is then equivalent to multiplying the full Gaussian by an indicator function χ_J(τ) supported on J, and the resulting excitation probability is, for sufficiently small ϵ, indistinguishable from the one obtained using the full Gaussian switching. In effect, what the UV regulator does is to "smoothen the corners" in χ_J(τ), and from this we can instead interpret the original Gaussian switching as a "UV-regulated" version of the compact, truncated switching demanded by Eq. (3.13).
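To make this procedure concrete, the following minimal numerical sketch (ours, not the authors' code) evaluates the truncated image sum for the time-machine response on a grid, with the parameters of Figure 2 in mind. The explicit mirror-image form used for the image terms and the constant iϵ regulator are our simplifying assumptions based on the static-mirror analogy discussed above:

```python
import numpy as np

# Dimensionless parameters modeled on Figure 2: omega = Omega*T, w = W*T, l = L/T
T, w, l, Omega, lam = 1.0, 0.05, 100.0, 0.1, 1.0
W = w / T
A = np.exp(w * l)            # warp parameter, A = e^{W L}
xi = 1.0 / W                 # static trajectory xi = const (response is xi-independent)
v = W * xi                   # |d zeta_pm / d tau| along the trajectory
eps, N = 1e-3 * T, 10        # constant i*eps regulator (assumption) and image-sum cutoff

tau = np.linspace(-5 * T, 5 * T, 1201)
t1, t2 = np.meshgrid(tau, tau, indexing="ij")
chi2 = np.exp(-(t1**2 + t2**2) / (2 * T**2))   # chi(tau) * chi(tau')

zp = lambda t: xi + v * t                      # zeta_+ on the trajectory
zm = lambda t: xi - v * t                      # zeta_- on the trajectory

Atm = np.zeros_like(t1, dtype=complex)
for n in range(-N, N + 1):
    a = A ** n
    # d_tau d_tau' of the mirror-form W_AdS2(x(tau), Lambda^n x(tau')),
    # with Lambda^n acting as zeta_pm -> A^n zeta_pm on the second argument
    XA = zp(t1) - a * zp(t2) - 1j * eps
    XB = -zm(t1) + a * zm(t2) - 1j * eps
    XC = -(zm(t1) + a * zp(t2)) - 1j * eps
    XD = zp(t1) + a * zm(t2) - 1j * eps
    Atm += -(a * v**2 / (4 * np.pi)) * (XA**-2 + XB**-2 - XC**-2 - XD**-2)

d = tau[1] - tau[0]
P = lam**2 * np.real(np.sum(chi2 * np.exp(-1j * Omega * (t1 - t2)) * Atm)) * d * d
print(P)   # excitation probability estimate
```

Increasing N and the grid resolution until P stabilizes mirrors the convergence check described in the text.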
One could also insist on using genuine smooth compactly supported switching functions as in [42, 43], but since the UV-divergent piece is purely a property of the switching, and not of the trajectory, the background spacetime, or the detector parameters, this will not modify our analysis in any significant manner.

Let us first demonstrate the transition from the slow to the fast time machine regime by fixing the dimensionless curvature scale w = 1/20 and varying ℓ ∈ [10, 150]. For clarity, we chose to vary ℓ = log(A)/w to change the strength of the time machine, since for fixed curvature w varying ℓ is equivalent to varying A. Figure 2 shows that in the limit of large ℓ, corresponding to the fast time machine, the detector response for the time machine geometry approaches the value of the detector response in the Poincaré-AdS2 patch. Since the local curvature scale is fixed by w, this shows that the fast time machine geometry is locally indistinguishable from the standard AdS background. In contrast, if the time machine geometry is slow (relative to the interaction time T), then the detector response becomes more similar to the detector response in the Einstein cylinder. A deviation always persists, since the Einstein cylinder has zero spacetime curvature while the time machine geometry is set to have finite nonzero curvature. Note that we have fixed the local curvature parameter w of the time machine geometry to be equal to that of the corresponding Poincaré-AdS2 limit: consequently, the differences in the detector responses as we vary ℓ are purely topological in nature. This can be regarded as a curved-spacetime generalization of the result in [31], which distinguishes Minkowski spacetime from the Einstein cylinder spacetime in (3+1) dimensions.

A different way of interpreting the slow and fast time machine regimes can be given if we instead fix ℓ and study how the detector response varies as a function of the curvature parameter w. We chose ℓ = 100 to ensure that the spacetime is large enough for the support of the switching to be far from the Cauchy horizons. The results are shown in Figure 3. In the large-curvature regime, the detector response for the time machine geometry approaches the value of the detector response in the Poincaré-AdS2 patch, while for weak curvature the detector response approaches that of the Einstein cylinder. An equivalent way of saying this is that if the time machine geometry is chosen to be of the same "size" as the Einstein cylinder, the strength of the time machine is controlled by the curvature. Figure 3 also seems to suggest another interesting interpretation: since Poincaré-AdS2 is an open universe, it appears as though stronger gravitational fields make it harder to distinguish the global topology of the spacetime, while for very small w the detector can distinguish Minkowski spacetime from the Einstein cylinder through the constant zero-mode contribution.
Conclusion and outlook

In this work we have shown that local measurements carried out with particle detectors can reveal whether the spacetime in which the detector moves possesses CTCs. Remarkably, this is true even though the CTCs are causally disconnected from the detector and protected by a horizon. We have done this by studying a particular spacetime with CTCs (arising from the topological identification of the Poincaré patch of two-dimensional AdS spacetime) and comparing the limit where the time warp in the CTCs is strong with the limit where the CTCs disappear (and we recover the flat Einstein cylinder). We have also compared the response of the detector in the time machine case with the response of the detector in pure AdS, showing that the detector can distinguish between the effects of the geometry of spacetime and the topological effects associated with the spacetime's chronological structure.

In [19] the holographic bulk dual to a time machine geometry is constructed in order to study the connection between chronology protection and the geometry of the bulk spacetime dual to the time machine geometry. One crucial difference is that the field that lives in the time machine geometry is there taken to be a strongly interacting conformal field theory that admits an explicit semiclassical dual spacetime. It is unclear whether the two time machine regimes, which are separated by the size of the zero-mode phenomenon in the free theory, have an analog in the strongly interacting case and, if they do, what the corresponding statement in the bulk dual is. The study of the UDW model coupled to strongly interacting fields has also been largely unexplored. We leave these two questions for future work.

Figure 1. Penrose diagram of the maximal analytic extension of the Poincaré patch, with the conformal boundary components I_L at ρ = 0 and I_R at ρ = π. The Poincaré patch, our covering space M, covers the colored region ρ > |τ − π/2| and is bounded by two past and future Cauchy horizons at ρ = |τ − π/2|, i.e., at ζ+ = ∞ and ζ− = ∞, respectively. Beyond the horizons, AdS spacetime possesses CTCs. The time-machine model M̄ introduces new Cauchy horizons H′± defining new regions beyond them with CTCs, such that only the diamond-shaped region ζ± > 0 is free of CTCs.

Figure 2. Detector response as a function of the dimensionless circumference ℓ = log(A)/w. We fix the dimensionless parameters to be ω = 0.1, w = 0.05, γ = 0.01, N = 10. In the limit of large ℓ, the detector response for the time machine geometry approaches the value of the detector response in the Poincaré-AdS2 patch. In the limit of small ℓ, the detector response approaches instead the case of the Einstein cylinder.

Figure 3. Detector response as a function of the dimensionless curvature parameter w. We fix the dimensionless parameters to be ω = 0.1, ℓ = 100, γ = 0.01, N = 15. In the limit of large w, the detector response for the time machine geometry approaches the value of the detector response in the Poincaré-AdS2 patch. In the limit of small w, the detector response approaches instead the case of the Einstein cylinder.
A Review of Some Recent Work in the Area of Imaging and Optical Signal Processing

A concise review is presented of research recently carried out within our research group. Topics discussed include: (i) imaging through turbulent media using Lucky Imaging combined with synthetic apertures; (ii) numerical algorithms and simulation of quadratic phase systems, i.e., the Fast Linear Canonical Transform, appropriate sampling, aliasing, and the Wigner Distribution Function; (iii) controlling speckle in such optical systems and speckle-based metrology; (iv) digital holographic systems and their application; and (v) optical encryption and multiplexing.

©2010 Optical Society of America
OCIS codes: 070.4560, 080.2730, 100.2000, 200.2610, 200.305, 200.4560, 200.4740, 110.0115, 070.2580, 000.4430, 070.2580, 030.6140, 110.6150

Introduction

In this paper I will try to very briefly introduce several research topics which I, collaborators, and several current and former graduate students have been actively involved in studying. Most of the technical results below have already appeared, or will soon appear, in the literature, and a fairly thorough listing of our recent papers is provided to facilitate the interested reader. One of the novelties of the presentation below lies in the overview provided. The other is the interconnections made between the various topics in Optical Signal Processing (OSP). Every research topic was informed by my and my co-authors' shared backgrounds in Fourier Optics and the recent extension of these ideas using the Collins ABCD matrix, Linear Canonical Transform (LCT), Wigner Distribution Function (WDF) and what is sometimes referred to as Phase-Space Optics (PSO) [1]. Furthermore, in every topic discussed below the development and use of numerical algorithms plays a crucial role in processing or interpreting the experiments performed. It is also fair to say that the algorithms themselves are developed to 'fit the physics efficiently' and that often, in trying to achieve such an optimum fit, interesting insights into the systems being examined and the models themselves emerge. Thus these tools and ideas provide a unifying picture which allows ray matrices, paraxial wave optics and the numerical simulation of such optical Quadratic Phase Systems (QPS) to be brought together in a systematic and insightful way.

Measurement: Phase, Motion and Speckle

A detailed overview of recent work in the area of PSO, to which we contributed, is given in [1]. PSO is an approximate phase-space representation firmly grounded in the more physical WDF. While other such representations of signals exist and have been used to interpret optical systems, e.g., the Ambiguity Function [2], the WDF and PSO offer simplicity. Furthermore, with use they can provide intuitive insights and suggest practical applications. One such insight is that the phase of a signal can be retrieved without the use of interferometric techniques by capturing projections (marginals) of the same signal's WDF in different domains. To do this, in [3] the Fractional Fourier Transform (FRT) distributions (intensities) for different FRT orders were imaged. Thus the FRT, which is a special case of the LCT, was used and the signal phase extracted experimentally. The effects of the FRT orders, noise in the system, and the spatial frequency response were quantified.
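As a concrete illustration of the phase-space quantities involved, the following short numpy sketch (ours, not code from the cited papers) computes a discrete Wigner Distribution Function of a sampled one-dimensional field; note the factor-of-two frequency scaling inherent to discrete WDF implementations:

```python
import numpy as np

def wdf(f):
    """Discrete (pseudo-)Wigner distribution of a 1-D complex field f.
    Returns an N x N real array W[x, k] (up to a factor-of-two frequency scaling)."""
    N = len(f)
    W = np.zeros((N, N))
    m = np.arange(N)
    for n in range(N):
        corr = f[(n + m) % N] * np.conj(f[(n - m) % N])    # f(x+x') f*(x-x'), cyclic
        W[n, :] = np.fft.fftshift(np.fft.fft(corr)).real   # FT over the correlation lag
    return W

# Example: a quadratic-phase (chirp) signal gives the expected sheared line in phase space.
x = np.linspace(-1, 1, 256)
W = wdf(np.exp(1j * 40 * np.pi * x**2))
```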
Based on another insight provided by the use of the WDF and LCT, it has been shown that simple Speckle Photography systems can be used to measure in-plane tilts and rotations by capturing two sequential images of the surface before and after motion [4, 5]. Once again these images must be captured in different domains, i.e., after the light has passed through different QPSs, undergoing different LCTs or ABCD matrix transforms. This sounds complex but can be understood very simply using the graphical techniques associated with PSO [4, 5]. It was found necessary, when using such speckle-based metrology systems, to use correlation techniques in order to avoid ambiguity in relation to the direction of motion. The measurement process, including the resolution and dynamic range of the system, thus depended critically on the speckle correlation properties. In order to design and interpret the results from such systems it was necessary to determine how the speckle evolved as it passed through apertures and through QPSs [6-9]. In our most recent work in this area we have shown how longitudinal and lateral speckle size depends on the aperture (shape, size) and varies on- and off-axis [10, 11].

Fast Algorithms, Sampling and Aliasing

The Fast Fourier Transform (FFT) is a way of numerically calculating, in O(N log N) time, the Discrete Fourier Transform (DFT), which otherwise requires a calculation time proportional to N² (where N is the number of samples being processed). The Fourier Transform (FT) is a very special case of the LCT, so it seems reasonable to ask whether a Fast LCT algorithm (FLCT) exists. It does, and we have recently provided a very detailed description of such an algorithm in [12]. Given such a fast algorithm, the selection of the number of samples N then becomes critical. It is well known that the Space Bandwidth Product (SBP), i.e., the product of the spatial extent of a signal and its spatial frequency bandwidth, can be used to estimate the number of regular samples (uniform sampling rate) necessary to meet the Shannon sampling criterion (i.e., the Nyquist rate). We note that once this rate of sampling is achieved, the analogue signal can be extracted from the digitally processed representation. However, real signals cannot be finite in both the space and spatial frequency domains, and therefore replication and aliasing (overlap) occur. These effects are well understood in Fourier Optics but are more complex to examine when using PSO. The effects of compactness (finite extent) in various domains on sampling, and thus on performing calculations, are explored in [13], while the rates of sampling used are further explored in [14, 15]. Furthermore, while the FLCT discussed above provides one approach to rapidly calculating the LCT of a signal, returning to the matrix description of such systems and decomposing the system ABCD matrix into the product of several sub-system matrices, other fast algorithms can be identified. The Fresnel Transform (FST) is another special case of the LCT, and describes paraxial propagation in free space. For obvious reasons the FST has received a great deal of attention in the literature. At the moment it is of particular interest because of its use in processing Digital Holographic image data. Our approach has allowed us to re-evaluate one of the most popular decomposition-based algorithms, the Direct Method of calculating the FST [16].
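To illustrate the kind of decomposition at play, here is a hedged numpy sketch (ours, not the implementation analysed in [16]) of the single-FFT "Direct Method" for the Fresnel transform, in which the field is pre-multiplied by a chirp, Fourier transformed, and post-multiplied by another chirp:

```python
import numpy as np

def fresnel_direct(u0, dx, wavelength, z):
    """Single-FFT (Direct Method) Fresnel propagation of a 1-D field u0
    sampled with pitch dx over a distance z. Returns the field and its new pitch."""
    N = len(u0)
    k = 2 * np.pi / wavelength
    x = (np.arange(N) - N // 2) * dx              # input coordinates
    dx_out = wavelength * z / (N * dx)            # output pitch fixed by the FFT
    x_out = (np.arange(N) - N // 2) * dx_out      # output coordinates

    pre = np.exp(1j * k * x**2 / (2 * z))         # input chirp
    post = (np.exp(1j * k * z) / (1j * wavelength * z)
            * np.exp(1j * k * x_out**2 / (2 * z)))  # output chirp and prefactor
    U = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(u0 * pre))) * dx
    return post * U, dx_out
```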
We have also been able to extend the PSO analysis in such a way as to allow us to re-interpret aliasing as related to the WDF cross terms produced by sampling [17].

Optical Encryption

There has been significant interest in encryption schemes which can be implemented optically [18-23]. Practical optical implementation of such systems is difficult [19, 22], and the security and robustness of these systems is still a matter of study and analysis [18, 20-23]. However, not only do such systems pose a very interesting problem for the fast algorithms described above in Section 3, it has also been shown that the problems of encryption and multiplexing can be discussed fruitfully using PSO [24].

Lucky Imaging with Synthetic Apertures (LISA)

In a recent paper [25] it was shown that combining the technique of lucky imaging with that of aperture synthesis offers advantages which allow atmospheric turbulence to be effectively eliminated over a large image field. Lucky imaging involves rapidly capturing many images and then selecting out some small subset of these images, captured when at least some part of each selected image is close to being diffraction limited (minimally affected by the temporally fluctuating turbulence present). Aperture synthesis involves capturing images using sets of subapertures and then combining these in such a way that the diffraction-limited image from a larger aperture is well approximated. By using smaller apertures the likelihood of capturing a lucky image increases. In [11] and [25] the effects of the size and separation of apertures, the criteria used to select lucky images, and the effects of Kolmogorov turbulence on the probability of capturing a lucky image and on the quality of the image produced have been examined. Work to date on LISA has been performed using purely Fourier Optics techniques.

Conclusion

Our work to date has raised many questions but also offered fruitful insights into possible applications of OSP.

Acknowledgements: We acknowledge the support of Enterprise Ireland, Science Foundation Ireland, and the Irish Research Council for Science, Engineering, and Technology under the National Development Plan.
Association between FLT3-ITD and additional chromosomal abnormalities in the prognosis of acute promyelocytic leukemia

Objectives: Internal tandem duplications of the Fms-like tyrosine kinase 3 gene (FLT3-ITD) and additional chromosomal abnormalities (ACA) are prognostic factors in patients with acute promyelocytic leukemia (APL). This study aimed to determine the effect of the association between FLT3-ITD and ACA in the prognosis of APL.

Methods: This was a retrospective cohort study including 60 patients with APL treated with all-trans retinoic acid (ATRA) and chemotherapy. Five-year overall survival (OS) and progression-free survival (PFS) were analyzed in patient groups according to the presence of FLT3-ITD and ACA.

Results: FLT3-ITD was an independent adverse factor for 5-year PFS, and ACA was an independent adverse factor for 5-year OS. There were significant differences in OS and PFS among the groups: FLT3-ITD-negative without ACA, FLT3-ITD-positive without ACA, FLT3-ITD-negative with ACA, and FLT3-ITD-positive with ACA. The OS times were 52.917, 45.813, 25.375, and 23.417 months, and the PFS times were 48.833, 38.563, 23.250, and 17.333 months, respectively.

Conclusion: FLT3-ITD and ACA are associated with the poorest OS and PFS outcomes in patients with APL treated with chemotherapy plus ATRA.

Introduction

Acute promyelocytic leukemia (APL) is a subtype of acute myelogenous leukemia characterized by the proliferation of abnormal promyelocytic cells, with rearrangements involving the retinoic acid receptor alpha (RARa) gene located at 17q21. About 95% of patients with APL show the translocation (15;17)(q22;q21) including promyelocytic leukemia (PML)/RARa,1 while the remaining patients show the nucleophosmin 1 (NPM1)/RARa translocation (5;17)(q35;q21) or the promyelocytic leukemia zinc finger (PLZF)/RARa translocation (11;17)(q23;q21).2 Abnormal promyelocytic cells with the (15;17)(q22;q21) translocation are considered to be susceptible to all-trans retinoic acid (ATRA) and arsenic trioxide (ATO), and APL is thus considered to be a curable malignant disease.3 However, in addition to this major translocation, patients with APL may have other chromosomal abnormalities that may interfere with the therapeutic efficacy of these specific drugs. Previous studies found that these additional chromosomal abnormalities (ACA) were associated with poorer treatment outcomes and survival times.4-6 Unfortunately, a relatively high percentage of patients with APL (29%-43%) may have ACA at diagnosis.7

Internal tandem duplication of the Fms-like tyrosine kinase 3 gene (FLT3-ITD) is associated with increased survival and proliferation of hematopoietic progenitors and is related to leukocytosis in patients with APL, and thus plays a role in the pathogenesis of APL.8-10 Given the high incidence of FLT3-ITD in APL (13%-40%), it is important to understand its potential effect on patient survival.11,12 However, few studies have investigated the association between FLT3-ITD and ACA and their effect on survival in patients with APL. The current study aimed to determine the role of the association between FLT3-ITD and ACA in the prognosis of APL.

Patients

This was a retrospective cohort study conducted at the Center for Hematology and Blood Transfusion, Bach Mai Hospital, Hanoi, Vietnam, from January 2015 to December 2019. The study enrolled all consecutive patients newly diagnosed with APL according to the FAB classification, with positive PML/RARa, who were treated with chemotherapy plus ATRA.13
The Institutional Review Board of Hanoi Medical University waived the need for approval and patient consent because of the retrospective observational nature of the study. All patient details were de-identified.

Reverse transcription polymerase chain reaction

Bone marrow samples obtained from all patients at the time of diagnosis were analyzed by reverse transcription polymerase chain reaction to detect PML/RARa and FLT3-ITD.

Cytogenetics

Bone marrow samples obtained at diagnosis were also subjected to chromosome analysis to detect t(15;17)(q21;q22) and ACA.

Definitions

Risk groups were classified according to the Sanz score.14 Disseminated intravascular coagulation (DIC) was defined according to the criteria of the International Society on Thrombosis and Haemostasis.15 The response criteria for induction therapy were determined according to the International Working Group criteria.16

Statistical analysis

The patients were grouped according to the presence of FLT3-ITD and ACA. Patients were further divided into three groups based on the number of ACA: no ACA, one ACA, and two or more ACA. Differences in quantitative variables (hemoglobin, white blood cells, platelets, fibrinogen, D-dimer, bone marrow cell count, bone marrow blast percentage) among groups were analyzed by one-way ANOVA, and differences in qualitative variables (DIC and risk group) were analyzed by the χ² or Fisher's exact test. Univariate and multivariate analyses were performed using the Kaplan-Meier method and Cox proportional hazards model to identify independent prognostic factors for overall survival (OS) and progression-free survival (PFS). OS was defined as the time from diagnosis to the last follow-up or death, and PFS was defined as the time from remission to relapse or death. There were no missing data in this study, and bias was therefore controlled. The reporting of this study conforms to the STROBE guidelines.17

Clinical data

Sixty patients were included in the study and their data were analyzed retrospectively. There were 28 men (46.7%), and the mean age was 38.6 years (range: 15-68 years). The characteristics of the patients are presented in Tables 1 and 2. There were no significant differences in laboratory indices, except for Hb levels, including the distribution of risk groups or the frequency of DIC, between the groups of patients according to the presence of FLT3-ITD and ACA.

Survival analysis

Univariate analysis showed that both FLT3-ITD and ACA were associated with poor 5-year OS and 5-year PFS (FLT3-ITD: P = 0.027 and P = 0.008, respectively; ACA: P = 0.007 and P = 0.015, respectively) (Table 3). However, multivariate analysis showed that FLT3-ITD remained an independent adverse factor only for 5-year PFS, while ACA remained an independent adverse factor only for 5-year OS (Table 3). We also analyzed the survival time of the patient groups based on the number of ACA (no ACA, only one ACA, ≥2 ACA). OS and PFS differed significantly among the three groups according to univariate analysis (P = 0.02 and P = 0.019, respectively), but multivariate analysis found no significant difference (Table 3).

Discussion

The PML/RARa fusion gene derived from the translocation t(15;17)(q22;q21) generates the oncogenic PML/RARa fusion protein, which is considered to act as the main pathogenic factor in APL via deregulation of the transcriptional control of the RARa gene and disruption of PML function.18 ATRA is the main drug used to treat APL. Its mechanisms of action include relocalizing PML and degrading the PML/RARa protein, and converting PML/RARa from a transcription inhibitor to a transcription activator.3
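As a side note on the survival methodology described in the Methods above, a minimal Python sketch of this type of analysis (ours; the file and column names are illustrative assumptions, not the study data) could look as follows:

```python
# Illustrative Kaplan-Meier, log-rank, and Cox proportional hazards analysis.
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("apl_cohort.csv")   # hypothetical cohort file

# Kaplan-Meier curves for FLT3-ITD-positive vs -negative patients
kmf = KaplanMeierFitter()
for label, grp in df.groupby("flt3_itd"):
    kmf.fit(grp["os_months"], event_observed=grp["death"], label=f"FLT3-ITD={label}")
    kmf.plot_survival_function()
plt.show()

# Univariate comparison via the log-rank test
pos, neg = df[df.flt3_itd == 1], df[df.flt3_itd == 0]
res = logrank_test(pos["os_months"], neg["os_months"],
                   event_observed_A=pos["death"], event_observed_B=neg["death"])
print(res.p_value)

# Multivariate Cox model with both prognostic factors
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "flt3_itd", "aca"]],
        duration_col="os_months", event_col="death")
cph.print_summary()
```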
However, ACA and/or FLT3-ITD may be associated with other concomitant pathogenic mechanisms, resulting in reduced treatment efficacy and survival. Pantic et al. showed that ACA was an independent adverse factor for survival time in patients with APL treated with ATRA, 19 and Wiernik et al. also suggested that APL patients without ACA had better OS and disease-free survival (DFS) than those with ACA, following treatment with ATRA. 20 Cervera et al. showed that ACA was associated with lower relapse-free survival according to univariate analysis, but there was no significant association in multivariate analysis. 21 Epstein-Peterson et al. suggested that ACA was not always an adverse prognostic factor for event-free survival in patients with APL treated with ATO, but did have an adverse effect in cases with a complex karyotype, 22 while Chen et al. found that an abnormal karyotype was associated with a high risk of early mortality, even in patients treated with ATO. 23 Poiré et al. also suggested that patients with ACA had poorer OS, despite ATO therapy. 24 (Table 4 shows overall and progression-free survival according to the presence of FLT3-ITD and additional chromosomal abnormalities.) Overall, these studies suggest that ACA may be an adverse factor affecting survival time. The current multivariate analysis showed that ACA was an independent adverse prognostic factor for OS but not PFS in patients with APL treated with ATRA plus chemotherapy. This result appears to be similar to those of Pantic et al. and Wiernik et al., 19,20 suggesting that ATRA has little effect on ACA. The FLT3-ITD mutation is associated with a poor prognosis in patients with acute myelogenous leukemia with a normal karyotype. 8 However, FLT3-ITD is often present in patients with APL and is associated with hyperleukocytosis, suggesting that it is also an adverse risk factor for treatment and survival outcomes in APL. 1 Breccia et al. showed that FLT3-ITD was an independent unfavorable factor for OS, relapse-free survival, and DFS in patients with APL treated with ATRA plus chemotherapy, 11 but Lucena-Araujo et al. suggested that this mutation had an impact only on OS, not on DFS. 25 However, Singh et al. compared patients with FLT3-ITD and wild-type FLT3 and concluded that FLT3-ITD was a poor prognostic factor for DFS, but not for OS. 26 Poiré et al. and Deka et al. reported that FLT3-ITD had no effect on OS or DFS in patients treated with ATO, 24,27 while Song et al. suggested that FLT3-ITD was an adverse factor in terms of OS and event-free survival, even after ATO therapy. 28 These studies have thus produced conflicting results regarding the role of FLT3-ITD as a poor prognostic factor, regardless of ATRA or ATO treatment. However, the current multivariate analysis suggested that FLT3-ITD was an independent adverse prognostic factor for PFS, but not for OS, in accord with the findings of Lucena-Araujo et al. 25 Patients with APL harboring ACA or FLT3-ITD are thus generally difficult to treat. Furthermore, the prognostic effect of the combination of these two abnormalities is unclear, and studies examining the effects of these two factors on survival time using multivariate analysis are lacking. We therefore analyzed the effect of the association between FLT3-ITD and ACA on survival time using the Kaplan-Meier method and log-rank test.
We found significant differences in both OS and PFS among the groups: FLT3-ITD-negative without ACA, FLT3-ITD-positive without ACA, FLT3-ITD-negative with ACA, and FLT3-ITD-positive with ACA (P = 0.006 and P = 0.003, respectively), with FLT3-ITD-positive patients with ACA having the worst OS and PFS. Appropriate treatment strategies thus need to be considered for patients with both FLT3-ITD and ACA. Our study had some limitations. The European LeukemiaNet 2019 guidelines consider acute myelogenous leukemia with a normal karyotype and mutated NPM1, without FLT3-ITD or with low-level FLT3-ITD, as having a favorable prognosis. 29 Low levels of FLT3-ITD may thus not cause adverse effects, and the adverse prognostic influence of FLT3-ITD may also depend on its expression level. This may also help to explain some of the apparently conflicting results regarding the prognostic significance of FLT3-ITD. Unfortunately, we did not quantify FLT3-ITD levels in the current study. As noted above, some studies showed that FLT3-ITD had no effect on survival time in patients treated with the ATO regimen. 24,27 Prognostic factors thus change in line with advancements and changes in treatments, and different prognostic factors should be analyzed and applied at different stages of treatment development. The current study is applicable to patients treated with the ATRA regimen; however, further studies are needed in patients treated with the ATO regimen. More research is also needed to analyze treatment outcomes in relation to FLT3-ITD levels. Conclusion FLT3-ITD may be an independent adverse prognostic factor for PFS and ACA may be an independent adverse prognostic factor for OS in APL patients treated with chemotherapy plus ATRA, with FLT3-ITD-positive patients with ACA having the worst OS and PFS times.
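As an illustration of the survival methodology used in this study, the following is a minimal sketch in Python using the `lifelines` package: a univariate Kaplan-Meier/log-rank comparison followed by a multivariate Cox proportional hazards model over the two candidate factors. The toy data frame and its column names are invented for illustration only; they are not the study data.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Invented toy cohort: follow-up time, event flag, and the two binary factors
df = pd.DataFrame({
    "os_months": [53, 46, 25, 23, 49, 39, 23, 17, 60, 12, 35, 8],
    "death":     [0,  0,  1,  1,  0,  1,  0,  1,  0,  1,  1,  1],
    "flt3_itd":  [0,  1,  0,  1,  0,  1,  0,  1,  0,  1,  0,  1],
    "aca":       [0,  0,  1,  1,  0,  0,  1,  1,  0,  1,  1,  0],
})

# Univariate analysis: log-rank test between FLT3-ITD-positive and -negative
pos, neg = df[df.flt3_itd == 1], df[df.flt3_itd == 0]
lr = logrank_test(pos.os_months, neg.os_months, pos.death, neg.death)
print(f"log-rank p = {lr.p_value:.3f}")

# Multivariate analysis: Cox model including both candidate prognostic factors
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
cph.print_summary()  # hazard ratios and p-values for flt3_itd and aca
```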
2022-12-24T16:33:19.736Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "e639a2528495755de82987575155803ced427c65", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "af3c76d818d6ec31534ca886eaf11def1104ef3c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
81289901
pes2o/s2orc
v3-fos-license
Zona Pigment Glaucoma: Surgery or Medical Treatment? Herpes Zoster Ophthalmicus (HZO) is not a fatal condition, but it is a cause of blindness, and its prolonged pain leads to distress and difficulty in treatment for patients; it may also be a marker for AIDS, particularly in young persons. Zona Glaucoma, or Zona Pigment Glaucoma (ZPG), is commonly caused by uveitis with or without pupillary block or obstruction of the trabecular meshwork. Accumulation of macrophages in severe inflammation over a short period of time may acutely obstruct the meshwork and result in transient elevation of intraocular pressure in association with exercise or dilation of the pupil. In this paper, three typical cases of ZPG that we treated are reported. The first was a young patient who was HIV (+), in whom trabeculectomy was done; the second and third cases were elderly patients who were HIV (-), managed without glaucoma surgery. Of the two medically treated cases, one was treated with a steroid and the other with an antiviral drug. All three patients recovered vision and achieved normalized intraocular pressure, and the satisfactory results after one year of follow-up are reported here. Some considerations on HZO are discussed in this paper for general practitioners and eye doctors. Introduction Herpes Zoster Ophthalmicus (HZO) is not a fatal condition, but it is a cause of blindness, and its prolonged pain leads to distress and difficulty in treatment for patients; it may also be a marker for AIDS, particularly in young persons. Zona glaucoma (ZG) is commonly caused by uveitis with or without pupillary block or obstruction of the trabecular meshwork. Accumulation of macrophages in severe inflammation over a short period of time may acutely obstruct the meshwork and result in transient elevation of intraocular pressure in association with exercise or dilation of the pupil. We have treated three typical cases of zona glaucoma. The first was a young patient who was HIV (+), in whom trabeculectomy was done; the second and third cases were elderly patients who were HIV (-), managed without glaucoma surgery. Of the two medically treated cases, one was treated with a steroid and the other with an antiviral drug. According to the Mayo Clinic, evidence from clinical trials shows that treatment with steroids tends to be more successful than treatment with antivirals. All three patients recovered vision and achieved normalized intraocular pressure, and the satisfactory results after one year of follow-up are reported here. So, should ZPG be treated surgically or medically? Some considerations on HZO are discussed in this paper for general practitioners and eye doctors. Case 1 A 29-year-old male worker. Two months earlier, he had had an eruption around the left eye accompanied by intense pain; he was then diagnosed with and treated for ophthalmic zona by eye doctors. His pain persisted; three weeks previously, he had been treated for pulmonary tuberculosis. One week before admission, he had severe pain in the left eye, and he was then admitted to the authors' provincial hospital. Up to this time, neither he nor his family had been informed of his HIV (+) status by any doctor (Figure 1). Case 2 A 59-year-old female farmer. Ten days earlier, she had had a sudden headache that localized to the right frontal region, and two days later an eruption appeared at the same site around the right eye, accompanied by intense pain. She was then treated for ophthalmic zona by a general practitioner. Her pain did not decrease during one week of treatment, and she was then admitted to the authors' provincial hospital. Case 3 A 62-year-old male farmer.
Five days earlier, he had had a sudden headache that localized to the right frontal region, and two days later an eruption appeared in the same area around the right eye, accompanied by intense pain, and he was admitted to the authors' provincial hospital. Zona Pigment Glaucoma (ZPG) ZPG is commonly caused by uveitis with or without pupillary block or obstruction of the trabecular meshwork. Accumulation of macrophages in this severe inflammation over a short period of time may acutely obstruct the meshwork and result in transient elevation of IOP in association with exercise or dilation of the pupil [3,4]. Uveitis may occur some days after herpetic zona, and 40% of patients may have a long symptom-free period of up to 2 years [4]. Case 1: glaucoma occurred 2 months after zona, with a severe condition. Case 2: glaucoma occurred 10 days after zona, with a moderate condition. In uveitis, 25% of patients may show changes in the pigment of the iris [3]. Posterior uveitis, papillitis, and retinitis were rarely seen after zona [4]. Case 3: glaucoma occurred 5 days after zona, with a moderate condition, and was treated with an antiviral drug and a carbonic anhydrase inhibitor. Our diagnosis of ZPG depended on IOP. In case 1, the IOP was 28 mmHg; the other cases had moderately elevated IOPs of 22 and 24 mmHg on hospital admission. In case 2, it was hard to differentiate the condition from trabeculitis. For the treatment of zona glaucoma, two problems were faced: treatment of the zona and treatment of the glaucoma, each comprising medical and surgical options. Antiviral drugs were prohibitively expensive and were not taken in cases 1 and 2. Local and systemic steroids have to be used for the treatment of herpetic uveitis, but the risk of open-angle glaucoma (OAG) should be kept in mind. According to the Mayo Clinic, evidence from clinical trials shows that treatment with steroids tends to be more successful than treatment with antivirals. Some studies showed that using local steroids for 4-6 weeks increased IOP by 6-15 mmHg. It is now known that OAG can be related to the TIGR gene (trabecular meshwork inducible glucocorticoid response gene) [6]. Surgery According to Henry Saraux, glaucoma surgery should be done in cases of ocular hypertension. In case 1, the IOP of 28 mmHg did not decrease and visual acuity was not restored after medical treatment; therefore, trabeculectomy was done. In case 2, however, the IOP normalized and visual acuity was restored with steroid treatment, so glaucoma surgery was not indicated. If the IOP is not elevated but visual acuity is not restored, should glaucoma surgery be done or not? In case 3, a moderate condition with onset 5 days after zona, treatment with acyclovir and acetazolamide gave a good result, so glaucoma surgery was not indicated. Was trabeculectomy necessary in cases 2 and 3? Ophthalmic Zona and HIV In Kenya, a study by Haroon Awan and Henry Alada showed that 98% of AIDS patients had ocular manifestations and that 23% of ophthalmic zona cases in the age range of 8 to 47 years were HIV (+). Our case 1 belonged to this age group. Ophthalmic zona may be a marker for AIDS [1,2]. Diagnosis of typical zona is usually easy, given the eruption of vesicles distributed along the trigeminal nerve, but atypical cases are difficult; polymerase chain reaction (PCR) is now the gold standard for detecting the DNA of the zona virus. General practitioners and eye doctors should be cautious in atypical cases of zona, particularly in the pre-eruption phase, because of the risk of transmission of both zona and HIV. Other Problems with Ophthalmic Zona Lagophthalmos: may be caused by a contracted scar of the frontal skin and upper eyelid, with or without paralysis of the levator muscle.
Tarsorrhaphy should be done first in order to decrease the evaporation of tears and contribute to the regulation of the precorneal tear film; the second step is upper lid reconstruction [7]. Strabismus: may be caused by paralysis of the ocular muscles and needs surgical correction [8]. Cornea: the decrease in corneal sensibility after herpetic zoster may be reversible or irreversible because of corneal epithelial damage. Surgery in these patients, such as glaucoma or cataract surgery, must be undertaken with caution. Iris: paralysis of the constrictor sphincter of the iris may lead to dilation of the pupil, the so-called atypical Argyll Robertson syndrome [1]. Case 1: the pupil had not constricted one year later; case 2: the pupil constricted well after 3 months of follow-up (Table 1). Prevention Adults 60 years old and over should have a single dose of zoster vaccine, whether they have had herpes zoster or not. This vaccine has been shown to decrease the incidence of zoster [9,10].
2019-03-18T14:02:54.886Z
2018-07-30T00:00:00.000
{ "year": 2018, "sha1": "9394f43cf81f80a643a11a35326ff122f71d614c", "oa_license": "CCBY", "oa_url": "https://biomedres.us/pdfs/BJSTR.MS.ID.001497.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "af960645d25a3a12257291aed0dcc1bb805eb89f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267579309
pes2o/s2orc
v3-fos-license
Multiomic molecular characterization of the response to combination immunotherapy in MSS/pMMR metastatic colorectal cancer Background Immune checkpoint inhibitor (ICI) combinations represent an emerging treatment strategy in cancer. However, their efficacy in microsatellite stable (MSS) or mismatch repair-proficient (pMMR) colorectal cancer (CRC) is variable. Here, a multiomic characterization was performed to identify predictive biomarkers associated with patient response to ICI combinations in MSS/pMMR CRC for the further development of ICI combinations. Methods Whole-exome sequencing, RNA sequencing, and multiplex fluorescence immunohistochemistry of tumors from patients with MSS/pMMR CRC, who received regorafenib plus nivolumab (REGONIVO) or TAS-116 plus nivolumab (TASNIVO) in clinical trials, were conducted. Twenty-two and 23 patients without prior ICI treatment from the REGONIVO and TASNIVO trials, respectively, were included in this study. A biomarker analysis was performed using samples from each of these studies. Results The epithelial-mesenchymal transition pathway and genes related to cancer-associated fibroblasts were upregulated in the REGONIVO responder group, and the G2M checkpoint pathway was upregulated in the TASNIVO responder group. The MYC pathway was upregulated in the REGONIVO non-responder group. Consensus molecular subtype 4 was significantly associated with response (p=0.035) and longer progression-free survival (p=0.006) in the REGONIVO trial. The density of CD8+ T cells, regulatory T cells, and M2 macrophages was significantly higher in REGONIVO trial responders than in non-responders. Mutations in the POLE gene and patient response were significantly associated in the TASNIVO trial; however, the frequencies of other mutations and tumor mutational burden were not significantly different between responders and non-responders in either trial. Conclusions We identified molecular features associated with the response to REGONIVO and TASNIVO, particularly those related to tumor microenvironmental factors. These findings are likely to contribute to the development of biomarkers to predict treatment efficacy for MSS/pMMR CRC and of future immunotherapy combinations for treatment. WHAT IS ALREADY KNOWN ON THIS TOPIC ⇒ Immune checkpoint inhibitor combinations are an emerging treatment strategy in cancer. However, biomarkers of response in microsatellite stable/mismatch repair-proficient (MSS/pMMR) colorectal cancer have not been identified. WHAT THIS STUDY ADDS ⇒ We identified molecular features associated with the response to regorafenib plus nivolumab or TAS-116 plus nivolumab combinations. Specifically, activation of genes in the epithelial-mesenchymal transition pathway and consensus molecular subtype 4 enrichment were predictive biomarkers in the REGONIVO trial. HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY ⇒ Our analyses could lead to the further development of biomarkers for MSS/pMMR colorectal cancer and additional combinations of immunotherapies for treatment. INTRODUCTION Colorectal cancer (CRC) is the second leading cause of cancer-related deaths worldwide. 1 Immune checkpoint inhibitors (ICIs) have become an important treatment strategy in several cancers. 3-7 However, for metastatic CRC, the efficacy of ICIs is limited to patients with microsatellite instability-high or mismatch repair-deficient tumors, and the majority of microsatellite stable (MSS) or mismatch repair-proficient (pMMR) tumors do not respond. 10 11
The limited effect of ICIs on MSS and pMMR CRC may be attributed to a low neoantigen load and few tumor-infiltrating lymphocytes, which prevent a robust immune response. 13 14 To overcome these resistance mechanisms, several immunotherapy combinations have been evaluated for MSS or pMMR CRC 15 16; however, most have been largely ineffective. Combinations of the MEK inhibitor cobimetinib and the programmed death-ligand 1 (PD-L1) inhibitor atezolizumab, as well as the multikinase inhibitor lenvatinib and the programmed cell death protein 1 (PD-1) inhibitor pembrolizumab, as salvage therapy have failed to exhibit a survival benefit compared with the standard of care in phase III trials. 15 16 Further development of ICI combinations for MSS or pMMR CRC is therefore necessary. Previously, we conducted two investigator-initiated trials of ICI combinations of the PD-1 inhibitor nivolumab with drugs expected to activate the immune response, namely, the multikinase inhibitor regorafenib plus nivolumab (REGONIVO) and the HSP90 inhibitor TAS-116 (pimitespib) plus nivolumab (TASNIVO), for MSS or pMMR CRC, which demonstrated efficacy in a limited number of these patients. 17 18 These findings highlighted the need to identify biomarkers to identify patients who would benefit from such combinations and to understand the mechanisms through which this efficacy was achieved, for the further development of ICI combinations. To identify predictors of response to ICI combinations in patients with MSS or pMMR CRC, we characterized tumors from patients who received REGONIVO or TASNIVO in clinical trials using whole-exome sequencing (WES), RNA sequencing, and multiplex fluorescence immunohistochemistry (mIHC). By applying this multiomics approach, we characterized these tumors at the molecular level and identified molecular features that may contribute to the development of predictive biomarkers and future immunotherapy combinations. Patients The eligibility criteria for this study were as follows: (1) enrollment in a phase Ib trial of REGONIVO (EPOC1603) 17 or a phase Ib trial of TASNIVO (EPOC1704) 18; (2) MSS or pMMR CRC; and (3) no ICI therapy prior to trial enrollment. The detailed methods of these trials have been previously reported. 17 18 The study was conducted in accordance with the Declaration of Helsinki. The results are publicly available on the official website of the National Cancer Center Hospital East, and the research subjects were provided with an opportunity to decline participation. Samples Tissues were obtained from patients prior to the administration of the investigational treatment and were formalin-fixed and paraffin-embedded (FFPE). The FFPE samples were subjected to WES, RNA sequencing, and mIHC staining. Most of the samples were primary tumors that had been surgically resected before patient enrollment, and all tumor samples were collected prior to any ICI combination therapy, with none collected after any immunotherapy; additional details are provided in online supplemental table S1. Peripheral blood mononuclear cells or normal colon tissue were also used as germline controls.
WES Genomic DNA was extracted from FFPE tissues with the GeneRead DNA FFPE Kit (QIAGEN). DNA was enriched using the Twist Library Preparation Kit (Twist Bioscience). The DNA in the resulting libraries was subjected to next-generation sequencing, and 150 bp was sequenced from both ends on a NovaSeq 6000 (Illumina) to produce paired-end reads. Paired-end sequencing reads, with nucleotides with quality scores of less than 20 masked, were aligned to the hg38 reference genome using BWA-MEM (http://bio-bwa.sourceforge.net/) and Bowtie2 (http://bowtie-bio.sourceforge.net/bowtie2/index.shtml). Somatic synonymous and non-synonymous mutations were called using our in-house caller and two publicly available mutation callers: Mutect2, as part of the Genome Analysis Toolkit (https://gatk.broadinstitute.org/hc/en-us), and VarScan2 (http://varscan.sourceforge.net/). Mutations meeting any of the following criteria were discarded: tumor sample variant allele frequency <0.05; mutant read number in the germline control samples of >2; mutations detected in only one strand of the genome; or the variant present in the normal human genome in either the 1,000 Genomes Project data set (https://www.internationalgenome.org/) or our in-house database. Gene mutations were annotated using SnpEff (http://snpeff.sourceforge.net). Tumor mutational burden (TMB) was defined as the total number of mutations per megabase in the WES bait region. Targeted gene panel analysis data (Oncomine Cancer Research Panel, Thermo Fisher) were used for complementarity when WES data were not available. RNA sequencing Total RNA was extracted from FFPE tumor samples using the RNeasy FFPE Kit (QIAGEN). Ribosomal RNA depletion was performed using the NEBNext rRNA Depletion Kit (New England Biolabs). RNA integrity was assessed using a TapeStation (Agilent Technologies). To exclude degraded RNA, only RNA of sufficient integrity was used for RNA sequencing (RNA-seq) with the NEBNext Ultra Directional RNA Library Prep Kit (New England Biolabs). Prepared RNA libraries were subjected to next-generation sequencing on a NovaSeq 6000 (Illumina) to produce paired-end sequencing reads. For RNA-seq expression profiling, paired-end reads were aligned to the hg38 human genome and quantified using TopHat2 (https://github.com/infphilo/tophat) and Cufflinks (https://github.com/cole-trapnell-lab/cufflinks). Gene Set Enrichment Analysis (GSEA) was performed using GSEA V.4.3.2 (https://github.com/GSEA-MSigDB). Genes were ranked based on the log2-fold change in expression, and gene enrichment scores were calculated based on the ranks of the genes and the gene sets. Gene sets were taken from the Molecular Signatures Database V.7.2. Consensus molecular subtypes (CMSs) were evaluated as described previously. 19 For PD-L1 staining using the anti-PD-L1 28-8 antibody in the REGONIVO trial, the combined positive score (CPS) was assessed by a pathologist (TKu) and defined as the number of PD-L1-positive cells (including tumor cells, lymphocytes, and macrophages) divided by the total number of viable tumor cells, multiplied by 100. In the TASNIVO trial, CPS was measured by the PD-L1 IHC 22C3 pharmDx assay (Agilent Technologies).
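To make the post-calling filtering step concrete, below is a minimal sketch in Python of the four exclusion criteria and the TMB calculation described above. The field names, the `Variant` container, and the bait-region size are illustrative assumptions, not values taken from the study pipeline.

```python
from dataclasses import dataclass

@dataclass
class Variant:
    tumor_vaf: float          # variant allele frequency in the tumor sample
    normal_alt_reads: int     # mutant reads in the matched germline control
    both_strands: bool        # supported by reads on both strands
    in_population_db: bool    # seen in 1,000 Genomes or the in-house database

def passes_filters(v: Variant) -> bool:
    """Apply the four exclusion criteria listed in the Methods."""
    return (v.tumor_vaf >= 0.05
            and v.normal_alt_reads <= 2
            and v.both_strands
            and not v.in_population_db)

def tumor_mutational_burden(variants, bait_region_mb: float) -> float:
    """TMB = retained somatic mutations per megabase of the WES bait region."""
    kept = [v for v in variants if passes_filters(v)]
    return len(kept) / bait_region_mb

calls = [Variant(0.12, 0, True, False), Variant(0.03, 0, True, False),
         Variant(0.25, 5, True, False), Variant(0.40, 1, False, False)]
print(tumor_mutational_burden(calls, bait_region_mb=35.0))  # assumed ~35 Mb bait
```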
Outcomes and statistics Patients experiencing a clinical benefit (responders) were defined as those who achieved a complete response (CR), partial response (PR), or stable disease (SD) lasting more than 6 months, as evaluated by the Response Evaluation Criteria in Solid Tumors (RECIST) V.1.1 criteria. Progression-free survival (PFS) was defined as the time from registration for the clinical trials to disease progression or death (for any reason). Overall survival (OS) was defined as the time from registration to death (for any reason). Quantitative data are presented as the median and range. The Mann-Whitney U and χ2 tests were used for comparisons between continuous and categorical variables, respectively. PFS and OS were estimated using the Kaplan-Meier method, and HRs and CIs were estimated using a Cox proportional hazards model. All statistical analyses were performed using SAS Release V.9.4 (SAS Institute). Patients Twenty-four and 25 patients from the REGONIVO and TASNIVO trials, respectively, with MSS or pMMR CRC and without prior ICI treatment, met the eligibility criteria for this study (figure 1). We successfully performed RNA-seq and WES and obtained gene panel and mIHC data for 22 and 23 patients in the REGONIVO and TASNIVO trials, respectively (figure 1). Patients with left-sided tumors were observed more frequently in the REGONIVO trial (86.4%), whereas those with right-sided tumors were observed more frequently in the TASNIVO trial (60.9%) (table 1). Ten patients in each of the REGONIVO (45.5%) and TASNIVO (43.5%) trials had liver metastasis (table 1). Thirteen of 22 (59%) patients in the REGONIVO trial and 7 of 23 (30%) patients in the TASNIVO trial were classified as responders (CR, PR, and SD≥6 months) (figure 2). In this study, all samples used for WES, RNA-seq, and mIHC staining were obtained from patients prior to ICI therapy and were FFPE. Mutational features We analyzed the differences in the mutational profiles between responders and non-responders in each trial using WES (figure 3 and online supplemental table S2). The mutational landscape of each trial cohort was comparable with that of previous reports. 20 We observed a significant association between POLE (DNA polymerase epsilon) mutations and positive response in the TASNIVO trial (p=0.015), in which two cases with POLE missense mutations achieved PR. One of the two patients harboring POLE mutations exhibited an extremely high TMB (78 mutations/Mb). The TMB of the other patient could not be analyzed because no samples were available. The frequencies of the other representative gene mutations in CRC, including KRAS, ERBB2, BRAF, PIK3CA, TP53, ATM, APC, AXIN2, LRP5, TCF7L2, SMAD2/3/4, ARID1A, and FBXW7, were not significantly different between responders and non-responders in either trial (figure 3 and online supplemental table S2). TMB was not associated with the response to either of the ICI combinations, even after excluding the POLE cases (online supplemental figure S1).
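The responder definition and the group comparisons used above can be sketched as follows. This is a minimal illustration: the marker values are invented, and `scipy` is assumed only as a stand-in for the SAS procedures actually used in the study.

```python
from scipy.stats import mannwhitneyu

def is_responder(best_response: str, sd_duration_months: float = 0.0) -> bool:
    """Clinical benefit: CR, PR, or stable disease lasting at least 6 months."""
    if best_response in ("CR", "PR"):
        return True
    return best_response == "SD" and sd_duration_months >= 6.0

# Compare a continuous marker (e.g., an immune cell density) between groups,
# analogous to the Mann-Whitney U comparisons reported in the results
responders = [310.0, 280.5, 455.2, 390.1]        # cells/mm^2, invented values
non_responders = [120.3, 95.8, 210.4, 150.0]
stat, p = mannwhitneyu(responders, non_responders, alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p:.4f}")
```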
Transcriptomic features To identify differences in gene expression and upregulated signaling between responders and non-responders, we next performed transcriptome analysis and GSEA on the cohorts in both trials. These analyses revealed pathways associated with the response to each combination therapy. Specifically, upregulation of the epithelial-mesenchymal transition (EMT) pathway was observed in the REGONIVO responder group (figure 4). The expression of representative EMT pathway genes, such as TGFB3, VIM, and FN1, was upregulated in the responder group (online supplemental figure S2A). Notably, genes related to cancer-associated fibroblasts (CAFs) were also upregulated (online supplemental figure S2B). In addition, genes related to the inflammatory response were upregulated in the REGONIVO responder group, and we observed a significant upregulation of immune-related genes such as STAT3 (online supplemental figure S2A). Importantly, we also observed upregulation of the PDGFRA gene, a known target of regorafenib, in the REGONIVO responder group (online supplemental figure S2A). Upregulation of the MYC pathway was observed in the REGONIVO non-responder group (figure 4). Upregulation of genes associated with the G2M checkpoint pathway was observed in the TASNIVO responder group. Additionally, upregulation of PI3K_AKT_MTOR pathway genes was observed in the responder group, and AKT1 and HRAS expression were significantly upregulated (online supplemental figure S2C). CMS classification of CRC Given the results of our transcriptomic analysis, we next sought to elucidate the differences in CMS classification using the RNA-seq data. CMS classification was possible in 20 of 22 patients in the REGONIVO trial and 21 of 23 patients in the TASNIVO trial. CMS1, CMS2, CMS3, and CMS4 were detected in 0, 7 (35%), 0, and 13 (65%) cases in the REGONIVO trial and in 4 (19%), 3 (14%), 2 (10%), and 12 (57%) cases in the TASNIVO trial, respectively. In the REGONIVO trial, CMS4 was significantly associated with patient response compared with the other CMS subtypes (p=0.035), but CMS4 was not associated with patient response in the TASNIVO trial (online supplemental table S2). Among the 13 patients with CMS4 in the REGONIVO trial, one had a CR, six had a PR, and three had SD lasting more than 6 months. Patients with CMS4 in the REGONIVO trial demonstrated a significantly longer PFS (median 12.3 months vs 4.2 months; HR 0.208 (95% CI 0.062 to 0.693); p=0.006) and a longer OS (median 25.3 months vs 19.2 months; HR 0.621 (95% CI 0.196 to 1.968); p=0.4139) than did those with other CMS subtypes, whereas those with CMS4 in the TASNIVO trial did not (figure 5). In addition, when considering only cases without liver metastasis, a significant improvement in PFS was observed in patients with CMS4 in the REGONIVO trial (median 15.0 months vs 4.1 months; HR 0.072 (95% CI 0.006 to 0.808); p=0.006), which was not observed in cases with liver metastasis (online supplemental figure S3).
Multiplex fluorescence immunohistochemistry mIHC of FFPE specimens obtained prior to treatment was performed to compare the tumor immune cell infiltration of responders and non-responders in the REGONIVO and TASNIVO cohorts using image analysis software (figure 6A). In the REGONIVO trial, the density of CD8+ T cells (CD3+CD8+), Treg cells (FOXP3+CD3+CD8-), and M2 macrophages (CD206+CD11b+) in the intratumoral area was significantly higher in responders (n=13) than in non-responders (n=9) (figure 6B). In contrast, in the TASNIVO trial, M2 macrophage density was significantly lower in the responders (n=6) than in the non-responders (n=15) (figure 6C). Similar trends were observed when focusing on primary lesions (online supplemental figure S4). In the combined analysis of samples from both trials, higher infiltration of CD8+ T cells was observed in the CMS4 subtype than in the CMS2 and CMS3 subtypes, and infiltration of Treg cells and M2 macrophages was also observed; however, the differences were not statistically significant (online supplemental figure S5). One patient harboring POLE mutations demonstrated a higher-than-average infiltration of CD8+ T cells with lower infiltration of Treg cells and M2 macrophages. Furthermore, in line with the transcriptome analysis, responders (n=7) presented higher PDGFRα expression than non-responders (n=8) in the REGONIVO trial (online supplemental figure S6A, B). We also evaluated the association between PD-L1 CPS, which is commonly associated with ICI response, and the proportion of responders; however, there was no significant difference (figure 3 and online supplemental table S2). DISCUSSION To identify predictors of response to ICI combinations in patients with MSS or pMMR CRC, we conducted comprehensive biomarker analyses using WES, RNA-seq, and mIHC on the two investigator-initiated trials combining nivolumab with drugs expected to activate the immune response. We identified molecular features associated with the response to ICI combinations, particularly those related to tumor microenvironmental factors, including EMT pathways and CMS4. To our knowledge, this is the first report to establish a multiomic molecular landscape of the response to ICI combinations in MSS or pMMR CRC. We found that POLE mutations were significantly associated with response in the TASNIVO trial and that no specific gene mutations were associated with response in the REGONIVO trial. POLE mutations are known to confer an ultramutated phenotype that has been associated with response to ICIs. 23 24 In this study, one patient in the TASNIVO trial with POLE mutations had an extremely high TMB, high CD8+ T-cell infiltration, and low Treg cell and M2 macrophage infiltration. Thus, it is highly likely that the response achieved in the two cases with POLE mutations identified in the TASNIVO trial was primarily driven by nivolumab. However, aside from POLE mutations, no genomic features, including TMB, were identified as predictive markers for the response to ICI combinations in MSS or pMMR CRC in either study. With respect to the tumor microenvironment, transcriptome analysis of samples from patients in the REGONIVO trial revealed upregulation of the EMT pathway and genes related to CAFs in responders, and upregulation of the MYC pathway in non-responders. Furthermore, the mIHC results revealed that the density of CD8+ T cells, Treg cells, and M2 macrophages was significantly higher in responders, which is comparable to the findings presented in a previous report. 25
Interestingly, patients with CMS4 in the REGONIVO trial were associated with better clinical outcomes, which was not observed in the TASNIVO trial. CMS4 is characterized by "mesenchymal" features, such as upregulation of the EMT and transforming growth factor (TGF)-β signaling pathways, along with high expression of genes associated with angiogenesis or extracellular matrix remodeling, resulting in a high presence of CAFs, 19 26 which are known to be associated with treatment resistance. Although the tumor immune microenvironment of CMS4 is considered "immune inflamed", with a higher number of infiltrating CD8+ T cells compared with CMS2 or CMS3, 19 27 immunosuppressive cells, such as Treg cells and M2 macrophages, which are involved in inhibiting cytotoxic T cells and suppressing the immune response, 27 also infiltrate this subtype. It has been reported that Treg cells are recruited via CD70 expressed on CAFs in CRC and accumulate due to CCL28 in the hypoxic environment caused by abnormal angiogenesis. 28 29 In the tumor microenvironment, regorafenib leads to a decrease in Treg cells, inhibits CAF proliferation by inducing apoptosis, and has potent antiangiogenic effects, which are expected to improve the hypoxic environment. 30 31 It has also been reported that regorafenib inhibits TAM infiltration and M2 macrophage activation by blocking the TIE2 pathway, thereby promoting a persistent M1 phenotype. 30 32 33 Indeed, in preclinical models, regorafenib modified the tumor immune microenvironment, decreasing the infiltration of CAFs, Treg cells, and M2 macrophages, thus restoring the antitumor activity of PD-1 inhibitors. 34 35 Additionally, it has been reported that PDGFRA, PDGFRB, and KIT, which are targets of regorafenib, are highly expressed in CMS4 CRC and have been proposed as therapeutic targets. 36 37 Indeed, our study found that PDGFRA was highly expressed in the REGONIVO trial responders, which is consistent with these reports. These findings suggest that combining regorafenib and PD-1 inhibitors could be effective for some CRC, specifically for the CMS4 subtype, in which infiltrating CD8+ T cells are suppressed by immunosuppressive cells. Consistent with our finding that there was no correlation between CMS4 and REGONIVO response or a favorable clinical outcome in patients with liver metastases, preclinical models have indicated that the presence of liver metastases induces apoptosis in antigen-specific activated T cells, resulting in a systemic immunological desert. 39 The development of further ICI combinations may be needed to address certain molecular subtypes and immune microenvironment phenotypes. Transcriptome analysis showed upregulation of the G2M checkpoint pathway in the TASNIVO trial responders. WEE1, a client protein of HSP90, regulates the G2/M transition in the cell cycle by phosphorylating cyclin-dependent kinase 1. 40 41 The AKT pathway and the MAP kinase cascade may be inhibited by HSP90 blockade. 42 43 Thus, the HSP90 inhibitor TAS-116 may have exerted antitumor activity in tumors with elevated G2M checkpoint-related genes or high expression of AKT1 and HRAS in the TASNIVO trial. We previously reported that TAS-116 enhanced the antitumor activity of PD-1 inhibitors by reducing Treg cells in vitro and in vivo. 44 However, in the present study, the significant infiltration of M2 macrophages in non-responders suggests that even if Treg cells were eliminated, the immune suppression by M2 macrophages could not be overcome by the HSP90 inhibitor.
This study has several limitations. The primary limitation is that it was conducted with a limited sample size of patients from early clinical trials, and not all patient data were included in the biomarker analyses due to inconsistent sample availability. For example, only CMS2 and CMS4 were observed among patients in the REGONIVO trial, probably due to the small number of included patients. Therefore, the presented results should be interpreted as preliminary, and further studies are warranted to validate these findings. Furthermore, because our analysis was performed using pretreatment samples only, a future comparative analysis of pretreatment and post-treatment samples would potentially strengthen our findings regarding the tumor microenvironment. 43 In conclusion, we identified molecular features, particularly those related to tumor microenvironmental factors, that were associated with the response to REGONIVO and TASNIVO. Of note, CMS classification may correlate with the clinical outcome of REGONIVO in MSS or pMMR CRC. These findings may be helpful for the development of predictive biomarkers for precision medicine applications or new combination immunotherapies. Figure 1 Flow diagram of the study. The figure illustrates the process of research sample selection and the number of analyses successfully completed with each method. Only patients with microsatellite stable or mismatch repair-proficient colorectal cancer were included in this study. Patients without WES data due to inadequate sample volume or unsuccessful WES were also included in the analysis if targeted gene panel analysis data were available. CRC, colorectal cancer; ICI, immune checkpoint inhibitors; IHC, immunohistochemistry; mIHC, multiplex fluorescence immunohistochemistry; MSI, microsatellite instability; NSCLC, non-small cell lung cancer; WES, whole-exome sequencing. Figure 2 Efficacy of REGONIVO and TASNIVO treatment in patients included in this study. Waterfall plot (A) showing the maximum percentage change in tumor size from baseline as measured by Response Evaluation Criteria in Solid Tumors (RECIST) in the REGONIVO trial. Spider plot (B) showing the longitudinal change in RECIST percentage from baseline in the REGONIVO trial. Waterfall plot (C) and spider plot (D) as above but showing data from the TASNIVO trial. CR, complete response; PD, progressive disease; PR, partial response; REGONIVO, regorafenib plus nivolumab; SD, stable disease; TASNIVO, TAS-116 plus nivolumab. Figure 3 Molecular characterization. The top section of the figure shows the duration of PFS. The middle section indicates the response status (CR, PR, or SD≥6 months), CMS, TMB, and PD-L1 CPS. The bottom section shows the distribution of gene mutations determined by WES or targeted gene panel analysis. CR, complete response; CPS, combined positive score; CMS, consensus molecular subtypes; NA, not available; PFS, progression-free survival; PR, partial response; REGONIVO, regorafenib plus nivolumab; SD≥6, stable disease duration of at least 6 months; TASNIVO, TAS-116 plus nivolumab; TMB, tumor mutational burden; WES, whole-exome sequencing.
Figure 5 Survival curves based on the CMS classification. Kaplan-Meier plots of the PFS (A) and OS (B) of patients in the regorafenib plus nivolumab trial with tumors classified as CMS4 or as other CMS subtypes. PFS (C) and OS (D) of patients in the TAS-116 plus nivolumab trial with tumors classified as CMS4 or as other CMS subtypes. CMS, consensus molecular subtypes; OS, overall survival; PFS, progression-free survival. Figure S1 Box plots comparing TMB in responders and non-responders. Box and whisker plots comparing TMB values in responders and non-responders in each trial; POLE cases were excluded. Table S1 Sample information. Table S2 Number of responders and non-responders per gene mutation, TMB, and PD-L1 CPS.
2024-02-11T06:18:56.290Z
2024-02-01T00:00:00.000
{ "year": 2024, "sha1": "bd12c8b7f6f2fd4210fb0661250ef4d9f1882c2c", "oa_license": "CCBYNC", "oa_url": "https://jitc.bmj.com/content/jitc/12/2/e008210.full.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0bd2c2258e08e44aea79d673e4a858f5f90003d8", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
80664181
pes2o/s2orc
v3-fos-license
High frequency audiometry in tinnitus patients with normal hearing in conventional audiometry Context Hearing loss is the most important risk factor for tinnitus, but this relation is not straightforward; some patients with severe tinnitus have normal hearing, whereas many patients with hearing loss do not have tinnitus. Aims The aim was to determine whether high frequency audiometry (HFA) may reveal significant differences between normal hearing participants with and without tinnitus. Settings and design This is a case–control study. Participants and methods HFA was done on two groups of participants with normal hearing sensitivity. The first group was composed of 20 adults with tinnitus, whereas the control group comprised 15 age-matched and sex-matched participants not suffering from tinnitus. Statistical analysis Data were analyzed using the SPSS software package version 20.0. Significance of the results was judged at the 5% level. The χ2 test with Fisher's exact test as a correction, the Kruskal–Wallis and Mann–Whitney tests, and Pearson's coefficient were used. Results HFA showed no significant difference between the two studied groups. Conclusion Tinnitus in normal hearing participants does not necessarily indicate corresponding damage in the cochlea. Introduction Tinnitus is the detection of sound without an external source [1]. Most tinnitus patients display impaired hearing thresholds in pure-tone audiometry (PTA), especially in the high frequency range [2][3][4]. Furthermore, the frequency spectrum of some individuals' tinnitus matches the frequency range of the hearing impairment [5,6]. However, some tinnitus patients present with no detectable loss in the frequency range of conventional PTA (125 Hz–8 kHz) [7]. The human ear has an auditory range that can reach up to 20 000 Hz. Frequencies between 9000 and 20 000 Hz are named extended high frequencies (EHFs) in the international literature [8]. The involvement of EHFs in auditory pathology is diverse. They affect detection of the location of sound [9] and the understanding of language, especially in noisy surroundings [10]. They are also associated with age-related hearing loss, ototoxicity, and acoustic trauma. It has been thought that a normal PTA does not exclude cochlear damage. Damage to hair cells that code for frequencies above 8 kHz cannot be detected by conventional audiometry. Tinnitus patients whose audiograms are normal have been found to have more frequent cochlear dead regions [11], outer hair cell damage, and impaired hearing thresholds in the EHF region [12] when compared with control groups. In contrast, tinnitus may be induced purely in the central nervous system without damage to the peripheral sensory organs [13,14]. In this study, we examined the role of high frequency audiometry (HFA) in the assessment of tinnitus patients with normal hearing on conventional audiometry, and whether it provides more relevant information about cochlear damage not demonstrated by conventional audiometry. Participants This study was carried out on 20 adults with tinnitus, aged up to 50 years old, with no sex preference, and with normal peripheral hearing sensitivity at frequencies of 250–8000 Hz. Participants with otologic or neurologic disease, middle ear problems, or occupations with noise hazards were excluded. Fifteen age-matched and sex-matched participants with normal hearing and no tinnitus were enrolled as the control group.
Methods All participants were subjected to history taking, otoscopic examination, tympanometry to exclude middle ear problems, and conventional audiometry (air and bone conduction thresholds, including mid-octave frequencies). Thresholds were assessed using the American National Standards Institute (ANSI) approach, which is an ascending technique beginning with an inaudible signal; the level was increased in 5 dB steps until a response occurred. After a response was given, the intensity was decreased by 10 dB, and another ascending series was started. The threshold was the lowest decibel hearing level at which responses occurred in at least 50% of ascending trials [15]. Normal hearing sensitivity was defined as a threshold of 20 dB HL or better at each frequency examined in the range from 0.25 to 8 kHz. To avoid inclusion of audiograms displaying minor dips, 3 and 6 kHz were also tested. Normal thresholds at EHFs were calculated by using the mean + 2 SD in the control group; each age group was calculated separately. Participants were distributed into three age groups: from 20 to 30 years, from 31 to 40 years, and from 41 to 50 years. Pitch matching and loudness matching measurement The first objective was to determine whether the tinnitus sounded more like a pure tone or noise. Narrowband noise centered at the pitch-match frequency was presented in alternation with the pitch-matched tone, and the patient was asked which sounded more like the tinnitus. The pitch matching procedure is usually a two-alternative forced choice [16]. Two tones were presented to the patient, who was then asked to choose the one that most closely matched the tinnitus heard. This was continued until the match was made. Tinnitus is mostly found to be a few decibels above a person's threshold for the frequency being tested [16,17]. For loudness matching, a frequency that was matched to the patient's tinnitus was presented at a level just below threshold, and the intensity was increased in 1 dB steps until the patient indicated a match [16]. Statistical analysis of the data Data were analyzed using the SPSS software package version 20.0 (SPSS Inc., Chicago, Illinois, USA). Significance of the obtained results was judged at the 5% level [18,19]. For demographic data, we used the χ2 test for categorical variables, Fisher's exact test as a correction for χ2 when more than 20% of the cells had an expected count of less than 5, and Student's t-test for normally distributed quantitative variables. For comparing HFA thresholds among the different age groups, the Kruskal–Wallis test was used. For comparing HFA thresholds between controls and cases, the Mann–Whitney test was used. To study the correlation between age and high frequency thresholds, Pearson's coefficient was used in the cases and control groups. Results In the current study, 20 tinnitus patients and 15 controls were enrolled. In the cases group, there were 4 males and 16 females, whereas in the control group, 3 were males and 12 were females. Age was distributed into three groups: from 20 to 30 years, from 31 to 40 years, and from 41 to 50 years. In the controls, 10 ears were tested in each age group. In the cases, 12 tested ears were in the first age group, 14 in the second, and 14 in the last. Twelve patients heard their tinnitus in the form of tones, whereas eight heard it as noise. All patients had bilateral tinnitus; six of them reported tinnitus louder in the right ear than the left, nine reported it louder in the left than the right, and in five patients the tinnitus was equal in both ears.
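The ascending threshold search described in the Methods can be illustrated with a short simulation in Python. The deterministic "listener" below is an invented stand-in for a real subject, so every ascending series ends at the same level; the sketch only demonstrates the 5 dB-up/10 dB-down bracketing and the 50%-of-ascending-trials rule.

```python
from collections import defaultdict

def hears(level_db: float, true_threshold_db: float) -> bool:
    # Deterministic toy listener: responds whenever the tone is at/above threshold
    return level_db >= true_threshold_db

def ascending_threshold(true_threshold_db: float, start_db: float = -10.0,
                        n_series: int = 4) -> float:
    responses = defaultdict(list)              # presentation level -> responses
    level = start_db
    for _ in range(n_series):
        while not hears(level, true_threshold_db):
            level += 5                         # ascend in 5 dB steps
        responses[level].append(True)          # a response ends the ascent
        level -= 10                            # drop 10 dB, start a new series
    # threshold: lowest level with responses on >= 50% of ascending series
    candidates = [lv for lv, r in responses.items() if sum(r) / len(r) >= 0.5]
    return min(candidates)

print(ascending_threshold(true_threshold_db=12.0))   # -> 15 (nearest 5 dB step)
```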
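Similarly, the per-age-group normative limit used in this study (control-group mean + 2 SD) can be sketched as below; all threshold values are invented placeholders, not the study data.

```python
import statistics

# Invented control-group thresholds (dB HL) at one EHF, per age group
control_thresholds = {
    "20-30": [10, 15, 5, 10, 20, 15, 10, 5, 10, 15],
    "31-40": [15, 20, 10, 15, 25, 20, 15, 10, 20, 15],
}

def normal_limit(values):
    # Upper limit of normal: mean + 2 sample standard deviations
    return statistics.mean(values) + 2 * statistics.stdev(values)

limits = {group: normal_limit(v) for group, v in control_thresholds.items()}
print(limits)

case_threshold = 30.0          # a hypothetical tinnitus patient aged 25 years
print("abnormal" if case_threshold > limits["20-30"] else "normal")
```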
Table 1 shows the relationship between age and HF thresholds in the control group, giving the mean ± SD, median, and minimum and maximum values. Normal HFA thresholds were calculated by using the mean + 2 SD in the control group; each age group was calculated separately. Normal hearing thresholds in HFA are shown in Table 2. High frequency audiometry thresholds in cases Table 3 shows a comparison between the two studied groups according to high frequency thresholds. This comparison is detailed and classified according to the different age groups in Tables 4–6. No significant difference was found between the two groups in terms of mean thresholds across the frequency range from 9 to 16 kHz. Table 7 illustrates the number of non-responding cases at 14 kHz and 16 kHz at the maximum output of the audiometer. In the control group, two participants showed no response at 14 kHz and six at 16 kHz. In the cases group, four participants did not respond at 14 kHz and 16 at 16 kHz. Of the 40 ears tested, only two ears showed high frequency hearing loss at 14 kHz, and the remainder were normal at all other frequencies (taking into consideration that four of the 40 tested ears gave no response at 14 kHz, and 16 of the 40 tested ears gave no response at 16 kHz up to the maximum sound level tested). Table 3: High frequency audiometry thresholds in the two studied groups. Table 5: High frequency audiometry thresholds in the two studied groups at ages from 31 to 40 years. Correlation between high frequency audiometry thresholds and age in controls and cases Tables 8 and 9 demonstrate the correlation between age and high frequency thresholds in cases and controls, respectively, showing a statistically significant positive correlation between age and HFA thresholds in cases starting from 11.2 kHz and in the control group starting from 10 kHz. Pitch matching and loudness matching Table 10 shows the distribution of the studied cases according to pitch matching and loudness matching. Tinnitus pitch ranged from 1 to 9 kHz, with a mean of 3.24 kHz. Loudness matches ranged from 14 dB SL up to 60 dB SL, with a mean of 31.42 dB SL. Discussion The main risk factor for tinnitus is hearing loss [20]. However, this association is not simple or straightforward [21]. Some participants with troublesome tinnitus have audiometrically normal hearing and, conversely, many participants with hearing loss do not report tinnitus [20]. It has been argued that a normal PTA does not necessarily exclude cochlear damage [11]. Thus, the aim of this study was to explore the results of HFA and see whether it provides additional information in tinnitus patients with normal hearing on conventional audiometry. High frequency audiometry thresholds in normal participants Normal HFA thresholds were calculated by using the mean + 2 SD in the control group, with each age group calculated separately (from 20 to 30 years, from 31 to 40 years, and from 41 to 50 years). All participants were able to respond within the maximum sound levels tested up to 12.5 kHz in the EHF range. The number of participants not responding to the maximum sound levels presented above 12.5 kHz increased as the frequency increased, especially in the older age groups. The absence of responses to the EHFs tested in the older age groups is in accordance with other authors' reports of a general tendency toward a gradual decrease of hearing sensitivity at higher frequencies and with increasing age [22,23].
The shift occurs first at the highest frequencies and then progresses to lower frequencies as participants increase in age [8]. The dispersal of the data with increasing frequency demonstrates the great variability of values present in the general population. This could be explained by individual differences in the aging process, dietary quality, and individual nutrient intake. Environmental factors such as noise exposure, the accumulation of ototoxic materials, and the aging process itself also influence hearing outcomes [8]. These results were supported by another study that enrolled 645 healthy volunteers, who were divided into seven age groups at 10-year intervals [8]. That study showed an increase in hearing thresholds as frequency increased over the conventional and EHF ranges, and some participants stopped responding starting from 11.2 kHz [8]. High frequency audiometry thresholds in tinnitus patients HFA did not reveal any significant difference in mean thresholds between our group of normal-hearing tinnitus patients and a matched group of tinnitus-free controls, suggesting that tinnitus with a normal conventional audiogram does not reflect detectable cochlear damage in the EHF range. Supporting our results, a study done by Barnea included 17 tinnitus patients aged 21–45 years (mean = 35 years) with normal hearing and 17 participants as a control group, matched on the mean thresholds across the range from 2 to 8 kHz in each ear. Their results also showed that no significant differences were found between the two groups in terms of mean thresholds across the frequency range from 9 to 20 kHz [24]. A study by Shim et al. [25] enrolled 18 tinnitus patients who had hearing levels of less than 25 dB at frequencies of 250–8000 Hz. HFA was performed, and the mean hearing thresholds at 10, 12, 14, and 16 kHz of each tinnitus ear were compared with those of 10 age-matched and sex-matched normal ears. In that study, 12 patients had significantly increased hearing thresholds at more than one of the four high frequencies compared with the normal group. When the results were assessed according to frequency, eight patients had decreased hearing ability at 10 kHz, 10 at 12 kHz, eight at 14 kHz, and four at 16 kHz. The higher number of abnormal cases compared with our study may be due to their use of the mean as the normative value, whereas in the current study we used 2 SD above the mean. A possible explanation of tinnitus with no hearing loss may be involvement of the central nervous system with no damage to the peripheral sensory organs. In most tinnitus patients, the afferent signals are affected by damage to peripheral sensory organs, and plastic changes might follow in the central auditory pathway, which may induce spontaneous activity. However, a decrease in afferent acoustic signals is not essential [13,14]. For example, in individuals with somatic tinnitus syndrome, somatic stimuli may stimulate a specific area of the acoustic center. This may cause tinnitus, which occurs regardless of hearing ability [25]. Furthermore, in patients without a decrease in hearing ability, damage to the hair cells in the peripheral sensory organs may be mild, and biochemical changes preceding structural damage in the hair cells may induce tinnitus [26]. Additionally, HFA employed an evoking stimulus, whereas tinnitus is argued to be caused by abnormal spontaneous hyperactivity in the auditory pathways.
Demonstrable differences between normal listeners with and without tinnitus might therefore be reflected in the spontaneous activity of the auditory pathways [24]. Pitch matching and loudness matching There was a wide range of intersubject variability in tinnitus pitch (1–9 kHz). Tinnitus loudness at the tinnitus pitch frequency was found to have a mean of 31.42 dB SL (ranging from 14 dB SL up to 60 dB SL). A study done by Barnea [24] found pitch matches in the range from 0.25 to 16 kHz, with a mean of 6.8 kHz, and a mean loudness match of 15.3 dB SL, with a range of 0–45 dB SL. This variability in the tinnitus pitch of participants with normal hearing sensitivity might partially be caused by the fact that these participants struggle to establish their tinnitus pitch [24]. Conclusion The results of this study suggest that tinnitus with a normal conventional audiogram does not necessarily reflect appreciable cochlear damage at the EHFs, which might point to a central cause of tinnitus. Recommendations Future studies on participants with normal hearing sensitivity, particularly on the spontaneous activity of the auditory pathways, are needed to provide further information about tinnitus in normal listeners. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
2019-03-18T14:04:01.857Z
2018-10-01T00:00:00.000
{ "year": 2018, "sha1": "bb46af82b14e643e7f3d1e9ca539f08501ea6c89", "oa_license": "CCBY", "oa_url": "https://ejo.springeropen.com/track/pdf/10.4103/ejo.ejo_44_18", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b8623ad9f069efc7affa94676eb0a937365b13d5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259942023
pes2o/s2orc
v3-fos-license
The combination of nutrition education at school and home visits to improve adolescents' nutritional literacy and diet quality in food-insecure households in a post-disaster area (De-Nulit study): A study protocol of a cluster randomized controlled trial (CRCT)

Nutrition education is a method often used to change eating behaviour; however, its effectiveness among adolescents living in food-insecure households is rarely investigated. The purpose of this study was to assess the impact of a combination of nutritional education held at school and home visits on nutritional literacy, and its effect on the quality of the adolescents' diet, so that the results can serve as a strategy to improve nutritional literacy and diet quality among adolescents living in food-insecure households in post-disaster areas. The De-Nulit Study is a cluster randomized controlled trial (CRCT) of a three-month intervention combining nutritional education given at school with home visits, for adolescents aged 15 to 17 years living in food-insecure households. Randomized sampling was carried out at four schools located nearest to the areas most heavily affected by the major natural disasters of 2018. The school-based nutritional education is delivered in eight sessions, whereas the home visits, which use a motivational interview approach, are carried out four times. The control group will receive leaflets, given three times over the three months, and each group will receive a food stamp worth $7.6 per month for three months. The trial has been registered in the Thai Clinical Trials Registry (TCTR) under identification number TCTR20220203003, issued on 03 February 2022.

Background

Nutritional education is a common approach to changing eating behavior [1][2][3].
In post-disaster areas, nutritional education becomes crucial during the rehabilitation and reconstruction period. Within this period, economic activity begins to restart and households can make their own food choices, although their dependence on fund and aid distribution from the government remains high [4]. Failure to recognize nutritional problems, together with a low level of nutritional awareness in the post-disaster period, can increase morbidity and mortality [4]. Globally, nutrition education for adolescents has been carried out widely, with varying degrees of success in changing their eating behavior [1][2][3]. Nutrition education aimed at adolescents living in food insecurity has already been conducted in several places, also with varying degrees of success. Most studies assessed the components of knowledge [5][6][7][8][9] and eating behavior [5,7,[9][10][11][12][13]. One study was conducted in Lebanon on Syrian refugees [9], whereas some other studies were conducted in the United States, with sample sizes ranging from 15 [7] to 1,136 adolescents [10]. Nutritional education interventions for adolescents are mostly implemented as school-based interventions, with differing effectiveness [2,[14][15][16]. Nutritional education in schools is very important for building peer-group support through group interactions, which can strengthen individual subjective norms [17], but adolescents often have difficulty setting priorities, goals, and action plans for changing their nutritional behavior. In addition, nutritional information obtained at school is difficult to put into practice at home without the help of a mentor or someone more knowledgeable in the area [18]. Increasing adolescents' self-confidence and strengthening their motivation through individual and family approaches are therefore important in nutrition education interventions, so that adolescents become able to make decisions based on the nutritional knowledge they have acquired [19]. A combination of community-based nutritional education with an individual approach through home visits has been successfully applied to groups of mothers and children, improving their knowledge, attitudes, and self-confidence in carrying out good nutritional behavior and improving nutritional status [20,21]. For adolescents, nutritional education interventions that involved parents were effective in increasing vegetable and fruit intake [22], while an intervention delivered by university students in the form of motivational interviews had a significant effect on changing snack consumption behavior among adolescents of low socioeconomic status [23]. The home visit concept was initiated and applied in community-based interventions for mothers and children [20]. The purpose of such programs is to improve existing health services, closing gaps through volunteer services targeted at families, so that children in food-insecure groups can grow up with good parenting support in caring communities and families [24]. Home visits have also become a community-level approach aimed at families experiencing food insecurity [25]. Home visits support families by helping to fulfil basic family needs, sharing prenatal and childcare knowledge, and linking them to available health services [24].
Most of the home visits were carried out by community members who had received training related to the program being implemented [20]. Adopting the home visit concept for adolescents has the potential to close the gap between nutritional education given at school and its implementation at home. Adolescents are expected to increase their nutritional literacy and to be able to change their behavior so that they can meet their nutritional needs with the support of families and caring communities [23]. In vulnerable groups, such as adolescents living in food-insecure households, home visits support adolescents and families against the obstacles they face when changing their eating behavior. Problems are especially apparent in applying nutritious eating with minimal resources, which shapes adolescents' eating patterns [26]. This research will provide strategies to improve nutritional literacy and diet quality through a nutritional education approach aimed at food-insecure adolescent groups. Efforts to improve nutrition and health need to be carried out especially for groups that receive less attention and are more vulnerable to worse nutritional and health conditions. The study's outcomes will address the difference in mean nutritional literacy score and diet quality between adolescents in food-insecure households in post-disaster areas who received nutrition education at school plus home visits and adolescents in the control group who received only nutrition information leaflets.

Methods/design

This study aims to assess the effect of a combination of nutrition education at school and home visits on increasing nutritional literacy, and its impact on diet quality, among adolescents living in food-insecure households in post-disaster areas, taking Palu City, Indonesia, as a case study. Nutrition education is carried out on the basis of the behavioral change mediators in the Theory of Planned Behavior. The selected design is a cluster randomized controlled trial (CRCT) with random allocation at the school level, to examine the effect of the combined school-based nutrition education and home visits on the nutritional literacy and diet quality of adolescents in food-insecure households. The study is called the De-Nulit Study, which stands for Diet and Nutrition Literacy. Nutrition education at school and at home is carried out to improve the nutritional literacy and diet quality of adolescents in food-insecure households in the post-disaster areas of Palu City. This city was hit by major disasters, a powerful earthquake, tsunami, and soil liquefaction, in September 2018, with a death toll of more than 3,600 people and 40,738 refugees [27]. The catastrophic event not only brought casualties but also affected socio-economic conditions: recorded material losses comprised 17,293 lightly damaged houses, 12,717 moderately damaged houses, 9,181 heavily damaged houses, and 3,673 houses completely destroyed [27]. The main hypothesis of this study is that a combination of nutrition education at school and home visits over three months is effective in improving the nutritional literacy and diet quality of adolescents living in food-insecure households, and that, in addition, the eating habits of their mothers improve compared with the control group.
The sample size, inclusion, and exclusion criteria

Adolescents are included in the study if they live in a food-insecure household together with their mother, are aged 15 to 17 years, are in grade X or XI at school, have not been absent from school for more than two days in the last semester, have no allergies or chronic diseases, are not on a special diet, and are willing to participate; the adolescents sign informed assent and their mothers sign informed consent. Assent and consent forms were collected by research staff. The total sample for the nutritional education intervention and home visits was determined using the formula of Armitage and Berry (1987) [28]. The power calculation for the main outcome was based on a previous study in Mumbai, India, in which a two-month nutritional education intervention for adolescents increased the diet quality score by 7.6 [29]. Four schools located closest to the disaster site were willing to participate. Because randomization was implemented at the school level, the design effect was taken into account as a correction factor in determining the sample size. The intra-cluster correlation, based on school-based intervention studies, was taken as 0.014 [30]. With an anticipated dropout of 10%, the required sample was 27 adolescents in each cluster, giving a total sample of 108 adolescents: 54 in the intervention group and 54 in the control group. The stages of the study are shown in Fig. 1.

Randomization

Randomization was done at the cluster (school) level. School allocation was carried out with a computer randomization program: of the four schools, two were randomly assigned to the intervention group and two to the control group. Each school contributed 27 subjects, making 54 subjects in the intervention group (De-Nulit) and 54 subjects in the control group. The nature of the intervention made blinding impossible for both participants and researchers. However, the assessors were blind to group allocation at baseline as well as at the final data collection and follow-up; blinding could not be fully guaranteed, as allocation may be disclosed by adolescents or their mothers. The statistician who performed the data analysis was blinded to the study group, using the intervention numerical code only.

Primary outcome

The main outcomes of this study are nutritional literacy and diet quality, measured at baseline, endline, and follow-up. The endline measurement will be carried out after the three-month nutrition education intervention is completed, and the follow-up measurement three months after the intervention has ended. The follow-up measurement targets the retention of nutritional literacy and of improvements in diet quality after the end of the intervention. Nutritional literacy is assessed with the validated Nutrition Literacy Questionnaire (Nulit) [31]. Scoring is based on a five-point Likert scale with the choices 'strongly agree', 'agree', 'undecided', 'disagree', and 'strongly disagree'. The score for each statement ranges from one (lowest) to five (highest). The higher the total score across the functional, interactive, and critical nutrition literacy components, the higher the nutritional literacy.
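As a minimal illustration of this type of Likert-based component scoring, the Python sketch below sums item responses into the three literacy components and a total; the item-to-component mapping and the example answers are invented placeholders, not the actual Nulit items.

# Minimal sketch of Nulit-style Likert scoring; item IDs and their
# assignment to components are hypothetical, not the real questionnaire.
LIKERT = {"strongly disagree": 1, "disagree": 2, "undecided": 3,
          "agree": 4, "strongly agree": 5}

COMPONENTS = {
    "functional":  ["q1", "q2", "q3"],
    "interactive": ["q4", "q5"],
    "critical":    ["q6", "q7"],
}

def nulit_scores(responses):
    """Sum Likert responses (1-5) per component; higher = higher literacy."""
    comp = {c: sum(LIKERT[responses[i]] for i in items)
            for c, items in COMPONENTS.items()}
    comp["total"] = sum(comp.values())
    return comp

answers = {"q1": "agree", "q2": "strongly agree", "q3": "undecided",
           "q4": "agree", "q5": "disagree", "q6": "agree", "q7": "agree"}
print(nulit_scores(answers))  # component scores plus the overall total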
The modified IGS3-60 will be employed to measure diet quality in adolescents in this study. The modified IGS3-60 is a Healthy Eating Index (HEI) tailored and developed for adolescents in Indonesia [31], with an added iron component. The types of food consumed by the subjects are categorized into the groups of carbohydrates, sources of animal protein, vegetables, fruits, milk, and iron. The average number of food portions, based on a 2 × 24-h food recall, is then counted and the score calculated.

Secondary outcome

The secondary outcomes of this study are the mother's eating habits and nutritional literacy, as well as the habitual food intake and nutritional status of the adolescents, measured at baseline, endline, and follow-up. Mothers' eating habits were determined from the median score for eating vegetables, fruit, protein sources, and salty-sweet and fatty foods, as measured by a food frequency questionnaire with a response scale of 'never', 'less than three times per month', '1-2 times per week', '3-6 times per week', '1 time per day', and 'more than 1 time per day' [32]. The mother's nutritional literacy was determined from the total score of the functional, interactive, and critical literacy components; the higher the score, the higher the mother's nutritional literacy. Adolescents' habitual food variables comprise the habits of eating vegetables, fruit, sources of animal protein and vegetable protein, and salty, sweet, and fatty foods, and nutritional intake is measured with the food frequency questionnaire. Eating habits were determined from the median eating-habits score. Answer scores are >1 time per day (score 5), 1 time per day (score 4), 3-6 times per week (score 3), 1-2 times per week (score 2), <3 times per month (score 1), and never (score 0) [32]. Nutrient intakes, including energy (kcal), carbohydrates (grams), fat (grams), protein (grams), iron (mg), and calcium (mg), were identified using a 2 × 24-h food recall. Information on the type and amount of food intake was collected in household measures and then converted into grams with the help of food pictures [33]. Intake data are converted into nutritional values using the Indonesian Food Composition Table, together with nutritional information from packaged foods.

Socio-economic data and adolescents' characteristics

The socio-economic status of the adolescents' families was measured based on family income, parental education, household size, family type, food norms, and the mother's food consumption habits. Income is categorized into quartiles, while parental education is divided into non-school education, basic education, secondary education, and higher education [37]. Household size is divided into small, medium, and large households [38]. Family type is divided into the electron family, nuclear family, atomic family, molecular family, and joint family [39]. The mother's food norm was determined from the median value of the Healthy Eating Norm [40].
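A minimal sketch of the food-frequency scoring used above for both mothers' and adolescents' eating habits: responses are mapped to ordinal scores and habits are then classified against the sample median. The response wording follows the scale described in the text; the example respondents and the good/poor labelling of the median split are illustrative assumptions.

import statistics

# Ordinal scores for FFQ responses, following the scale described above [32].
FFQ_SCORE = {
    "never": 0,
    "<3 times per month": 1,
    "1-2 times per week": 2,
    "3-6 times per week": 3,
    "1 time per day": 4,
    ">1 time per day": 5,
}

def habit_score(ffq_responses):
    """Total eating-habit score for one respondent across food groups."""
    return sum(FFQ_SCORE[r] for r in ffq_responses.values())

# Invented respondents; the food groups and answers are made up.
sample = [
    {"vegetables": "3-6 times per week", "fruit": "1-2 times per week"},
    {"vegetables": "1 time per day",     "fruit": "never"},
    {"vegetables": "<3 times per month", "fruit": "1-2 times per week"},
]
totals = [habit_score(s) for s in sample]
cutoff = statistics.median(totals)
labels = ["good" if t >= cutoff else "poor" for t in totals]
print(totals, cutoff, labels)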
Food allocation in the households was assessed with Likert-scale questions. The mothers were asked to rank each household member's food allocation in the order of more diverse, fairly diverse, undecided, less diverse, and least diverse [41]. Food allocation covers carbohydrate sources, protein sources, vegetables, and fruits, and the food allocation for each household member is determined against the median value of the total food-group score.

Food security

Household food security was measured with the Household Food Insecurity Access Scale (HFIAS) questionnaire, consisting of nine questions [42]. Adolescents are categorized as food insecure at a score of 2 [42].

Psychological components

The Theory of Planned Behavior construct consists of the variables attitudes, subjective norms, behavioral control, and intention to have a healthy diet. The assessment comprises 16 attitude statements, 12 subjective-norm statements, 20 behavioral-control statements, and 9 intention statements. Scoring is based on five answer choices for each statement: 'strongly agree', 'agree', 'undecided', 'disagree', and 'strongly disagree'. Responses to each positive statement were scored from 5 to 1 (strongly agree to strongly disagree), and responses to negative statements were scored from 1 to 5 (strongly agree to strongly disagree) [43]. Attitudes, subjective norms, behavioral control, and intention are determined from the median score. The assessment of the Theory of Planned Behavior construct on a healthy diet uses a questionnaire that has been validated and assessed for reliability [44].

Interventions and control group

The intervention group will receive a combination of nutrition education at school and home visits. The school-based nutrition education is provided in eight sessions, while home visits with a motivational interview approach are carried out four times. The nutrition education in schools and the home visits are carried out over a span of three months. School sessions last 60 to 120 min per week, and home visits last 60 min and are held once or twice per month. Details of the weekly activity times are shown in Table 1. The nutrition education activities at school are after-school activities held every Saturday, guided by two facilitators in each class. The facilitators in this study were nutritionists with at least a bachelor's degree in health nutrition who understood how to conduct participatory education for adolescents. The school activities include games, role play, practicum, discussions, brainstorming, group work, presentations, and assignments, and the material is delivered through a participatory, interactive approach in an enjoyable way. The nutrition education sessions at school cover the following materials: the importance of nutrition for adolescents; consequences of malnutrition and excess nutrition in adolescents; balanced nutrition for teenagers; my dinner plate; food exchange material; vegetables and fruit; food sources of protein; food and beverage labels; sugar, salt, and fat; and nutrition facts and hoaxes. The adolescents will be given assignments to deepen their knowledge and skills, especially those relating to their ability to promote healthy food to friends and family. In addition to the material delivered in a participatory manner at school, the adolescents will also receive videos related to the material studied.
They will be asked to convey what they have learned to their families, especially their mothers, using a set of prepared questions and answers. Home visit activities include motivational interviews, practical assistance, and a 24-h dietary recall. The 24-h dietary recall is carried out to determine changes in food consumption in line with the nutritional education received. Difficulties encountered while adopting a balanced diet are also identified, and the adolescents are asked to propose solutions to the nutrition problems that hamper them. The motivator provides motivation and practical assistance on matters related to a balanced diet, such as determining single-meal portions and reading food labels. If for any reason an adolescent misses a school activity, for example because of illness, the motivator conducts a brief review of the material given at school. Home visits do not aim directly to increase family nutritional knowledge, especially the mothers'; rather, the adolescents are asked to communicate the information they obtain to their families. The motivators must have a nutritional or public health education background and have completed at least two years of nutrition education. The media used in the nutrition education process are the adolescent module, the facilitator module, the motivator module, and video materials. The adolescent module serves as a source of nutritional information and as a guide for the adolescents in nutrition education activities, whether done at home or at school. Its content comprises the nutrition education material delivered at school, assignment sheets, commitment sheets, and the weekly change plans the adolescents make. The adolescents are also given nutrition education media in the form of videos; these reinforce the material given at school and are presented as illustrations. The facilitators are given a facilitator module as a guide for carrying out the facilitation of nutrition education in schools. The facilitator module is also intended as a source of information about the materials that need to be conveyed to, and understood by, the adolescents; it contains the nutrition education materials as well as detailed technical steps for facilitation. A module is likewise given to the motivators as a guide for conducting motivational interviews and for providing technical assistance to improve the adolescents' nutritional skills; the motivator module contains the nutrition education materials and the technical steps for motivational interviewing and skills assistance. The control group will receive leaflets, given three times, one in each month of the three-month period. Leaflets are among the most frequently used media for disseminating health information, including to adolescents. The first leaflet contains the 10 balanced nutrition messages, the second leaflet covers 'My Plate' contents, and the third leaflet explains how to read food labels. The leaflets are issued by the Ministry of Health of the Republic of Indonesia. Every adolescent in both groups will receive a food stamp to ensure that their household has sufficient access to groceries.
The coupons can be exchanged for vegetables, fruit, fish, meat, eggs, and nuts, and cannot be exchanged for other foodstuffs such as spices, sugar, oil, or flour. The coupons can be exchanged at designated grocery stalls. Every subject in each group receives a coupon worth $7.6 per month, provided to the adolescent for three months.

Training for facilitators and motivators

Facilitator training aims to enable the facilitators to carry out the nutritional education process and to achieve the competencies required by the stated goal of changing food behavior. Facilitator training is based on the developed facilitator module, and motivator training was likewise carried out after the motivator module had been developed. Motivator training is held to ensure that the motivators are competent in their role, conducting appropriate motivational interviews and providing the practical assistance adolescents need to achieve the goals that have been set. Facilitators and motivators may carry out the facilitation and motivation process once they have achieved a minimum standard score of 80 on the post-test.

Statistical analysis

Descriptive data for each variable are presented as means, medians, standard deviations, and percentages. To examine whether nutritional literacy is related to diet quality through attitudes, subjective norms, behavioral control, and intention (the TPB constructs), a mediation analysis is carried out. Multivariate regression analysis is conducted to examine factors related to the quality of the adolescents' diet. Differences in the adolescents' nutritional literacy, diet quality, and eating habits before the intervention, after the intervention, and at follow-up are analyzed between the intervention and control groups using ANOVA, adjusting for apparent confounding variables. A post hoc test is then applied as a follow-up analysis when an effect of the school-based nutrition education and home visits is found across the measurement times. The p-value threshold for rejecting the null hypothesis is <0.05.
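As a rough sketch of the planned group-by-time comparison (the study itself uses ANOVA with confounders in SPSS, so this is a substitute illustration, not the authors' analysis), the following Python snippet fits a mixed model of diet quality across the three measurement points with a covariate; the variable names and the CSV layout are assumptions.

# Illustrative group-by-time analysis; column names and file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per adolescent per time point
# (time coded as baseline / endline / followup).
df = pd.read_csv("denulit_long.csv")

# A random intercept per subject handles the repeated measurements;
# 'sex' stands in for a confounding covariate.
model = smf.mixedlm(
    "diet_quality ~ C(group) * C(time) + sex",
    data=df,
    groups="subject_id",
).fit()
print(model.summary())  # the group:time terms carry the intervention effect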
Discussion

This study examines the effectiveness of a combination of nutrition education held at school and home visits, delivered 12 times in total over a three-month period, compared with the provision of leaflets, in improving the nutritional literacy and diet quality of adolescents in food-insecure households in post-disaster areas. The study hypothesis is that this combination is effective in increasing the adolescents' nutritional literacy and diet quality, and that, in addition, the mothers of the adolescents in the intervention group show better eating habits than those in the control group. A similar study was conducted in Lebanon on humanitarian conflict refugees in a younger age group (6-14 years) [9]; it showed improvements in knowledge, attitude, and nutrient intake, as well as in nutritional status measured as body mass index for age. Several other studies have also been conducted on food-insecure adolescents, with mixed results [5,45]. One study showed significant increases in knowledge, self-efficacy, and vegetable and fruit intake scores [5]. However, a systematic review showed that adolescent behaviour change interventions had little effect on changing healthy eating habits [45].

This study provides an overview of solutions, still rarely implemented, for improving nutritional behaviour and diet quality in vulnerable groups. Among its strengths, the school-based nutritional education is able to reach a large number of adolescents who are pursuing education. Furthermore, the research involves university students as community motivators, a form of community service that can serve as a model for sustainable community engagement in higher education. The study also provides information on the role of nutrition education in the implementation of food assistance programs in post-disaster areas; efforts to increase nutritional literacy are essential components that should be included in any food assistance program provided by the government or other donor agencies. The study also has limitations. The research was conducted three years after the major natural disasters, so differences in results according to the time elapsed since the event could not be examined. In addition, several environmental factors, such as the availability of healthy food in the school canteen and the involvement of teachers in the nutrition education process, were not the focus of this study; the results therefore do not fully reflect the involvement of school elements. Furthermore, the study design did not allow blinding of participants and researchers; however, the baseline, endline, and follow-up measurements were performed by blinded assessors to reduce potential bias.

Conclusion

Efforts to improve the quality of adolescent diets need to be carried out for food-vulnerable groups, including in post-disaster areas, which are prone to socio-economic changes that can exacerbate nutritional and health conditions. Nutrition education and eating-behaviour change are strategies often employed among adolescent groups to improve nutrition and health. Studies in post-disaster areas are rarely carried out, yet their results are much needed to determine the effect of nutrition education interventions on food-insecure adolescents in these areas that are vulnerable to socio-economic change.

Trial status

Recruitment of subjects began in March 2022, and the intervention began in May 2022 and is currently ongoing. The final assessment will be conducted in August 2022 and the follow-up assessment in December 2022.

Ethics approval and consent to participate

The Ethical Committee of IPB University approved the research under registration number 464/IT3.KEPMSM-IPB/SK/2021. The trial has also been recorded in the Thai Clinical Trials Registry (TCTR) under identification number TCTR20220203003. Every subject declared their willingness to participate in writing, after receiving an explanation and before signing the approval form.

Consent for publication

Not applicable.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available because agreement could not be obtained from all members of the research team and the funding beneficiary, owing in part to the likelihood of data misinterpretation.

Funding

The entire study was funded by the Neys-van Hoogstraten Foundation (NHF), The Netherlands, under grant number (NHF code) IN340. NHF provided approval and funding for the study.
Publication of the article was funded by the Neys-van Hoogstraten Foundation (NHF), The Netherlands, and Tadulako University, Indonesia.

Authors' contributions

Ten authors participated in this study, with the following roles. NUD designed the concept and study design, drafted the manuscript, conducted revisions, and ensured that the field study ran according to the stated objectives. AK supervised the study, provided constructive criticism and suggestions on the manuscript, and contributed to the study design. CMD prepared the manuscript and the research instruments, handled the validation processes, and revised the manuscript. HR performed the data processing and statistical analysis, ensured data accuracy, and wrote the manuscript. IE contributed to the study design, supervised the facilitators and motivators, and prepared the manuscript. DAH prepared the data collection process, ensured that it proceeded without problems, and took responsibility for revising the manuscript. BOH interpreted the data and prepared the manuscript and the educational materials used at school and during home visits. UA conducted the facilitator and motivator training, the data analysis, and manuscript revision. NF designed and validated the educational materials and provided criticism of the manuscript draft. Finally, RNF handled funding acquisition, administration, and revision of the manuscript.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.
2023-07-18T15:04:11.137Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "6969e48c3675b63f99d30382b91ef620fbcfb895", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.conctc.2023.101185", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f20b9b4b2fb8fefe60c4b2528b2098cd7ac5dbcc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
225191314
pes2o/s2orc
v3-fos-license
INFLUENCE OF MEDIUM CHAIN FATTY ACIDS ON SOME BOTRYTISED WINE-RELATED YEAST SPECIES AND ON SPONTANEOUS REFERMENTATION OF TOKAJ ESSENCE

Medium chain fatty acids are candidates for partial sulphur dioxide replacement in wine, in response to growing consumer concerns about chemical additives. In botrytised sweet wine specialties, the addition of a large amount of sulphur dioxide is one of the effective practices used to stop alcoholic fermentation. Increasing medium chain fatty acid levels, up to 80 mg/l, were tested as a sole inhibitor on solid agar surfaces. S. bacillaris seemed to be the most sensitive, S. cerevisiae and S. bayanus were more tolerant, while Z. bailii showed the highest tolerance. Then, increasing medium chain fatty acid levels, up to 40 mg/l, combined with 100 mg/l sulphur dioxide, were introduced into a Tokaj Essence undergoing refermentation. After 56 days, the highest dosage had a pronounced effect on the yeast population, but the refermentation was not inhibited completely. Medium chain fatty acids thus have varying inhibitory effects on botrytised wine-related yeasts; moreover, they can be used effectively in media with high ethanol content, unlike Tokaj Essence.

Nowadays, consumers are particularly concerned about health aspects connected with sulphite toxicity in wine; therefore, the current general tendency is to reduce the use of sulphite in winemaking. Sulphur dioxide is the most frequently used chemical additive in winemaking, employed for multiple benefits as an antiseptic, antioxidant, and colour, fragrance, and taste protector (S et al., 2012). Based on current knowledge, none of the studied alternatives can totally replace SO2, which remains a useful, sometimes indispensable agent. In Tokaj botrytised wine specialties, it is a widespread practice to stop alcoholic fermentation and preserve the residual sugars with a considerable SO2 addition in combination with cooling, racking, and microfiltration (M, 2011). In some cases, it is a challenge to meet the upper limit of total SO2 concentration set by EU Commission legislation 607/2009/EC (EC, 2009). Consequently, any effective SO2 replacement could facilitate botrytised winemaking. Recent candidates for partial substitution of SO2 in wine are the medium chain fatty acids (MCFA). MCFA and their esters are common yeast secondary metabolites, usually produced in small quantities (B et al., 2018). Earlier studies revealed that artificially added MCFA could be used to stop an alcoholic fermentation carried out by S. cerevisiae (B et al., 2012), to inhibit refermentation (B, 2014), and consequently to decrease the necessary SO2 addition. MCFA application in wines has been studied only by a few groups (e.g. B et al., 2012; B et al., 2017), in normal winemaking environments. However, the inhibitory effect might be significantly different for various yeast species and in special winemaking environments, such as botrytised winemaking. In botrytised wine fermentation, the original yeast biota of the grape berry is altered considerably (reviewed by e.g. S, 2019), and the harsh fermentation conditions are tolerable only for well-adapted species. Among others, Saccharomyces cerevisiae is of great importance, and Saccharomyces bayanus is also well represented (M-P et al., 2010). Starmerella bacillaris (syn. Candida zemplinina) was originally described in the Tokaj wine region and connected to botrytised, sweet wine fermentation (S, 2003).
Due to the significant amount of remaining sugars, refermentation of these wine specialties by the tolerant spoilage yeast Zygosaccharomyces bailii is a major threat (A et al., 2015). In this study, we first focused on characterising the general tolerance of S. cerevisiae, S. bayanus, S. bacillaris, and Z. bailii against MCFA as a sole additive in the medium. Furthermore, various MCFA concentrations were tested in combination with SO2 to inhibit spontaneous refermentation in a Tokaj Essence.

Tolerance test

Yeast strains: The yeast strains are shown in Table 1. The natural isolates had previously been identified by their rDNA and ITS regions (based on the methods of Z and co-workers, 2010), except the Z. bailii strains, which had been identified earlier by classical methods, upon characteristic sporulation and physiological traits. Inoculum was prepared in YEPD broth (20 g/l glucose, 10 g/l peptone, and 10 g/l yeast extract) and incubated (25 °C/48 h) without agitation.

Fermentation conditions: Fermentation (preculturing) was carried out under semi-anaerobic conditions at 20 °C, in test tubes containing 5 ml aliquots of model media, without shaking. Tubes were inoculated to a level of 1×10^6 cells/ml with 48-hour-old yeast cultures grown in YEPD broth. Cell concentration was measured by Bürker chamber cell counting after 72 h of fermentation.

Drop test: After the 72-hour fermentation without preservative, the inhibitory effect of MCFA was tested on solid agar surfaces. The test was carried out with 5 μl of serial dilutions from the cultures (10^-1, 10^-3, 10^-5), in triplicate, according to the modified method of P-T and co-workers (2016). Into the agar, 0, 10, 20, 40, and 80 mg/l of MCFA mixture was introduced. Based on the results of B (2014), the MCFA mixture contained C8:C10:C12 in a 2:7:1 ratio, dissolved in 70 v/v% ethanol. After 7 days of incubation at 20 °C, drop test images were recorded in a fixed vision system with a Sony Exmor RS IMX315 camera (Sony Corp., Minato, Japan). Growth area analysis with ImageJ software (S et al., 2012) was used to assess the capability of the strains to grow under various MCFA conditions. Growth values, given as percentages, are raw colony-area means of the triplicate drop tests, normalised to the control growth of each strain. Data were evaluated with ANOVA after checking the assumptions, using IBM SPSS 23.0 (Armonk, NY, USA).

Inhibition of refermentation in Tokaj Essence by MCFA

Culture media: A Tokaj Essence from vintage 2005 was bottle-aged by the producer at 12 °C in cellar conditions, then re-bottled in 2018 into Tokaj-shaped 0.33 l bottles. Two weeks after bottling, the Essence was sent to our laboratory in a state of spontaneous refermentation, with an initial cell concentration of 1.86×10^5 CFU/ml. Total yeast count was determined by culturing on DRBC agar (Sigma Aldrich), and the population was found to be heterogeneous based on colony and microscopic morphology (no further identification was performed). Basic parameters of the Tokaj Essence were determined according to the official OIV methods: 0/38 mg/l free/total SO2 (OIV-MA-AS323-04B), 54.19 °Brix total soluble solids (OIV-MA-AS2-02), and 2.32 v/v% ethanol (OIV-MA-AS312-01A).

Refermentation conditions: To stop the spoilage, 0, 10, 20, and 40 mg/l of MCFA mixture was applied to 150 ml of Tokaj Essence in 200 ml flasks, in duplicate, incubated at 15 °C. Twenty-four hours after the MCFA dosage, 100 mg/l SO2 was added to each of these treatments (except for an absolute control, where neither SO2 nor MCFA was used), according to the recommendation of an earlier study (B, 2014). The composition of the MCFA mixture was identical to that described above (see the drop test section). Population dynamics of the MCFA-treated Tokaj Essence were followed by traditional plating of serial dilutions on DRBC agar, sampling at days 0, 1, 2, 7, 14, 21, 28, and 56. All chemicals were purchased from Sigma-Aldrich Chemie GmbH (Munich, Germany).
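To illustrate the normalisation used in the drop-test analysis above (colony areas expressed as a percentage of each strain's MCFA-free control), here is a minimal Python sketch; the strain names come from this study, but the area values are invented for illustration.

# Hypothetical raw colony areas (pixels) from triplicate drop tests,
# per strain and MCFA level (mg/l); not measured values from the study.
import numpy as np

areas = {
    ("S. bacillaris", 0):  [5200, 5100, 5300],
    ("S. bacillaris", 40): [1400, 1550, 1300],
    ("Z. bailii", 0):      [4800, 4900, 4700],
    ("Z. bailii", 40):     [4500, 4600, 4400],
}

def growth_percent(strain, mcfa, data=areas):
    """Mean colony area at a given MCFA level, as % of that strain's control."""
    control = np.mean(data[(strain, 0)])
    return 100.0 * np.mean(data[(strain, mcfa)]) / control

for strain in ("S. bacillaris", "Z. bailii"):
    print(strain, f"{growth_percent(strain, 40):.1f}% of control at 40 mg/l")

Normalising each strain to its own control, as described in the text, separates the MCFA effect from intrinsic differences in colony size between strains.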
Tolerance test

Four botrytised wine-related yeast species were evaluated in terms of their growth tolerance towards increasing amounts of MCFA. Considering the suggested practical application of MCFA, namely to stop fermentation (B, 2017), the test conditions included a small amount (5 v/v%) of ethanol in the medium. All species under study showed some intraspecific variation, between 5.8% and 25.6%, without correlation among better MCFA tolerance, better fermentation ability (e.g. M & T, 2011), and geographical origin (Table 1). With increasing MCFA levels, considerable differences were detected among the investigated species in their tolerance (Fig. 1). At 10 mg/l MCFA, all S. bacillaris strains were slightly inhibited, while the other three species were not influenced significantly (Fig. 2). In earlier studies, this MCFA level seemed to be effective in combination with SO2 (100 mg/l total) and a higher concentration of ethanol (12 v/v%) in fermenting wine (B et al., 2017). From our results it can be seen that the MCFA mixture without SO2, and in the presence of only 5 v/v% ethanol, cannot inhibit the growth of the investigated strains at this low concentration. At 20 and 40 mg/l MCFA, the S. bacillaris strains were further inhibited. S. cerevisiae strains were slightly reduced in growth at 20 mg/l MCFA, while the effect of 40 mg/l MCFA was more pronounced. S. uvarum growth was not influenced considerably by 20 mg/l MCFA, while 40 mg/l MCFA resulted in noticeable intraspecific variance: E105, SB42, and TKH1 did not seem to be influenced, while S103 decreased moderately and CBS395 showed the highest sensitivity, comparable with that of S. bacillaris (Fig. 2). Z. bailii strains were able to tolerate these concentrations without significant reduction (Fig. 2). At 80 mg/l MCFA, all S. uvarum and S. bacillaris strains were inhibited completely, while some very limited growth was detected in the case of the S. cerevisiae strains. Z. bailii strains still showed significant growth and excellent tolerance. This could be a limitation of industrial MCFA application against refermentation, since this wine yeast is often responsible for the spoilage of sweet wines (A et al., 2015). In this investigation, the MCFA mixture as a sole yeast inhibitor seemed to be effective only at considerably higher levels than in the combinations used in earlier works (B et al., 2012; B et al., 2017).

Stopping refermentation in Tokaj Essence by MCFA

At the start, the Tokaj Essence had a considerable yeast concentration of 1.86×10^5 CFU/ml, and this heterogeneous yeast population presented excellent overall tolerance towards the low cellar temperature and the extreme amount of sugars (54.19 °Brix).
In general, MCFA addition had a prompt effect on the population: after 1 day, the cell concentration had decreased by one order of magnitude in the case of 10 and 20 mg/l MCFA, and by two orders of magnitude in the case of 40 mg/l MCFA. The SO2 addition combined with the MCFA did not have an additional short-term effect on the yeast population (Fig. 3). Comparing the absolute and sulphited controls, it can be seen that this SO2 concentration by itself did not inhibit refermentation (Fig. 3), which is in accordance with the high sulphite-binding capacity of botrytised wines and general yeast characteristics (R S, 1993). The 10 mg/l MCFA + 100 mg/l SO2 treatment had a negligible inhibitory effect on the yeast population in the first 28 days, while with 20 mg/l MCFA + 100 mg/l SO2 cell concentrations were lower than in both controls, but the inhibition was still rather limited. The 40 mg/l MCFA + 100 mg/l SO2 treatment had a more pronounced effect on the refermentation, with a gradual decline observed (Fig. 3). After 28 days, the cell concentration had decreased to the 10^1 CFU/ml level, but the Essence still could not be regarded as stable and free from possible refermentation. After 56 days, the 10 and 20 mg/l MCFA + 100 mg/l SO2 treatments had reduced the living yeast cell concentration by only two orders of magnitude (Fig. 3), which is a considerable decrease relative to the initial cell number, although 100 mg/l SO2 alone had the same inhibitory effect. In the case of 40 mg/l MCFA + 100 mg/l SO2, the cell concentration remained in the 10^1 CFU/ml range; however, from an oenological point of view, even this lowest remaining cell count is not acceptable in bottled wine. These results are difficult to compare with earlier works, since the parameters of botrytised wine specialties, particularly Tokaj Essence, differ considerably from those of a normal wine. The limited inhibitory effect of the MCFA dosage must be influenced by the reduced ethanol content of the Tokaj Essence, which is in accordance with an earlier finding on S. cerevisiae (V et al., 1989).

Conclusions

With increasing MCFA concentrations, considerable differences in growth were detected among the investigated yeast species. S. bacillaris seemed to be the most sensitive, S. cerevisiae and S. bayanus were more tolerant, while Z. bailii showed the highest tolerance. It can be concluded that at low ethanol content (5 v/v%) and without SO2, the MCFA mixture as a sole additive needs to be applied in considerably higher amounts than suggested for normal wines. The inhibitory effect of MCFA should be thoroughly tested in the future with a wider strain set of the currently investigated and other species. The MCFA-SO2 combinations had a rather limited inhibitory effect on the Tokaj Essence refermentation, possibly due to the low alcohol content of this botrytised wine specialty, although the excellent general stress tolerance of the spoilage yeasts should not be excluded. Consequently, future MCFA application should be restricted to wine media in which a significant amount of ethanol is present, in order to reach acceptable inhibition.
2020-09-03T09:05:30.942Z
2020-08-27T00:00:00.000
{ "year": 2020, "sha1": "ba4d554003235aaeceebe13d699e0b3432430a8c", "oa_license": "CCBY", "oa_url": "https://akjournals.com/downloadpdf/journals/066/49/3/article-p339.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "3334cb5f73c40982da42bcef13b6890f5d666bd6", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Chemistry" ] }
54648605
pes2o/s2orc
v3-fos-license
Substance abuse in patients admitted voluntarily and involuntarily to acute psychiatric wards: a national cross-sectional study

Background: Substance abuse and mental disorder comorbidity is high among patients admitted to acute psychiatric wards. The aim of the study was to identify this co-occurrence as a reason for involuntary admission and to determine whether specific substance use-related diagnoses were associated with such admissions. Methods: The study was part of a multicentre, cross-sectional national study carried out during 2005-2006 within a research network of acute mental health services. Seventy-five percent of Norwegian hospitals providing acute in-patient treatment participated. Substance use was measured using the Clinician Rating Scale and the ICD-10 diagnoses F10-19. Diagnostic assessments were performed by the clinicians during the hospital stay. Results: Overall, 33.2% (n=1,187) of the total patient population (3,506) were abusing alcohol or drugs prior to admission according to the Clinician Rating Scale. No difference in the overall prevalence of substance abuse-related diagnoses between the two groups was found. Overall, 310 (26%) of the admissions (216 voluntarily and 94 involuntarily admitted patients) received a double diagnosis. Frequent comorbid combinations among voluntarily admitted patients were alcohol combined with either a mood disorder (40%) or multiple mental disorders (29%). Among involuntarily admitted patients, a combination of polydrug use and schizophrenia was most frequent (47%). Substance abusing patients diagnosed with mental and behavioral disorders due to the use of psychoactive stimulant substances had a significantly higher risk of involuntary hospitalization (OR 2.3). Conclusion: Nearly one third of substance abusing patients are involuntarily admitted to mental hospitals; in particular, stimulant drug use was associated with involuntary admission.

INTRODUCTION

The prevalence of substance abuse (SA) among patients admitted to acute psychiatric wards varies according to setting and mode of measurement. The prevalence of such comorbidity among inpatients with severe mental illness ranges from 24.4% to 70.0% in reports from single wards [1][2][3][4][5][6]. Comorbid SA typically complicates recovery from mental health disorders and is associated with increased use of health services [7,8]. Involuntary admission and treatment of mentally ill patients are controversial issues in mental health care worldwide [9]. The frequency of involuntary hospitalizations varies between and even within countries and depends on legislation, clinical experience, resources, traditions, and attitudes [4,5,[10][11][12][13]. According to the Norwegian Mental Health Care Act [14], compulsory psychiatric mental health care may take place when the patient is suffering from a suspected or established serious mental disorder, to prevent severe deterioration of the patient's health status, or in cases where there is an obvious threat to the patient's own life or the life of others.
Involuntary admission rates to psychiatric hospitals in Norway are high compared with other European countries [12]. Published involuntary referral rates for 1998-2000 from other Nordic and European countries range between 6 (in Portugal) and 218 (in Finland) per 100,000 inhabitants/year [10,15,16]. In Norway, the respective incidence rates for civil commitment based on "involuntary referrals", "treatment periods", and the number of persons involved were 259, 209, and 186 per 100,000 adults/year, according to a study conducted by Iversen et al. [12]. Given the frequent application of coercive mental health care and the context of high rates of comorbid SA and mental illness in Norway [1,2,4], it is important to further investigate the role of substance abuse among patients admitted to acute psychiatric services. Previous studies have focused either on involuntary admissions and treatment in mental hospitals [4,5] or on substance abuse among mentally ill patients [1,2]. However, we have not been able to find studies that have examined comorbidity and involuntary admissions to hospitals together. One of the aims of this study was therefore to investigate whether specific substance-related diagnoses were associated with involuntary admissions. In order to provide better treatment, it is necessary to explore the extent to which the patient's behavior, i.e. drug use prior to admission, predicts the application of coercion in psychiatric wards.

Aims of the study

1. To investigate whether substance abusing patients had a higher risk of involuntary admission to acute psychiatric wards.
2. To investigate whether there are typical patterns of diagnostic comorbidity of substance abuse and mental disorders among patients abusing psychoactive substances prior to admission to acute psychiatric wards.
3. To investigate whether specific substance-related diagnoses are associated with involuntary admissions.

Setting

In Norway, the application of coercive mental health care for mentally ill patients is covered by the Mental Health Care Act [14]. The most common causes of involuntary hospital admission in mental health care are schizophrenia, paranoid psychoses, and acute reactive psychoses [4]. Another act, the Social Services Act §6.2, provides an option for involuntary hospital admission for three months for persons without severe mental illness who are primarily addicted to psychoactive substances and whose substance abuse may put their physical or mental health at risk [17]. In 2009 in Norway, a total of 87 decisions on involuntary admission of substance abusing persons to institutions were made under the Social Services Act [18], whereas more than 7,200 patients were admitted involuntarily under the Psychiatric Healthcare Act [19]. Many of these were patients with substance abuse problems, typically treated in psychiatric hospitals rather than in drug treatment facilities [17].
In 2004, the national health authorities reorganized the funding of alcohol and drug abuse treatment, and the responsibility for provision of care was transferred from the counties to the Specialist Healthcare Authorities. Currently, Social Services, together with the Psychiatric Specialist Healthcare Services and the Specialist Substance Abuse Services, share joint responsibility for SA patients. Nevertheless, these services often operate independently, with limited interaction. Thus, vulnerable substance abuse patients often experience problems when admitted to the Specialist Substance Abuse Services, leaving them without treatment that addresses their specific needs [18].

Study subjects

This study was part of the cross-sectional Multicentre study of Acute Psychiatry (MAP) in Norway. The data collection was carried out as a national cross-sectional study during 2005 and 2006 within a research network of acute mental health services. Data on patient characteristics and treatment episodes were collected from all patients admitted during a three-month period. The network was organized and coordinated by the research institute SINTEF Health Research in Norway, with support from the Norwegian Directorate of Health and Social Affairs [20,21]. The sample originally consisted of 39 wards, categorized into three groups: 4 admission wards, 28 acute wards, and 6 subacute wards. One ward was an intermediate-term ward and was removed from the sample, resulting in a total of 38 acute wards. This comprised 75% of Norwegian hospitals providing acute inpatient treatment. The clinics were located in both urban and rural parts of the country and were assumed to cover a representative sample of the Norwegian population [20]. Data from 3,506 admissions to adult acute psychiatric wards were collected. Very few patients may have had more than one admission during the three-month inclusion period. Thirty-five percent of patients were involuntarily admitted to hospital [22].

Instrument and measures

Drug and alcohol use during the six months prior to the index hospital admission was assessed with the Clinician Rating Scale [23,24], which measures the consumption of psychoactive substances on a scale from 1 to 5. The ratings are 1 = no use, 2 = use without impairment, 3 = abuse, 4 = dependence, and 5 = dependence with need for institutionalization. Use of psychoactive substances without impairment is defined as "no evidence of persistent or recurrent problems in social functioning, legal status, role functioning, mental status, or physical status, and no evidence of recurrent dangerous use". The patients were subsequently divided into two groups:
1. The non-substance abuse group, including patients who scored 1 or 2 on the Clinician Rating Scale (for alcohol and/or drugs).
2. The substance abuse group, including patients who scored 3, 4, or 5 on the Clinician Rating Scale (for alcohol and/or drugs).
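In code, this dichotomisation amounts to a simple cut-off at a score of 3 on either substance, as in the following minimal Python sketch (the patient records are invented for illustration):

# A score of 3+ on alcohol and/or drugs defines the substance abuse (SA) group.
patients = [
    {"id": 1, "crs_alcohol": 2, "crs_drugs": 1},
    {"id": 2, "crs_alcohol": 4, "crs_drugs": 2},
    {"id": 3, "crs_alcohol": 1, "crs_drugs": 3},
]

def is_substance_abuse(p):
    return max(p["crs_alcohol"], p["crs_drugs"]) >= 3

sa_group = [p["id"] for p in patients if is_substance_abuse(p)]
non_sa_group = [p["id"] for p in patients if not is_substance_abuse(p)]
print(sa_group, non_sa_group)  # -> [2, 3] [1]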
Demographic, administrative, and clinical information, in addition to one primary and up to two secondary ICD-10 diagnoses [25], were recorded for each patient. Diagnoses were based on routine clinical assessments and on structured clinical interviews that measured SA over different time periods. The Clinician Rating Scale measured alcohol and drug use, respectively, during the six months prior to admission, whereas the ICD-10 F10-19 diagnoses represent current substance use disorders as judged by the clinician during the hospital stay. The focus of this study was on patients who showed a pattern of substance use with impairment or abuse before admission to acute psychiatric wards. We were therefore notably interested in patients scoring 3 or higher on the Clinician Rating Scale; these patients formed the study sample and the basis for further analysis. Patients were tested for substance use by laboratory drug tests upon hospital admission. The Global Assessment of Functioning Scale (GAF) was used to rate social, occupational, and psychological functioning; the scores were split into symptom scores (GAFs) and function scores (GAFf) [26]. No reliability tests were carried out. All clinicians had experience in rating GAF as a routine measure required in the mental health services.

Analysis and statistical methods

Continuous data are presented as means with standard deviations (SD) and were analyzed using Student's t-test when normally distributed. Multiple logistic regression analysis was performed to investigate whether specific substance-related diagnoses predicted involuntary admission (the dependent variable). Results are presented with 95% confidence intervals. Continuous variables were checked for correlation with Spearman's rho; none of the included continuous variables had a correlation >0.7. The significance level was set at P<0.05. Analyses were performed using SPSS 16.0 software (SPSS Inc., Chicago, IL, USA).
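The regression just described could look roughly as follows in Python (the study used SPSS, so this is only an illustrative sketch; the variable names, CSV file, and covariate set are assumptions):

# Hedged sketch of the multiple logistic regression: involuntary admission
# regressed on substance-class indicators; the data layout is hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("map_sa_patients.csv")  # hypothetical extract of SA admissions

# involuntary: 0/1; *_dx: 0/1 indicators for ICD-10 F10-19 diagnosis classes.
model = smf.logit(
    "involuntary ~ stimulant_dx + alcohol_dx + polydrug_dx + age + sex",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals, the form of result reported here.
or_table = np.exp(model.conf_int())
or_table["OR"] = np.exp(model.params)
print(or_table)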
At discharge, 290 (35%) of the voluntarily and 131 (36%) of the involuntarily admitted SA patients were given a primary substance abuse diagnosis according to ICD-10 F10-F19 (Table 2). Of the primary mental diagnoses, mood disorders (F30-39) and neurotic disorders (F40-49) were significantly more frequently diagnosed among patients admitted voluntarily. Schizophrenia spectrum disorders (F20-29) were significantly more common among the involuntarily admitted patients. Although no difference in the overall prevalence of substance abuse-related diagnoses between the two groups was found, there were differences in the specific patterns of drug abuse. Among voluntarily admitted patients, alcohol-related diagnoses were significantly more common, whereas stimulant drugs were significantly more common among involuntarily admitted patients. A tendency towards more polydrug use was observed in patients admitted involuntarily. Overall, 310 of the SA admissions (216 voluntary and 94 involuntary admissions) received a double diagnosis (Table 3). Some typical comorbid patterns of drug use and mental disorders were found. Alcohol use and polydrug use were the most frequent. Among the voluntarily admitted patients, a combination of alcohol and either mood disorder (40%), multiple mental disorders (29%), or neurotic disorder (16%) was more frequent. Among involuntarily admitted patients, a combination of polydrug use and schizophrenia was most frequent (47%). Multiple logistic regression analysis was used to investigate whether being involuntarily hospitalized in acute psychiatric wards was associated with any specific substance-related diagnosis (Table 4).

DISCUSSION
One-third (33.2%) of the total hospital admissions (n = 3,506) were patients abusing psychoactive substances prior to admission to acute psychiatric wards according to the Clinician Rating Scale. Of these, 70% were voluntarily admitted and 30% involuntarily admitted. No difference in the overall prevalence of substance abuse-related diagnoses between the two groups was found. Among voluntarily admitted SA patients, alcohol-related diagnoses were significantly more common. A tendency towards more polydrug use was observed in patients admitted involuntarily. SA patients diagnosed with mental disorders due to stimulant use had a significantly higher risk for involuntary hospitalization (OR 2.3).

Prevalence and characteristics
Using the Clinician Rating Scale revealed a prevalence of substance abuse among patients admitted to acute psychiatric wards of 33.2%, which is concordant with similar previous studies. In these studies, which used the same Clinician Rating Scale as a screening tool on smaller and more selected populations, the reported prevalence varies between 24% and 69% [8,27-29]. Studies reporting the prevalence of substance use based on self-report tended to underestimate the prevalence compared with studies based on laboratory or on-site drug analyses [30].
Involuntarily admitted patients tested positive significantly more often for substances on drug tests performed at hospital admission. They were more often suspected to be intoxicated, and police-assisted admissions were more frequently required (Table 1). However, it is noteworthy that as many as 18% of the voluntary admissions also required police assistance. Several studies suggest that patients' experience of being coerced during the admission process to mental hospitals does not necessarily correspond with their legal status [31,32]. Rather, perceived coercion appears to be associated with a feeling that their views were not taken into consideration in the admission process. In a study by Iversen et al., 32% of voluntarily admitted patients perceived high levels of coercion, irrespective of legal status at admission.

Diagnoses and diagnostic comorbidity
Different modes of substance use detection often result in different prevalence estimates. Applying the Clinician Rating Scale revealed more substance abusers than were diagnosed by clinicians according to ICD-10 coding. According to the Clinician Rating Scale, 1,187 of the admissions were patients abusing psychoactive substances. However, only 53% of these received a substance abuse diagnosis according to ICD-10. Some typical patterns of diagnostic comorbidity of SA and mental disorders among patients abusing psychoactive substances prior to admission were found. Alcohol and polydrug use were the two most frequently observed patterns. Among patients admitted voluntarily, a combination of alcohol and either mood disorders, multiple mental disorders, or neurotic disorders was common, whereas a combination of polydrug use and schizophrenia was most frequent among involuntarily admitted patients. This is in agreement with the study of Mueser et al., who reported that 53% of all involuntarily hospitalized psychiatric patients (SA and non-SA patients) suffered from schizophrenia or schizoaffective disorders, and that alcohol was the most commonly abused substance [24].

SA patients diagnosed with mental and behavioral disorders due to psychoactive stimulant use had a significantly higher risk for involuntary hospitalization (OR 2.3). This could be due to stimulant-induced psychosis, or it may reflect acting-out behavior among stimulant-using patients. Most commonly, stimulant psychosis occurs in drug abusers who take large stimulant doses [33-35]. In nearly every case, the symptoms of amphetamine-induced psychosis (as well as stimulant psychosis in general) will stop within 7-10 days of discontinuing the drug. However, some individuals with long-term or "heavy" use may continue experiencing intermittent psychotic episodes (hallucinations, delusions, and/or paranoia) on an ongoing basis during the first year of abstinence [36]. It is clinically challenging to differentiate between a drug-induced psychosis and other forms of psychosis during the initial phase.
Stimulants seem to predict involuntary admission in our study. Besides stimulant-induced psychosis, stimulants also often produce acting-out behaviour, and these patients may be agitated, aggressive, hallucinating, demonstrate suicidal behaviour, and require extensive resources when admitted to the hospital [37,38]. The aggressive behavior, rather than the degree of severity of the psychiatric disorder, could determine whether admission to hospital becomes voluntary or involuntary. It is of concern if the Mental Health Act, designed to provide health care for psychotic patients, is regularly applied to non-psychotic but aggressive patients intoxicated by stimulant drugs.

There are some methodological considerations to recognize when interpreting results from this study. First, the cross-sectional study design can only provide associations, not causation. Second, the diagnoses used in this study are clinical diagnoses and not necessarily based on any standardized, structured interviews. Nevertheless, this study has a relatively large sample size, is nationally representative, and may have the power to detect important associations of clinical significance. The large data collection represents the diagnostic reality in a large number of clinical settings in Norway, and not only in a strictly controlled experiment.

This study indicates that for more than half (53%) of patients abusing substances prior to admission to acute psychiatric wards, addiction treatment alone or in combination with treatment for mental disorders may be more appropriate than mental disorder treatment alone. This and other studies have shown that SA and mental disorders are co-occurring, and comorbidity renders treatment more difficult, leading to greater use of health services [8,39]. Therefore, clinical routines to better identify SA among patients receiving mental healthcare should be given higher priority in order to provide optimal treatment, as many of the patients likely would benefit from additional treatment in specialist substance abuse services.

Table 1. Patient demographics and premorbid functioning of voluntarily and involuntarily hospitalized patients with substance abuse according to the Clinician Rating Scale. * Global Assessment of Social Functioning, scale from 0 to 100 with lower ratings for more severe problems. ** As judged by clinicians.
Table 2. Diagnosis according to ICD-10 of voluntarily and involuntarily hospitalized patients.
Table 3. Patterns of comorbid mental disorders and substance abuse disorders; ICD-10 diagnosis.
Table 4. Drug diagnosis (ICD-10) patterns and associations with involuntary hospitalization in acute psychiatric wards. Bivariate and multivariate analyses.
2018-12-12T17:45:34.132Z
2011-12-22T00:00:00.000
{ "year": 2011, "sha1": "f585ead348e71af5b16f7dd91bc4112adf597f8c", "oa_license": "CCBY", "oa_url": "https://www.ntnu.no/ojs/index.php/norepid/article/download/1430/1284", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f585ead348e71af5b16f7dd91bc4112adf597f8c", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
229349573
pes2o/s2orc
v3-fos-license
Predicting vital wheat gluten quality using the gluten aggregation test and the microscale extension test

Vital gluten is a by-product of wheat starch production and commonly used in bread making, but its quality is difficult to predict. The most accurate method to determine vital gluten quality is the baking experiment, but this approach is time- and labor-intensive. Therefore, the aim was to identify faster and easier ways to predict vital gluten quality. Three different approaches, the gliadin/glutenin ratio, the gluten aggregation test and the microscale extension test, were assessed for their predictive value regarding the baking performance of 46 vital gluten samples using two recipes. Hierarchical clustering classified the vital gluten samples into 23 samples with good, 15 with medium and eight with poor quality. Protein-related parameters, such as the gliadin/glutenin ratio, were not reliable to predict gluten quality, because the correlations to the bread volumes were weak. The gluten aggregation test and the microscale extension test were reliable methods to predict vital gluten quality for use in baking based on a scoring system. Both methods need less material, time and labor compared to baking experiments. Especially, maximum torque, peak maximum time, the ratio between peak30 and peak180 as well as the corresponding distance at maximum resistance to extension seem to be suitable alternatives to predict vital gluten quality.

Introduction
Wheat gluten contains storage proteins, which are a complex mixture of more than one hundred single proteins, composed of alcohol-soluble gliadins (GLIA) and alcohol-insoluble glutenins (GLUT) (Shewry, 2019). Gluten is typically separated from wheat flour by washing out starch and water-soluble components (Bailey, 1941). This simple experiment represents the beginning of gluten extraction and is still the basis of modern day processes like the Martin process, the batter process or variations thereof (Van der Borght, Goesaert, Veraverbeke and Delcour, 2005). After separating wet gluten from wheat starch, a drying step at low temperatures is necessary to facilitate handling, ensure microbiological stability and retain functional properties like cohesivity, elasticity, viscosity, and extensibility (Weegels et al., 1994). The resulting dry powder is called vital gluten and is defined in the CODEX STAN 2001 as a wheat protein product that contains at least 80% crude protein (N × 6.25, dry matter basis (DM)) and at most 10% moisture, 2% ash (DM) and 1.5% crude fiber (DM), plus a variable percentage of residual starch and lipids. After rehydration, vital gluten regains its intrinsic functionality and forms a hydrated gluten network. This network has the ability to retain the gas produced during dough fermentation, and it stabilizes the sponge-like structure of wheat bread crumb (Neumann, Kniel and Wassermann, 1997; Ortolan, Corrêa, da Cunha and Steel, 2017). Vital gluten is most commonly applied as an enhancer for weak flours to obtain improved dough strength, higher mixing tolerance, better gas holding capacity, higher baking volumes, and regular textural properties (Scherf et al., 2016). Vital gluten samples, even if they are from the same manufacturer, were often reported to have different qualities. Within the scope of this work, we defined vital gluten quality as good if the addition of that sample resulted in a high bread volume in the baking experiments.
One possible reason for different vital gluten qualities could be the drying process, as it is considered the most critical step in gluten production (Wadhawan and Bushuk, 1989; Weegels et al., 1994). Excessive heat during the drying process led to a loss of functional properties, "de-vitalized" vital gluten, and resulted in a reduced baking quality (Schofield et al., 1983). The prediction of vital gluten quality is difficult, because it has so far been unknown which target parameter(s) need(s) to be determined. Baking experiments are usually considered as the "golden standard" to identify the quality of vital gluten and are used, e.g., by gluten producers and large bakeries. However, this test is time-consuming and labor-intensive, because it takes about 90 min from dough preparation to the final bread, depending on the procedure, plus an additional 30 min for evaluation (total time: 120 min). This paper evaluates three possible approaches for a quick and reliable vital gluten quality assessment as alternatives to baking experiments. These approaches were already successfully used to predict the baking quality of wheat flour, but to the best of our knowledge, this study is the first to investigate whether these three methods are also suitable to predict the quality of vital gluten.

First, the GLIA/GLUT ratio could be an approach to determine gluten quality (Kieffer et al., 1998; Thanhaeuser et al., 2014). The procedure requires about 300 min considering weighing, extraction, analysis and data evaluation, depending on the fraction. It has no time advantage compared to the baking experiment, but it provides valuable information on the protein composition. Glutenin polymers contribute directly to the development of a gluten network by forming intermolecular disulfide bonds. Gliadins play an indirect role by weakening the interaction of the glutenin polymers due to the increase in entanglement spacing (Singh and MacRitchie, 2001; Delcour et al., 2012). Both gliadins and glutenins are considered to influence dough properties and need to be present in a balanced ratio to ensure good breadmaking performance.

Second, the gluten aggregation test is considered as a tool to predict vital gluten quality. During the test, a suspension of vital gluten and water (ratio of approx. 2:1) is analyzed for its aggregation behavior. The input of mechanical energy via a rotating paddle leads to an increase in the consistency of the slurry up to a maximum value. This value represents the state when the gluten network is fully formed. As mechanical energy is continuously applied after reaching this state, the gluten network is destroyed, which results in a softening of the slurry consistency. Gluten aggregation parameters already showed promising correlations with quality-related gluten protein fractions of wheat flour (Marti et al., 2015) and were able to classify wheat according to dough stability (Malegori et al., 2018).

Third, the microscale extension test, which provides information about the extensibility and the resistance to extension of each vital gluten sample, could be a possible alternative (Scherf et al., 2016). Both extensibility and resistance depend on the strength of the gluten network. While glutenin polymers contribute to dough strength and elasticity, gliadins provide viscosity and serve as plasticizers for the dough (Belton, 1999). All three approaches investigated require less time, labor, and testing material than baking experiments.
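Since the gluten aggregation test described above condenses a full torque-time curve into a few scalar descriptors, extracting them is straightforward. A minimal sketch, assuming the curve is available as time and torque arrays (simulated here, not instrument output); BEM and PMT are the two key descriptors reported by the instrument software, as detailed in the methods below:

```python
import numpy as np

# Simulated GlutoPeak-style torque-time curve: consistency builds up to a
# maximum while the gluten network forms, then softens under continued shear.
t = np.linspace(0, 600, 601)                         # time in seconds
torque = 30 * np.exp(-0.5 * ((t - 250) / 80) ** 2)   # torque in Brabender units (BU)

# BEM is the maximum torque; PMT is the time at which that maximum occurs.
i_max = int(np.argmax(torque))
print(f"BEM = {torque[i_max]:.1f} BU, PMT = {t[i_max]:.0f} s")
```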
In this investigation, we applied the three approaches (GLIA/GLUT ratio, gluten aggregation test, and microscale extension test) as well as microbaking tests to 46 vital gluten samples. To assess the ability of the three alternative approaches to replace baking experiments in predicting the quality of vital gluten, correlations between the microbaking tests and each of the three approaches were calculated.

Microbaking tests to determine the functionality of vital gluten samples
The external conditions remained constant for all tests (temperature 22 ± 2 °C, relative humidity ≥ 60%). All determinations were performed in triplicate. The first microbaking test was based on a recipe of 7.5 g baking mixture A (22.94% soy flour, 22.94% lupine shots, 18.35% linseeds, 11.47% sunflower seeds, 9.17% wheat flour type 1050, 6.88% rye sourdough, 4.59% sesame, 2.52% salt and 1.15% roasted malt flour), 2.5 g of one of the vital gluten samples G1-G46 each, 0.25 g yeast and 5.5 ml water (recipe A). The second recipe consisted of 7.5 g baking mixture B, 2.5 g of one of the vital gluten samples G1-G46 each, 0.25 g yeast and 7.5 ml water (recipe B). The exact composition of baking mixture B was unknown, but it was included for practical reasons, because it is a standard mixture commonly applied for high-protein breads. In both recipes all ingredients were kneaded for 8 min at 30 °C in a Farinograph-E (Brabender, Duisburg, Germany). The dough was manually moulded and weighed about 13 g in total. Then, the dough piece was placed in a water-saturated proofer to rest for 20 min at 30 °C. Finally, the dough went through a fully automated baking line, consisting of a proofing chamber at 30 °C in which it was left to rest for 40 min, as well as an oven in which it underwent a baking procedure for 10 min with the temperature increasing from 185 °C to 255 °C (Schaffarczyk et al., 2016). The volume of the resulting bread rolls was determined by a laser-based device (VolScan Profiler, Stable Micro Systems, Godalming, U.K.) after a 2 h cooling period. Afterwards, the specific volume (bread volume divided by dough weight) was calculated to compensate for, e.g., dough losses, which can occur during dough preparation and handling.

Determination of gluten aggregation behavior
The aggregation behavior of each vital gluten sample was measured in triplicate with a GlutoPeak instrument (Brabender, Duisburg, Germany) by applying the method described in the Technical Note of Gall et al. (2017). Vital gluten (2.10 g) was suspended in 4.41 g distilled water in the stainless-steel sample cup. The instrument temperature was set to 36 °C. The speed profile for the rotating paddle was defined in the software (GlutoPeak, version 2.2.6) and set to 500 rpm for 1 min, 0 rpm for 2 min and 3300 rpm for 10 min. The software provided the curve profile (gluten aggregation over time) as well as the maximum torque (BEM) expressed in Brabender units (BU) and the peak maximum time (PMT) expressed in seconds.

Microscale extension tests of hydrated vital gluten
The sample preparation procedure and the microscale extension test of hydrated gluten were described in detail by Scherf et al. (2016). The force-distance curves of each vital gluten sample were obtained in triplicate from four different experiments (n = 3 × 4 = 12). Three steps were necessary: hydration of vital gluten, centrifugation and microscale extension. For hydration, 1.5 g vital gluten were mixed in a 50 ml beaker with 5 ml of a salt solution (2% NaCl) until no dry powder was left.
After an incubation period of 5 min, the hydrated vital gluten was placed between a specially notched and a smooth Teflon mould and centrifuged in cylindrical centrifuge inserts (Heraeus Labofuge 400 R, Thermo Fisher Scientific, Osterode, Germany) for 10 min at 3060×g and 22 °C. The preformed gluten strands were pressed between a sufficiently oiled trapezoidal ribbed and a smooth Teflon plate, and protruding vital gluten parts were removed. The gluten strand was placed on the measuring device after an incubation time of 15 min. The SMS/Kieffer Dough and Gluten Extensibility Rig fitted to a TA.XT plus Texture Analyzer (Stable Micro Systems, Godalming, U.K.) was used with a 5 kg load cell and the software Exponent version 6.1.7. The following parameters were set: test mode: extension, pre-test speed: 2.0 mm/s, test speed: 3.3 mm/s, post-test speed: 20.0 mm/s, rupture distance: 4.0 mm, distance: 150 mm, force: 0.049 N, time: 5 s, trigger type: auto, trigger force: 0.049 N, break detect: rate, break sensitivity: 0.020 N.

Statistical data analysis
One-way analysis of variance (one-way ANOVA) was applied to determine significant differences between vital gluten samples (Sigma-Plot 11, Systat Software, San Jose, USA). These differences were selected with Tukey's test at a significance level of p < 0.05. The specific volumes of both microbaking tests were used to perform a hierarchical cluster analysis to classify the 46 vital gluten samples into the quality classes good, medium and poor using Origin 2019 (OriginLab Corporation, Northampton, USA). The means for all variables were calculated for each cluster by applying the cluster analysis. Then, the Euclidean distance to the cluster means was determined for each vital gluten sample, and similar values based on the sum of the squared distances were assigned to one cluster. The peak30 (area from 15 s before the PMT to 15 s after the PMT) and peak180 (area from 180 s after the start of the measurement to 15 s after the PMT) were calculated manually to get more details about the profile of the GlutoPeak curve. Furthermore, the curve was fitted with the Chesler-Cram Peak Function (CCE) and the resulting parameters were assessed in a correlation matrix. The equation of the fit is as follows:

y = y0 + A·{exp[−(x − xc1)²/(2w²)] + B·(1 − 0.5·[1 − tanh(k2·(x − xc2))])·exp[−0.5·k3·(|x − xc3| + (x − xc3))]}

where y0 is the offset, xc1 is the first center, A is the first amplitude, w is the half width, k2 is the first unknown, xc2 is the second center, B is the second amplitude, k3 is the second unknown, and xc3 is the third center. Origin 2019 was used to evaluate the suitability of the three approaches (GLIA/GLUT ratio, gluten aggregation test and microscale extension test) to predict vital gluten quality. Spearman correlations were applied to relate the respective parameters with the specific volumes of baking mixture A and B at a significance level of p < 0.05.

Development of a scoring system
A scoring system was developed using those parameters of the gluten aggregation test and the microscale extension test that showed a significant correlation to the specific volumes of the microbaking tests (Spearman's correlation coefficients, rS). First, value ranges for the different quality classes were defined for each parameter (Table S1). For this purpose, the 25% and the 75% quantiles of the "medium" group, as defined by the hierarchical cluster analysis, were calculated using Origin 2019 to ensure that the majority of the vital gluten samples with medium quality would be correctly assigned to the "medium" group.
The measured values resulting from the gluten aggregation test and microscale extension test were then matched to the pre-defined parameter ranges and corresponding points were allocated. Values that belonged to the good quality class were attributed 20 points, those of the medium quality class 10 points, and those of the poor quality class 0 points. Those points were then multiplied by the respective correlation coefficient to account for the different accuracy in predicting the specific volume of vital gluten. For example, G16 received 20 points for PMT, which were multiplied by the correlation coefficient of 0.53 (rS of PMT), resulting in a weighted value of 10.6. The weighted values of all parameters were summed up and assigned based on the following classification: vital gluten samples that reached a total greater than 80 points were classified as good, greater than 50 as medium, and less than 50 as poor.

Classification of vital gluten samples into quality classes
As baking experiments are the "golden standard" for evaluating the quality of vital gluten (Gabriel et al., 2017), the vital gluten samples G1-G46 were classified based on the specific volumes of two microbaking tests using baking mixture A and B (Table 1). The correlation between both microbaking tests was significant and very high (rS = 0.893, p < 0.001). The breads made from baking mixture A had specific volumes from 1.6 ml/g (G40) to 3.5 ml/g (G2). The breads made from baking mixture B generally resulted in lower specific volumes from 1.1 ml/g (G40) to 3.0 ml/g (G26), and we assume that the lower specific volumes were caused by the higher water addition. The comparatively complex recipes were chosen because preliminary experiments using 7.5 g of a weak wheat flour as a base and 2.5 g of vital gluten gave very similar results for all samples, and the resulting specific volumes hardly showed any significant differences. In contrast, the specific volumes differed significantly among vital gluten samples within one microbaking test using either baking mixture. Based on the specific volumes of both recipes, a hierarchical cluster analysis was performed, resulting in three different cluster types (good, medium, and poor) (Fig. 1). Twenty-three vital gluten samples were classified as quality class "good", 15 as "medium" and the remaining 8 as "poor" (Table 1). Breads made with poor vital gluten had a firmer and moister crumb compared to breads made with medium or good vital gluten. Vital gluten samples of medium and good quality resulted in a regular crumb structure, but differed in their specific volumes, which were higher for the good quality breads. Measurements of rheological properties were beyond the scope of this work, but further studies could be interesting to provide some more in-depth insights.

Determination of gluten protein composition by GP-HPLC
The protein distribution of the vital gluten samples G1-G46 was determined according to Grosch and Wieser (1999) and is shown in Fig. 2. The proportions ranged from 6.2% (G11) to 17.5% (G18) for HMW-gliadins, 5.2% (G18) to 10.1% (G27) for MMW-gliadins and 41.0% (G43) to 52.1% (G25) for LMW-gliadins. In total, gliadins made up between 53.9% (G43) and 73.9% (G27). For glutenins, the relative protein distribution was 0.8% (G1) to 2.4% (G45) for HMW-glutenins, 5.8% (G27) to 9.5% (G4) for MMW-glutenins and 19.3% (G27) to 34.5% (G43) for LMW-glutenins. Overall, glutenins ranged from 26.1% (G27) to 46.1% (G43). The resulting GLIA/GLUT ratio was between 1.2 (G43) and 2.8 (G27).
To obtain a general view, the mean values (MV) of protein-related parameters were determined for each cluster (Table S1). This revealed that the MV of MMW-, LMW- and total gliadins of lower quality vital gluten were lower compared to those of higher quality gluten. In contrast, the MV of MMW-, LMW- and total glutenins were higher. Additionally, the GLIA/GLUT ratio was highest for the good quality, followed by the medium quality, and it was lowest for the poor quality. The proportions of HMW-gliadins, HMW-glutenins and total gluten were similar between the quality classes. In general, dough properties are influenced by the content and composition of gliadins and glutenins. While gliadins are responsible for higher viscosity and thus for dough extensibility, glutenins are associated with dough elasticity and therefore with dough strength (Delcour et al., 2012). A high GLIA/GLUT ratio ensures high viscosity and leads to low resistance to extension as well as high elasticity (Marti et al., 2015; Wieser and Kieffer, 2001). Therefore, a balance between gliadins and glutenins is necessary to obtain a high specific volume. The highest specific volume was achieved by vital gluten sample G2 with a GLIA/GLUT ratio of 1.5 for baking mixture A and by vital gluten G27 with a GLIA/GLUT ratio of 1.7 for baking mixture B.

Fig. 1. Hierarchical cluster analysis based on the specific volumes of both baking mixtures A and B. The division was made into the three quality classes "good", "medium" and "poor". Twenty-three vital gluten samples were classified as good (left cluster, red), 15 as medium (middle cluster, green) and eight as poor (right cluster, blue).

Gluten aggregation test
The gluten aggregation behavior of all 46 vital gluten samples was analyzed using the GlutoPeak instrument. An exemplary curve of a representative vital gluten sample of each quality class is shown in Fig. 3. A high BEM value and a fast PMT were characteristic of vital gluten samples which were classified as good (Fig. 3A). In comparison, the curve profile of vital gluten samples of medium quality showed a later PMT and a lower BEM (Fig. 3B). Vital gluten samples of poor quality had a late PMT and a low BEM (Fig. 3C). Overall, the PMT ranged between 213.7 s (G18) and 486.7 s (G41) and the BEM was between 23.0 BU (G37) and 33.3 BU (G25) (Table S2). Besides those parameters provided by the GlutoPeak software, the peak30, the peak180 and their ratio were manually calculated for each vital gluten sample in order to characterize the curve in more detail. Peak30 was between 380.1 area units (AU) (G37) and 724.1 AU (G25), peak180 was between 647.7 AU (G20) and 3772.5 AU (G25), and the ratio peak30/peak180 was 0.2-0.9. The comparison of the MV from each quality class showed that peak30 remained similar and did not show large differences between good and poor quality. Peak180 increased and the area ratio decreased with decreasing vital gluten quality. A comparison of the peak180 values was valuable for defining the curve characteristics and thus supported the quality assessment. A CCE fit was used to approximate the actual curve as it had a high coefficient of determination (R² higher than 0.9). The parameters resulting from the CCE equation were included in the correlation analysis (Table 2).
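The CCE fit need not be run in Origin; any nonlinear least-squares routine will do. A minimal sketch with SciPy, assuming the Origin-style Chesler-Cram parameterization given in the statistical methods (simulated curve; start values are illustrative, not fitted instrument data):

```python
import numpy as np
from scipy.optimize import curve_fit

def cce(x, y0, xc1, A, w, k2, xc2, B, k3, xc3):
    """Chesler-Cram peak function in the parameterization given above."""
    gauss = np.exp(-((x - xc1) ** 2) / (2 * w ** 2))
    step = 1 - 0.5 * (1 - np.tanh(k2 * (x - xc2)))
    tail = np.exp(-0.5 * k3 * (np.abs(x - xc3) + x - xc3))
    return y0 + A * (gauss + B * step * tail)

# Simulated aggregation curve standing in for a GlutoPeak trace.
t = np.linspace(0, 600, 601)
true = (2.0, 250.0, 28.0, 60.0, 0.05, 300.0, 0.3, 0.01, 350.0)
y = cce(t, *true) + np.random.default_rng(1).normal(0, 0.3, t.size)

p0 = (1.0, 240.0, 25.0, 50.0, 0.05, 300.0, 0.2, 0.01, 340.0)  # illustrative start values
popt, _ = curve_fit(cce, t, y, p0=p0, maxfev=20000)
r2 = 1 - np.sum((y - cce(t, *popt)) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.3f}")  # the study reports R^2 above 0.9 for these fits
```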
The MV of amplitude A and the half width w were higher for vital gluten samples classified as good compared to samples of medium or poor quality (Table S1). All other parameters were similar among the clusters. Previous studies showed promising correlations for wheat flours between the parameters determined by the gluten aggregation test and the baking performance (Bouachra, Begemann, Aarab and Hüsken, 2017; Marti, Augst, Cox and Koehler, 2015). Weak flours were characterized by a rapid formation of the gluten network, followed by a fast degradation, as opposed to strong flours that took more time to build up the gluten network, but remained more stable (Goldstein et al., 2010). In contrast to wheat flour, the vital gluten sample was not continuously exposed to mechanical stress during the measurement. There was a 1 min pre-shearing step at 500 rpm during which the hydration of the vital gluten sample took place. Then, the vital gluten sample was left to rest for 2 min to allow relaxation prior to the actual measurement (Gall et al., 2017). Vital gluten contains only small amounts of starch compared to wheat flour. This facilitates the formation of a gluten network, as there is hardly any steric hindrance from starch particles. This could be the reason why vital gluten samples of good quality displayed earlier and sharper peaks compared to vital gluten samples of poor quality.

Microscale extension test
The force-distance curves of all vital gluten samples were recorded (Fig. 4). The maximum resistance to extension Rmax (0.7 N (G46) to 1.2 N (G26)), the corresponding distance at maximum resistance to extension ERmax (34.5 mm (G13) to 66.5 mm (G22)), the corresponding area under the curve ARmax (14.9 mJ (G13) to 42.4 mJ (G24)), the distance at maximum extensibility Emax (42.2 mm (G13) to 78.1 mm (G22)), the corresponding area under the total curve Amax (19.9 mJ (G13) to 54.9 mJ (G24)) and the ratio Emax/Rmax (41.0 (G26) to 92.6 (G27)) were provided by the software (Table S2). The MV of all parameters were calculated for each cluster (Table S1). While Rmax was independent of vital gluten quality, the other parameters showed a lower MV for vital gluten samples with decreasing quality. Similar results were already reported (Thanhaeuser et al., 2014; Kieffer et al., 1998).

Correlation matrix
The parameters of the three approaches were correlated with the specific volumes of the two microbaking tests (Table 2). The GLIA/GLUT ratio was weakly correlated. MMW-, LMW-gliadins and total gliadins, as well as MMW-, LMW-glutenins and total glutenins, were significantly correlated, but the correlation coefficients were weak. Therefore, the approach of determining the GLIA/GLUT ratio was not sufficiently suitable to predict vital gluten quality as defined by its functionality, i.e. high volume, in the microbaking tests used here. The parameters of the GlutoPeak test basically showed good significant correlations to the results of the microbaking tests. The PMT, the BEM, the peak180 and the peak30/peak180 ratio could be possible predictors for breadmaking performance. The gluten aggregation test has already been considered as an alternative prediction tool for wheat flour quality (Malegori et al., 2018; Marti et al., 2015; Zawieja et al., 2020). Sissons (2016) indicated that the PMT was the best parameter for predicting gluten strength for durum wheat and was able to separate strong and weak dough samples. A previous study of Bouachra et al.
(2017) showed that a linear model based on the combination of the crude protein content and the GlutoPeak parameters was able to predict the loaf volume of 64% of independent data correctly. In addition, the microscale extension test could be suitable to predict vital gluten quality. The relationship between the parameters of the microscale extension test and the specific volume was significant except for Rmax. ERmax, ARmax, Emax and Amax showed correlation coefficients between 0.4 and 0.5.

Evaluation of the scoring system
The combination of the gluten aggregation test and the microscale extension test was evaluated for its predictive value regarding bread volumes using a scoring system (Table S3). Considering all significant parameters, the scoring system was able to predict 65.2% of vital gluten samples correctly into their quality class. By using only the significant parameters of the gluten aggregation test, still 63.0% of vital gluten samples were assigned correctly, while 52.2% of vital gluten samples would be allocated into the correct quality class considering only the parameters of the microscale extension test. Using a combination of both, the result of 65.2% was a good indication, and we considered this approach to be feasible to determine the quality of vital gluten. The percentage of correct assignments could actually be higher, but there were some incorrect assignments of vital gluten samples. This might be caused by several possibilities. On the one hand, the cluster formation had a huge impact on the assignment of the vital gluten samples into their quality classes. The classification was based on the specific volumes of both microbaking tests to account for differences caused by the composition of the recipes. For example, vital gluten sample G3 showed a score of 85.8 determined by the parameters of the gluten aggregation test and of the microscale extension test, resulting in a prediction of good quality. G3 reached a specific volume of 3.1 ml/g for recipe A, but only 1.7 ml/g for recipe B. Considering only recipe A, G3 would be assigned as good. Since both recipes A and B were used for the classification, the actual result was more accurate, and G3 received a correct rating of medium because of its low bread volume in recipe B. On the other hand, the vital gluten samples showed structural similarities and the transition from poor to medium and from medium to good was very close. For this reason, the error probability was comparatively high. For example, the score of vital gluten G46 was 75.6, which led to a false identification as medium quality, although it should actually have been good according to the hierarchical clustering.
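The weighting and classification rule described above is simple enough to express directly in code. A minimal sketch, where the parameter ranges and the BEM weight are placeholders rather than the study's Table S1 values (the PMT weight of 0.53 is the one quoted in the text):

```python
# Weighted scoring of one vital gluten sample: each significant parameter
# earns 20/10/0 points depending on which quality-class range its measured
# value falls into; points are weighted by that parameter's Spearman
# correlation with specific volume and summed. With the full parameter set,
# totals > 80 are classed good, > 50 medium, otherwise poor.

RANGES = {                          # (good range, medium range); otherwise poor
    "PMT": ((0, 260), (260, 420)),  # s: an earlier peak maximum time is better
    "BEM": ((30, 99), (26, 30)),    # BU: a higher maximum torque is better
}
WEIGHTS = {"PMT": 0.53, "BEM": 0.45}

def points(value, good, medium):
    if good[0] <= value < good[1]:
        return 20
    if medium[0] <= value < medium[1]:
        return 10
    return 0

sample = {"PMT": 214.0, "BEM": 33.3}   # an early, strong peak
total = sum(points(sample[p], *RANGES[p]) * WEIGHTS[p] for p in RANGES)
print(f"weighted score contribution: {total:.1f}")  # 20*0.53 + 20*0.45 = 19.6
```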
For a more detailed evaluation of the performance and prediction accuracy of the scoring system, further work will include more and new vital gluten samples.

Table 2. Correlation coefficients (rS) and corresponding level of significance (p-value) between the specific volume (recipe A and B) and the parameters of the three approaches (GlutoPeak: peak maximum time (PMT), torque (BEM), area 15 s before and after PMT (peak30), area from 180 s after the beginning of the measurement to 15 s after the PMT (peak180), peak30/peak180 and CCE equation parameters (y0, xc1, A, w, k2, xc2, B, k3 and xc3); microscale extension test: maximum resistance to extension (Rmax), distance at maximum resistance to extension (ERmax), area under the curve (ARmax), distance at maximum extensibility (Emax), area under the total curve (Amax) and the ratio Emax/Rmax; and gluten protein composition: high-molecular-weight (HMW)-, medium-molecular-weight (MMW)- and low-molecular-weight (LMW)-gliadins and glutenins and the gliadin-to-glutenin (GLIA/GLUT) ratio).

Conclusion
This study showed that protein-related parameters, such as the GLIA/GLUT ratio, were not reliable enough to predict gluten quality, because the correlations to the volumes of the microbaking tests were weak. The gluten aggregation test and the microscale extension test were reliable methods to predict vital gluten quality for use in baking. Both methods need less effort in terms of material, time and human resources compared to baking experiments. The time saving compared to the microbaking test was 100 min for the gluten aggregation test and 78 min for the microscale extension test. Especially, BEM, PMT, the peak30/peak180 ratio as well as ERmax can be suitable alternative quality predictors.

Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2020-12-10T09:08:15.808Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "2cce4c7d2b1262599f75e9a387d185af17b94a9a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.crfs.2020.11.004", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d83cc5b69709725f02db5d6efe1da998db9d4fb4", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Mathematics", "Medicine" ] }
88843946
pes2o/s2orc
v3-fos-license
Identification of heat stress tolerant genotypes in bread wheat

Heat stress affects a number of physiological and morphological traits in crops. The present study was undertaken to identify heat tolerant wheat genotypes based on their response for days to flowering and days to maturity. A set of 95 wheat genotypes was evaluated under normal and late sowing conditions. Data analysis revealed that location, sowing time and genotype had a marked effect on days to flowering, and that sowing time and genotype had a significant effect on days to maturity. Based on the time taken by the genotypes to flower and mature under late sowing conditions, six genotypes showing heat tolerance in at least two locations were found to be resistant. The heat susceptibility index value was used to identify a total of thirteen genotypes as tolerant to heat for both the traits. The genotypes identified as heat tolerant would form an important resource for the development of high-yielding varieties under heat stress.

Introduction
Wheat is an important crop globally and ranks second next to rice in production as a cereal crop. India has the largest area (26.3 m ha) under wheat cultivation in the world, followed by China (22.5 m ha). The total wheat production in India was approximately 95.85 million tonnes for the year 2013-14, which was higher than the 94.88 and 93.51 million tonnes for 2011-12 and 2012-13, respectively (www.icar.org.in). This was largely due to the use of better agricultural practices and improved seed quality during the last decade. However, to meet the food demand of the ever-growing Indian population, a significant increase in grain production is required for cereals including wheat. This can be made possible by developing higher yielding varieties under different stress conditions, of which abiotic stress is the most important.

Wheat production is affected by a number of abiotic stresses like drought, floods, high temperature, salinity, chilling etc. in the world including India. Terminal heat stress (high temperature > 32 °C), as a result of global warming, at the time of grain filling is a major limitation to wheat production in many environments and is a major cause of yield reduction (Hays et al. 2007). A significant portion of the wheat grown in South Asia is considered to be affected by heat stress, of which the majority is present in India (Joshi et al. 2007). Losses of up to 50% in yield potential have been estimated when the crop is exposed to 32-38 °C temperature at the crucial grain formation stage (Wardlaw et al. 1989).

A number of traits like chlorophyll content, canopy temperature depression, photosynthetic rate, biomass, thousand grain weight and grain yield are yield-associated traits affected by heat stress. Grain filling duration has been widely used as a measurement of heat tolerance (Fokar et al. 1998). Heat stress during grain filling is responsible for shortening of the grain growth period and improper grain filling, which affects the overall yield of the wheat crop (Rane et al. 2007). Heat stress tolerant varieties/genotypes can be identified by calculating the heat susceptibility index (HSI) following field evaluation for a number of agronomic traits. HSI has been used in previous studies for measuring heat tolerance in crops like soybean and wheat (Ayeneh et al. 2002; Githiri et al. 2006; Kirigwi et al. 2007; Mohammadi et al. 2008; Mason et al. 2010
, 2011). A total of eight QTLs on different chromosomes were detected in a study that involved the use of HSI for thousand grain weight, grain filling duration, grain yield and canopy temperature depression (Paliwal et al. 2012). In another study, HSI for short term reproductive stage heat stress was calculated and used for the identification of twenty-seven QTLs (Mason et al. 2010). HSI was used for the identification of five heat tolerant varieties, based on their relative performance in yield components, grain yield and heat susceptibility indices, among a pool of 25 spring wheat genotypes (Khan et al. 2014). The use of HSI and performance under late sowing heat stressed conditions has also been reported in a number of earlier studies as well (Mohammadi et al. 2008; Pinto et al. 2010; Yang et al. 2010; Barakat et al. 2011; Mason et al. 2010, 2011).

The present study was aimed at identifying heat tolerant genotypes on the basis of the heat susceptibility index (HSI) and performance under late sowing heat stressed conditions for days to flowering and days to maturity.

Materials and methods
Planting material and field evaluation
A set of 95 diverse wheat genotypes, obtained from CCSU, Meerut, U.P. (India), was used for heat stress evaluation. The wheat genotypes were sown under two regimes of sowing, i.e., normal sowing and late sowing, both in replicated trials at two locations, viz. SKUAST-J, Chatha and SKUAST-J, R.S. Pura, herein referred to as Chatha and RS Pura. An alpha-lattice experimental design with two replications (each for normal and late sowing) was used. Each genotype was sown in plots of 5.0 m² with row-to-row spacing of 0.25 m. All agronomic practices recommended for the normal wheat crop were followed. The phenotypic data for days to flowering (DTF) and days to maturity (DTM) were recorded for each genotype in each replication. DTF was recorded as the number of days required for half the length of spikes to emerge out in 50% of plants in a plot. Similarly, days to maturity was recorded as the number of days required for 50% of a plot to become physiologically mature, as evident from yellowing of plants.

Statistical analysis
The analysis of variance (ANOVA) was carried out to study the effect of different factors on days to flowering and days to maturity under normal and late sowing. The paired t-test was carried out to understand the effect of late sowing on the two traits. All data analysis was carried out using the SPSS statistical package.

Heat Susceptibility Index (HSI)
The heat susceptibility index (HSI) was used to evaluate the effect of heat stress on days to flowering and days to maturity. The formula used for HSI calculation, taken from Paliwal et al.
(2012), is given below:

HSI of X = (1 − Xheat stress / Xcontrol) / D

where X represents DTF or DTM; Xheat stress represents the phenotypic value of an individual genotype for DTF or DTM under late sowing; Xcontrol represents the phenotypic value of an individual genotype for DTF or DTM under normal sowing; and D (stress intensity) = 1 − Yheat stress / Ycontrol, where Yheat stress is the mean of Xheat stress over all genotypes and Ycontrol is the mean of Xcontrol over all genotypes.

Results and discussion
Phenotypic evaluation of genotypes
The data on 95 wheat genotypes were evaluated for days to flowering (DTF) and days to maturity (DTM) under two regimes of sowing, i.e., normal sowing and late sowing. DTF was scored at two different locations (Chatha and RS Pura) in the year 2014-15, while DTM was scored at one location (Chatha) only in 2014-15. The initial data analysis suggested that there were significant differences in DTF and DTM between the two sowing times, and for DTF between the two locations. DTF under normal sowing ranged from 91.5 to 114.5 (Chatha) and 99.5 to 120.5 (RS Pura). For late sowing, DTF ranged from 52 to 63.5 and 71.5 to 79 for the Chatha and RS Pura locations, respectively. The mean days to flowering for early sowing was 102.89 (Chatha) and 110.25 (RS Pura), and for late sowing was 58.3 (Chatha) and 75.8 (RS Pura) (Fig. 1). DTM under normal sowing ranged from 141.5 to 146.5 with an average of 143.8 days, and for late sowing it ranged from 112 to 130 with an average of 115.3 days (Fig. 2).

The ANOVA was conducted by taking DTF and DTM as dependent variables, with location, sowing time and genotype as independent variables with random effects for DTF, and sowing time and genotype as independent variables with random effects for DTM (Table 1 & Table 2). The analysis revealed that location, sowing time and genotype had significant effects on DTF, and that sowing time and genotype had significant effects on DTM. The main effect interactions location*sowing time, location*genotype and sowing time*genotype showed significant differences for days to flowering, and sowing time*genotype showed significant differences for days to maturity.

Identification of heat stress tolerant genotypes
The performances under the late sowing condition revealed significant effects of heat stress on both traits. A total of 11 genotypes, namely C4, C12, C16, C18, C24, C26, C31, C34, C36, C65 and C76, showed delayed flowering at the Chatha location, suggesting their ability to withstand heat stress. Ten genotypes (C7, C19, C24, C26, C27, C49, C65, C66, C81 and C83) showed delayed flowering at the RS Pura location, suggesting a tolerant nature to heat stress. A total of 11 genotypes, namely C1, C5, C12, C15, C19, C26, C34, C58, C71, C87 and C91, showed delayed maturity, suggesting their tolerance to heat stress under late sowing conditions (Table 3). Based on the overall performance in late sowing conditions for DTF and DTM, C12, C19, C24, C26, C34 and C65 were found to be resistant to heat stress.

Heat Susceptibility Index (HSI) of wheat genotypes
To estimate the effect of heat on the genotypes, a paired t-test was employed using data of DTF and DTM from normal and late sowing conditions (Table 4). Significant differences between the normal sowing and late sowing environments were found for both traits. The HSI for DTF ranged from 0.91 to 1.07 for Chatha (Fig. 3) with a mean value of 0.99, and from 0.78 to 1.20 for RS Pura with a mean value of 0.99 (Fig. 4). Similarly, HSI ranged from 0.42 to 1.08 for DTM with an average of 0.99 at Chatha (Fig. 5). These values were used to identify heat tolerant genotypes. Low values of HSI (less than 1) are synonymous with high stress tolerance (Fischer and Maurer 1978). Values of stress intensity (D) indicated that both traits were highly affected by heat stress.
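The HSI computation above reduces to one line per genotype once the trait means are in hand. A minimal sketch in Python with illustrative DTF values (not the study's raw data):

```python
# Heat susceptibility index (HSI) per genotype, following the formula above:
# HSI = (1 - X_stress / X_control) / D, with stress intensity
# D = 1 - mean(X_stress) / mean(X_control) taken over all genotypes.
# HSI < 1 indicates relative heat tolerance (Fischer and Maurer 1978).

control = {"C12": 105.0, "C19": 108.5, "C40": 101.0}   # DTF, normal sowing (days)
stress  = {"C12": 62.0,  "C19": 63.5,  "C40": 54.0}    # DTF, late sowing (days)

d = 1 - (sum(stress.values()) / len(stress)) / (sum(control.values()) / len(control))
for g in control:
    hsi = (1 - stress[g] / control[g]) / d
    print(f"{g}: HSI = {hsi:.2f}")  # C12 and C19 fall below 1, C40 above
```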
Fig. 1. Days to flowering (DTF) under normal and late sowing for two locations (Chatha and RS Pura).
Fig. 5. Range of HSI for all genotypes for days to maturity at Chatha.
Table 1. Analysis of variance for days to flowering under normal and late sowing conditions.
Table 2. Analysis of variance for days to maturity under normal and late sowing conditions.
Table 3. Heat stress tolerant genotypes identified for days to flowering (DTF) and days to maturity (DTM).
Table 4. Paired t-test for days to flowering and days to maturity using means of early and late sowing. * indicates significance at the 0.05 level of probability.
2019-04-01T13:16:20.424Z
2016-06-13T00:00:00.000
{ "year": 2016, "sha1": "9e33c041aea55e51ff359d6aa5535768f56824d2", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5958/0975-928x.2016.00016.8", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e6f7ae536894b46b092fe1c8f8b6694a1bbd5b6a", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
53414307
pes2o/s2orc
v3-fos-license
Information and Liquidity of OTC Securities: Evidence from Public Registration of Rule 144A Bonds

Rule 144A private debt represents a significant and growing segment of the U.S. bond market. This paper examines the market liquidity effects of enhanced information disclosure induced by the public registration of 144A bonds. Using the regulatory version of TRACE data for the period 2002-2013, we find that following public registration of 144A bonds, dealer-specific effective bid-ask spreads narrow, especially for issues with higher ex-ante information asymmetry. Our results are consistent with existing theories that disclosure reduces information risk and thus improves market liquidity.

Introduction
A wide range of corporate securities and derivatives are traded in the over-the-counter (OTC) market, where market-makers and investors search for trading counterparties and bargain over trade terms. Theories show that asymmetric information about an issuer's financial conditions is one of the major factors contributing to search frictions and hence market illiquidity in the underlying securities (e.g., Duffie 2012). In practice, lack of information can cause extreme illiquidity episodes leading to trading halts of specific securities, for example, when a firm stops filing periodic public reports. Trading conditions may also deteriorate sharply across the broader market when information risk becomes too high, as evidenced during the 2008 financial crisis.

This article examines the effect of information disclosure on the liquidity of OTC securities in the context of the public registration of Rule 144A corporate bonds. By comparing transaction costs before and after a Rule 144A bond becomes a publicly registered bond, we conduct a quantitative analysis of how changes in information asymmetry impact trading liquidity in the market for Rule 144A bonds - an OTC market that has received a great deal of attention in recent years.

Adopted in 1990, Rule 144A provides a safe harbor from the public registration requirements of the Securities Act (1933) for resales of restricted securities to "qualified institutional buyers" (QIBs), who generally are large financial institutions and other accredited investors. 1 A large fraction of 144A bond issues carry registration rights and are subsequently publicly registered. 2 Public registration requires all issuers to disclose their financial and operational conditions regularly following SEC securities laws, while before public registration, issuers of 144A bonds have no obligation to disclose financial conditions to either bond investors or regulators unless they are also issuing public equities or public bonds. While there are significant changes in the informational environment for market participants in 144A bonds, public registrations in general simply entail offers of new public bonds in exchange for the target Rule 144A bonds, where the terms of the new bonds such as coupon, maturity, amount issued, and option features are mostly identical to those of the exchanged ones. As such, 144A public registrations are not accompanied by changes in the issuer's fundamentals such as leverage, allowing us to better identify the liquidity impact of changes in the information set received by investors and market makers. We find that liquidity generally improves following public registration of 144A bonds, with a more significant impact for issues with higher ex-ante corporate information asymmetry.
Our results show that, on average, registration reduces the effective bid-ask spread Round-Trip Cost (RTC) - a dealer-specific liquidity measure estimated from transaction data - by 3.3 (5.0) basis points over the 100-day (30-day) window around registration, or about 12 (19) percent of its pre-registration level. Further decomposing trades according to transaction size, we find that the reduction of liquidity costs around registration appears more significant for large- and medium-sized transactions.

The decrease in liquidity cost could also come from more transparent post-trade information, as shown in empirical studies on TRACE dissemination for public bonds (Bessembinder, Maxwell, and Venkataraman, 2006). Allowing non-QIBs to trade post registration could also impact liquidity due to a clientele effect. To separate the impact of issuer information disclosure from other drivers of liquidity, such as market transparency and the clientele effect, we contrast the change in trading liquidity of 144A bonds that are more likely to experience changes in the financial-disclosure information environment due to registration with that of other 144A bonds. The hypothesis is that if financial and operational information disclosure matters for trading liquidity, then the public registration effects should be stronger among issuers with greater corporate information risk pre-registration.

Two DID analyses are conducted for this hypothesis. First, focusing on the 144A bonds experiencing public registrations, we contrast issues of firms with publicly traded stocks with those of private firms. Because public firms file regularly, public registration of their 144A bonds has relatively less information content when compared to 144A bond registrations of previously private firms. In robustness checks, we expand the "public firm" definition to include firms with public bonds prior to the Rule 144A bond registration, because those firms also file financial disclosures for the public bonds they offer. Second, we use three metrics from the bond prospectus to measure the ex-ante information asymmetry of an issue: total word count, count of uncertainty words, and file size, following the literature on SEC filing analysis (e.g., Li 2008, Loughran and McDonald 2013, Ertugrul et al. 2015). Suggested by the phenomenon of "information obfuscation," where issuers tend to hide adverse information through lengthy filings (Bloomfield 2002), lengthier files with more uncertainty words in the post-registration bond prospectus are identified as issues with a higher degree of ex-ante information risk and are predicted to have a larger liquidity change from registration. The results from both of these DID analyses show that the reduction in liquidity costs of trading the formerly Rule 144A bonds post registration is larger for firms with a higher degree of information asymmetry. These findings provide evidence that enhanced corporate information disclosure associated with public registration improves the market liquidity of OTC securities.

Overall, this paper contributes to the finance literature examining the relationship between information asymmetry and liquidity cost in secondary market trading. Based on a comprehensive and unique transaction data set on Rule 144A bonds, we show that public disclosure reduces liquidity costs for OTC securities. Following public registration of the 144A bond, bid-ask spreads narrow, and more so for firms with higher ex-ante information asymmetry.
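The text does not spell out the RTC construction at this point, but a common way to estimate a dealer-specific effective spread from TRACE-style records is to match a dealer's purchases from customers with that dealer's subsequent sales of the same bond and compare prices. A minimal sketch of this imputed round-trip idea, with hypothetical field names and a naive first-in pairing, not necessarily the paper's exact definition:

```python
import pandas as pd

# Toy TRACE-style records; "side" is from the dealer's point of view.
trades = pd.DataFrame({
    "cusip":  ["X1", "X1", "X1", "X2", "X2"],
    "dealer": ["D1", "D1", "D1", "D2", "D2"],
    "side":   ["buy", "sell", "buy", "buy", "sell"],
    "price":  [99.50, 99.80, 99.60, 101.00, 101.40],
})

def round_trip_cost(group: pd.DataFrame) -> float:
    """Mean imputed round-trip spread, in basis points of the buy price."""
    buys = group.loc[group["side"] == "buy", "price"].to_numpy()
    sells = group.loc[group["side"] == "sell", "price"].to_numpy()
    n = min(len(buys), len(sells))
    if n == 0:
        return float("nan")
    return float(((sells[:n] - buys[:n]) / buys[:n] * 1e4).mean())

# One RTC estimate per (bond, dealer) pair; dealers buy at the bid and
# sell at the ask, so sell minus buy proxies the effective spread earned.
print(trades.groupby(["cusip", "dealer"]).apply(round_trip_cost))
```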
We believe our paper is the first to identify such a relationship between corporate information disclosure and liquidity cost in the bond market. It also complements empirical studies on bond liquidity with a detailed examination of Rule 144A bonds, which are often omitted in earlier bond market studies, largely because the dissemination of 144A bond transactions is only recent.

The rest of the paper proceeds as follows. Section 2 provides institutional background on Rule 144A bonds, emphasizing the information flow for these bonds around their public registration. Section 3 discusses our contributions to the literature. Section 4 describes our data, sample construction, and liquidity measures. Section 5 presents our main empirical results on the impact of registration on liquidity and trading activity. Section 6 provides additional robustness tests and other findings regarding trading activities. Section 7 concludes.

Firms issue Rule 144A debt for a number of reasons. Huang and Ramirez (2010) argue that the speed of issuance achieved by bypassing SEC registration has been the main driving force behind the growth in Rule 144A debt as a popular funding source. Speed of issuance is perhaps especially valuable for low credit quality firms, as they are likely to have urgent financing needs. Another possible explanation is the lender specialization hypothesis: private lenders (e.g., banks and QIBs) have advantages over public lenders in handling credit risk and information asymmetry because they have more skills and resources in the areas of information production, monitoring efficiency, and renegotiation in financial distress. As discussed in Bolton, Santos, and Scheinkman (2012), the disclosure exemption in "dark" markets such as 144A bonds allows informed QIBs to exploit their information advantage and, as such, cream-skim the most valuable assets away from the public.

Information Disclosure of 144A Bond Issuers
Both of these incentives suggest information content is one of the key distinctions between Rule 144A bonds and public bond offerings. In contrast to public bond offerings, 144A bond issuers do not have any stand-alone mandatory requirement to disclose their financial conditions periodically in regular public filings. The only publicly available information investors could get comes from the "Form D" filings that the SEC requires 144A bond issuers to file within 15 days of the issuance. Form D is a brief notice that includes the names and addresses of the company's promoters, executive officers and directors, and some details about the offering such as offering amount, but it contains little other information about the company. For example, issuers can report their total revenue as a range value in Form D, but they can also select "decline to disclose" or "Not Applicable". Except for the total revenue range or total Net Asset Value range, there is virtually no financial information about the issuer released in Form D.

In contrast, the U.S. Securities Act of 1933 requires that, before issuing public bonds, firms register with the SEC and provide a prospectus furnishing the following information: (a) a description of the company's properties and business; (b) a description of the security to be offered for sale; (c) information about the management of the company; and (d) financial statements certified by independent accountants. 3 (Footnote 3: Specifically, out of the total $20.7 trillion of corporate debt issued during 1990-2013, $4.1 trillion was Rule 144A debt. Also, 144A bond issuance as a share of total U.S.
shortly after the company files them with the SEC. After registration, and once the security is successfully offered to the public, the issuer is subject to periodic and non-periodic disclosure requirements, such as the 10-Q (quarterly report), 10-K (annual report), and 8-K (material events). The SEC's disclosure requirements reduce the information asymmetry between issuers and outside investors not only through the offering prospectus at the time of issuance but, more importantly, through the expectation of subsequent filings of periodic financial statements and material events.

Another informational difference between the market for 144A bonds and that for public bonds is that, prior to July 2014, the Financial Industry Regulatory Authority (FINRA)'s transaction reporting system TRACE (Trade Reporting and Compliance Engine) had not phased in the dissemination of trading information on 144A bonds to the general public. 5 That is, transaction information, such as trading volume and transaction price, was not publicly available to either investors or market makers for 144A bonds, in contrast to the more transparent OTC market for public corporate bonds. Our data end in 2013, before the TRACE reporting change took effect. Hence, public registration of previously issued 144A debt significantly improves the information environment for the prospective investor, more so if the firm has no prior public equity or debt issues.

It is noteworthy that a considerable portion of Rule 144A bond issuers are also public firms with publicly listed stocks in the equity market, or are "private firms" with outstanding public bonds. These kinds of issuers are still required by SEC regulations to provide detailed financial disclosure through 10-Q or 10-K filings. Our sample includes these firms as well as purely private firms that issue only Rule 144A bonds. About 40% of the issuers in our sample are private firms without any publicly listed security prior to the registration of their Rule 144A bonds. We use this distinction in financial disclosure to identify the impact of financial information disclosure on liquidity, separately from other drivers of liquidity such as transaction information transparency, changes in the clientele base, and market conditions that may affect the registration decision.

Of course, regulatory requirements do not prevent bond issuers from voluntarily disclosing more information to potential investors or market makers, but most such disclosure occurs on a bilateral basis between issuers and the initial purchasers of the bonds and is hence invisible to the public or costly to obtain. 6 Lacking such data, we cannot build our empirical study on the degree of such private information provision. Since our focus is on liquidity in the secondary market, the information provided to initial purchasers in the primary market is less relevant. If trading information, such as prices and trading volumes, of 144A bonds were transparent, investors might infer issuers' financial conditions from secondary market trading. However, prior to July 2014, transaction information was not disseminated to the general public either, so this information transmission channel was also blocked, and an opaque OTC market reinforced the information asymmetry arising from less-regulated information disclosure.

6 The SEC's regulations on 144A securities provide prospective buyers designated by current holders of the security the "right to obtain" certain financial information from the issuer, but they do not specify the terms of this obligation. In practice, private issuers seldom respond to information requests from secondary market participants in a timely manner.
Related Literature

In market microstructure theory, the market liquidity of traded securities reflects the risk of asymmetric information about the securities (e.g., Glosten and Milgrom 1985, and Diamond and Verrecchia 1991, among others). In particular, Diamond and Verrecchia (1991) predict that more corporate disclosure should in general reduce the liquidity premium embedded in asset prices, although too little information asymmetry may discourage market-making activity due to a lack of profit. Duffie and Lando (2001) and Yu (2005) show that better corporate disclosure reduces yield spreads and affects the observed term structure of yield spreads. However, previous empirical studies of the relationship between corporate disclosure and liquidity are almost all about the OTC stock market, perhaps due to data limitations. Studies such as Healy, Hutton, and Palepu (1999), Leuz and Verrecchia (2000), Easley, Hvidkjaer, and O'Hara (2002), Greenstone, Oyer, and Vissing-Jorgensen (2006), and Brüggemann, Kaul, Leuz, and Werner (2016) find that market liquidity in OTC stocks improves following mandatory disclosure requirements, because higher-quality financial reporting and better disclosure reduce information asymmetry. 7 White (2016), using a proprietary database of transaction-level OTC trades, shows that the typical OTC investment return is severely negative and that investor outcomes worsen for OTC stocks with weaker disclosure-related eligibility requirements. Different market tiers of information disclosure are related to liquidity in both the OTC markets (Davis, Van Ness, and Van Ness 2016) and the Pink Sheet markets (Jiang, Petroni, and Wang 2016). Ang, Shtauber, and Tetlock (2013) find that there is an illiquidity premium among OTC stocks and that the premium is largest among stocks held predominantly by retail investors and stocks not disclosing financial information.

We believe our paper is the first empirical study to use transaction-level information in the corporate bond market to provide evidence on how issuer information disclosure affects market liquidity. The availability of detailed transaction data for the public corporate bond OTC market through mandatory TRACE reporting has facilitated empirical studies of bond liquidity, but the relationship between corporate information disclosure and market liquidity has not been their focus. The relationship between post-trade transaction information and bond market liquidity has been investigated in studies such as Bessembinder, Maxwell, and Venkataraman (2006), Edwards, Harris, and Piwowar (2007), Goldstein and Hotchkiss (2007), and Goldstein, Hotchkiss, and Sirri (2007), which all provide evidence that TRACE dissemination of transaction information results in lower transaction costs. In a recent working paper, Jacobsen and Venkataraman (2018) use TRACE reporting on 144A bonds and show that transaction costs decrease after July 2014, when TRACE began disseminating transaction information on 144A bonds as well. Similarly, Schurhoff (2007a and 2007b) and Green, Li, and Schurhoff (2011) find positive effects of transaction-information dissemination in the municipal bond market.

7 Greenstone, Oyer, and Vissing-Jorgensen (2006) analyze the effects of mandatory disclosure requirements using the 1964 Securities Acts Amendments in U.S. equity markets and find that the Amendments created $3.2 to $6.2 billion of value for shareholders of the OTC firms in their sample. Brüggemann, Kaul, Leuz, and Werner (2016) analyze a comprehensive sample of more than 10,000 U.S. OTC stocks and find that OTC stocks subject to stricter regulatory regimes and disclosure requirements have higher market quality (higher liquidity and lower crash risk).
Given this rich evidence on the positive impact of market transparency on liquidity, we design our empirical study to focus instead on the impact of issuer financial information disclosure on liquidity. Employing DID analyses based on ex-ante information asymmetry in issuers' financial and operational facts, we are able to disentangle the impact of information disclosure from that of post-trade transaction information disclosure, or in other words, market transparency. We provide the first empirical evidence in the corporate bond market that financial information disclosure enhances market liquidity.

Our paper also complements existing studies of 144A securities by focusing on the liquidity effects of the information disclosure brought about by public registration. Previous studies of 144A corporate bonds examine yield spreads and various aspects of the financing choice. Studies generally find that Rule 144A debt on average carries higher yields than its public counterparts (e.g., Fenn 2000; Livingston and Zhou 2002; Chaplinsky and Ramchand 2004; Huang, Kalimipalli, Nayak, and Ramchand 2017). Other studies of the 144A bond market focus on the cost of issuance and the corporate choice between public and 144A bond issuance. These include studies of 144A debt issuance by foreign firms, for whom the 144A market is fast replacing the public debt market for high-yield and non-rated international issues (e.g., Miller and Puthenpurackal 2002; Gao 2011), and of the roles of corporate governance, market timing, industry strategy, and market competition in the decision between public and private debt or equity financing (e.g., Arena and Howe 2009; Barry, Mann, Mihov, and Rodríguez 2008; Tang 2012). The empirical evidence is broadly consistent with the notion that the costs associated with mandatory disclosure regulation have an economically significant impact on the choice between public and private financing. The majority of the studies above conjecture that investors in 144A bonds generally require higher yields because of lower liquidity and a higher degree of information opaqueness in the private debt market. Our paper is among the first to provide direct evidence on liquidity in the 144A corporate bond market using transaction data. Hollifield, Neklyudov, and Spatt (2014) examine market liquidity using transaction data on 144A securitized (collateralized) securities, but their focus is the role of dealers in a network setting of liquidity provision. They find that central dealers receive relatively lower spreads than peripheral dealers, with the centrality discount stronger for 144A securitizations. Jacobsen and Venkataraman (2018) use 144A bond data since 2013 and examine the liquidity impact of the 2014 TRACE reporting change for 144A bonds. In contrast to their analysis, we compare transaction costs around the event of public registration of each 144A bond, rather than around the July 2014 TRACE reporting change.
In addition, our DID approach helps separate the information disclosure effects from other factors potentially associated with changes in liquidity conditions.

Rule 144A Bonds in TRACE and FISD

We examine 144A bond issues that are subsequently registered as public bonds. Table I indicates that 144A bonds constitute a significant portion of the corporate bond universe in TRACE. Panel C of Table I shows the primary-market bond characteristics of the merged TRACE-FISD sample of Rule 144A bonds. While most bonds are non-convertible, non-puttable, and non-secured, more than half of the issues are callable. The average offering maturity of these bonds is 7.57 years, i.e., medium- to long-term.

Public Registration of Rule 144A Bonds

Previous studies, such as Livingston and Zhou (2002) and Huang and Ramirez (2010), search company filings in EDGAR to identify subsequent registrations of 144A bonds. We instead use a matching sample approach to identify 144A registration events, as information on the 144A registration rights clause or the exercise of such rights is not readily available. Rule 144A securities become registered mostly through an exchange offer in which the debtor issues registered public securities (with a new CUSIP) to tender for the 144A securities. The prospectus for the 144A issue typically states that the new bonds issued pursuant to such an offer will be substantially identical to the 144A bonds for which they may be exchanged in several attributes, such as coupon rate, maturity date, security (collateral), and restrictive covenants. Therefore, for every 144A bond in the TRACE sample, we search the FISD database to identify a public bond that matches key characteristics of the 144A bond. We treat the matched bond as the corresponding registered bond and its issuance date as the registration date. Specifically, our matching criteria are as follows: (i) the public bond is issued by the same borrower zero days to five years (inclusive) after the date of the 144A issuance; (ii) the difference in the maturity dates of the two bonds is no greater than 30 days; (iii) the difference in the offering amounts of the two bonds is no greater than 5 percent; (iv) the two bonds have the same coupon rate and the same coupon type (fixed or variable); 9 (v) the two bonds have the same collateral condition (secured or not); and (vi) the two bonds have the same "straight" characteristic (straight or not, where a straight bond is defined as non-convertible, non-puttable, and non-callable). From Panel C of Table I, we note that 10 percent of the 144A bonds are secured and 41 percent are straight. The matching process yields 2,749 bonds out of the universe of 11,443 bonds in Panel B; cf. Livingston and Zhou (2002), who, unlike us, rely on Thomson SDC, EDGAR filings, and/or Bloomberg data to identify registrations. 10 The number of bond registrations in our sample is insensitive to the matching tolerances for maturity and size differences. 11

10 Some authors report that for certain subsets of 144A bonds, the registration rate is much higher. For example, Huang and Ramirez (2010) document that the registration rate for all 144A convertible debt issues is about 88 percent for the sample period of 1996 to 2004. Note that in Panel C of Table I we identify that convertible debt issues make up only 10 percent of all domestic 144A debt issues in our sample period.

11 For example, if we relax the maturity difference of the two bonds to 15 (60) days and the size difference to 0.025 (0.10), the matching process yields 2,700 (2,878) pairs of bonds.
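To make the matching procedure concrete, the sketch below implements criteria (i)-(vi) as a filter over candidate bond pairs in pandas. The column names and the tie-breaking rule (treating the earliest qualifying public issue per 144A bond as the registration event) are our own illustrative assumptions, not the actual FISD schema:

    import pandas as pd

    def match_registrations(b144a: pd.DataFrame, pub: pd.DataFrame) -> pd.DataFrame:
        """Pair each 144A bond with its likely registered exchange bond."""
        cand = b144a.merge(pub, on="issuer_id", suffixes=("_144a", "_pub"))
        lag_days = (cand.offering_date_pub - cand.offering_date_144a).dt.days
        ok = lag_days.between(0, 5 * 365)                                     # (i) 0 days to 5 years later
        ok &= (cand.maturity_pub - cand.maturity_144a).dt.days.abs() <= 30    # (ii) maturities within 30 days
        ok &= ((cand.amt_pub - cand.amt_144a).abs() / cand.amt_144a) <= 0.05  # (iii) amounts within 5 percent
        ok &= (cand.coupon_pub == cand.coupon_144a)                           # (iv) same coupon rate
        ok &= (cand.coupon_type_pub == cand.coupon_type_144a)                 #      and coupon type
        ok &= cand.secured_pub == cand.secured_144a                           # (v) same collateral condition
        ok &= cand.straight_pub == cand.straight_144a                         # (vi) same "straight" status
        matched = cand[ok].sort_values("offering_date_pub")
        # Keep the earliest qualifying public issue per 144A bond (an assumption).
        return matched.groupby("cusip_144a", as_index=False).first()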
Panel A of Table II reports the characteristics of the registrations. More than 99 percent of the matched pairs have exactly the same maturity, and more than 75 percent (90 percent) of the matched pairs have the same offering amount (less than a 0.1 percent difference in offering amount). The slight differences in offering amount in the right tail of the distribution may be due to early repayments such as sinking funds. More than half of the registrations take place within half a year of 144A issuance, and more than 95 percent take place within a year. Even though we allow up to five years in searching for a potential registration, these results indicate that registration, if it happens, happens quickly.

We provide further validation of the registration events identified above by examining the EDGAR filings of the registered bonds. Specifically, we employ automated text searches of bond prospectuses to ensure that the matching procedure above indeed produces registered bonds of 144As. In registering a public bond for the 144A bond, the issuer makes an exchange offer. In our reading of a number of bond prospectuses, we note that the following four phrases appear frequently in exchange-offer prospectuses: "offer to exchange," "exchange offer," "exchange note," and "to exchange." We therefore count the number of appearances of these phrases in bond prospectuses to verify that the pairs identified in Table II are indeed 144As and their exchange offers. Out of the 2,749 matched bond pairs, we are able to download 1,150 prospectuses from the SEC's EDGAR website for issues for which we can calculate the bid-ask spread measure of round-trip cost (elaborated on in the next section). Panel B of Table II provides the summary statistics of the above keywords in these prospectuses. The minimum number of occurrences of the exchange-related phrases in a prospectus is four. 98.2% (99.7%) of the prospectuses contain at least 50 (five) occurrences of these keywords, showing that our matching algorithm is effective in identifying 144A registrations. Of the four key phrases, perhaps not surprisingly, "exchange offer" and "exchange note" appear most frequently, each appearing on average 100 or more times per prospectus. These results suggest that our filtering of 144A registrations is highly effective in capturing the public registrations of 144A debt. We keep all of the matched pairs in our sample, but we note that our results are robust to excluding the registered issues whose prospectuses have fewer than 50 occurrences of the exchange-related phrases. In untabulated results, we also investigate whether registration patterns change over our sample period.
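The phrase-count validation above amounts to a simple scan over each downloaded prospectus. A minimal sketch follows; the 50-occurrence threshold comes from the discussion above, and, as in the text, each phrase is counted separately even though the four phrases overlap:

    import re

    EXCHANGE_PHRASES = ["offer to exchange", "exchange offer", "exchange note", "to exchange"]

    def count_exchange_phrases(text: str) -> dict:
        """Case-insensitive occurrence counts of each exchange-related phrase."""
        flat = re.sub(r"\s+", " ", text.lower())  # collapse line breaks inside phrases
        return {p: len(re.findall(re.escape(p), flat)) for p in EXCHANGE_PHRASES}

    def looks_like_exchange_offer(text: str, min_total: int = 50) -> bool:
        """Flag a prospectus whose total phrase count clears the threshold."""
        return sum(count_exchange_phrases(text).values()) >= min_total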
Liquidity Cost Measures

We are interested in the impact of bond issuers' information disclosure on the liquidity cost of trading as these bonds are subsequently registered. Among the prevailing bond trading liquidity measures, we consider the dealer round-trip liquidity measure following Goldstein et al. (2007). The dealer round-trip metric is based on transaction prices on opposite sides of trades matched by the same dealer and the same trading volume. This measure offers a direct estimate of the trading spread charged by dealers.

We construct an RTC measure of liquidity, as described below, and develop our main results of liquidity cost comparison based on this RTC measure. For each bond in our sample, using the dealer ID provided by FINRA and only dealer-to-customer trades, we search for matched trading pairs within the same day from the same dealer with the same trading volume on opposite sides. That is, for each trade in which a customer sells (buys) a bond to a dealer, we attempt to find a subsequent trade in which the same dealer sells (buys) the same amount of the bond to another customer within the same day. If we find such a pair, we estimate the bid-ask spread that the dealer charges to the customers as the difference between the pair of buy-sell prices. 12 Formally, for each pair of trades we define the round-trip cost as

RTC = P_sell - P_buy,

where P_sell is the price at which the dealer sells the bond to a customer, P_buy is the price at which the dealer buys it from a customer, and prices are measured per $100 of par; RTC_pct expresses RTC as a percentage of the midpoint of the two prices. We take a simple average to aggregate the RTC measures by trading day, bond, and dealer. Daily RTC for each bond is then averaged across dealers and used as the sample liquidity measure. Clearly, the availability of RTC depends on whether there exist at least two opposite-side trades on the same bond with the same volume intermediated by the same dealer on the same day. Although in calculating RTC we disregard all other trades, it is common practice in bond markets for dealers to cover their trades within a short time period if they are pure market makers. 13

Using RTC has advantages in our setting. For one thing, this liquidity measure is based on pairs of transactions with the same trading volume and is hence immune to the impact of trading volume on transaction costs, a critique that some other liquidity measures, such as Amihud's price-impact measure, suffer from (see Schestag et al. 2016). In addition, RTC is less contaminated by intraday price volatility, as dealers live on bid-ask spreads. Even during volatile days, fundamental price movements affect RTC to a lesser degree than they affect other measures, such as those based on price dispersion.

12 In case there are multiple trades that match the original trade in terms of trading volume, trading parties (dealer), trading sides, and trading day, we select the trade with the closest transaction time.

13 Roll's measure, requiring at least three trades on the same bond within the day, is another widely used liquidity measure. In contrast, RTC requires two trades at minimum.
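A stylized implementation of the RTC construction above follows. For brevity, the sketch pairs opposite-side trades by an exact merge and skips the closest-in-time tie-break of footnote 12; the column names are assumptions rather than the actual TRACE field names:

    import pandas as pd

    def daily_rtc(trades: pd.DataFrame) -> pd.DataFrame:
        """Bond-day RTC from dealer-to-customer trades.

        Expected columns: cusip, date, dealer_id, side ('B' = dealer buys
        from customer, 'S' = dealer sells to customer), volume, and price
        (per $100 of par).
        """
        buys = trades[trades.side == "B"]
        sells = trades[trades.side == "S"]
        # Same bond, same day, same dealer, same volume, opposite sides.
        pairs = buys.merge(sells, on=["cusip", "date", "dealer_id", "volume"],
                           suffixes=("_buy", "_sell"))
        pairs["rtc"] = pairs.price_sell - pairs.price_buy    # dealer spread, $ per $100 par
        mid = (pairs.price_sell + pairs.price_buy) / 2
        pairs["rtc_pct"] = 100 * pairs["rtc"] / mid          # spread as percent of midpoint
        # Average within dealer-bond-day, then across dealers within bond-day.
        per_dealer = pairs.groupby(["cusip", "date", "dealer_id"])[["rtc", "rtc_pct"]].mean()
        return per_dealer.groupby(["cusip", "date"]).mean().reset_index()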
Sample Correlations

We use 100 trading days around registration as our primary event window. For the 2,487 matched registered bonds after 2002, we are able to calculate the round-trip cost for 1,734 bond pairs, or more than two-thirds of the sample. Table III shows the correlations among the major variables used in the paper. The registration indicator variable, post, which takes the value of zero (one) for transaction times before (on and after) registration, is negatively correlated with RTC. The table also shows that, consistent with conventional wisdom, RTC is negatively correlated with bond offering amount, firm size, and whether the issuing firm is a public firm, and is positively correlated with the firm's leverage and stock return volatility.

Univariate Analysis of Liquidity Change around Registration

We first examine RTC changes around registration. In comparing the RTC liquidity measures, we take the mean values of RTC for each bond pair during specific windows before and after the registration, and then average over the mean values of all bonds. This way, each bond has an equal weight regardless of its trading frequency. Panel A of Table IV compares RTC pre- and post-registration. It shows that RTC decreases after registration regardless of the event window. For windows of 30 to 100 days, the differences in RTC and RTC_pct pre- and post-registration are all significant, and the reduction in RTC ranges from 12 to 24 percent of the spread value, which is economically significant.

We next partition the transactions into three size groups. Transactions with par-value volume larger than or equal to $5 million ($1 million) for an investment-grade (high-yield or non-rated) bond belong to the "large" group; transactions with volume smaller than $100,000 belong to the "small" group; and the rest of the transactions are classified into the "medium" group. Within each size group, we re-calculate the RTC liquidity measures as we do for the full sample. Panel B of Table IV shows that large trades dominate the sample, accounting for around two-thirds of the trade observations. Noticeably, the large trade-size group has the largest reduction in RTC post-registration. The medium trade-size group also has a reduction in RTC, but of smaller magnitude. In the small trade-size group, RTC instead increases; however, we note that small trades account for less than 10 percent of the trades, and this increase disappears in the multivariate regressions of later sections once the prospectus-based information asymmetry measures are introduced.

Baseline Regressions of RTC

We test the robustness of the univariate results using panel regressions that control for both bond cross-sectional characteristics and aggregate market variables. In examining the change in RTC following public registration, we employ the following baseline regression specification, based on the extant bond liquidity literature, for a given bond issue i on day t:

Liq_{i,t} = β0 + β1 post_{i,t} + β2' X_{i,t} + β3' M_t + ε_{i,t},    (1)

where the dependent variable Liq_{i,t} is either of the two liquidity measures, RTC or RTC_pct, and β1, β2, and β3 denote regression coefficients. Our main focus in Equation (1) is post, the registration indicator, which captures overall transaction cost changes due to registration. The other regression covariates, collected in X_{i,t} and M_t, consist of issue-specific attributes (offering amount, time to registration, maturity, rating, and a callability dummy), issuer-specific characteristics (whether the firm is a public firm with listed stock, firm size, leverage, and idiosyncratic stock return volatility), 14 and aggregate bond market credit and liquidity risk factors (term-structure slope, default spread, funding liquidity, and VIX). We control for year fixed effects, cluster by issuer, and employ heteroskedasticity adjustments in all regressions.

Equation (1) is our baseline regression for evaluating the effects of registration. Models (1) and (5) of Table V report the baseline estimates. The coefficient estimate of post on RTC is negative and significant (Model (1)), amounting to about 8 percent of RTC's pre-registration level. The coefficient estimate of post on RTC_pct in Model (5) is, however, not significant. In untabulated results, we find that when we control only for the macro variables and the issue- or issuer-characteristics, post is significantly and negatively related to RTC_pct; thus, the insignificance of post on RTC_pct in Table V may be caused by multicollinearity. In sum, Models (1) and (5) of Table V show that public registration on average improves liquidity.
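In code, Equation (1) is an OLS panel regression with year fixed effects and issuer-clustered standard errors. The statsmodels sketch below is illustrative; variable names mirror Appendix A where possible, the remaining control names are assumptions, and `panel` is assumed to be a bond-day DataFrame built as in the RTC sketch above:

    import numpy as np
    import statsmodels.formula.api as smf

    controls = ("np.log(offer_amt) + time_to_reg + maturity + rating + callable_dummy"
                " + public + firm_size + ltdebt_ratio + iv"
                " + term_slope + default_spread + funding_liq + vix + C(year)")
    base = smf.ols("rtc ~ post + " + controls, data=panel).fit(
        cov_type="cluster", cov_kwds={"groups": panel["issuer_id"]})
    print(base.params["post"])  # average registration effect on RTC

    # The DID specifications below add an interaction with an information-asymmetry
    # proxy, e.g. post_x_public (post * public expands to both main effects plus
    # the interaction term):
    did = smf.ols("rtc ~ post * public + " + controls.replace(" + public", ""),
                  data=panel).fit(cov_type="cluster",
                                  cov_kwds={"groups": panel["issuer_id"]})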
DID Analysis: The Impact of Information Asymmetry on Bond Liquidity

The results above show that there is a reduction in liquidity costs after a 144A bond becomes publicly registered. We further investigate whether these changes are driven by information disclosure. To pin down the impact of corporate information disclosure on liquidity cost, we use a DID approach to identify the liquidity change caused by enhanced corporate information disclosure. We conduct two sets of DID analyses in this section. In the first DID test, we examine how registration affects the market liquidity of bonds of higher- versus lower-information-asymmetry firms. The premise is that if information disclosure matters for trading liquidity, then the public registration effects should be stronger among issuers with greater information risk pre-registration. Our hypothesis is that the RTC measure of the bid-ask spread decreases more for firms with high information asymmetry before public registration. In the second DID test, we examine issue-level ex-ante information asymmetry, so that the registration effect can be attributed to the bond itself.

DID Results on Information Asymmetry: Public vs. Private Firms

As discussed, we first differentiate bonds issued by previously public and private firms. Previously public firms, via their SEC-mandated information disclosure, present a lower degree of information asymmetry to investors than private firms do. Because public firms regularly disclose their financial and operational conditions, public registrations of their 144A bonds may have relatively less information content than 144A registrations of private firms. Our primary measure of previously-public status is whether the firm's stock is publicly listed pre-registration. Compared with a firm whose only listed securities are public bonds, the regulatory disclosure of stock-listed firms is arguably much more extensively followed by investors and analysts. A large fraction of U.S. 144A bonds are issued by private firms: earlier we showed that intersecting FISD with TRACE leaves us with 11,443 U.S. 144A bond issues from 3,528 issuers (Table I).

To examine whether information asymmetry is a channel for the effect of registration on liquidity, we focus on the interaction of post with one of our information asymmetry measures. Models (2) and (6) of Table V present the results using public, our previously-public-firm dummy, as the information asymmetry measure. We note that the interaction term, post_x_public, loads positively on the liquidity measures RTC and RTC_pct. Recall our earlier result that bond registration per se leads to a reduction in RTC. Models (2) and (6) of Table V indicate that the reduction in RTC through bond registration is weaker (stronger) for public (private) issuers. The overall reduction in the bid-ask spread upon registration is moderated for public issuers; in fact, the net reduction in RTC via post for public issuers, which equals the sum of the coefficients on post and post_x_public, is close to zero or slightly positive, suggesting that the reduction in RTC takes place mostly among private issuers. We note similar results for RTC_pct. Hence, Models (2) and (6) of Table V provide evidence for the role of information asymmetry, via private firms, as a channel for the effect of 144A registration on liquidity.

DID Results on Information Asymmetry: Bond Prospectuses

In the previous section, the measure public captures issuer-level information asymmetry but not the granular issue level. The challenge with the latter lies in the fact that about half of the bonds are issued by private issuers. One way to learn about a private issuer and its issue is through the bond prospectus. A growing literature examines how firms disclose information in financial reports such as 10-Ks and IPO prospectuses and finds that firms tend to hide adverse information in lengthy filings. Li (2008) finds that 10-K reports are harder to read when earnings are lower, and You and Zhang (2009) find that investors underreact to the information in long and complex 10-K filings. Following this literature, we compute three metrics for each bond prospectus: the logarithm of the total word count (log_wc), the logarithm of the count of uncertainty words (log_wc_unc), and the logarithm of the file size (log_fsize).
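Computing the three proxies for a downloaded prospectus is straightforward. In the sketch below, the uncertainty-word list is supplied as an input, in the spirit of the Loughran and McDonald dictionaries cited earlier (the exact word list used is an assumption here, and a real implementation would first strip HTML from EDGAR filings):

    import math
    import os
    import re

    def prospectus_metrics(path: str, uncertainty_words: set) -> dict:
        """log_wc, log_wc_unc, and log_fsize for one prospectus file."""
        with open(path, encoding="utf-8", errors="ignore") as fh:
            tokens = re.findall(r"[a-z]+", fh.read().lower())  # naive word tokenizer
        n_unc = sum(1 for t in tokens if t in uncertainty_words)
        return {
            "log_wc": math.log(len(tokens)),               # total word count
            "log_wc_unc": math.log(max(n_unc, 1)),         # uncertainty words (floored at 1)
            "log_fsize": math.log(os.path.getsize(path)),  # file size in bytes
        }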
These prospectus metrics allow us to differentiate, in an ex-ante sense, the level of information asymmetry among the bond issues. As discussed earlier, we downloaded 1,150 bond prospectuses (the offering documents of the exchange bonds) for the bonds for which we can calculate RTC, and we computed these prospectus measures for them. Models (3)-(4) and (7)-(8) of Table V present the information asymmetry results using log_wc. We first verify in Models (3) and (7) that log_wc is positively related to RTC and RTC_pct; that is, higher information asymmetry, as proxied by larger values of log_wc, induces larger bond bid-ask spreads. Turning to our interaction variable of interest between post and log_wc, we note that it loads negatively on the liquidity measures RTC and RTC_pct. These results are consistent with those for public, in that both indicate that the bid-ask spread decreases more for high-information-asymmetry issues. The coefficient estimates on the post and log_wc interaction term (post_x_log_wc) are of about the same magnitude as those on log_wc, indicating that the information asymmetry effect of registration is about as large as the main information asymmetry effect itself. Thus, the evidence suggests that the information asymmetry channel is an important driver of the reduction in bid-ask spreads upon Rule 144A bond registration at the issue level. In sum, both DID tests support our hypothesis that information disclosure improves the trading liquidity of private bonds.

Endogeneity Consideration of Registration

Our sample so far relies on 144A bonds that are subsequently registered. However, there is a possibility that bond registration is an endogenous firm choice depending on borrowing costs, issue amount, firm size, market liquidity, and funding conditions. Endogeneity can therefore arise from self-selection, as more liquid firms may have a greater tendency to register. To control for potentially endogenous bond registration, we run a traditional two-stage Heckman selection test. In the first stage, we utilize all domestic 144A bonds in FISD and run a probit regression of whether a bond is registered, based on the bonds' primary-market characteristics, including issue- and issuer-characteristics and macro conditions. In the untabulated first-stage regression of registration, there are 10,548 Rule 144A issues from FISD, of which 3,945 are registered. We observe that variables such as offering spread, offering size, callability, and public-firm status are positively related to the probability of registration, whereas variables such as idiosyncratic return volatility, the macro default level, and liquidity are negatively related. These results are by and large consistent with the going-public literature (e.g., Pagano, Panetta, and Zingales 1998). We then calculate the inverse Mills ratio (IMR) from the first stage and include the IMR in the stage-two regressions of the secondary-market RTC liquidity measures.
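The two-stage procedure can be sketched as follows: a probit of the registration dummy on primary-market characteristics over all FISD 144A issues, followed by second-stage RTC regressions augmented with the inverse Mills ratio. The first-stage regressor names are illustrative, and `fisd` and `panel` are assumed DataFrames as in the earlier sketches:

    import statsmodels.api as sm
    from scipy.stats import norm

    # Stage 1: probit of `registered` on primary-market characteristics
    # (all domestic FISD 144A bonds).
    X1 = sm.add_constant(fisd[["offer_spread", "log_offer_amt", "callable_dummy",
                               "public", "iv", "default_spread", "funding_liq"]])
    probit = sm.Probit(fisd["registered"], X1).fit()
    xb = probit.fittedvalues                    # linear index X'beta
    fisd["imr"] = norm.pdf(xb) / norm.cdf(xb)   # inverse Mills ratio

    # Stage 2: the Table V regressions with IMR added (controls abbreviated here).
    panel = panel.merge(fisd[["cusip", "imr"]], on="cusip")
    X2 = sm.add_constant(panel[["post", "imr"]])
    stage2 = sm.OLS(panel["rtc"], X2).fit(cov_type="cluster",
                                          cov_kwds={"groups": panel["issuer_id"]})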
Different Trade Sizes

We next examine how the post-registration impact on liquidity varies with the size of the underlying trades. Post registration, as retail investors begin to trade the 144A bonds, the average trade size drops. Table VII presents the results for the different trade-size groups. We find that post_x_public remains significantly positive and post_x_log_wc remains significantly negative for the large and medium trade-size groups, but both become insignificant for the small trade-size group. These results echo the patterns of RTC reduction in Table IV, where we found that the reduction in RTC is concentrated in the large and medium trade-size groups, which nonetheless account for 90 percent of all trades.

Alternative Information Asymmetry Measures

We investigate robustness with respect to alternative ex-ante information asymmetry measures. So far, we have classified a firm as public based on whether the issuer has a listed stock at the time of registration. We now expand the measure to include firms with public bonds as well. Specifically, a firm is instead defined as "public" if, at the time of registration of the Rule 144A bond, it has a listed stock or an outstanding public bond issued at least two months before and maturing at least two months after the registration date. Earlier we reported that public-equity firms account for 39.5% of the trades in our sample; by contrast, public-equity or public-bond firms account for 53.1% of the trades. Results in Panel A of Table VIII show that post_x_public remains significantly positive for both RTC and RTC_pct, and this significance persists in the large and medium trade-size groups. Panel B reports results for the alternative prospectus measures: the prospectus measures load positively, their interaction terms with post load negatively, and the coefficient estimates on the interaction terms are of about the same magnitude as those on the counterpart prospectus measures. These results are highly consistent with those for log_wc and confirm the information asymmetry effect on liquidity around 144A registration.

Alternative Event Windows

We further employ alternative event windows. In untabulated results, we also examine the impact of QIBs potentially trading with non-QIBs after the two- or one-year vintage for 144A bonds. We first note that very few registrations take place outside this window: Table II shows that the 99th (90th) percentile of registration distance is 1.89 (0.96) years. We drop the few observations with a pre-registration trading vintage greater than either one or two years and find our results to be robust.

Alternative Liquidity Measures and the Clientele Effect

Our RTC spread measures are constrained by the availability of same-dealer matched trades; such pairs may be harder to find when investors trade around registration more actively. 16 The entrance of retail investors post registration will likely lower trading activity, as they tend to behave more like buy-and-hold investors than QIBs do. Retail investors may also interpret public information less efficiently than QIBs (e.g., Kandel and Pearson 1995, Han and Zhou 2014). The change in the investor base post registration may therefore introduce a clientele effect into our results.

16 We also tested other liquidity measures, such as the difference between average bid and ask prices as in Hong and Warga (2000), effective half spreads as in Bessembinder and Venkataraman (2010), and the intraday price range as in Han and Zhou (2014). These all yield similar results, which are available upon request.
The negative coefficient of post on total_trd_vol in Table X is consistent with the above conjecture that the introduction of retail and non-QIB customers post registration results in smaller and less frequent trades, i.e., a clientele change. The result of subdued trading size and volume is also consistent with Goldstein, Hotchkiss, and Sirri (2007), who find that enhanced transparency is not associated with greater trading volume. Because the Amihud ratio divides the price change by dollar volume, smaller trades mechanically raise Amihud; therefore, a higher Amihud illiquidity measure could simply arise from the change in investor base. Additionally, there are fewer trades by the buy-and-hold retail investors post registration, resulting in a more negative covariance between consecutive returns (and therefore a larger value of Roll). Importantly, we observe that the interaction term post_x_public has a significantly negative coefficient on total_trd_vol and significantly positive coefficients on Amihud and Roll (Models (1)-(3)). This is consistent with our finding that the liquidity improvement post registration is greater for private debt issuers, which have higher ex-ante information asymmetry. In Panel B of Table X, we constrain the sample to observations with missing-RTC trades only. In this last sample, the clientele effect is arguably more acute, and it is possible that our findings might not survive. To the contrary, we find that the signs of post_x_public and post_x_log_wc on total_trd_vol, Amihud, and Roll remain the same as those in Panel A. Thus, the results on the traditional liquidity measures are not driven by the clientele effect.

Conclusions

In this paper, we utilize detailed transaction-level data on Rule 144A bonds to examine the liquidity change following registration, focusing on the link between information asymmetry and bond trading liquidity. We use broker-dealers' round-trip cost (RTC) to measure the underlying liquidity of the 144A bond market. We find that the registration of Rule 144A bonds leads to decreasing trading costs, especially for 144A issues with higher ex-ante information asymmetry. Specifically, our results show that registration on average reduces RTC by about 12 percent of its pre-registration level in the 100 days around registration. More importantly, we find that the reduction in RTC is larger for 144A issues with higher ex-ante information asymmetry. These results are based on two difference-in-differences approaches: contrasting registered bonds of public and private firms, and contrasting high and low ex-ante information asymmetry issues as embodied in bond offering prospectuses. We also conduct several robustness tests, including incorporating the possible endogeneity of registration, subsamples by trade size, alternative liquidity and information asymmetry measures, alternative event windows, and the effects of the 2008 SEC reform. Overall, our findings suggest that information disclosure contributes to lower transaction costs and better liquidity in OTC corporate bond trading. In addition, enhanced market transparency from TRACE dissemination appears to lower liquidity costs as well, while the change in the clientele base following public registration of 144A bonds could be associated with lower trading volume, smaller transaction sizes, and perhaps higher liquidity costs for bonds with little change in information asymmetry post registration. Most of the liquidity improvement effects are found in the large- and medium-sized trades.
Hence, despite the change in the clientele base and the reduction in trading size post registration, we continue to observe information asymmetry driving liquidity.

Our paper contributes to the policy debate on the externalities of financial disclosure in OTC markets. One side of this debate argues that disclosure in such trading platforms could exacerbate average underpricing in primary asset markets and reduce welfare. Extant literature, on the other hand, shows, mostly on the equity side, that disclosure in OTC markets can provide benefits such as a reduction in the cost of capital, increased liquidity, restraint of the shadow financial sector, and lower crash risk (e.g., Greenstone et al. 2006; Brüggemann et al. 2016). Our results imply, in the context of the OTC private debt market, that corporate disclosure improves the information environment by lowering information asymmetry and improving underlying liquidity.

Appendix A. Variable Definitions

This table describes all the variables used in the paper.

Liquidity measures

RTC: Round-trip cost of trading, a proxy for the effective bid-ask spread, calculated as the daily mean price difference between buy and sell dealer-customer trades, where price is measured per $100 at par.
RTC_pct: Round-trip cost of trading as a percentage of the midpoint of the buy and sell prices.
Amihud: Percentage change in bond price between two consecutive trades divided by the dollar trading volume (in million $) of the first transaction. A price-impact measure of liquidity.
Roll: Roll measure of liquidity cost, i.e., the effective bid-ask spread constructed following Roll (1984); equals two times the square root of minus the covariance between consecutive returns from price changes.
total_trd_vol: The logarithm of the dollar volume of the trade (in thousand $).

Issue-specific characteristics (Sources: FISD; EDGAR)

post: A dummy variable that equals one for the post-registration period and zero otherwise.
public: A dummy variable that equals one if the bond is issued by a firm with public equity (or by a firm with public equity or a public bond in robustness checks), and zero otherwise.
post_x_public: The interaction term of post times public.
log_wc: The logarithm of the number of words in the bond offering prospectus.
log_wc_unc: The logarithm of the number of uncertainty words in the bond offering prospectus.
log_fsize: The logarithm of the file size of the bond offering prospectus.
post_x_log_wc: The interaction term of post times log_wc.
offer_amt: Offering amount of the bond, in millions. In regressions, the variable is transformed into logarithms.
post2008: An indicator variable that equals one if the bond transaction time is after Feb. 29, 2008, and zero otherwise.
post_x_post2008: The interaction term of post times post2008.
post_x_public_x_post2008: The interaction term of post_x_public times post2008.
post_x_log_wc_x_post2008: The interaction term of post_x_log_wc times post2008.
Registered: A dummy variable for a 144A bond that is subsequently registered.
IMR: Inverse Mills ratio from the bond registration regression.

Issuer-specific characteristics (Sources: COMPUSTAT, CRSP)

firm_size: Logarithm of the issuing firm's market capitalization over the previous three months, obtained as the product of stock price and shares outstanding.
ltdebt_ratio: Ratio of long-term debt to total book value of assets in the previous fiscal year.
iv: Idiosyncratic return volatility, computed as the standard deviation of residuals from a Fama-French three-factor model fitted to six months of monthly stock returns preceding the transaction date.
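In formula form, the verbal Amihud and Roll definitions above correspond to the standard constructions (a reconstruction consistent with Amihud (2002) and Roll (1984)):

\[
\text{Amihud}_t = \frac{\lvert P_t / P_{t-1} - 1 \rvert}{\text{DollarVolume}_{t-1}},
\qquad
\text{Roll} = 2\sqrt{-\operatorname{Cov}(r_t,\, r_{t-1})},
\]

where \(P_t\) is the price of the \(t\)-th trade, \(\text{DollarVolume}_{t-1}\) is the dollar volume (in million $) of the first trade of the pair, and \(r_t\) denotes consecutive returns computed from price changes.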
A 144A issue and its exchange issue must appear both before and after registration to be included in the comparison. In the comparison, for each bond pair we take the mean values of the RTC liquidity measures before and after the registration, respectively, and then average over the mean values of all bonds to arrive at the numbers reported in the table. In Panel B, we partition the transactions into three size groups, Large, Medium, and Small, according to their trading volume and bond credit ratings. Transactions with volume larger than or equal to $5 million ($1 million) at par value for an investment-grade (high-yield or non-rated) bond belong to the "Large" group; transactions with volume smaller than $100,000 belong to the "Small" group; and the rest of the transactions are classified into the "Medium" group. After the partition, within each size group, we re-calculate the RTC liquidity measures as we do for the full sample. ***, **, and * indicate significance at the 1, 5, and 10 percent levels, respectively.

The key explanatory variables are post (a dummy variable that equals one for the post-registration period and zero otherwise); public (a dummy variable that equals one if the issuer is a public firm and zero otherwise); post_x_public, the interaction of post and public; log_wc (the logarithm of the total word count of the bond offering prospectus); and post_x_log_wc, the interaction of post and log_wc. To preserve the number of observations, the issuer characteristics firm_size, ltdebt_ratio, and iv are replaced with zero when missing. All other variables are defined in Appendix A. All regressions include controls for year fixed effects and issuer-level clustering, with adjustments for heteroskedasticity. Values of t-statistics are reported in parentheses. ***, **, and * indicate significance at the 1, 5, and 10 percent levels, respectively.

In the first stage, registered (a dummy variable that equals one for a 144A bond that is subsequently registered and zero otherwise) is regressed on bonds' primary-market characteristics, using all of the primary-market 144A bonds from FISD. In the second stage, the estimated inverse Mills ratio (IMR) from the first stage is included in secondary-market regressions that are otherwise identical to those of Table V. The second-stage regressions include controls for year fixed effects and issuer-level clustering, with adjustments for heteroskedasticity. Values of t-statistics are reported in parentheses. ***, **, and * indicate significance at the 1, 5, and 10 percent levels, respectively.

Panel A reports the results of regressions using the existence of either public equity or a public bond prior to the 144A registration as the measure of "public". Panel B reports the results for the alternative prospectus measures. All control variables are omitted for brevity. Values of t-statistics are reported in parentheses. ***, **, and * indicate significance at the 1, 5, and 10 percent levels, respectively.

The dependent variables consist of the logarithm of the total dollar volume of the trade (total_trd_vol), the Amihud measure (Amihud), and Roll's measure (Roll). In Panel A, all trades are included; in Panel B, trades for which we can calculate RTC are excluded. The control variables in Panel B, the same as those in Panel A, are omitted for brevity. All regressions include controls for year fixed effects and issuer-level clustering, with adjustments for heteroskedasticity.
Values of t-statistics are reported in parentheses. ***, **, and * indicate significance at the 1, 5, and 10 percent levels, respectively.
Customized Knee Prosthesis in Treatment of Giant Cell Tumors of the Proximal Tibia: Application of 3-Dimensional Printing Technology in Surgical Design

Background: We explored the application of 3-dimensional (3D) printing technology in treating giant cell tumors (GCT) of the proximal tibia. A tibia block was designed and produced through 3D printing technology. We expected that this 3D-printed block would fill the bone defect left after en-bloc resection. Importantly, the block, combined with a standard knee joint prosthesis, provided attachments for the collateral ligaments of the knee, which can maintain knee stability.

Material/Methods: A computed tomography (CT) scan was taken of both knee joints in 4 patients with GCT of the proximal tibia. We used a novel technique, a real-size 3D-printed proximal tibia model, to design the preoperative treatment plans. With the application of 3D printing technology, a customized proximal tibia block could be designed for each patient individually to fill the bone defect, combined with a standard knee prosthesis.

Results: In all 4 cases, the 3D-printed block fitted the bone defect precisely. The motion range of the affected knee was 90 degrees on average, and the soft tissue balance and stability of the knee were good. After an average 7-month follow-up, the MSTS score was 19 on average. No signs of prosthesis fracture, loosening, or other relevant complications were detected.

Conclusions: This technique can be used to treat GCT of the proximal tibia when it is hard to achieve soft tissue balance after tumor resection. 3D printing technology simplified the design and manufacture of custom-made orthopedic medical instruments. This new surgical technique could be much more widely applied because of 3D printing technology.

Background

Giant cell tumor (GCT) of bone is a rare, benign, but locally invasive neoplasm, accounting for 3-5% of all primary bone tumors [1,2]. GCT most often involves the distal femur, followed by the proximal tibia and distal radius [3]. The surgical treatment options for patients with GCT of the proximal tibia include intralesional curettage with a high-speed burr, cryotherapy or phenolization, and cementation or bone grafting, or en-bloc resection and reconstruction. For primary cases, intralesional curettage with adjuvant methods is the main choice, although the local recurrence rate is relatively high [4]. In locally recurrent cases or elderly patients, en-bloc resection and reconstruction of the knee joint can be a good choice to reduce the risk of local recurrence and to achieve better quality of life [4]. Generally, a customized rotating-hinge knee prosthesis is chosen to fill the bone defect left after en-bloc resection of the tibia. However, poor flexibility and stress concentration cause prosthesis fracture and loosening, which often lead to poor follow-up results [5-8]. A standard knee prosthesis, which can solve these problems, cannot fill large bone defects after the resection, so joint stability remains a problem.

In recent years, clinicians have managed to integrate CT imaging and computer-aided design (CAD) into surgical planning and the design of custom-made implants. 3D printing technology has also been applied clinically in preoperative planning, and it has been shown that 3D-printed models can achieve accuracy in preoperative design and in manufacturing internal fixation devices [9].
Moreover, studies of 3D-printed metallic implants in animal experiments have also been reported [10,11], but 3D printing had not been reported in the design and production of tumor prostheses. Recently, the China Food and Drug Administration (CFDA) approved the clinical use of 3D-printed metallic implants (registration number: 20153461311). In this paper, we explored the application of 3D printing technology in treating patients with giant cell tumors (GCT) of the proximal tibia through en-bloc resection and reconstruction of the knee joint. A tibia block was designed and produced using 3D printing technology. We expected that this 3D-printed block, combined with a standard knee joint prosthesis, would fill the bone defect after en-bloc resection and maintain knee stability by providing attachments for the collateral ligaments of the knee, achieving good results.

Ethics approval and consent to participate

This study was conducted in accordance with the principles outlined in the Declaration of Helsinki and was approved by the Ethics Committee of the Second Hospital of Jilin University. Written informed consent to participate was obtained from all patients involved in the study. Patient data were kept anonymous to ensure confidentiality and privacy.

Patient characteristics

Between August 2015 and December 2015, 4 patients (1 male and 3 females, aged 35-68 years) with giant cell tumor of the proximal tibia underwent en-bloc resection of the tumor and reconstruction with a prosthesis. All 4 cases were diagnosed pathologically as GCT. To reduce the possibility of local recurrence, the criteria for en-bloc resection were patients with local recurrence and aged patients who could not tolerate another operation. One case was a primary tumor; the other 3 had undergone curettage in a previous surgery and suffered a local recurrence. One of these (Case 3) had undergone internal fixation after curettage (Figure 1). Antero-posterior and lateral radiographs, CT, and/or magnetic resonance imaging (MRI) were performed to locate the tumor. Chest X-rays were examined to exclude pulmonary metastasis. Patient demographics are listed in Table 1.

Data acquisition and processing of images

Imaging data of each patient's bilateral knee joints were obtained through a 64-slice spiral CT scan (PHILIPS Corporation, Japan; X-ray tube current 232 mA, 120 kVp, slice thickness 1 mm, reconstruction interval 1 mm). The DICOM files obtained from each scan were imported into the Materialise Mimics (version 14.0) software package for 3D reconstruction and editing to remove artifacts. The results were then saved in STL file format and imported into the Imageware software (Imageware V12.1, EDS Corporation, USA) to simulate the surgery and design the custom-made proximal tibia block (Figure 2).
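The authors performed this pipeline with the commercial Mimics and Imageware packages; purely as an illustration of the same DICOM-to-STL step, the open-source Python sketch below reads a CT series, thresholds bone, and writes a surface mesh. The folder and file names and the 300 HU threshold are assumptions, and clinical segmentation involves interactive editing rather than a single global threshold:

    import numpy as np
    import SimpleITK as sitk
    from skimage import measure
    from stl import mesh  # numpy-stl

    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames("ct_knee_dicom/"))  # hypothetical DICOM folder
    image = reader.Execute()
    hu = sitk.GetArrayFromImage(image)   # (z, y, x) array of Hounsfield units
    sx, sy, sz = image.GetSpacing()      # voxel spacing in mm, (x, y, z) order

    # Extract a bone surface at ~300 HU (illustrative threshold) with marching cubes.
    verts, faces, _, _ = measure.marching_cubes(hu, level=300.0, spacing=(sz, sy, sx))

    surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    surface.vectors[:] = verts[faces]    # (n_faces, 3, 3) triangle vertex coordinates
    surface.save("tibia.stl")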
Selection of tumor prosthesis

All patients underwent en-bloc resection of the tumor of the proximal tibia and reconstruction with a prosthesis. A standard knee prosthesis combined with an extension stem was applied. With the help of the newly designed 3D-printed block during the procedure, soft tissue balance was achieved.

Special techniques applied during the procedure

Special attention was given to preserving the collateral ligaments during en-bloc resection of the lesion. The soft tissue of the collateral ligaments was peeled off meticulously from the proximal tibia like a sleeve. After the resection, the ligaments were sutured to the proximal tibia block, which provided a reticular porous-structured surface for ligament attachment. The knee joint motion range was tested immediately after the operation. Postoperatively, a systematic rehabilitation training plan was designed and carried out.

Design and manufacture of the proximal tibia block

GCT lesion location and the range of resection

For all patients, a CT scan was performed 1 week prior to surgery to obtain a 2-dimensional (2D) CT of the lesion. The tumor resection region was determined according to standard oncologic principles. The surgical excision was made wide, extending 2 cm beyond the tumor boundary, to reduce the risk of local recurrence [12]. In these 4 cases, the median resection length was 7.8 cm (range, 6.5-8.5 cm). In Case 1, an additional MRI was taken to inspect the bone cortex destruction. In Case 3, a 3D CT was taken to locate the internal plates from the previous surgery (Figure 3).

Proximal tibia block design and surgical simulation with the 3D-printed model

The proximal tibia block was designed from the mirror image of the 3D reconstruction of the unaffected side to ensure the same morphological characteristics. The length of the block was then determined through the surgical simulation software according to the resection region. The original anatomical model was then refined into a model with several detailed geometric features. Initially, in the first case, a reticular porous structure was generated over the entire surface to connect soft tissue, but it was hard to suture through in the limited space. Therefore, to maintain knee stability, horizontal cylindrical holes were designed on the surface of the block for attaching the collateral ligaments, according to previous studies [13]. For all 4 cases, a cylindrical hole was built vertically into the center of the block so that the extension stem of the tibial tray prosthesis could pass precisely through it into the medullary cavity (Figure 4). After the design was completed, the model was saved as an STL file and imported into a 3D printing machine. The designed block model, including the tibial tray prosthesis with extension stem, was 3D-printed at 1:1 scale. All of these models were made of photosensitive resin. Preoperatively, surgical simulation based on the 3D-printed model was performed to check the accuracy of the model (Figure 5).

Manufacture of the block

The block combined with the proximal tibia prosthesis fitted the tibia precisely during the preoperative surgical simulation with the photosensitive resin models. The final product was fabricated by a commercial company (AK MEDICAL Ltd, China) certified by the China Food and Drug Administration (CFDA). The 3D-printed product was made of titanium alloy (Ti6Al4V) through electron beam melting (EBM) technology [14]. Reticular porous structures were generated on the surface of the block during the rapid-prototyping process. After post-processing, the block was ready for use (Figure 6).
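As a small illustration of the mirror-image step described above (generating the affected-side template from the healthy side), the numpy-stl sketch below reflects a surface mesh across the sagittal plane. The file names are hypothetical, and real block design additionally involves the CAD steps described above, such as trimming to the resection length and adding the cylindrical holes:

    from stl import mesh  # numpy-stl

    healthy = mesh.Mesh.from_file("tibia_unaffected.stl")   # hypothetical input
    mirrored = mesh.Mesh(healthy.data.copy())
    mirrored.vectors[:, :, 0] *= -1                         # reflect x coordinates across x = 0
    # Reflection reverses triangle winding, so flip each face to restore
    # outward-pointing normals before downstream CAD and printing steps.
    mirrored.vectors[:] = mirrored.vectors[:, ::-1, :].copy()
    mirrored.update_normals()
    mirrored.save("tibia_block_template.stl")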
Surgery

All operations were performed under epidural anesthesia, with the patient placed supine on the operating table. The conventional anterior medial incision for the knee was used. En-bloc resection of the proximal tibia was performed strictly in accordance with the principles of tumor-free surgery. The length of the osteotomy was determined from the preoperative design. The prosthesis and the 3D-printed block were then installed into the medullary canal of the tibia with a bone cement technique. Special attention was given to preserving the patellar tendon and collateral ligaments during en-bloc resection of the lesion. The soft tissue of the collateral ligaments was peeled off meticulously from the proximal tibia like a sleeve. Importantly, the ligaments were sutured to the proximal tibia block through the reticular porous structure. The motion range of the knee joint was tested immediately after the reconstruction (Figures 7, 8). Tranexamic acid was given before the tourniquet was released to reduce the transfusion rate and blood loss [15,16].

Follow-up

The patients were followed up at 1, 3, 6, and 12 months during the first year after the operation, every 6 months in the second year, and every year thereafter. Antero-posterior and lateral radiographs were taken to look for signs of prosthesis fracture and loosening. The Musculoskeletal Tumor Society (MSTS) score [17], imaging appearance, and videos of the patients' knee motion were recorded.

Surgical results

The operation was completed successfully in all patients. The 3D-printed block fitted the bone defects precisely and connected the ligaments and soft tissues (Table 1). There were no signs of wound infection, skin necrosis, or other surgical complications (including peroneal nerve injury). The motion range and stability of the knee joint were examined and recorded in Table 1. Monitoring for prosthesis failure requires a longer follow-up study.

Follow-up results

The patients were followed up for 5-8 months. The latest follow-up scores for these patients are shown in Table 1. There were no signs of prosthesis fracture or loosening, and no chronic infection or other prosthesis-related complications were reported in these cases (Figure 9). Because most recurrences after treatment are reported within 2 years [12], further follow-up is needed.

Discussion

En-bloc resection of tumors of the proximal tibia and reconstruction with a customized rotating-hinge knee prosthesis has been demonstrated to provide acceptable function and a remarkably low recurrence risk in most cases [4,18]. However, poor flexibility and stress concentration cause prosthesis fracture and loosening, which lead to poor follow-up results [19]. Ideally, rotational and flexional stresses should be reduced while stability is still ensured. Based on these principles, we turned to a standard knee prosthesis in this study, which alleviates stress concentration and mechanical wear while ensuring the soft tissue balance of the knee. With the help of the newly designed 3D-printed block, based on the unaffected side of the proximal tibia, the bone defect left after en-bloc resection was precisely filled. The collateral ligaments were attached to this block, which maintains the soft tissue balance.

Various methods for surgical visualization have been implemented. 3D anatomical images reconstructed from CT or MRI have become a commonly used tool in orthopedic surgery over the last 2 decades [20-22]. The advantage of observing the patient's anatomy through 3D printing is obvious. In the present study, we introduced 3D printing technology into surgical visualization. With a 3D-printed model made of photosensitive resin, it is much easier and safer to design treatment plans. Intraoperatively, the personally designed proximal tibia block makes it easier to apply innovative surgical techniques. Benefiting from 3D printing technology, the block was modified to easily connect the soft tissue of the collateral ligaments, which could help reconstruct the knee joint, and the technology is useful in the design and manufacture of custom-made orthopedic medical instruments.
This new surgical technique could become more widely used because of 3D printing technology. 3D printing is a form of additive manufacturing in which little raw material is wasted, so its cost is lower than that of traditional processes. The total cost was about 30% lower than the cost of a traditional prosthesis. The time needed to produce a 3D-printed customized prosthesis was also shorter than that required for a traditional prosthesis. Taken together, these facts make this new technology attractive. The material properties of 3D-printed Ti-alloy have been examined in several studies [23]. Liu et al. compared the mechanical characteristics of EBM-printed Ti-6Al-4V orthopedic implants with the general implants of AO (Association for the Study of Internal Fixation) systems, showing that the bending stiffness and strength of the EBM-printed plates were greater than those of the general implants used clinically [14]. Facchini et al. worked on the microstructure and mechanical properties of Ti-alloy and found that the Ti-6Al-4V alloy produced by EBM reached 99.4% of the theoretical density, had a very fine microstructure, and preserved tensile mechanical properties that fully satisfied the standard requirements [24]. In the operation, the interface between the tibial tray and the 3D-printed Ti-alloy block was filled with bone cement to prevent micromovement and to reduce the chance of fatigue and corrosion. This kind of technique has been safely applied in some prosthesis revision systems [25,26]. In this study, patients with GCT were selected because GCT is a benign but locally invasive neoplasm with a high possibility of local recurrence. In addition, the possibility of metastasis of GCT is lower than that of a malignant bone tumor, which makes the procedure relatively safe. Postoperative fractures and implant loosening were taken into consideration during the design. In this study, we used cemented stems to acquire a better stress distribution in the tibial bone [27]. The length of the stems was designed based on AAOS (American Academy of Orthopaedic Surgeons) principles [28]. A benefit of the new properties of our tibia block is that we were able to use standard knee prostheses, which have a lower likelihood of postoperative fracture and implant loosening than the traditional modular rotating-hinge prosthesis [5]. By these means, we expected to avoid postoperative fractures and implant loosening. Moreover, no relevant complications have been reported in these 4 cases to date. This study introduced a new application of 3D printing technology in knee surgery. 3D printing technology is widely used in our center for preoperative planning and surgical visualization of complex trauma, correction of bony abnormalities, and custom-made internal fixation for fractures. In terms of the 3D-printed block, its clinical application still has limitations. The time spent on manufacturing is longer than in traditional methods, but this can be addressed by close cooperation within the multidisciplinary team. Conclusions In this study, the personalized 3D-printed proximal tibia block was successfully applied in treating patients with GCT, achieving functional results. 3D printing technology makes the operation precise and safe, which is beneficial for the patients. This new surgical technique could be much more widely used thanks to 3D printing technology.
2018-04-03T03:34:27.225Z
2017-04-07T00:00:00.000
{ "year": 2017, "sha1": "5945657a766f504aeeccf8ede274447c3b6a4f1e", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc5391808?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "5945657a766f504aeeccf8ede274447c3b6a4f1e", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231951266
pes2o/s2orc
v3-fos-license
Consistent Lock-free Parallel Stochastic Gradient Descent for Fast and Stable Convergence Stochastic gradient descent (SGD) is an essential element in Machine Learning (ML) algorithms. Asynchronous parallel shared-memory SGD (AsyncSGD), including synchronization-free algorithms, e.g. HOGWILD!, have received interest in certain contexts, due to reduced overhead compared to synchronous parallelization. Despite the staleness and inconsistency they induce, they have shown speedup for problems satisfying smooth, strongly convex targets and gradient sparsity. Recent works take important steps towards understanding the potential of parallel SGD for problems not conforming to these strong assumptions, in particular for deep learning (DL). There is however a gap in the current literature in understanding when AsyncSGD algorithms are useful in practice, and in particular how mechanisms for synchronization and consistency play a role. We focus on the impact of consistency-preserving non-blocking synchronization on SGD convergence and on the sensitivity to hyper-parameter tuning. We propose Leashed-SGD, an extensible algorithmic framework of consistency-preserving implementations of AsyncSGD, employing lock-free synchronization, effectively balancing throughput and latency. We argue analytically about the dynamics of the algorithms, memory consumption, the threads' progress over time, and the expected contention. We provide a comprehensive empirical evaluation, validating the analytical claims, benchmarking the proposed Leashed-SGD framework, and comparing to baselines for training multilayer perceptrons (MLP) and convolutional neural networks (CNN). We observe the crucial impact of contention, staleness and consistency and show how Leashed-SGD provides significant improvements in stability as well as wall-clock time to convergence (from 20-80% up to 4x improvements) compared to the standard lock-based AsyncSGD algorithm and HOGWILD!, while reducing the overall memory footprint. I. INTRODUCTION The interest in Machine Learning (ML) methods for data analytics has peaked in the last decade due to their tremendous impact across various applications. Parallel algorithms for ML, utilizing modern computing infrastructure, have gained particular interest, showing high scalability potential, necessary to accommodate significantly growing data demands as well as data availability. Parallelization schemes for Stochastic Gradient Descent (SGD) have been of particular interest, since SGD serves as a backbone in many widely used ML algorithms and has proven effective on convex problems (e.g. linear and logistic regression, SVM) as well as non-convex ones (e.g. matrix completion, deep learning). The first-order iterative minimizer SGD follows the simple rule (1) of moving in the direction of the negative stochastic gradient ∇f of a differentiable target function f: R^d → R quantifying the error of an ML model, with a step size η: θ_{t+1} = θ_t − η ∇f(θ_t), (1) where θ_t contains the learned parameters of the model at iteration t, typically encoding features of a given data-set. Iterations, each calculated over a batch of one or more data samples, typically repeat until ε-convergence, i.e. until reaching a sufficiently low error threshold ε. As each SGD update relies on the outcome of the previous one, data parallelization is challenging.
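To make update rule (1) concrete, the following is a minimal C++ sketch of a sequential SGD loop. It is illustrative only and not taken from the paper's framework; the gradient oracle grad, the target f, the step size eta, the threshold eps and the iteration cap max_iters are all hypothetical placeholder names.

```cpp
#include <functional>
#include <vector>

// One sequential SGD run: theta_{t+1} = theta_t - eta * grad(theta_t).
// All names (grad, f, eta, eps, max_iters) are illustrative assumptions.
std::vector<double> sgd(std::function<std::vector<double>(const std::vector<double>&)> grad,
                        std::function<double(const std::vector<double>&)> f,
                        std::vector<double> theta,
                        double eta, double eps, int max_iters) {
    for (int t = 0; t < max_iters && f(theta) >= eps; ++t) {
        std::vector<double> g = grad(theta);     // stochastic gradient on one mini-batch
        for (size_t i = 0; i < theta.size(); ++i)
            theta[i] -= eta * g[i];              // rule (1), component-wise
    }
    return theta;                                // stopped at eps-convergence or max_iters
}
```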
Still, several approaches have been proposed, distinguished into synchronous and asynchronous ones: Synchronous SGD (SyncSGD) is a lock-step parallelization scheme where the gradient computation is delegated to threads/nodes and then aggregated by averaging before taking a global step according to eq. (1) [44]. In its original form, SyncSGD is statistically equivalent to sequential SGD with a larger data-batch [4], [16]. This method is well-understood and widely used, e.g. in federated learning [32]. However, its scalability suffers as every step is limited by the slowest contributing thread. In addition, higher parallelism implies an impact on the convergence, inherent to large-batch training [20]. Semi-synchronous variants have shown improvements [24], [25], relaxing lock-step semantics and requiring only a subset of threads to synchronize, hence reducing waiting. In a recent article [25] it was shown that requiring only a few threads, even just one, at synchronization implies significant speedup due to less waiting and higher throughput, motivating further study of asynchronous parallel SGD. Asynchronous SGD (AsyncSGD), on the other hand, employs parallelism on the SGD/algorithm level, allowing threads to execute (1) on a shared vector θ with less coordination, and has shown superior speedup compared to SyncSGD in several applications [29], [36]. It was first introduced for distributed optimization with a parameter server sequentializing the updates. In this context it was proven that the algorithm converges for convex problems [1] despite the presence of noise due to stale updates. A relaxed variant, HOGWILD! [36], allowing completely uncoordinated component-wise reads and updates of θ, showed substantial speedup, however only on smooth convex problems with sparse gradients. This, besides staleness, also introduces inconsistency incurred by non-coordinated concurrent reads and writes on θ, penalizing the statistical efficiency. Only if the parallelization gains counterbalance the latter penalty will there be an actual improvement in the wall-clock time to convergence. Challenges: There are substantial analytical results and empirical evidence that AsyncSGD [1], [9], [12], [36] provides speedup for problems satisfying varying assumptions on convexity, strong convexity, smoothness and sparsity, e.g. logistic regression, matrix completion, graph cuts and SVM training. Recently, a target of study is parallelism in SGD for a wider class of more unstructured problems not conforming to strict analytical assumptions, such as artificial neural network (ANN) training, or deep learning (DL) in general. Recent works [6], [13] explore aspects of data-parallelism in the context of distributed and parallel SGD for DL. However, using abstraction libraries such as TensorFlow and Keras in Python implementations, with their inherent limitations in parallelism and performance, makes time measurements unreliable. As a consequence, the existing literature addresses the topic mostly from an analytical standpoint, and empirical convergence rates are almost exclusively measured in statistical efficiency, i.e. number of iterations, as opposed to actual wall-clock time. With new methods that potentially affect the computational efficiency, i.e. the time per iteration, such results can be misleading, with unclear usefulness in practice.
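Referring back to the synchronous scheme described at the start of this section, the following is a hedged C++/OpenMP sketch of one SyncSGD step: each thread computes a gradient on its own mini-batch, the gradients are averaged, and a single global step (1) is taken. It is an illustration of the scheme, not code from the paper's framework; grad, eta and m are assumed placeholder names.

```cpp
#include <omp.h>
#include <vector>

// One SyncSGD step: m threads each compute a mini-batch gradient,
// the gradients are averaged, then one global step (1) is taken.
// grad, eta and m are illustrative assumptions.
void sync_sgd_step(std::vector<double>& theta,
                   std::vector<double> (*grad)(const std::vector<double>&, int),
                   double eta, int m) {
    const size_t d = theta.size();
    std::vector<double> avg(d, 0.0);
    #pragma omp parallel num_threads(m)
    {
        // Each thread draws its own mini-batch, indexed here by thread id.
        std::vector<double> g = grad(theta, omp_get_thread_num());
        #pragma omp critical            // serialize the accumulation
        for (size_t i = 0; i < d; ++i) avg[i] += g[i] / m;
    }   // implicit barrier: the step waits for the slowest thread
    for (size_t i = 0; i < d; ++i) theta[i] -= eta * avg[i];
}
```

The implicit barrier closing the parallel region is precisely the lock-step cost noted above: every global step waits for the slowest contributing thread.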
Moreover, implementations based on such abstraction libraries have limited capability for fine-grained exploration of aspects of synchronization mechanisms and consistency, whose critical impact on the convergence properties has been observed analytically: it was shown (i) in [11] that the number of iterations until convergence increases linearly in the magnitude of the maximum staleness, and (ii) in [3] that inconsistency due to HOGWILD!-style updates further increases the same bound by a factor of √d, d being the size of θ. There is a need for further exploration of how synchronization, lock-freedom and consistency impact the actual wall-clock time to convergence, to facilitate work on the development of standardized platforms for accelerated DL. For DL applications, convergence of sufficient quality is challenging to achieve, requiring exhaustive neural architecture searches and careful tuning of many hyper-parameters. Unsuccessful tuning typically results in models that never converge to sufficient quality, or even in executions that crash due to numerical instability in the SGD steps [42]. The step size η is among the most important hyper-parameters, while data-batch size, momentum and dropout also play a significant role. Tuning is vital for convergence and end performance, and is a time-consuming process. On the one hand, parallelism in SGD is crucial for speedup, but it introduces new hyper-parameters to tune, such as the number of threads, the staleness bound and aspects of the synchronization protocol. In addition, AsyncSGD introduces noise due to staleness, further impacting convergence and potentially causing unsuccessful executions. There is hence a need for methods that enable speedup through parallelism while remaining tolerant to the existing hyper-parameters and avoiding the overhead of tuning additional ones related to parallelism. Focal point and contributions: In summary, there are challenges in understanding the effects of asynchrony and consistency on SGD convergence [40] in practice, as outlined in Fig. 1, in particular for applications such as DL. Better understanding the tradeoff between computational and statistical efficiency is a core issue [30]. It is known that consistency helps in AsyncSGD [3]. However, whether it is worth the overhead to ensure consistency with locks or other synchronization means, in order to improve the overall convergence, is a research question attracting significant attention, as we describe here and in the related work section. We study asynchronous SGD in a practical setting for DL. In a system-level environment, we explore aspects of synchronization, lock-freedom and consistency, and their impact on the overall convergence. In more detail, we make the following contributions: • We propose Leashed-SGD (lock-free consistent asynchronous shared-memory SGD), an extensible algorithmic framework for lock-free implementations of AsyncSGD, allowing diverse mechanisms for consistency and for regulating contention, with efficient on-demand dynamic memory allocation and recycling. • We analyze the proposed framework Leashed-SGD in terms of safety and memory consumption, and we introduce a model for estimating thread progression and balance in the Leashed-SGD execution, estimating contention over time and the impact of the contention-regulation mechanism. • We perform a comprehensive empirical study of the impact of synchronization, lock-freedom, and consistency on the convergence in asynchronous shared-memory parallel SGD.
We extensively evaluate Leashed-SGD, the standard lock-based AsyncSGD and its synchronization-free counterpart HOGWILD! on two DL applications, namely Multilayer Perceptrons (MLP) and Convolutional Neural Networks (CNN) for image classification on the benchmark dataset MNIST of hand-written digits. We study the dynamics of contention, staleness and consistency under varying parallelism levels, confirming also the analytical observations, focusing on the wall-clock time to convergence. • We introduce a C++ framework supporting the implementation of shared-memory parallel SGD with different mechanisms for synchronization and consistency. A key component is the ParameterVector data structure, providing a modularization that facilitates further exploration of aspects of parallelism. The paper is structured as follows: In section II we outline preliminaries and key notions for describing Leashed-SGD, while its contention and staleness dynamics are described in sections III and IV. The comprehensive empirical study is presented in section V, followed by further discussion of related work in section VI, after which we conclude in section VII. Fig. 1 (caption): Convergence rate is the product of computational and statistical efficiency, and is sensitive to hyper-parameter tuning; we show the significant impact of lock-free synchronization on these factors and on reducing the dependency on tuning, enabling improved convergence. II. PRELIMINARIES Here we give a brief background, along with a more refined description, of the questions and the metrics in focus. 1) SGD and DL: Artificial neural networks (ANNs) are computational structures of simple units known as neurons, inspired by the biological brain. Neurons are arranged into layers, each performing a non-linear transformation of the output from the previous layer, parameterized by a set of learnable weights. The input layer is initialized with the input to be analyzed, e.g. an image to be classified. The output layer gives the final output, e.g. the class of an image. Different types of layer arrangements give rise to a diverse class of ANN architectures with different applications. Among the most prominently used are multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs) [5], where MLPs consist of layers densely connected through a weight matrix, and CNNs of sparsely connected layers performing filter convolutions, used in conjunction with MaxPool downsampling layers. Some more information on MLPs and CNNs appears in the Appendix. The aforementioned weights and filters consist of parameters learned through the training process. We refer to the collection of all such parameters belonging to an ANN, flattened into a 1D array, as the parameter vector, denoted θ_t at iteration t of SGD. This abstraction is used in subsequent sections when arguing about consistency and progress. Non-linear activation functions are applied after each layer; a common choice is the ReLU function σ(x) = max(0, x) for all layers except the last, where instead the softmax activation function σ_i(x) = e^{x_i} / Σ_{j=1}^{|x|} e^{x_j}, for each output neuron i, is used in order to acquire a predicted probability distribution. With this, an error measure f(θ) can be defined, the minimization of which constitutes the training process.
The metrics of interest are (i) statistical efficiency, i.e. the number of SGD iterations required until reaching an error threshold f(θ*) < ε, i.e. ε-convergence; (ii) computational efficiency, measuring the wall-clock time per iteration; and, most importantly, (iii) the overall convergence rate, i.e. the wall-clock time until ε-convergence, of most relevance in practice. 2) System Model: We consider a system with m concurrent asynchronous threads, with access to shared memory through atomic operations to read, write and read-modify-write, e.g. CompareAndSwap (CAS) and FetchAndAdd (FAA) [17], on single-word locations. Each thread A computes SGD updates (1) according to a pre-defined algorithm, in the context outlined in the previous paragraphs. Since A must read the current state θ_t prior to computing the corresponding stochastic gradient ∇f(θ_t), there can be intermediate updates from other threads, referred to as concurrent updates, before A's update takes place. The number of such updates, between A's read of the θ vector and A's update applying its calculated gradient ∇f(θ_t), defines the staleness τ of the latter update. When there is a lack of synchronization, as in HOGWILD!, a total order of the updates is not imposed, and the definition of the staleness of an update is not straightforward; we adopt a definition similar to [3]. We refer to Section III for details on how the staleness is calculated for the different algorithms, and thereby the total order of the updates. Under the system model above, the asynchronous SGD updates, instead of following (1), follow θ_{t+1} = θ_t − η ∇f(v_t), (2) where v_t = θ_{t−τ_t} is the thread's view of θ. 3) Synchronization methods and consistency: For consistency of concurrently accessed data, different methods for thread synchronization exist, the most traditional one being locks for mutually exclusive access. Non-blocking synchronization avoids the use of locks [17]. A common choice is lock-free synchronization, ensuring that in the presence of concurrent object accesses, some are able to complete in a bounded number of steps, thus guaranteeing system progress. Such synchronization mechanisms usually implement a retry loop involving CAS or an equivalent, in which a thread might need to repeat in case another thread has succeeded. Besides progress guarantees, to argue about concurrent data accesses we consider data consistency. The most common notion is atomicity (aka linearizability, with non-blocking synchronization), which implies that concurrent object operations act as if they were executed in sequence, affecting state and returning values according to the object's sequential specification [17]. 4) Problem overview: In the following, we focus on exploring the effectiveness of asynchronous parallel algorithms for SGD for training deep neural networks (DNNs). We study the computational and statistical efficiency for different applications, and the overall time to ε-convergence. We explore in particular the effect of different synchronization mechanisms on consistency, contention and staleness, and the resulting impact on convergence and memory consumption. III. THE Leashed-SGD FRAMEWORK In the following we define Leashed-SGD along with the proposed ParameterVector data structure's common interface, containing the values of the parameter vector as well as metadata used for memory recycling. We also express AsyncSGD and HOGWILD! using this interface; both are well-established versions of parallel SGD implementations [1], [36]. Modified versions, optimized for specific applications, have been proposed, e.g. in [41], however not in the context of DL.
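To ground update rule (2) and the two baseline styles, here is a hedged C++ sketch of one iteration of an asynchronous worker; it is illustrative and not taken from the paper's algorithms. With use_lock set, the whole read and the whole update are guarded by a mutex, giving consistent accesses in the style of lock-based AsyncSGD; without it, reads and writes are uncoordinated, in the style of HOGWILD!. The names grad and eta are assumptions.

```cpp
#include <mutex>
#include <vector>

std::vector<double> theta;   // shared parameter vector of dimension d
std::mutex theta_mutex;      // used by the lock-based variant only

// One asynchronous iteration: read a view v_t, compute grad(v_t), apply (2).
// grad and eta are illustrative assumptions.
void async_worker_step(std::vector<double> (*grad)(const std::vector<double>&),
                       double eta, bool use_lock) {
    if (use_lock) theta_mutex.lock();
    std::vector<double> local_param = theta;    // consistent snapshot if locked
    if (use_lock) theta_mutex.unlock();

    // Other threads may publish updates during this computation: staleness.
    std::vector<double> g = grad(local_param);

    if (use_lock) theta_mutex.lock();
    for (size_t i = 0; i < theta.size(); ++i)
        theta[i] -= eta * g[i];                 // unguarded writes = inconsistency
    if (use_lock) theta_mutex.unlock();
}
```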
In the following, we use AsyncSGD and HOGWILD! as general baselines, representative of the classes of consistent asynchronous SGD algorithms and of the synchronization-free, inconsistent HOGWILD!-style ones. 1) Introducing ParameterVector: Considering (1), each worker in parallel SGD reads the shared data object θ, computes a gradient and updates the former. We propose a set of core components for this type of data structure, ParameterVector, providing possibilities to get parameter values and submit updates. An instantiation of ParameterVector can be local or shared among threads. For concurrent accesses to it, its implementation can provide certain consistency and progress guarantees (cf. section II). Hence, studying shared-memory data-parallel SGD implementations with synchronization in focus is to study the implications of the properties of the algorithmic implementations of the parameter vector seen as a shared object, connecting to and extending work in the literature on bulk operations on container data structures [34]. Algorithm 1 describes the core components of the algorithmic implementation of ParameterVector. A main one is the array theta of dimension d (typically a very large number in DL applications; e.g. the well-known AlexNet [21] CNN architecture has 62,378,344 parameters). A read of the parameters can be accomplished by getting a pointer to theta, while the function update() performs the addition (2) on theta. Notice that Algorithm 1 does not provide specific synchronization for protecting reads or updates, which is instead left to the algorithmic implementation's "front-end" to specify, depending on the demands of consistency. It provides, however, additional methods and metadata for keeping track of accesses and for recycling memory, as explained further in this section. While there is some resemblance to a multi-word register [18], [22], two significant issues here are (i) the nature of the update, which is a bulk read-modify-write operation, and (ii) the very large value of d, posing challenges both from the memory and from the timing (retry loop size) perspectives. 2) Baselines outline: Algorithm 2 shows the lock-based AsyncSGD, one of the baselines, achieving consistency in the reads and updates of the parameters through locking. This introduces an overhead, influencing the thread interleaving, with unclear implications on staleness and statistical efficiency. This is further explored in Section V. There is one shared variable of type ParameterVector, PARAM, and two variables local to each thread: one with a copy of the latest state of the shared parameter vector (local_param) and one for storing the gradient (local_grad). HOGWILD!'s algorithmic implementation is similar to Algorithm 2, except that the locks are removed, since no synchronization happens among the threads accessing the parameter vector. Certain overhead is thus eliminated, however at the cost of inconsistency in the parameter updates. The algorithm outline is available in the Appendix. For problems with sparse gradients the lack of synchronization will not significantly impact the convergence, since the update() operation only influences a few of the d components in theta. For DL applications, though, its influence is not well understood. 3) Leashed-SGD: Lock-free consistent AsyncSGD: The key points and arguments supporting Leashed-SGD, which is shown in pseudocode in Algorithm 3, using the ParameterVector core components from Algorithm 1, are as follows:
P1. Local calculation and sharing of new parameter values: Each thread computes its update locally in new_param and attempts to publish the result with a single atomic CAS operation (line 31), switching a global pointer P to point to its new instance (Fig. 2). Since a successful CAS replaces the previous "global" vector, copies of parameter vectors that become global are totally ordered by their sequence number t. A vector that has been replaced by the aforementioned CAS is labeled as stale through a boolean flag (stale_flag in ParameterVector), one of the data structure's fields. P2. Memory recycling: Since a new ParameterVector is needed for each such update, a simple yet efficient mechanism recycling stale and unusable instances ensures that the memory used is bounded. Besides the label marking a ParameterVector instance as stale (ensuring no new readers and making it a candidate for recycling), the field n_rdrs indicates whether the ParameterVector must persist due to active readers. P3. Lock-free atomic reads of the shared vector: To access the global ParameterVector, threads acquire a pointer to the most recent instance via P. Through that pointer, the thread can access and use the theta and metadata of that ParameterVector, in particular for calculating the gradient without copying. While a ParameterVector V is in use, V.n_rdrs is non-zero (it is atomically incremented and decremented in the start_reading() and stop_reading() functions). Note that the update of the global pointer P and the marking of the previous global vector as stale are two separate operations. Hence, for a thread to acquire the latest ParameterVector in a concurrency-safe manner, this must be done in a retry loop, in latest_pointer(). Due to this and the way the global pointers are updated, a read preceded by another read will not return parameter values older than those the preceding read returned. P4. Conditions for safe recycling: To reclaim the memory of a ParameterVector V, V.stale_flag must be true and V.n_rdrs must be zero. The first condition ensures that the ParameterVector instance is not the most recently published one and that its address is no longer available to any thread (Algorithm 3, line 31), ensuring no additional future accesses. The second condition ensures that no thread is currently accessing V, with the exception of a thread that just acquired a pointer that just became stale, which will subsequently repeat after the staleness check that follows in line 8. Note that stale instances of ParameterVector are reclaimed by the last thread to access them, when calling stop_reading(). P5. Lock-free atomic updates of the shared vector: The publish is attempted through a CAS invoked in a retry loop; if it fails, another thread must have succeeded. Update attempts are repeated until the CAS succeeds, or until a user-defined persistence bound T_p has been exceeded. The loop thus implies lock-free progress guarantees. For T_p = 0 the semantics are similar to those of the LoadLinked/StoreConditional primitive, hence the name LoadAndUpdate-StorePersistenceConditional (LAU-SPC). Note that a bounded T_p essentially implies bounded retries. As formulated in (2), due to asynchrony, a gradient can be applied to a different ParameterVector instance than the one that was used to compute it. Hence, after finishing the gradient computation, threads acquire the pointer to the most recently published ParameterVector instance a second time (Figure 2), to which the update is applied.
The result is then a candidate for publishing, the success of which is decided as described above, implying update atomicity. Based on the previous paragraphs (in particular on points P1, P3 and P5, respectively points P2 and P4), we have: Lemma 1: Reads and updates of the θ vector by Leashed-SGD, through the latest_pointer() function and the LAU-SPC loop, satisfy lock-freedom and atomicity. Lemma 2: The memory recycling in Leashed-SGD (i) is safe, i.e. it will not reclaim memory which can be used by any thread for reading or updating, and (ii) bounds the memory to at most 3m ParameterVector instances simultaneously. A note on memory consumption: Note that AsyncSGD and HOGWILD! constantly need 2m + 1 instances of ParameterVector. In Leashed-SGD, threads compute gradients based on a published ParameterVector instance, which will never be altered by any thread; only after the gradient computation is finished is additional memory allocated for the new parameters. This mechanism enables an overall reduced memory footprint, in particular when the gradient computation is time consuming. This is confirmed empirically in section V. IV. CONTENTION AND STALENESS In the following we analyze the dynamics and balance of the proposed Leashed-SGD, the effect of the persistence bound, and its impact on the contention and staleness. 1) Dynamics of Leashed-SGD: We analyze the dynamics of the threads and their progression under concurrent execution of Leashed-SGD. The model is similar to a G/G/1 queue, but with arrival and departure rates λ_t, μ_t varying over time, depending on the current state of the system. For a single thread executing the gradient computation, the rate of arrival to the LAU-SPC (retry) loop is λ^(1) = 1/T_c, where T_c is the gradient computation time. For an m-thread fully concurrent execution, the arrival rate scales proportionally to the number of threads currently outside the LAU-SPC loop, hence λ^(m) = (m − n)λ^(1), where n denotes the number of threads in the retry loop. Similarly, for the departure rate from the LAU-SPC loop we have μ^(1) = 1/T_u, where T_u is the execution time of the ParameterVector update(). In summary: λ^(m) = (m − n_t)/T_c, μ^(n) = n_t/T_u. (3) We then describe the dynamics of how threads enter and leave the LAU-SPC retry loop of Leashed-SGD as follows: dn_t/dt = λ^(m) − μ^(n) = (m − n_t)/T_c − n_t/T_u, (4) where n_t is the number of threads executing the retry loop at time t. Note that the system (4) has a fixed point n* = (T_c/T_u + 1)^{−1} m at which the number of threads in the retry loop stays constant. Note that n* rewrites to n*/m = T_u/(T_u + T_c), i.e. the thread balance at the fixed point depends solely on the relative size of the update time T_u, highlighting the importance of the ratio T_u/T_c. In section V we show closer measurements of T_c, T_u for different applications. In the following, we study how n_t progresses for Leashed-SGD, and the stability of and convergence about the fixed point. Theorem 3: Assume we have an m-thread system where threads arrive to and depart from the Leashed-SGD LAU-SPC loop at the rates in (3). Then the number n_t of threads in the retry loop at time t is given by n_t = n* + (n_0 − n*) e^{−(1/T_c + 1/T_u) t}, (5) where T_c, T_u denote the time for gradient computation and update, and n_0 is the initial number of threads in the LAU-SPC. Due to space constraints, the proof appears in the Appendix. Corollary 3.1: The fixed point n* is stable, and the system will converge towards lim_{t→∞} n_t = n* for any initial n_0. The result is confirmed by taking t → ∞ in (5).
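Before turning to the persistence analysis, the following is a minimal C++ sketch of the LAU-SPC publish path described in P5, assuming a simplified ParameterVector with only the fields needed here. It is an illustration of the mechanism, not the paper's implementation: the reader counting of P3/P4 and the actual reclamation are omitted, and the initialization of P is assumed to happen elsewhere.

```cpp
#include <atomic>
#include <vector>

struct ParameterVector {                  // simplified: P2-P4 metadata omitted
    std::vector<double> theta;
    std::atomic<bool> stale_flag{false};
};

std::atomic<ParameterVector*> P{nullptr}; // global pointer; set up elsewhere

// LAU-SPC sketch: retry publishing until the CAS succeeds or the
// persistence bound Tp (maximum failed attempts) is exceeded.
bool lau_spc(const std::vector<double>& grad, double eta, int Tp) {
    for (int failed = 0; failed <= Tp; ++failed) {
        ParameterVector* cur = P.load();          // latest published vector
        auto* next = new ParameterVector;
        next->theta = cur->theta;                 // apply update (2) on a copy
        for (size_t i = 0; i < grad.size(); ++i)
            next->theta[i] -= eta * grad[i];
        ParameterVector* expected = cur;
        if (P.compare_exchange_strong(expected, next)) {
            cur->stale_flag = true;               // old vector: recycling candidate
            return true;                          // published
        }
        delete next;                              // another thread succeeded; retry
    }
    return false;                                 // persistence bound exceeded
}
```

With Tp = 0 the loop makes a single attempt, matching the LL/SC-like semantics noted in P5; a failed CAS simply means a competing update was published first.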
The above results, Theorem 3 and Corollary 3.1, enable understanding of the dynamics of how threads progress throughout the execution, in particular that they converge to a balance between gradient computation and the LAU-SPC, which will be used in the following. 2) Persistence analysis: The persistence bound implies a threshold on the maximum number of failed CAS attempts in Leashed-SGD before a thread gives up and computes a new gradient. This implies an increase, denoted by γ > 0, in the departure rate from the LAU-SPC retry loop, proportional to the number of threads currently in the retry loop, as follows: μ_γ^(n) = n_t (1/T_u + γ). (6) Corollary 3.2: Under the same conditions as in Theorem 3, but using the departure rate (6), the fixed point moves to n*_γ = (T_c/T_u + γ T_c + 1)^{−1} m. Note that (i) n*_γ < n* and (ii) n*_γ vanishes as γ grows, showing the contention-regulating capability of a persistence bound, i.e. of an increased γ. As pointed out in [4], the complete staleness τ_t of an update ∇f(v_t) according to (2) comprises two parts: τ_t = τ^c_t + τ^s_t, where τ^c_t counts the number of published updates concurrent with the computation of ∇f(v_t), and τ^s_t counts the ones that compete with the update in focus and are scheduled before it; in particular, here the latter counts the competing updates in the LAU-SPC loop that succeed before that update. Considering now the estimate E[τ^s_t] ≈ n*_γ, it follows that the persistence mechanism described above for reducing contention effectively regulates the additional staleness component due to the scheduling of ready gradients. E.g., consider T_p = 0: for each published update there was no failed CAS, hence no other update was published after the corresponding gradient was used. Then τ^s_t = 0, which is the maximum staleness reduction possible here. In section V we study this empirically, showing that it holds in practice and is effective for regulating contention and tuning the staleness.
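For intuition, a small worked example with hypothetical numbers (not measurements from the paper): take T_c = 9 ms, T_u = 1 ms and m = 16. The unregulated fixed point is then n* = (T_c/T_u + 1)^{−1} m = 16/10 = 1.6 threads in the retry loop. If the persistence bound yields, say, γ = 1/T_u, then under the expression of Corollary 3.2 as reconstructed above, n*_γ = (T_c/T_u + γT_c + 1)^{−1} m = 16/19 ≈ 0.84, i.e. roughly half the retry-loop occupancy, and via E[τ^s_t] ≈ n*_γ a correspondingly smaller scheduling-staleness component.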
V. EVALUATION We present the results from our extended empirical study, benchmarking the methods of Section III and studying the influence of consistency and the associated synchronization on the metrics described in Section II: convergence rate, statistical and computational efficiency, and memory consumption. The algorithms included are sequential SGD (SEQ), lock-based AsyncSGD (ASYNC), HOGWILD! (HOG), and Leashed-SGD with persistence ∞, 1, 0 (LSH ps∞, LSH ps1, LSH ps0). 1) Implementation: The algorithms and the framework are implemented in C++, with OpenMP [10] for shared-memory parallel computations and Eigen [14] for numerics. The framework extends the MiniDNN [35] C++ library for DL. For implementing the ParameterVector and Leashed-SGD, substantial refactoring was carried out, extracting all learnable parameters into a collective data structure, the ParameterVector. This abstraction forms an interface between SGD algorithm constructions and DL operations, enabling the implementation of consistency of different degrees through various synchronization methods. The proposed method, moreover, facilitates generic implementations of SGD: the framework is not application-specific and applies as a parallelization of SGD to any optimization problem, in particular of high dimension. For the empirical evaluation, an extensible framework is implemented in conjunction with ANN operations, facilitating further research exploring algorithms for parallel SGD for DL with various synchronization mechanisms. 2) Experiment setup: We evaluate the methods of Section III on two DL applications, namely MLP and CNN training on the MNIST benchmarking dataset [23]. Box plots in the figures contain statistics (1st and 3rd quartiles, minimum and maximum) from 11 independent executions of each setting; outliers are indicated with the symbol +. Where executions fail to reach the required precision ε, the measurement is not included as a basis for the box. Such execution instances, and those that fail due to numerical instability from staleness, are indicated as 'Diverge' and 'Crash', respectively. This information is highlighted because failing DL training executions due to noise from staleness or hyper-parameter choices are a common problem in practice [42]. It is vital that training succeeds, so that the execution time is not wasted. The threshold ε is specified in terms of a percentage of the target function at initialization, f(θ_0) ≈ 2.3. 3) Experiment outcomes: The steps of our experiment methodology, summarized in Table I, are as follows: S1. Convergence and hyper-parameter selection: We benchmark the convergence of the algorithms considered under a wide spectrum of parallelism and for varying step size η. In this step the executions are halted at ε = 50% in order to acquire an overview of the general scalability and relative performance of the evaluated methods. The results are presented in Fig. 3, showing a complete picture of the convergence rate and computational efficiency under varying parallelism, the metric of interest being the wall-clock time required until reaching ε-convergence. The baselines are at their best with m = 16 threads and η = 0.005, which we choose as a yardstick for further tests to ensure a fair comparison and to stress-test Leashed-SGD. The results of the step-size test appear in the Appendix, showing a higher capability of the proposed Leashed-SGD to converge for larger η. S2. High-precision convergence for MLP: Using the setting selected as above, we benchmark the algorithms and their convergence rate for reaching high precision (ε = 2.5%). We pay attention to the staleness distribution τ, to gain understanding based also on the results of section IV. Using m = 16, η = 0.005, we benchmark Leashed-SGD and the baselines to high-precision 2.5%-convergence, measuring the wall-clock time (Fig. 4, left). Leashed-SGD shows competitive performance, with faster convergence and smaller fluctuations. In particular, LSH ps∞ reaches ε = 2.5% error within a 65 s median (compared to the baselines' 89 s and 80 s). As hypothesised in section IV, Fig. 6 confirms that the staleness distribution is significantly reduced by the persistence bound. S3. Convergence rates for CNN: We study the convergence for the CNN application, benchmarking the time to convergence for increasing precision ε, studying the staleness and the convergence over time. The proposed Leashed-SGD shows fewer diverging executions, with significant improvements in the time to high-precision convergence, with up to 4× speedup relative to the AsyncSGD baseline (Fig. 7). Measurements of memory consumption and computation times (T_c, T_u) appear in the Appendix. Due to the sparse nature of the CNN topology, the ratio T_c/T_u of gradient computation time to update application time is high, leading to a significantly reduced memory footprint (by 17% on average) for Leashed-SGD. S4. Higher parallelization for MLP: We stress-test the methods with m = 24, m = 34 (max. solo-core parallelism) and m = 68 (max. hyper-threading).
The results appear in Figs. 4-6, showing that Leashed-SGD provides significantly improved convergence and stability, with improved staleness. S5. Memory consumption: We perform a fine-grained continuous measurement of the memory consumption of all algorithms considered, for MLP and CNN training. For the CNN application, Leashed-SGD reduces the memory consumption by 17% on average, thanks to the dynamic allocation of ParameterVector instances and efficient memory recycling. The detailed plots appear in the Appendix. 4) Summary of outcomes: Leashed-SGD shows an overall improved convergence rate, stable under varying parallelism and hyper-parameters, and significantly fewer executions that fail to achieve ε-convergence. In the presence of contention, the lock-free nature enables Leashed-SGD to self-regulate the balance between throughput and latency, and to converge in settings where the baselines fail completely. Even in the case of T_p = ∞, i.e. without starvation-freedom, we see persistent improvements relative to the baselines, demonstrating in this demanding context too a useful property, namely that lock-freedom balances between system-wide throughput and thread-associated latency [8], [15]. VI. RELATED WORK The study of numerical methods under parallelism was sparked by the works of Bertsekas and Tsitsiklis [7]. Distributed and parallel asynchronous SGD has since been an attractive target of study, e.g. [9], [12], [26], [37], among which HOGWILD! [36]. In the recent [2], the concept of bounded divergence between the parameter vector and the threads' views of it is introduced, proving convergence bounds for convex and non-convex problems. De Sa et al. [11] introduced a framework for the analysis of HOGWILD!-style algorithms. This was extended in [3], showing that the bound increases by a magnitude of √d due to inconsistency, implying a higher statistical penalty for high-dimensional problems. This strongly motivates studying algorithms which, while enjoying the computational benefits of lock-freedom, also ensure consistency. To our knowledge, this had not been done prior to the present work. In [31] the algorithmic effect of asynchrony in AsyncSGD is modelled by perturbing the stochastic iterates with bounded noise. Their framework yields convergence bounds which, as described in the paper, are not tight and rely on strong convexity. In [30], with motivation related to ours, a detailed study of parallel SGD focusing on HOGWILD! and a new GPU implementation is conducted, focusing on convex functions, with dense and sparse data sets and a comparison of different computing architectures. Here we propose an extensible framework of consistency-preserving algorithmic implementations of AsyncSGD together with HOGWILD!, covering the associated design space of AsyncSGD algorithms, and we focus on MLP and CNN, which are inherently more difficult to parallelise. In [40], as in this work, the focus is the fundamental limitation of data parallelism in ML. They, too, point out that the limitations are due to concurrent SGD parameter accesses, usually diminishing or even negating the parallelisation benefits. To alleviate this, they propose the use of static analysis for the identification of data that do not cause dependencies, so that access to them can be parallelised. They do this as part of a system that uses Julia, a script language that performs just-in-time compilation. Their approach is effective and works well for, e.g., matrix-factorization SGD.
For the DNNs that we consider in this paper, as they explain, their work is not directly applicable, since in DNNs permitting "good" dependence violations is the common parallelization approach. There are works introducing adaptiveness to staleness [33], [38], [43], and in particular [4] for a deep learning application. This research direction is orthogonal to this work and can be applied in conjunction with the algorithms and synchronization mechanisms considered here. Asynchronous SGD approaches for DNNs are scarce in the current literature. In the recent work [28], Lopez et al. propose a semi-asynchronous SGD variant for DNN training, however requiring a master thread to synchronize the updates through gradient averaging and relying on atomic updates of the entire parameter vector, resembling more a shared-memory implementation of a parameter server. In [39] a theoretical convergence analysis is presented for SyncSGD with once-in-a-while synchronization. They mention that the analysis can guide the application of SyncSGD to DL, however the analysis requires strong convexity. [19] proposes a consensus-based SGD algorithm for distributed DL. They provide theoretical convergence guarantees, also in the non-convex case, however the empirical evaluation is limited to iteration counting as opposed to wall-clock time measurements, with a mixed performance positioning relative to the baselines. In [27] a topology for decentralized parallel SGD is proposed, using pair-wise averaging synchronization. In the recent [25] a partial all-reduce relaxation of SyncSGD is proposed, showing improved convergence rates in practice when synchronizing only subsets of the threads at a time, due to higher throughput, complemented with convergence analysis for convex and non-convex problems. In particular, the empirical evaluation shows that requiring only one thread (i.e. AsyncSGD) gives competitive performance due to the wait-freedom that follows from the lack of synchronization. VII. CONCLUSIONS We propose the extensible generic algorithmic framework Leashed-SGD for asynchronous lock-free parallel SGD, together with ParameterVector, a data type providing an abstraction of common operations on high-dimensional model parameters in ANN training, facilitating modular further exploration of aspects of parallelism and consistency, connecting to and extending work in the literature on bulk operations on container data structures [34]. We analyze the safety and progress guarantees of the proposed Leashed-SGD, as well as bounds on the memory consumption, the execution dynamics, and contention regulation. Aiming at understanding the influence of synchronization methods for the consistency of shared data in parallel SGD, we provide a comprehensive empirical study of Leashed-SGD and established baselines, benchmarking on two prominent deep learning (DL) applications, namely MLP and CNN for image classification. The benchmarks are chosen in order to challenge the proposed model against the baselines and to provide new useful insights into the applicability of AsyncSGD in practice. We observe that the baselines, i.e. standard implementations of AsyncSGD, are very sensitive to hyper-parameter choices and are prone to unstable executions due to noise from staleness. The proposed framework Leashed-SGD outperforms the baselines where they perform best, and provides a balanced behaviour, implying stable and timely convergence for a far wider spectrum of parallelism.
The methods are implemented in an extensible C++ framework, interfacing DL operations with parallel SGD algorithms and facilitating further research exploring algorithms for parallel SGD for DL with various synchronization mechanisms. Exploring different consistency types for the theta updates, in conjunction with sparsification approaches, is an interesting direction to pursue. APPENDIX ON MLPS AND CNNS MLPs consist of several stacked densely-connected layers of neurons, each applying a non-linear transformation of the input and passing the result to the next layer: y^(l)_n = σ(Σ_k θ^(l,n,w)_k y^(l−1)_k + θ^(l,n,b)), where y^(l)_n is the output of neuron n ∈ {0, . . . , N_l − 1} in the l-th layer, σ is a non-linear activation function, typically the ReLU function σ(x) = max(0, x), and θ^(l,n,w), θ^(l,n,b) contain the learnable weight and bias parameters of the n-th neuron. CNNs consist of convolutional layers, convolving the input with learnable filters for feature detection: y^(l,f)_n = σ(Σ_k θ^(l,f,w)_k y^(l−1)_{n+k} + θ^(l,f,b)) for each of a number of filters f; this corresponds to a 1D convolution, but can be naturally extended to 2D. Convolutional layers are sparsely connected, reducing the number of weights to be trained, and are especially efficient for the analysis of image/spatial data due to the translation-invariant property of feature detection with convolution. Convolutional layers are often used in combination with MaxPool layers, which map the output of a number of consecutive neurons onto their maximum. This significantly reduces the dimension of the signal and the number of learnable weights. We refer to the collection of all parameters θ^(l,n,w/b), θ^(l,f,w/b) belonging to an ANN, flattened into a 1D array, as the parameter vector, denoted θ_t at iteration t of SGD. This abstraction is used when arguing about consistency and progress. In the output layer of an ANN, the softmax activation function σ_i(x) = e^{x_i} / Σ_{j=1}^{|x|} e^{x_j}, for each output neuron i, is often used for classification problems, outputting an estimated class distribution ŷ for an input x. Given the true class/label y, the ANN performance is quantified by the cross-entropy loss function L(ŷ, y) = −Σ_i y_i log ŷ_i, where ŷ contains the outputs from the last layer and depends on the input x and the current state of θ. The training process for ANNs then consists of iteratively adjusting θ to minimize the error function f(θ) = L(ŷ(x; θ), y). The BackProp algorithm is used for computing ∇_θ f(θ), and SGD is then used for minimizing f and thereby training the ANN. In every iteration the input is selected at random, either as a single data point or as a batch considered in conjunction. ANALYSIS - COMPLEMENTARY MATERIAL Proof sketch, Lemma 2: The first claim (i) follows from the definition of the safe_delete operation of the ParameterVector, ensuring that the memory of an instance is reclaimed only if stale_flag = true (P points to a newer instance, ensuring no new readers), n_rdrs = 0 (no current readers) and the memory has not already been reclaimed. The second claim (ii) is realized by the fact that the memory recycling mechanism is exhaustive, i.e. ParameterVector instances that will not be used further by any thread will eventually be reclaimed through the delete operation in line 10 of Algorithm 1. The reason is the following: each thread that finishes its use of a ParameterVector instance will call the stop_reading operation, which in turn calls safe_delete, which reclaims the memory if it is safe to do so according to the above, i.e. if the instance is currently not in use and will not be in the future.
If that is not the case, then the threads that are currently using the instance will each eventually invoke the safe_delete operation, the last of which will perform the reclamation. Now, from Algorithm 3 it is clear that in the worst case each thread has a unique latest_param on which it is an active reader, plus an additional two ParameterVector instances (new_param and local_grad), giving 3m in total. Proof of Theorem 3: From (4), we have a linear first-order ODE in n_t; solving it, with integrating factor e^{(1/T_c + 1/T_u)t} and initial condition n_0, yields (5). The details of the ANN architectures implemented in the evaluation (Section V) are shown in Tables II and III for MLP and CNN, respectively. Convergence and hyper-parameter selection: Figure 8 shows the convergence rate for different values of the step size η. The baselines AsyncSGD and HOGWILD! show the best performance for η = 0.005, which is hence used in the subsequent test stages. Gradient computation and update time, T_c, T_u: The distributions of the wall-clock time to compute and to apply gradients, respectively, are shown in Figure 9. Despite the lower dimensionality, the gradient computation time T_c is higher for the CNN. This is due to the topological nature of the convolutional layer, where filters are strided along the input image pixel by pixel. In practice this requires a large number of smaller matrix multiplications, as opposed to the MLP, which instead consists of few but significantly larger ones. However, the time T_u to apply one gradient is smaller in the CNN application, since the θ vector is smaller. Since the dimension d of the ParameterVector is significantly smaller for the CNN (d = 27,354) than for the MLP (d = 134,794), the time T_u to apply an update is smaller, but due to the topological nature of CNNs, the gradient computation time T_c is relatively high. This results in lower contention in the LAU-SPC. As a consequence, the contention-regulating effect of the Leashed-SGD algorithms does not kick in, hence the staleness distributions are similar to those of the baselines. The proposed Leashed-SGD nevertheless shows a significant improvement in the convergence rate. Memory consumption: Figure 10 shows the distribution of the memory consumption of the different algorithms for MLP and CNN training. The measurements were acquired using the UNIX ps command, collected at second granularity.
2021-02-19T02:16:14.575Z
2021-02-17T00:00:00.000
{ "year": 2021, "sha1": "58e0ea96ba1392fc32d84c20c97bf926cedefc27", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2102.09032", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "58e0ea96ba1392fc32d84c20c97bf926cedefc27", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
56231831
pes2o/s2orc
v3-fos-license
A Comparison of Using Dominant Soil and Weighted Average of the Component Soils in Determining Global Crop Growth Suitability Soil parameters represent key data input for crop suitability analysis. Soil databases are complex, offering soil mapping units made up of various component soils. In the case of the Harmonized World Soil Database there can be up to 8 component soils per unit. In roughly 1/3 of soil mapping units, the additional component soils take up more than 50% of the pixel share value. The soil parameter value estimate, such as pH, salinity and organic carbon content, may differ between the value of the dominant soil component and the weighted average of the values of all component soils. Understanding the effect of these differences on crop model outputs may allow quantifying the error. In this study, we show the changes in crop suitability of 15 crops when using the parameter value estimates of the dominant soils versus a weighted average of the component soils. In the case of the latter, global crop suitability amounts to 54.5% of the earth's land surface, 1% more than when using the values of just the dominant soils. Intrinsic regional differences in the quality of the soil database influence the distribution of crop suitability classes, especially in areas where share values of the dominant soil are low. The uncertainty range for the use of dominant versus component soils on the overall global crop suitability could be considered to be 1%, while that of each suitability class can amount to up to 4%. Introduction Ensuring food security for the global population is already challenging in current times and will be even more so when the population rises to around 8.3 billion by 2030 (UNDP, 2008). Enhanced food production relies on three factors: increased yield, enhanced cropping intensity and the expansion of agricultural land (FAO, 2003). In 2009, the total amount of agricultural and permanent crop land amounted to 2.5 billion ha, which equals about 19% of the earth's land surface (Bontemps, Defourny, Van Bogaert, Arino, & Kalogirou, 2009). In the last four decades of the past century, 172 million ha of land were added in developing countries (FAO, 2003). To ensure global food security, an additional 120 million ha of converted land are projected to be necessary until 2030, and an extra 5% will be necessary up to 2050 (Bruinsma, 2009). Most land is expected to be transformed in South America and Sub-Saharan Africa (Fischer, 2000). Models based on climate and soil inputs can help discern the areas where crops can grow optimally under given natural conditions. Fischer et al. (2002) showed that roughly 2.8 billion ha are to some degree suitable for rain-fed agriculture, and Avellan, Zabel, and Mauser (2012) showed that about a quarter of the earth's land surface is suitable to highly suitable for the rain-fed growth of 15 major crops (Avellan, Zabel, & Mauser, 2012; Fischer, 2002). Both authors base their different models (global agro-ecological zones versus fuzzy logic crop suitability) on global soil and climate databases. However, global soil databases are scarce and rely on patchy soil sampling. Few sets exist, such as the Harmonized World Soil Database (HWSD) (FAO/IIASA/ISRIC/ISSCAS/JRC, 2009) and the ISRIC-WISE derived soil properties on a 5 by 5 arc-minute grid (Batjes, 2006). Global climate datasets are more varied. Past climate data can be obtained from interpolated station data (WorldClim), reanalysed forecasts (ERA) or hind-casted climate models (ECHAM, HadCM).
Avellan et al. (2012) showed that the quality of climate inputs is quite homogenous, while global soil databases can differ widely. The choice of the database can have a strong effect on the amount and distribution of crop suitable areas, leading to a 10% difference between the two most common global soil datasets (Avellan et al., 2012). Soil databases are immensely complex and the quality of the data is geographically diverse. For example, the HWSD is made up of four different input databases, each covering different areas of the world, using different sampling and compilation methods (FAO/IIASA/ISRIC/ISSCAS/JRC, 2009) (see Figure 1). Each pixel can contain up to 8 component soils which may, in sum, have a larger share within the pixel than the dominant soil class (see the mock-up example in Figure 2). When taking component soil classes into account, the soil parameter value estimate for a given pixel may be different from that of the dominant soil mapping unit (i.e. the dominant soil value for pH is 8, but that of the weighted average of all component soils is 7.8). In order to enhance modelling results, a balance between the quantity and quality of the used input parameters has to be maintained. While more parameters might refine the modelling results, poor quality parameters might, in fact, be counterproductive. A careful analysis of both the quality of the data and their influence on final results might inform the choice of parameters. In Avellan et al. (2012), we started our crop suitability analysis using only the parameter value estimates of the dominant soil mapping unit of the topsoil (0-30 cm) on a pixel by pixel basis. In comparison, the Global Agro-ecological Zones studies used soil parameters from all component soils, top- and subsoils (0-30 cm and 30 cm and below), phases as well as management practices (IIASA/FAO, 2012). It is clear to the authors that other parameters relevant to soil databases, such as subsoil parameters (30 cm and below), including drainage, granularity or acidity, as well as phases and management practices, can have drastic effects on crop growth (Benjamin, Nielsen, & Vigil, 2003; Kirchhof et al., 2000; Van den Akker, Arvidsson, & Horn, 2003). To our knowledge, the use of parameters in crop suitability models has not been substantiated by an analysis of the quality of the data. The inclusion of factors is defended by referring to standard works (i.e. FAO manuals (FAO, 1976, 2007) or similar) without questioning the validity of the usage. It is our intent to enhance model complexity in a step-by-step approach while showing the error margins incurred. Analogous to the well-known uncertainty ranges of climate models, we wish to demonstrate a similar approach in the use of crop suitability estimations. Here, we assessed the influence of the area-weighted average of the additional component soils of the soil mapping units of the topsoil on the amount and distribution of crop suitable areas. Regions were defined for their economic relevance in global trade, as the biophysical crop model was coupled to a Global Equilibrium Model in a subsequent step (Table 1). Dominant vs. Component Soil Areas and Soil Parameter Value Estimates Dominant soil is defined as the HWSD component soil with the largest share value, irrespective of the fact that the other component soils together may have a larger share within one pixel. Soil parameter value estimates are the values each pixel has for a chosen parameter, i.e. pH, salinity, etc. In Figure 2 we have tried to show, in a mock-up example, how a pixel can be made up of several component soils and the effect the weighted average has on the parameter value estimate.
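To illustrate the two estimates being compared, the following is a small C++ sketch, not taken from the study itself: it computes both the dominant-soil value and the share-weighted average of a soil parameter (e.g. pH) over the component soils of one pixel. The struct, field names and example shares are hypothetical.

```cpp
#include <vector>

// One component soil of a soil mapping unit: its pixel share (fraction)
// and its value for the chosen parameter (e.g. pH). Names are illustrative.
struct ComponentSoil {
    double share;   // fraction of the pixel; shares sum to 1
    double value;   // parameter value estimate, e.g. pH
};

// Dominant-soil estimate: value of the component with the largest share.
double dominant_estimate(const std::vector<ComponentSoil>& soils) {
    const ComponentSoil* dom = &soils[0];
    for (const auto& s : soils)
        if (s.share > dom->share) dom = &s;
    return dom->value;
}

// Weighted-average estimate over all component soils.
double weighted_estimate(const std::vector<ComponentSoil>& soils) {
    double sum = 0.0;
    for (const auto& s : soils) sum += s.share * s.value;
    return sum;
}
// E.g. {{0.4, 8.0}, {0.3, 7.5}, {0.3, 8.0}} gives 8.0 (dominant)
// versus 7.85 (weighted), in the spirit of the pH mock-up of Figure 2.
```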
In Figure 2 we have tried to show, in a mock-up example, how a pixel can be made up of several component soils and the effect the weighted average has on the parameter value estimate.
Figure 2. Mock-up examples of two pixels with different distributions of component soils (left); effect of using the weighted average on the overall parameter value estimate versus using that of the dominant soil (right).
We used GIS techniques to determine the area of prominence of dominant soils and compared it in size to that where component soils had higher percentages. We used Mondrian (version 1.2), an open source statistical analysis tool (University of Augsburg, 2012), to study the distribution of dominant soil units and component soil units. For the spatial representation of the soil units, a FORTRAN program was designed that allowed assigning the soil unit share to each pixel.
Determination of Crop Suitable Areas
We used the fuzzy logic approach as discussed in Avellan et al. (2012). Fuzzy classification methods define growth through membership functions and likelihoods (Burrough, MacMillan, & Deursen, 1992). The rationale behind this is that most soil parameters have a large error rate per se, due to sampling and handling errors, and crops are able to grow at various levels of these parameters (Rossiter, 1996). Thus strict Boolean classification systems may be too restrictive in growth ranges and areas. Fuzzy logic approaches have been used for a selected number of crops on limited study areas by other authors, e.g. (Baja, Chapman, & Dragovich, 2002; Braimoh, Vlek, & Stein, 2004; Reshmidevi, Eldho, & Jana, 2009; Van Ranst, Tang, Groenemam, & Sinthurahat, 1996).
Raster-based soil, terrain and climate parameter values were matched on a sliding scale from 0 to 1 with their respective crop growth likelihoods as determined by Sys, Van Ranst, Debaveye, and Beernaert (1993) (Figure 3a). Subsequently, the most optimally matching crop was selected to be the most suitable for a given pixel. Each component soil was assigned one fuzzy value (Figure 3b). Depending on the number of component soils in each soil mapping unit, up to 8 fuzzy values per pixel were assigned. These were aggregated based on their weighted share value of the respective soil mapping unit. Component soils with high share values thus end up with a stronger influence on the final fuzzy value.
Crop growth abilities were then categorized into four subsets as defined by Sys et al. (1993) and (FAO, 1976), with a fuzzy value between 0 and 0.4 marking a pixel as not suitable for crop growth (N).
Pixels are subsequently transformed into land surfaces according to their location on the globe through a FORTRAN programme. The total land surface is considered, except Antarctica.
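The per-pixel aggregation just described can be sketched in a few lines. This is an illustration only: the trapezoidal membership bounds below are placeholders, not values from Sys et al. (1993), and the pixel composition is invented.

# Fuzzy suitability per pixel: each component soil gets one fuzzy value,
# and the values are aggregated by the soils' shares in the mapping unit.

def membership(x, lo, opt_lo, opt_hi, hi):
    """Trapezoidal fuzzy membership: 0 outside (lo, hi), 1 on [opt_lo, opt_hi]."""
    if x <= lo or x >= hi:
        return 0.0
    if x < opt_lo:
        return (x - lo) / (opt_lo - lo)
    if x > opt_hi:
        return (hi - x) / (hi - opt_hi)
    return 1.0

def dominant_estimate(components, mf):
    """Fuzzy value of the component soil with the largest share."""
    share, value = max(components, key=lambda c: c[0])
    return mf(value)

def weighted_estimate(components, mf):
    """Share-weighted average of the fuzzy values of all component soils."""
    total = sum(share for share, _ in components)
    return sum(share * mf(value) for share, value in components) / total

# Mock-up pixel in the spirit of Figure 2: up to 8 (share %, pH) pairs.
pixel = [(40.0, 8.0), (35.0, 7.5), (25.0, 7.9)]
mf = lambda ph: membership(ph, 4.0, 5.5, 7.0, 8.5)  # illustrative pH response
print(round(dominant_estimate(pixel, mf), 3), round(weighted_estimate(pixel, mf), 3))

The two printed numbers differ whenever the non-dominant soils pull the pixel into a different part of the crop's response curve, which is exactly the effect quantified in the results below.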
Dominant vs. Component Soil Areas
In 64% of all pixels the dominant soil holds more than 50% of the pixel's share value. When looking at specific major soil groups, some only exist as dominant soil types (i.e. Is-Lithosols, Ns-Nitosols, U-Rankers and W-Planosols). Most soils comprise only two component soils in their soil mapping unit (i.e. the dominant soil plus one additional component soil). Few cases exist where soil mapping units have 6 or more component soils. The share value of the dominant soil component is very high in most of northern Asia, Greenland, North America and large parts of Africa. These are areas where the dominant soil defines the parameter value estimate (grey areas in Figure 4). In the case of China, due to the way the database was produced, only one soil, the dominant one, exists. In the Middle East, Central Asia, the Pacific and Australia, share values of the dominant soil component were very low. These are areas where the other component soils play a larger role in determining the parameter value estimates of the given pixel (black areas in Figure 4; see also the mock-up example in Figure 2). South America exhibits mostly areas with intermediate share values (data not shown explicitly).
Determination of Crop Suitable Areas
When using the parameter value estimates of the dominant soil mapping units along with climate and terrain constraints, 9% of the earth's surface results in highly suitable (S1), 25% in suitable (S2) and 19% in marginally suitable (S3) areas (Figure 5). Barley (10.7%), wheat (5.6%) and oil palm (5.2%) are globally the most suitable crops (Figure 6) (percentages of overall pixels, not of area).
When considering the parameter value estimates of all component soils in a given pixel, the area suitable for crop growth amounts to 54.5% of the earth's land surface excluding Antarctica. Roughly 4.5% can be categorised as highly suitable (S1); 27% and 23% can be classified as suitable (S2) and marginally suitable (S3), respectively (Figure 5). The most prominent crops were the same as when using dominant soils only, with adjustments in their overall percentages (barley, 11.1%; wheat, 6.5%; oil palm, 5.9%) (Figure 6).
Now, how to make a choice of which dataset to use? The quality of all component soils is heterogeneous; the effect on the extent and type of crop suitability is minimal. The lack of consistent quality of global datasets is a known issue. A variety of research centres are working towards enhanced soil datasets and sampling, often in collaboration with many others, such as in the Global Soil Initiative launched in 2011 (The Global Soil Partnership, 2011). In a few cases of crop modelling some authors have undertaken extensive quality control of the underlying soil data and adapted it to their needs (Gijsman, Thornton, & Hoogenboom, 2007; Romero et al., 2012). This is very cumbersome and can only be carried out when sufficient expert staff is available for a specific target objective. However, soil datasets are used widely by differing disciplines. We suggest explaining the inherent uncertainty attached to these datasets and laying open the error margin of their use. In this particular case, on the use of all component soils versus only the dominant soils, we postulate that the error margin is of about 1% at a global scale.
It is clear to the authors that additional parameters can be used from the soil databases, as well as a variety of other parameters such as refined climate datasets, in particular at the temporal scale. Knowledge on ethnicity, gender, management practices, adapted crops, irrigation, use of fertilizers and of the use of technology are all factors that influence the suitability of an area for agricultural purposes (FAO, 2007). Obtaining reliable data for these parameters may be even more challenging than for soil databases.
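A quick arithmetic check, not taken from the paper itself, makes the quoted uncertainty figures concrete. The shares below are the percentages as printed above; small rounding in the source explains why the totals do not differ by exactly 1 percentage point.

# Per-class and total differences between the two estimates
# (percent of land surface, excluding Antarctica), as printed in the text.
dominant  = {"S1": 9.0, "S2": 25.0, "S3": 19.0}
component = {"S1": 4.5, "S2": 27.0, "S3": 23.0}

total_dom = sum(dominant.values())    # 53.0
total_comp = sum(component.values())  # 54.5
per_class = {k: component[k] - dominant[k] for k in dominant}
print(total_dom, total_comp, total_comp - total_dom)  # totals differ by ~1%
print(per_class)  # individual classes shift by up to ~4 percentage points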
Conclusion
In this study, we intended to show the differences in model results when using all component soils for the analysis of crop suitability. This is important because it allows determining the level of uncertainty that modellers face when using current global soil databases. Including more parameters does not always mean better results. We showed that the distribution of the number of component soils of the HWSD is very heterogeneous on a geographical scale but is not linked to the quality of the underlying data subset. The error range for using either the dominant component soil versus all component soils could be considered to be 1%, the difference in crop suitable area between the two datasets. The margin of error varies according to the region and increases to up to 4% when looking at the individual suitability classes.
Figure 1. Distribution of the four underlying databases of the Harmonized World Soil Database (HWSD): European Soil Database (ESDB), Soil Map of China (CHINA), Soil and Terrain dataset (SOTWIS), Digital Soil Map of the World; adapted from (FAO, IIASA, ISRIC, ISS-CAS, & JRC, 2009).
Figure 3. Overview of the methodology of fuzzy logic crop suitability analysis using just the parameter value estimates of a) the dominant soil (top) or b) of all component soils (bottom).
Figure 4. Analysis of shares and sequences of component soils. Grey areas represent soil mapping units where the share value of the dominant soil component holds more than 50%; black areas are regions where the dominant soil component holds a share value of 50% or less.
Figure 5. Region specific changes in crop suitability areas by categories using dominant soil parameter value estimates (d) or component soils (c); S3 - marginally suitable, S2 - suitable, S1 - highly suitable.
Table 1. Coding of the regions: SEA - Cambodia, Laos, Thailand, Vietnam, Myanmar, Bangladesh; USA - United States of America.
Characterization of the cysteine protease domain of Semliki Forest virus replicase protein nsP2 by in vitro mutagenesis
The function of Semliki Forest virus nsP2 protease was investigated by site-directed mutagenesis. Mutations were introduced in its protease domain, Pro39, and the mutated proteins were expressed in Escherichia coli, purified, and their activity in vitro was compared to that of the wild type Pro39. Mutations M781T, A662T and G577R, found in temperature-sensitive virus strains, rendered the enzyme temperature-sensitive in vitro as well. Five conserved residues were required for the proteolytic activity of Pro39. Changes affecting Cys478, His548 and Trp549 resulted in complete inactivation of the enzyme, whereas the replacements N600D and N605D significantly impaired its activity. The importance of Trp549 for the proteolytic cleavage specificity is discussed and a new structural motif involved in substrate recognition by cysteine proteases is proposed.
Introduction
Semliki Forest virus (SFV) is an enveloped positive-strand RNA virus belonging to the Alphavirus genus of the Togaviridae family. The structure and replication of alphaviruses have been studied in detail (reviewed in [1,2]). The virus has been used as an important tool in studies of protein folding [3,4], intracellular membrane transport and endocytosis [5,6] and viral pathogenesis [7]. SFV-based replicons have been used as expression vectors for the production of recombinant proteins in eukaryotic cells [8]. Attempts to use them for the production of vaccines and in gene and cancer therapies have also been reported [9]. Upon infection of the host cell, the 5′ two-thirds of the SFV 42S RNA genome is translated into a 2432-amino-acid-long polyprotein, designated P1234, which is autocatalytically processed to yield the non-structural proteins nsP1-nsP4. All of these function as virus-specific components of the membrane-associated RNA polymerase [2,10]. The processing intermediates P123 plus nsP4 are needed for the synthesis of complementary 42S RNA early in infection [11,12], while the complete cleavage products are responsible for the synthesis of positive sense 42S RNA genomes and the subgenomic 26S mRNA [2,11,12]. The protease activity responsible for the non-structural polyprotein cleavage resides in the C-terminal domain of the nsP2 protein [13-15]. It belongs to the papain-like peptidase type (C9 family of CA clan in the MEROPS database [16]). The papain-related proteases have little sequence similarity but they share some biochemical and structural properties [17]. We have recently isolated the protease domain of SFV nsP2 (Pro39) and tested its activity using model substrates containing short sequences from each of the P1234 polyprotein processing sites fused to thioredoxin protein (Trx12, Trx23 and Trx34) [15]. Several temperature sensitive (ts) mutants of SFV have previously been characterized in our laboratory [18,19]. Temperature-shift experiments performed with these mutants have provided invaluable information on the biology of SFV, particularly on RNA synthesis and the function of the individual non-structural proteins [20-23]. Sequence analysis of the cDNA derived from the genomes of those ts mutants has shown that the ts4, ts6 and ts11 mutations are the result of single amino acid changes in the protease domain of nsP2: M781T, A662T and G577R, respectively [23,24].
Despite these substitutions being located in poorly conserved regions of the nsP2 protease, they cause functional defects in virus propagation. The ts4 mutation has been shown to result in the halt of viral RNA synthesis and processing of the non-structural polyprotein at the restrictive temperature of 39 °C [23,25,26]. The ts6 and ts11 mutants also displayed an RNA-negative phenotype, failing to synthesize any viral RNA at 39 °C [18]. To obtain an insight into the mechanisms responsible for the virus ts phenotype, as well as the role played by other residues of the SFV protease in the proteolytic reaction, we produced Pro39 variants with substitutions at key positions and analyzed their activity with the Trx34 model substrate.
Pro39 expression plasmids
The plasmid construct for the expression of wt Pro39 (residues 459-799 of SFV nsP2), tagged with the peptide LEHHHHHH at its C-terminus, has been described previously [15]. This plasmid was used as a template to obtain the various mutated protein forms. The point mutations were introduced using the QuikChange XL Site-Directed Mutagenesis kit (Stratagene) and verified by DNA sequencing.
Expression and purification of Pro39
Wt and mutant Pro39 were expressed in E. coli and purified essentially as described previously [15], with some modifications in the isolation procedure. Briefly, the cell lysates in 20 mM sodium phosphate buffer, pH 7.4, 200 mM NaCl, 0.1% Tween 20 (buffer A), supplemented with 0.1 mM EDTA and 1 mM PMSF, were cleared by centrifugation at 15,000 × g for 30 min at 4 °C. The supernatant was supplemented with 150 mM imidazole and loaded onto a Hi-Trap Chelating HP column (Amersham Biosciences) charged with NiSO4. After extensive washing, Pro39 was eluted with buffer A containing 500 mM imidazole. The resulting protein solution was supplemented with 2 mM EDTA and the buffer was changed to 20 mM HEPES, pH 7.4, containing 200 mM NaCl, 20% glycerol, 0.1% Tween 20, 1 mM DTT and 1 mM EDTA using a PD10 gel filtration column (Amersham Biosciences). The final preparation was divided into aliquots, frozen and stored at −70 °C. Under these storage conditions, Pro39 retained its activity for at least one year.
Protease assay
An 18 kDa substrate, comprising 19 (nsP3) and 18 (nsP4) residues of the SFV nsP3-nsP4 junction fused to 115 residues of thioredoxin, was used to assess the activity of the purified proteins [15]. The protease activity of the various Pro39 proteins was assayed in 50 mM HEPES, pH 7.4, containing 50 mM NaCl, 1 mM DTT, 2 mM spermidine and 10% glycerol, at a substrate concentration of 0.8 mg/ml. For time-course experiments, enzyme and substrate were mixed in pre-warmed buffer. At the indicated time points, 5 μl aliquots were withdrawn, mixed with electrophoresis sample buffer and boiled immediately. In experiments aimed at determining and comparing enzymatic activities, Pro39 was pre-incubated in buffer for 2 min at the desired temperature prior to the addition of the substrate. The proteolytic products of the reaction were analyzed by electrophoresis in 17% polyacrylamide gels.
General analytical methods
SDS-PAGE was carried out according to Laemmli [27]. Gels were stained with Coomassie Brilliant Blue and destained in 10% acetic acid before being dried. For the quantification of the proteolytic activity of the proteins, gels were scanned and analyzed by densitometry using the Tina 2.0 program. Care was taken not to overload the gels, and the linearity of the signal detection was checked in gels containing known amounts of Pro39 or Trx34.
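The densitometric readout described above reduces to a simple fraction-cleaved calculation. The following is a minimal sketch of that step, assuming background-corrected band intensities have already been exported from the gel analysis software; the numbers below are hypothetical, and intensities are taken as proportional to protein mass within the checked linear range.

# Fraction of Trx34 cleaved, from band intensities of the 14 kDa large
# fragment (L) and the residual 18 kDa substrate; intensities are rescaled
# by molecular mass to an approximate per-molecule basis before the ratio.

def fraction_cleaved(intensity_L, intensity_trx34, mw_L=14.0, mw_sub=18.0):
    mol_L = intensity_L / mw_L          # relative moles of large fragment
    mol_sub = intensity_trx34 / mw_sub  # relative moles of intact substrate
    return mol_L / (mol_L + mol_sub)

# Hypothetical time course (arbitrary densitometry units)
for t, (l_band, s_band) in [(1, (120, 880)), (2, (230, 760)), (5, (510, 430))]:
    print(f"{t} min: {fraction_cleaved(l_band, s_band):.2f} cleaved")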
Protein concentrations were determined with the Bradford assay [28] using bovine serum albumin as the standard.
Expression and purification of mutant Pro39
In order to study the role of individual residues in the activity of the SFV protease, point mutations were introduced into the protease domain of nsP2 by site-directed mutagenesis to obtain eight Pro39 mutants (M781T (ts4), A662T (ts6), G577R (ts11), C478A, H548A, W549A, N600D, N605D). The proteins were expressed in E. coli and purified by metal affinity chromatography (Fig. 1). In all cases, the conditions for expression and purification were reproduced as precisely as possible to facilitate comparison of the properties of the mutant enzymes. The expression of the N600D and N605D mutants had a strong negative effect on the host cell, resulting in a slower growth rate and thus a smaller bacterial mass, as well as in a reduced expression level of the corresponding soluble Pro39. This, in turn, resulted in preparations with a lower degree of purity, as can be seen in Fig. 1. Since the two mutant proteins showed a tendency to aggregate, particularly when subjected to the changes of ionic strength associated with many chromatography techniques, further purification steps were not successful, and these preparations were used for the experiments reported here.
Effect of temperature on the activity of Pro39
It has been shown previously that Pro39 readily cleaves the site between nsP3 and nsP4 in the synthetic substrate Trx34 [15]. The proteolytic reaction results in the appearance of a large fragment, L, of 14 kDa that can easily be detected and quantified after electrophoresis (Fig. 2). A smaller fragment, S, with a molecular mass of 4.1 kDa, often appears as a broad and rather diffuse band at the bottom of the gel. To determine whether the mutations causing the ts phenotype in the virus would also render the protease temperature sensitive in vitro, wt and ts Pro39 were mixed with the Trx34 substrate in buffer pre-warmed to either 28 or 39 °C, and incubated further for 60 min at the same temperature. As can be seen in Fig. 2A, the wt as well as the mutant Pro39 carrying ts mutations were able to cleave the substrate completely at 28 °C. However, performing the reaction at the restrictive temperature of 39 °C clearly resulted in an impairment of the enzymatic activity of the mutant Pro39, whereas the wt enzyme appeared to be only slightly affected (Fig. 2B). A 10 min pre-incubation of the samples at 39 °C, followed by a further 60 min incubation at 28 °C, resulted in an intermediate pattern of activity in the case of the ts proteins (Fig. 2C). Thus, the properties of the mutant Pro39 in vitro correlated with the temperature-sensitive phenotype of the viruses observed in vivo. Initial experiments were carried out at an enzyme:substrate molar ratio of 1:8. Under those conditions, the reaction was quite fast, reaching a plateau after approximately 10 min (Fig. 3, panels A and B). For a quantitative comparison of the enzymatic activity of the ts and the wt Pro39, we lowered the rate of the reaction by decreasing the amount of enzyme in the samples. At a Pro39:Trx34 molar ratio of 1:32 the reaction proceeded linearly for a period of up to 10 min (Fig. 3C), allowing accurate determination of the reaction rate. Thus, these conditions were used to compare the activity of the different proteases at both the permissive and the restrictive temperature. For this, Pro39 was preincubated in buffer for 2 min in the absence of Trx34 in order to avoid any possible contribution of the substrate to the stability of the enzyme. Upon substrate addition, samples were withdrawn at time points 1, 2 and 5 min, and analyzed. We found the activity of the wt and mutant Pro39 to be practically identical when the reaction was performed at 28 °C. However, at 39 °C the activity of the ts enzymes clearly decreased to levels of 55 ± 8% (mean of 3 experiments), 55 ± 4% (2) and 65 ± 3% (3) for ts4, ts6 and ts11, respectively, whereas the activity of the wt Pro39 was much less affected by the rise in temperature (82 ± 12%, mean of 5 experiments). From the results of the experiments carried out with the wt enzyme at different enzyme:substrate ratios, an apparent KM of approximately 250 μM and a Vmax close to 160 pmol/min were calculated for Pro39 at 28 °C (not shown).
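Kinetic constants of this kind come from fitting initial rates at several substrate concentrations to the Michaelis-Menten equation. The following is a minimal sketch of such a fit, assuming initial rates have already been extracted from the linear parts of the time courses; the data points below are made up for illustration, chosen to be roughly consistent with the reported KM ≈ 250 μM and Vmax ≈ 160 pmol/min.

# Michaelis-Menten fit of initial rates: v = Vmax * S / (KM + S).
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([25.0, 50.0, 100.0, 200.0, 400.0, 800.0])  # [Trx34], uM (illustrative)
v = np.array([15.0, 27.0, 45.0, 70.0, 97.0, 121.0])     # initial rate, pmol/min

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(100.0, 100.0))
print(f"Vmax = {vmax:.0f} pmol/min, KM = {km:.0f} uM")   # ~160 pmol/min, ~250 uM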
Mutagenesis at the putative active site
Previously, the protease activity of the closely related Sindbis virus (SIN) was assayed via the autocatalytic processing of the non-structural polyprotein synthesized by in vitro translation. Cys481, His558 and Trp559 were shown to be critical for the proteolytic activity [29]. We mutagenized the homologous residues in SFV Pro39 and analyzed the activity of the purified proteases using Trx34 as the substrate. Mutations C478A and H548A resulted in completely inactive Pro39, supporting the view that these residues represent the conserved catalytic dyad of the papain-like proteases, Cys481/His558 being the equivalent residues in SIN. Substitution of the homologous Trp549 with Ala also rendered Pro39 inactive (Fig. 4). The work of Strauss and coworkers [29] also revealed two Asn residues, conserved in alphaviruses, that are important for the SIN protease activity. The N609D mutation inhibited processing of the polyprotein but allowed viral replication, whereas the N614D mutation had the opposite effect: it enhanced proteolysis but was lethal for the virus [12,29]. In order to investigate the effect of these mutations on the catalytic activity of the SFV protease directly, the Asn residues at the corresponding positions of Pro39 were replaced by Asp. Both the N600D and N605D Pro39 mutants showed significantly reduced efficiency of in vitro Trx34 substrate cleavage compared to the wt enzyme (Fig. 4). Because of the lower purity of these samples, it could be argued that the detected activity, or at least part of it, could be due to the presence of contaminant bacterial proteins able to act upon the same bond in Trx34. It should be noted, however, that we were not able to detect any endogenous protease activity capable of specific cleavage of Trx34, either in E. coli BL21 extracts or in the purified mutant Pro39 preparations. In the latter case, prolonged incubation of these samples at room temperature resulted in aggregation and precipitation of the mutant enzyme, whereas the bulk of the contaminating proteins remained soluble. After separation of the precipitated Pro39 by centrifugation, the resulting supernatant did not show any detectable activity against Trx34 (not shown), suggesting that the observed reduced enzymatic activity of the preparations was due to the mutant Pro39 and not to the presence of the contaminating proteins.
Role of Trp549 in the cleavage specificity
Trp549 (Trp559 in SIN), the residue following the catalytic His, is conserved in the alphavirus cysteine proteases. It is also critical for the protease activity of SFV Pro39 (Fig. 4). To obtain further information on the role of this essential Trp, we scanned the MEROPS database to check whether the "HW" motif is present in other cysteine proteases. It appeared in eight families of peptidases, and in six of them this motif was strictly conserved. Analysis of the experimental data published for these proteases revealed that all of them process polypeptides with a Gly residue at the P2 position of the cleavage site, as is the case for all three processing sites of the alphavirus non-structural polyprotein [30], or at the P1 position, or at both. Moreover, we found that this conclusion could be extended to other cysteine proteinases where the catalytic His is followed by a Tyr or Phe residue (Table 1). The proteases listed in Table 1, denoted here as GSM (glycine specificity motif) proteases, are mainly representatives of clans CA and CE, where papain and adenain, respectively, are considered the prototypes. The group includes several families of viral proteases, ubiquitin and SUMO hydrolases, pseudomurein endoisopeptidases and bacteriocin processing enzymes. Representatives of five families have had their tertiary structures solved and deposited in the Protein Data Bank [31]. Despite the lack of sequence similarity, the architecture of the catalytic center is very similar, as illustrated in Fig. 5. What is particularly important is that the aromatic residue (Trp, Tyr or Phe) appears to occupy a virtually identical position in all the structures. Moreover, in the structures where the substrate molecule is present, the contact of the aromatic residue with the penultimate Gly of the substrate can be directly visualized. Another distinctive feature unifying the GSM proteases is their resistance to E-64, known to be a specific inhibitor of papain-like peptidases [17]. Although the majority of GSM enzymes belong to the papain type, none of them is sensitive to E-64: where this has been checked directly, E-64 always failed to block the proteolytic activity of the enzymes listed in Table 1.
Table 1 footnotes: If a residue is not conserved, the variety of amino acids at the position is given in square brackets. (c) Provisional assignment; the cleavage site has not been determined experimentally. (d) The Turnip Yellow Mosaic virus endopeptidase containing the "HF" GSM is the only protease in the family whose site specificity has been shown experimentally.
Discussion
The alphavirus protein nsP2 plays an essential role in virus propagation. It is a multifunctional enzyme involved in many replication processes. Its protease activity, residing at the C-terminus of the protein, is responsible for the maturation of the non-structural polyprotein [32,33]. The N-terminal half of nsP2 possesses helicase [34,35] and RNA triphosphatase activities [36]. The protein plays a central role in the regulation of 26S subgenomic RNA transcription [19,23,26]. In addition, it has been reported to be involved in virus-host interactions [37,38]. The variety of nsP2 functions often makes it difficult to analyze nsP2 function in vivo and in complex systems in vitro, where multiple secondary effects are possible. Even the protease activity itself shows different behavior, acting upon three different cleavage sites in the non-structural polyprotein [39].
The cleavage between nsP1 and nsP2 occurs only in cis, whereas the nsP2/nsP3 site is processed in trans and requires the presence of a free nsP2 terminus. However, the site between nsP3 and nsP4 requires only the protease activity and a short specific polypeptide stretch as a substrate for efficient cleavage [15]. Taking these facts into account, we exploited the benefits of a simplified experimental system, consisting of the basal protease Pro39 and the model substrate Trx34 [15], to characterize the SFV protease by in vitro mutagenesis.
The first question we addressed was the proteolytic activity of purified Pro39 carrying mutations derived from the SFV temperature-sensitive mutants ts4 (M781T), ts6 (A662T) and ts11 (G577R) [23,24]. The activities of the mutant enzymes were assayed at 28 and 39 °C, which are the permissive and restrictive temperatures for the ts viruses, respectively. The wt and the mutant Pro39 cleaved the Trx34 substrate with practically the same efficiency at 28 °C. However, a marked reduction in the activity of the mutated proteins was observed at the non-permissive temperature of 39 °C. Thus, Pro39 proteins carrying the ts mutations display temperature sensitivity in vitro that correlates with the effects of higher temperatures observed in vivo, suggesting that an impaired protease domain is most likely responsible for the viral ts phenotypes. It is important to note that after a short exposure to 39 °C the cleavage efficiency of Pro39 did not revert to the initial 28 °C level (Fig. 2C), indicating that there was a loss in the amount of active enzyme. Therefore, the ts mutations affected protein stability rather than the catalytic activity itself, although we cannot exclude that both change at higher temperatures.
The nsP2 protease of alphaviruses belongs to a papain-like family of cysteine proteases [40]. In the case of the Sindbis enzyme, site-directed mutagenesis of the non-structural polyprotein strongly suggested that Cys481 and His558 form the catalytic dyad of the enzyme [29]. Our present results, based on in vitro analysis of the activity of the purified protease domain, confirmed that, in the case of SFV, Cys478 and His548 are essential for the proteolytic activity. Similarly, the mutation N609D in the SIN polyprotein resulted in impaired polyprotein processing [29], which correlates with the reduced proteolytic activity of the corresponding N600D mutant of SFV Pro39. However, an interesting difference was noticed in the case of the Pro39 N605D mutation. The corresponding change in SIN virus resulted in hyperactive polyprotein processing that was lethal for the virus [12,29], whereas the activity of the N605D mutant of SFV Pro39 was reduced.
Fig. 5. Architecture of the active site of the GSM proteases. The structures of six GSM-containing proteases were aligned by their catalytic site residues with the Deep View software [43] and are shown as ribbon models. The catalytic residues are shown as ball-and-stick models: Cys in yellow, His in green, the third member of the catalytic triad (Asp, Glu, Asn) in orange; the aromatic residue of the GSM motif (Trp, Phe, Tyr) in dark blue. The last and the penultimate Gly residues of the substrates are shown in turquoise and pink, respectively. The structures presented: Ulp1-SUMO complex, PDB code 1EUV (A); adenain, 1AVP (B); UCH L3-ubiquitin vinylmethylester complex, 1XD3 (C); Yuh1-ubiquitin aldehyde complex, 1CMX (D); HAUSP-ubiquitin aldehyde complex, 1NBF (E); otubain, 1TFF (F).
This discrepancy may reflect differences in the organization of the active site of these two similar proteases. At the same time, it cannot be excluded that the hyperactive phenotype observed in polyprotein processing is caused at the level of regulation of nsP2 activity rather than at the level of the catalytic activity itself. The role of Trp549/Trp559 in SFV and SIN nsP2, respectively, has been poorly understood. It has been suggested that it might act as a third member of the catalytic triad, providing correct orientation for the catalytic histidine [29]. Here we suggest that this Trp residue is crucial because of its role in the recognition of the penultimate Gly residue in the processing sites of the non-structural polyprotein. Indeed, the papain-like proteinases are known to be very specific for the amino acid residue at the P2 position of the substrate, usually a bulky or hydrophobic residue [17]. However, the non-structural polyproteins of alphaviruses have Gly preserved at this position [30]. This difference may provide the most straightforward explanation for the resistance of the SFV protease to the E-64 inhibitor [15]. The E-64 leucine moiety mimics the most common occupant of the substrate pocket of the papain-like peptidases (see [41] for the papain family and [42] for calpains) but, most probably, it cannot fit a pocket arranged for a tiny Gly residue. The putative role of Trp549 can be inferred from the finding that all the cysteine proteases listed in MEROPS that have the catalytic His followed by either Trp, Tyr or Phe process substrates with a Gly residue in the cleavage site. The only exception was the pseudomurein endoisopeptidases. In this case, however, the Glu-Ala peptide, linked together via the glutamate side-chain carboxyl group, is very similar to the conventional Gly-Ala and can also fit in a small substrate pocket. The structures available for these peptidases showed the same active site architecture and the same position of the aromatic residue of the motif, directly interacting with the substrate P2 Gly. There may be some variations in the active site organization, since two groups of GSM proteases cleave substrates with Gly at the P1 position instead of P2 (see Table 1). We assume that in those cases either another residue is allowed at P2 or the Gly at P1 plays a similar role in the recognition event. It should be noted, however, that the Rubella virus protease has been shown to be resistant to E-64, a property seemingly common to all the peptidases in the GSM group. An especially illustrative case is provided by Murine Hepatitis coronavirus, which has two very similar orthologous papain-like peptidases, PLpro1 and PLpro2. The difference in the catalytic site of these two enzymes, "HS" and "HY", respectively, correlates with the change at the P2 substrate position and with the resistance to E-64 (Table 1). Based on empiric bioinformatics data and structural and biochemical information, we suggest that alphavirus proteinases, as well as a number of other viral and cellular cysteine proteinases, utilize recognition of the Gly residue at the P2 position of the substrate. The glycine specificity motif, the catalytic His followed by an aromatic residue, can be considered a hallmark of this type of peptidases and can be used for predicting the specificity of newly discovered proteinases.
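The predictive use of the motif proposed above lends itself to a simple sequence scan. The sketch below is illustrative only: the sequences are invented, not real database entries, and in practice a hit is meaningful only when it coincides with the known or predicted catalytic His, since a naive scan flags every His followed by an aromatic residue.

# Scan candidate protease sequences for a "GSM": a His immediately followed
# by an aromatic residue (W, Y or F). Per the hypothesis above, a hit at the
# catalytic His predicts specificity for Gly at P2 (or P1) of the substrate.
import re

GSM = re.compile(r"H[WYF]")

def gsm_hits(seq):
    """Return 1-based positions of His residues followed by W/Y/F."""
    return [m.start() + 1 for m in GSM.finditer(seq)]

candidates = {
    "protease_A": "MKTAYHWGDLLK",  # H followed by W -> GSM hit
    "protease_B": "MKTAYHSGDLLK",  # H followed by S -> no hit
}
for name, seq in candidates.items():
    hits = gsm_hits(seq)
    print(f"{name}: " + (f"GSM at His{hits}" if hits else "no GSM"))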
μ-Oxo-bridged diiron(III) complexes of tripodal 4N ligands as catalysts for alkane hydroxylation reaction using m-CPBA as an oxidant: substrate vs. self hydroxylation
A series of non-heme μ-oxo-bridged dinuclear iron(III) complexes of the type [Fe2(μ-O)(L1-L6)2Cl2]Cl2 (1-6) have been isolated and their catalytic activity towards the oxidative transformation of alkanes into alcohols has been studied using m-chloroperbenzoic acid (m-CPBA) as an oxidant. All the complexes were characterized by CHN analysis and by electrochemical and UV-visible spectroscopic techniques. The molecular structures of 2 and 5 have been determined successfully by single crystal X-ray diffraction analysis; both possess octahedral coordination geometry, and each iron atom is coordinated by four nitrogen atoms of the 4N ligand and a bridging oxygen. The sixth position of each octahedron is occupied by a chloride ion. The (μ-oxo)diiron(III) core is linear in 2 (Fe-O-Fe, 180.0°), whereas it is non-linear (Fe-O-Fe, 161°) in 5. All the diiron(III) complexes show a quasi-reversible one-electron transfer in the cyclic voltammogram and catalyze the hydroxylation of alkanes like cyclohexane and adamantane with m-CPBA as an oxidant. In acetonitrile solution, adding excess m-CPBA to the diiron(III) complex 2 without chloride ions leads to an intramolecular hydroxylation reaction of the oxidant. Interestingly, 2 catalyzes alkane hydroxylation in the presence of chloride ions, but intramolecular hydroxylation in their absence. The observed selectivity for cyclohexane (A/K, 5-7) and adamantane (3°/2°, 9-18) suggests the involvement of high-valent iron-oxo species rather than freely diffusing radicals in the catalytic reaction. Moreover, 4 oxidizes cyclohexane very efficiently (A/K, 7) with up to 513 TON, while 5 oxidizes adamantane with good selectivity (3°/2°, 18) using m-CPBA as an oxidant. The electronic effects of the ligand donors dictate the efficiency and selectivity of the catalytic hydroxylation of alkanes.
Introduction
In nature, non-heme diiron enzymes, such as methane monooxygenases and ribonucleotide reductases, activate oxygen and catalyze alkane oxidation reactions. Among these enzymes, soluble methane monooxygenases, having a μ-oxo-bridged diiron core, are the most widely investigated metalloenzymes involved in the conversion of methane into methanol using molecular oxygen under ambient conditions. 1-5 Therefore, diiron(III) complexes having an Fe-O-Fe core have received greater attention in the field of hydrocarbon oxidation under mild conditions (Scheme 1). 6-8 Significantly, nature has evolved a wide variety of coordination environments around iron centers to differentiate the functions of the enzymes from one another and utilises distinct intermediates, which are supposed to be involved in their intrinsic catalytic behaviour. 9-13 In the case of heme enzymes, the oxoiron(IV) porphyrin π-cation radical is found to be the oxidizing intermediate involved in alkane hydroxylation. 14,15 On the other hand, the FeIV2O2 diamond core is observed as the reactive intermediate species in methane oxidation by the soluble methane monooxygenases (sMMO), and the enzyme holds two oxidizing equivalents divided over two iron centers. 13,16 As alkane functionalization is an important chemical transformation in the field of organic and synthetic chemistry, selective oxidation of hydrocarbons under mild conditions has become an exciting and challenging scientific objective.
Therefore, the development of diiron catalysts for the alkane hydroxylation reaction has attracted greater attention as a way to illustrate the oxidizing intermediates and catalytic pathways of the enzymes. 17-19 In earlier studies attempts have been made to reproduce the structural and functional aspects of the enzymes, and several model complexes have been reported as both functional and structural models for the methane monooxygenase enzymes. 2,20-35 A few μ-oxo-bridged diiron(II) complexes were developed as structural mimics of the active center of sMMO and related enzymes, in which the active site coordination environment of sMMO has been mimicked. 2,36-41 Also, the involvement of high-valent FeIV=O species in the alkane hydroxylation reaction was proved, and the species were characterized by X-ray crystallographic techniques. 2,42-46 The diiron(III) complexes of tris(2-pyridylmethyl)amine (TPA) and related ligands are known as effective sMMO models. 47 However, such ligands do not stabilize the diiron core in solution, and the resulting complexes display varied reactivity, depending on whether they are mono- or diiron complexes. 48,51,52 In contrast, the sMMO model derived from a TPA-containing dinucleating ligand has a stabilized diiron core in solution and acts as an effective catalyst for alkane functionalization. 49 Diiron(III) complexes have been utilised as catalysts for various alkane oxidation reactions using different types of oxidants such as molecular oxygen, hydrogen peroxide (H2O2), t-butyl hydroperoxide (t-BuOOH) and m-chloroperbenzoic acid (m-CPBA). For instance, the unsymmetrical diiron-μ-oxo complex [L34FeIII(Cl)(μ-O)FeIIICl3], where L34 is N,N′-dimethyl-N,N′-bis(2-pyridylmethyl)propane-1,3-diamine, exhibits hexane oxidation with molecular oxygen as oxidant in the presence of trimethylhydroquinone as reductant. 50 Various diiron(III) complexes with pyridyl, imidazolyl and benzimidazolyl nitrogen-donating ligands have been used as catalysts for alkane and benzene oxidation reactions using H2O2, t-BuOOH or m-CPBA as oxidants, achieving moderate to good selectivity. 51-57 Similarly, various diiron(III) complexes with phenolate ligands have been used as catalysts for alkane oxidation reactions with good alcohol selectivity. 55,58-60 Interestingly, various diiron(III) complexes with carboxylate oxygen as ligand donors exhibited efficient and selective oxidation of alkanes with various oxidants and with a high A/K ratio. 61-65 Likewise, diiron(III) complexes with N-heterocyclic carbene ligands catalyzed benzene hydroxylation to phenol with H2O2 as oxidant. 66 Interestingly, several diiron(III) complexes catalyze intramolecular aliphatic 67 and aromatic oxidation reactions, where the phenyl group is usually oxidized using various oxidants. 68-72 Although various μ-oxo-bridged non-heme diiron(III) complexes that mimic the functions of diiron enzymes have been reported earlier, the design and study of further diiron(III) complexes would enhance our understanding of how to utilize such complexes as excellent catalysts for the oxidation of organic substrates, particularly for alkane functionalization and alkene epoxidation reactions. Moreover, the factors determining the selectivity as well as the efficiency of the catalysts still remain unclear.
Even though several studies proved the involvement of FeIV=O species in alkane hydroxylation, it is difficult to eliminate the possibility of the involvement of FeV=O species, and a few reports support the involvement of the latter species in the alkane hydroxylation reaction as well. 73-75 All the above observations prompted us to isolate a few diiron(III) complexes of systematically varied tripodal 4N ligands, having pyridine, imidazole and sterically demanding quinoline moieties and weakly binding -NMe2 groups, and to study the effect of the ligand stereoelectronic factors upon the efficiency as well as the alcohol product selectivity of the complexes as catalysts for the alkane hydroxylation reaction (Scheme 2). All the present diiron(III) complexes catalyse the hydroxylation of alkanes like cyclohexane and adamantane efficiently with good alcohol selectivity using m-CPBA as the oxidant within an hour. Further, when the pyridine moiety in the diiron(III) catalyst is replaced with the -NMe2 donor group, the selectivity of the catalyst remains approximately the same. In contrast, for adamantane oxidation the incorporation of a sterically hindering quinolyl donor around the diiron(III) centre leads to a high 3°/2° bond selectivity.
Catalytic oxidations
The oxidation of alkanes was carried out at room temperature under research-grade nitrogen atmosphere. In a typical reaction, the oxidant m-CPBA (0.8 mol dm−3) was added to a mixture of the diiron(III) complex (1 × 10−3 mmol dm−3) and the alkane (3 mol dm−3) in a CH2Cl2 : CH3CN mixture (4 : 1 v/v). After 30 min the reaction was quenched with triphenylphosphine; the reaction mixture was filtered over a silica column and then eluted with diethyl ether. An internal standard (bromobenzene) was added at this point and the solution was subjected to GC analysis. The mixture of organic products was identified by Agilent GC-MS and quantitatively analyzed on an HP 6890 series GC equipped with an HP-5 capillary column (30 m × 0.32 mm × 2.5 μm) using a calibration curve obtained with authentic compounds. All of the products were quantified using GC (FID) with the following temperature program: injector temperature 130 °C; initial temperature 60 °C, heating rate 10 °C min−1 to 130 °C, increasing the temperature to 160 °C at a rate of 2 °C min−1, and then increasing the temperature to 260 °C at a rate of 5 °C min−1; FID temperature 280 °C. GC-MS analysis was performed under conditions identical to those used for GC analysis. The averages of three measurements are reported.
Physical measurements
Elemental analyses were performed on a Perkin Elmer Series II CHNS/O Analyzer 2400. 1H NMR spectra were recorded on a Bruker 400 MHz NMR spectrometer. Electronic spectra were recorded on an Agilent 8453 diode array spectrophotometer. Low-temperature spectra were obtained on an Agilent 8453 diode array spectrophotometer equipped with a UNISOKU USP-203 cryostat. ESI-MS analyses were recorded on a Micromass Quattro II triple quadrupole mass spectrometer. Cyclic voltammetry (CV) and differential pulse voltammetry (DPV) were performed at 25 ± 0.2 °C using a three-electrode cell configuration. A platinum sphere, a platinum plate and Ag(s)/AgNO3 were used as working, auxiliary and reference electrodes, respectively. The platinum sphere electrode was sonicated for two minutes in dilute nitric acid, dilute hydrazine hydrate and double-distilled water to remove impurities.
The reference electrode for non-aqueous solutions was Ag(s)/Ag+, which consists of a Ag wire immersed in a solution of AgNO3 (0.01 M) and tetra-N-butylammonium perchlorate (0.1 M) in acetonitrile, placed in a tube fitted with a Vycor plug. The instruments utilized included an EG&G PAR 273 potentiostat/galvanostat and a P-IV computer along with EG&G M270 software to carry out the experiments and to acquire the data. The temperature of the electrochemical cell was maintained by a cryo-circulator (HAAKE D8-G). The E1/2 observed under identical conditions for the Fc/Fc+ couple in acetonitrile was 0.102 V with respect to the Ag/Ag+ reference electrode. The experimental solutions were deoxygenated by bubbling research-grade nitrogen, and an atmosphere of nitrogen was maintained over the solution during measurements. The products were analyzed using a Hewlett Packard (HP) 6890 series gas chromatograph equipped with a FID detector and an HP-5 capillary column (30 m × 0.32 mm × 2.5 μm). GC-MS analysis was performed on an Agilent GC-MS equipped with a 7890A GC series (HP-5 capillary column) and a 5975C inert MSD under conditions identical to those used for GC analysis.
Crystal data collection and structure refinement
The diffraction experiments were carried out on a Bruker SMART APEX diffractometer equipped with a CCD area detector. High-quality crystals suitable for X-ray diffraction were chosen after careful examination under an optical microscope. Intensity data for the crystals were collected using Mo-Kα (λ = 0.71073 Å) radiation on a Bruker SMART APEX diffractometer equipped with a CCD area detector at 100 and 293 K. The data integration and reduction were processed with the SAINT software. An empirical absorption correction was applied to the collected reflections with SADABS. The structures were solved by direct methods using SHELXTL and refined on F2 by the full-matrix least-squares technique using the SHELXL-97 package. 78-80 Even though the data for 2 were collected at liquid-nitrogen temperature (100 K), during the structure solution it was observed that the carbon atoms of the coordinated acetonitrile molecule in 2 appeared as diffused peaks and that the methyl carbon is disordered. Both these carbon atoms were located from the difference Fourier map and, since the peak heights of the carbon atoms were small and diffused, the whole coordinated CH3CN molecule was refined only isotropically. For the disordered methyl carbon, the occupancy factor was assigned using the FVAR command. Crystal data and additional details of the data collection and refinement of the structures are presented in Table 1. The selected bond lengths and bond angles are listed in Table 2.
Syntheses and characterization of ligands and their diiron(III) complexes
The tripodal tetradentate 4N ligands L1-L6 (Scheme 1) were synthesized according to known procedures involving reductive amination. The ligands L1-L6 were prepared by reductive amination of 2-picolylamine with two moles of pyridine-2-carboxaldehyde (L1), and of N,N-dimethylethylenediamine with two moles of pyridine-2-carboxaldehyde (L2), 6-methylpyridine-2-carboxaldehyde (L3), 6-bromopyridine-2-carboxaldehyde (L4), 1-methylimidazole-2-carboxaldehyde (L5) or quinoline-2-carboxaldehyde (L6), using sodium triacetoxyborohydride as the reducing agent, and were characterized by 1H NMR spectroscopy and mass spectrometry. The reaction of (Et4N)2
The molecular structure of [Fe2(μ-O)(L2)2Cl2]2+ (2) is shown in Fig. 1, together with the atom numbering scheme, and the selected bond lengths and bond angles are collected in Table 2. The Fe-O-Fe bond angle of 180.0° suggests that the (μ-oxo)diiron(III) core has a linear structure. The Fe⋯Fe distance is 3.541 Å, which is in the range found for previously reported complexes with an Fe-O-Fe core (3.35-3.55 Å). 47,53,58,62 The molecular structure of [Fe2(μ-O)(L5)2Cl2]2+ (5) is shown in Fig. 2, together with the atom numbering scheme, and the selected bond lengths and bond angles are collected in Table 2. The molecule contains no inversion centre, and each iron atom in 5 possesses a distorted octahedral coordination geometry with slight differences in bond lengths and bond angles.
Electronic absorption spectral studies
The electronic spectral data of all the diiron(III) complexes are summarized in Table 3, and a typical electronic absorption spectrum of 2 is shown in Fig. 3. In a MeOH : ACN (1 : 3 v/v) solvent mixture, all the present diiron(III) complexes exhibit two absorption bands in the ranges 250-285 and 370-400 nm. The lower-energy band in the range 370-400 nm is assigned to a weak μ-oxo-to-Fe(III) ligand-to-metal charge transfer (LMCT) transition. The higher-energy band in the range 250-285 nm is assigned to a π-π* transition of the ligand moiety. The spectral properties of all the diiron(III) complexes are very similar to those found for previously reported diiron(III) complexes of the same type, revealing the similarities in the structures of these complexes. 47,58 Also, no significant difference in spectral behavior between the diiron(III) complexes and the mononuclear iron(III) complexes of the same ligand has been observed. It has been previously reported that the μ-oxo-to-Fe(III) CT transition of such diiron(III) complexes is blue-shifted when the Fe-O-Fe bond angle of the diiron(III) core changes; 83 thus, for (μ-oxo)diiron(III) complexes, the band near 400 nm shifts to higher energy upon increasing the Fe-O-Fe angle. 84 We have also observed the same blue shift when the bond angle tends to become 180°.
Electrochemical properties
The electrochemical properties of the diiron(III) complexes were investigated in a methanol : acetonitrile solvent mixture by employing cyclic (CV) and differential pulse voltammetry (DPV) at a stationary platinum electrode. All of the complexes show a cathodic reduction wave in the range −0.48 to −0.62 V, but no coupled oxidation wave in the CV (Fig. 4):
Fe(III)-O-Fe(III) + e− → Fe(II)-O-Fe(III)
The E1/2 values of the FeIII/FeII redox couples (−0.44 to −0.58 V, Table 3) fall in the range observed for similar oxo-bridged diiron(III) complexes. They are highly negative, mainly due to the strong coordination of the bridging oxo group and the chloride ions, and follow the trend 1 < 2 < 3 > 4 > 5 < 6 (a minimal sketch of how such E1/2 values are read off the voltammograms is given below).
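E1/2 values of this kind are the midpoints of the cathodic and anodic peak potentials of a quasi-reversible wave, and are often re-referenced against the ferrocene couple measured under the same conditions (0.102 V vs. Ag/Ag+ here). The sketch below shows that arithmetic with made-up peak potentials; it is an illustration, not the data of this study.

# E1/2 from a quasi-reversible CV: midpoint of the cathodic (Epc) and anodic
# (Epa) peak potentials, optionally rescaled to the Fc/Fc+ reference.

E_FC_VS_AG = 0.102  # V, Fc/Fc+ vs Ag/Ag+ under identical conditions

def half_wave(epc, epa):
    """Return E1/2 (V) and peak separation (mV) from the peak potentials."""
    return (epc + epa) / 2.0, (epa - epc) * 1000.0

def vs_ferrocene(e_vs_ag):
    """Convert a potential from the Ag/Ag+ scale to the Fc/Fc+ scale."""
    return e_vs_ag - E_FC_VS_AG

epc, epa = -0.55, -0.45  # V vs Ag/Ag+; hypothetical peak potentials
e_half, dep = half_wave(epc, epa)
print(f"E1/2 = {e_half:.3f} V vs Ag/Ag+ (dEp = {dep:.0f} mV)")
print(f"E1/2 = {vs_ferrocene(e_half):.3f} V vs Fc/Fc+")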
On replacing one of the pyridyl nitrogen donors in 1 by the -NMe2 group to obtain 2, the FeIII/FeII redox potential is shifted to less negative values due to the weaker coordination of the sterically hindered -NMe2 group to the iron(III) centre. A similar shift in the FeIII/FeII redox potential, from the less negative to the more negative region, is observed upon replacing both the pyridyl nitrogen donors in 2 by the 6-methylpyridyl donor to obtain 3, revealing that the methyl group on the pyridyl ring makes the pyridyl nitrogen coordinate weakly with the iron(III) centre. On replacing one of the pyridyl nitrogens in 2 by the N-Me-imidazolyl donor to obtain 4, the FeIII/FeII redox potential is shifted to more negative values due to the stronger coordination of the electron-releasing N-Me-imidazole (pKa: pyH+, 5.2; MeImH+, 7.0) nitrogen donor. The FeIII/FeII redox potential is shifted further to more negative values upon replacing both the pyridyl donors in 2 by the N-Me-imidazolyl donor to obtain 5, which is consistent with the Fe-Nim bond length observed for 5 being shorter than the Fe-Npy bond length for 2 (cf. above). However, the FeIII/FeII redox potential is shifted to a more positive value upon replacing both the pyridyl donors in 2 by the quinolyl donor to obtain 6, owing to the coordination of the bulky quinolyl group being weaker than that of the pyridyl donor. All the above observations reveal that the introduction of a strong donor, leading to a shift of the FeIII/FeII redox potential to more negative values, renders the FeN4OCl coordination sphere more compact, stabilizing the iron(III) oxidation state, whereas the FeN4OCl coordination sphere of complexes with quinolyl or pyridyl nitrogen donors is less compact, as is evident from their less negative FeIII/FeII redox potentials. Also, both electronic and steric effects play a major role in determining the Lewis acidity of the diiron(III) centre, and the redox potential is well tuned by varying the ligand donor functionalities.
Reaction of diiron(III) complexes with m-CPBA
The reaction of diiron(III) complex 2 with m-CPBA in methanol at room temperature was investigated using UV-visible spectroscopy. No appreciable changes were observed when 2 was treated with m-CPBA, revealing that the strong coordination of the chloride ion to the iron(III) centre renders the complex less reactive towards the oxidant. The diiron(III) complex 2 was therefore treated with silver perchlorate monohydrate to remove the coordinated chloride ions as silver chloride, which was separated by centrifugation. The electronic absorption spectrum of the supernatant solution is similar to that of the diiron(III) complexes, with a slight shift of the bands towards the higher-energy region. The reaction of the supernatant liquid with m-CPBA produced a pink-colored species showing a new absorption band around 565 nm (Fig. 5). ESI-MS analysis of the pink solution shows a prominent peak cluster at an m/z value of 495.96, corresponding to the presence of the intramolecular oxo-transferred species [(L2)Fe(5-Cl-salicylate)]+. When the pink solution was treated with a small amount of conc. HCl and extracted with dichloromethane, GC-MS analysis of the extract showed the formation of 5-chlorosalicylic acid, revealing that upon binding to the iron(III) centre m-CPBA undergoes intramolecular oxo transfer to its phenyl ring, that is, self-hydroxylation of m-CPBA. When iron(III) perchlorate was treated with L2, 5-chlorosalicylic acid and triethylamine in acetonitrile, the complex [(L2)Fe(5-Cl-salicylate)]+ was formed, as diagnosed by an absorption band around 565 nm. This confirmed that the new species formed upon reaction of 2 with m-CPBA corresponds to [(L2)Fe(5-Cl-salicylate)]+.
Interestingly, treatment of the mononuclear chlorido complex [Fe(L2)Cl2]+ of the same ligand L2 does not lead to intramolecular oxo transfer of m-CPBA, whereas the perchlorate complexes do take part in the intramolecular oxo-transfer reaction, revealing that at least two vacant sites on the complex species are needed for self-hydroxylation of m-CPBA, as reported earlier (Fig. 6). 85 So it is clear that upon treatment of the diiron(III) complexes with silver perchlorate, the dimeric core is broken to form monomeric solvent-coordinated species, which then take part in the intramolecular oxo transfer.
Scheme 3. Proposed mechanism of intramolecular arene hydroxylation.
Catalytic oxidation of alkanes by diiron(III) complexes
The experimental conditions and the results of the catalytic oxidation of alkanes into alcohols for all the diiron(III) complexes 1-6 are summarized in Tables 4 and 5. The conversion of alkanes into hydroxylated products was quantified by gas chromatographic analysis using authentic samples and an internal standard. The catalytic ability of the diiron(III) complexes towards the oxidation of alkanes like cyclohexane and adamantane was explored using m-CPBA, H2O2 and t-BuOOH as oxidants in a CH2Cl2 : CH3CN solvent mixture (3 : 1 v/v) at room temperature. H2O2 and t-BuOOH were found not to be effective oxidants for the hydroxylation of alkanes. Control reactions performed in the absence of the diiron(III) complexes with m-CPBA as oxidant yielded only very small amounts of the oxidized products for all the substrates (cyclohexane, 3 TON; adamantane, 5 TON). In the presence of the complexes, the oxidation of cyclohexane proceeds to give cyclohexanol as the major product, with 60% conversion of the oxidant to oxidized products. The observed A/K value for cyclohexane oxidation suggests the involvement of a high-valent iron-oxo species rather than a freely diffusing radical species (A/K ≈ 1 for a radical reaction) in the catalytic reaction. In contrast to the high TON observed when m-CPBA is used as oxidant for the hydroxylation of alkanes, complex 1 shows a very low TON when H2O2 or t-BuOOH is used as the oxidant. Upon replacing one of the pyridyl donors in 1 by the weakly coordinating -NMe2 group to obtain 2, the catalytic oxidation of cyclohexane provides 430 TON of cyclohexanol, 48 TON of cyclohexanone and 16 TON of ε-caprolactone. This may be because the weak coordination of the -NMe2 group, as revealed by the crystal structure, affects the bridging oxo group, leading to a decrease in the Lewis acidic character of the iron(III) centre, which may stabilize the high-valent iron-oxo species involved in the catalytic reaction. It was previously reported that the stability of the high-valent iron-oxo species generated from certain mononuclear iron(II) complexes correlates with the number of pyridine donors present in the primary ligand. So the present diiron(III) complex is also expected to stabilize the high-valent iron-oxo species, so that it can act as an efficient turnover catalyst for alkane hydroxylation. Thus the behavior of 2 towards alkane substrates can be compared with several non-heme iron catalysts: (a) the Gif family of catalysts, which afford mainly ketone products. 89,76,93,94 The A/K ratio (12.2) found for 2 corresponds most closely to that associated with catalyst group (c).
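The figures of merit used throughout this section (TON, A/K, 3°/2°) reduce to simple ratios over the GC-quantified product amounts. The sketch below illustrates that arithmetic using the product counts reported here for 2 (cyclohexane) and 1 (adamantane); the 3°/2° ratio is normalized for the 3:1 statistical excess of secondary over tertiary C-H bonds in adamantane, a common convention that we assume applies here. Ratios computed this way from the tabulated TONs may differ slightly from values quoted in the text, presumably because of rounding in the printed numbers.

# Selectivity metrics from product turnover numbers (mol product / mol catalyst).

def a_over_k(cyclohexanol, cyclohexanone):
    """Alcohol-to-ketone ratio for cyclohexane oxidation."""
    return cyclohexanol / cyclohexanone

def tert_over_sec(ad_1_ol, ad_2_ol, ad_2_one):
    """Normalized 3/2 selectivity for adamantane: 12 secondary vs 4 tertiary C-H."""
    return 3.0 * ad_1_ol / (ad_2_ol + ad_2_one)

# Complex 2, cyclohexane: 430 TON cyclohexanol, 48 TON cyclohexanone
print(f"A/K = {a_over_k(430, 48):.1f}")
# Complex 1, adamantane: 241 TON 1-adamantanol, 53 + 15 TON secondary products
print(f"3/2 = {tert_over_sec(241, 53, 15):.1f}")  # ~10.6, within the 9-18 range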
Upon replacing both the pyridyl nitrogen donors in 2 by 6-methylpyridyl donors to obtain 3, the catalytic oxidation of cyclohexane occurs to yield 390 TON of cyclohexanol, 52 TON of cyclohexanone and 14 TON of ε-caprolactone. Upon replacing the pyridyl donors in 1 by the 6-methylpyridyl donor, both the catalytic activity and selectivity decrease due to the weaker coordination of the latter. Interestingly, upon replacing one of the pyridyl donors in 2 by an N-Me-imidazolyl nitrogen donor to obtain 4, cyclohexane is oxidized to 450 TON of cyclohexanol, 51 TON of cyclohexanone and 12 TON of ε-caprolactone. Upon introduction of the strongly coordinating N-Me-imidazolyl group, both the catalytic activity and selectivity increased. But upon replacing both the pyridyl donors in 2 by N-Me-imidazolyl donors to obtain 5, the catalytic oxidation of cyclohexane proceeds to give 370 TON of cyclohexanol, 58 TON of cyclohexanone and 23 TON of ε-caprolactone. Upon introduction of two N-Me-imidazolyl nitrogen donors, it is expected that the total TON and selectivity increase; however, we observe both the total TON and selectivity to decrease. Upon replacing both the pyridyl nitrogen donors in 2 by quinolyl nitrogen donors to obtain 6, the catalytic oxidation of cyclohexane occurs to give 332 TON of cyclohexanol, 43 TON of cyclohexanone and 15 TON of ε-caprolactone.
Adamantane oxidation
The catalytic activity of all the diiron(III) complexes 1-6 towards oxidation of adamantane has also been explored, and the results are summarized in Table 5. All the complexes catalyze the oxidation of adamantane efficiently to give 1-adamantanol and 2-adamantanol as the major products along with 2-adamantanone as the minor product. Complex 1 catalyzes the oxidation of adamantane to give 241 TON of 1-adamantanol, 53 TON of 2-adamantanol and 15 TON of 2-adamantanone (309 TON in total). This trend is the same as that observed for cyclohexane oxidation, revealing that the electron-releasing nature of the donor atom plays a significant role in the catalytic reaction of the diiron(III) center as well as in the formation and stabilization of the high-valent iron-oxo intermediate. Complex 3 also catalyses adamantane oxidation to give 355 TON of oxidized products, which is higher than that observed for 1. In contrast, upon the introduction of the strongly σ-bonding N-Me-imidazolyl donor, the complexes 4 and 5 catalyze the oxidation with lower TON compared to complexes 1-3. However, complex 5 catalyzes the oxidation of adamantane with very high selectivity (3°/2°, 18.5), which may be due to the stabilization of the high-valent iron-oxo species. The high 3°/2° selectivity observed indicates the involvement of a high-valent iron-oxo species in adamantane oxidation also. Interestingly, all the present diiron(III) complexes show high selectivity in the hydroxylation of cyclohexane (A/K, 5-7; Table 4) and adamantane (3°/2°, 9-18; Table 5), signifying the involvement of metal-based oxidants rather than non-selective, freely diffusing radical species in the alkane hydroxylation. Under a nitrogen atmosphere, almost the same reactivity pattern was observed, revealing that a cyclohexylperoxide species is not involved in the catalytic reaction. This observation also strongly supports the involvement of metal-based oxidants.89,95,96 We propose that m-CPBA binds with the diiron(III) center by replacing a chloride ion to form an adduct.
Conclusions
A few non-heme μ-oxo-bridged diiron(III) complexes of tripodal 4N ligands have been isolated and characterized by spectral and electrochemical methods.
In the X-ray crystal structures of the molecules 2 and 5, both the iron(III) centers possess a distorted octahedral coordination geometry. All the diiron(III) complexes catalyze the hydroxylation of cyclohexane and adamantane efficiently, with good selectivity, in the presence of m-CPBA as oxidant. The observed selectivities for cyclohexane (A/K, 5-7) and adamantane (3°/2°, 9-18) suggest the involvement of a high-valent iron-oxo species rather than freely diffusing radicals in the catalytic reaction. Interestingly, 4 oxidizes cyclohexane (A/K, 7) very efficiently, up to 513 TON, while 5 oxidizes adamantane with good selectivity (3°/2°, 18) in the presence of m-CPBA within one hour. The stereoelectronic effects of the ligand donors play a vital role in determining the catalytic efficiency of the diiron(III) complexes towards hydroxylation of alkanes. Interestingly, the incorporation of the strongly coordinating N-methylimidazole donor renders the complex an efficient catalyst by stabilizing the high-valent iron-oxo intermediate species, whereas the incorporation of the weakly coordinating quinolyl donor makes the complex a relatively poor catalyst by destabilizing the high-valent iron-oxo intermediate species.
Conflicts of interest
There are no conflicts to declare.
Whole Exome Sequencing in Patients with Phenotypically Associated Familial Intracranial Aneurysm
Objective: Familial intracranial aneurysms (FIAs) are found in approximately 6%-20% of patients with intracranial aneurysms (IAs), suggesting that genetic predisposition likely plays a role in their pathogenesis. The aim of this study was to identify possible IA-associated variants using whole exome sequencing (WES) in selected Korean families with FIA.
Materials and Methods: Among the 26 families in our institutional database with two or more IA-affected first-degree relatives, three families that were genetically enriched (multiple, early onset, or common site involvement within the families) for IA were selected for WES. Filtering strategies, including a family-based approach and knowledge-based prioritization, were applied to derive possible IA-associated variants from the families. A chromosomal microarray was performed to detect relatively large chromosomal abnormalities.
Results: Thirteen individuals from the three families were sequenced, of whom seven had IAs. We noted three rare, potentially deleterious variants (PLOD3 c.1315G>A, NTM c.968C>T, and CHST14 c.58C>T), which are the most promising candidates among the 11 potential IA-associated variants considering gene-phenotype relationships, gene function, co-segregation, and variant pathogenicity. Microarray analysis did not reveal any significant copy number variants in the families.
Conclusion: Using WES, we found that rare, potentially deleterious variants in the PLOD3, NTM, and CHST14 genes are likely responsible for subsets of FIAs in a cohort of Korean families.
INTRODUCTION
The global prevalence of intracranial aneurysms (IAs) is estimated to be 3.2% [1]. In 6%-20% of patients with IA, one or more of their family members also have an IA [2]. These cases are defined as familial intracranial aneurysms (FIAs) and are reported to have a more severe phenotype, in terms of a higher number of aneurysms and a higher risk of rupture, than those without a familial history [3-5]. Several linkage studies and genome-wide association studies have suggested candidate genes; although the variants of these genes may explain some aneurysms in certain ethnic groups, they are rarely replicated across different studies. Previous studies have mostly focused on the presence of an aneurysm and not on its phenotypic presentation, such as its location, shape, and size. We hypothesized that if a specific gene was associated with IA, the characteristics of the aneurysm would be shared among members of the family. Therefore, to increase the possibility of gene identification, we reasoned that detailed information on the aneurysm should be obtained and considered when recruiting the family. The purpose of this study was to use next-generation sequencing (NGS) to identify potential IA-associated variants in families that share a specific phenotype.
Study Population
The Institutional Review Board of Asan Medical Center approved this prospective study (IRB No. 2018-1106). Informed written consent for blood sampling and magnetic resonance angiography screening was obtained from all study participants. IA was defined as a saccular dilatation of any size occurring in the intracranial arteries; FIA was defined as when at least two first-degree relatives in a family were diagnosed with IA. A family history of IA was identified in 28 (4.4%) patients among the 638 patients with IA in a tertiary hospital's prospectively collected database between January 2011 and August 2018.
We then selected families with FIA for further genetic testing according to the following inclusion criteria: 1) demonstration of the pedigree of the disease status in the family; 2) two or more affected members and one or more non-affected members available for genetic testing; 3) available angiographic data for the participants (both affected and unaffected); 4) genetically enriched samples, where the family has a severe phenotype of IA (multiple, early onset, ruptured) and common site involvement among families [3-5]; and 5) consent to participate provided by the patient and family members. Physical examination, ultrasonography, and/or computed tomography angiography were performed to rule out any known or unknown syndromes associated with IA. Other first-degree relatives who had not been screened for IA underwent magnetic resonance angiography.
Whole Exome Sequencing (WES) Analysis
Genomic DNA was extracted from peripheral blood cells using the Chemagic Magnetic Separation Module I (Chemagic MSM I) extraction robot with a DNA Blood 200 μL Kit. SureSelect Human All Exon V5 (Agilent Technologies) was used for library preparation, and sequencing was performed on the Illumina NextSeq500 platform (Illumina Inc.), which generated 2 × 150 bp paired-end reads. On average, 89.18% and 94.34% of the target regions were covered at 30× and 20×, respectively. Trimmomatic v.0.36 was used to trim sequencing adapters and low-quality read suffixes (i.e., Phred quality score < 10). Based on the standards and guidelines for the interpretation of sequence variants from the American College of Medical Genetics and Genomics and the Association for Molecular Pathology (ACMG/AMP) [17], the candidate variants were classified into five types: pathogenic variant (PV), likely pathogenic variant (LPV), variant of uncertain significance (VUS), likely benign variant (LBV), and benign variant (BV). We assigned PP3 (pathogenic supporting) to a variant if at least two out of three meta-predictors (CADD, REVEL, M-CAP) and SIFT or PolyPhen-2 calculated a pathogenicity score above their respective thresholds (a minimal sketch of this rule is given below). All clinically significant and novel variants were confirmed using independent Sanger sequencing [18].
Chromosomal Microarray
To identify submicroscopic deletions or duplications that are difficult to assess using whole exome sequencing (WES), copy number analysis was performed using CytoScan HD (Affymetrix) according to the manufacturer's protocol. Regions of homozygosity and copy number variants (CNVs) shared between affected and unaffected siblings were eliminated as potential candidate regions. Thresholds for the detection of candidate pathogenic CNVs in affected subjects were set to 25 CNV markers for deletions and 50 CNV markers for duplications. CNVs were interpreted based on the technical standards of a joint consensus recommendation of the ACMG and the Clinical Genome Resource (ClinGen) [19].
Clinical Phenotypes of Three Families with FIA Used in This Study
Thirteen individuals from three families were selected for WES. In all three families, two or more members had IAs at a common location (Fig. 1). In family A, the proband and his mother had paraclinoid aneurysms. The characteristics of this family included early onset (III-1, 2), the presence of multiple aneurysms (average number of aneurysms ≥ 2) in a common location (II-4, III-1), and relatively few risk factors (Table 1). The father of the siblings also had an IA in the middle cerebral artery.
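As an illustration, the following minimal Python sketch implements the PP3 assignment rule described in the WES methods above. The threshold values, the score orientations (e.g., SIFT flagging deleteriousness below its cutoff), and the names are hypothetical placeholders, not the cutoffs used in the study.

```python
# Hypothetical cutoffs for illustration only.
PP3_THRESHOLDS = {
    "CADD": 20.0,        # phred-scaled CADD score
    "REVEL": 0.5,
    "M-CAP": 0.025,
    "SIFT": 0.05,        # assumption: SIFT is deleterious when score <= cutoff
    "PolyPhen2": 0.85,
}

def assign_pp3(scores: dict) -> bool:
    """PP3 if >= 2 of 3 meta-predictors AND (SIFT or PolyPhen-2) agree."""
    meta_hits = sum(
        scores.get(tool, 0.0) >= PP3_THRESHOLDS[tool]
        for tool in ("CADD", "REVEL", "M-CAP")
    )
    sift_hit = scores.get("SIFT", 1.0) <= PP3_THRESHOLDS["SIFT"]
    polyphen_hit = scores.get("PolyPhen2", 0.0) >= PP3_THRESHOLDS["PolyPhen2"]
    return meta_hits >= 2 and (sift_hit or polyphen_hit)

# Example: a variant predicted damaging by CADD, REVEL, and SIFT -> True.
print(assign_pp3({"CADD": 25.1, "REVEL": 0.62, "M-CAP": 0.01, "SIFT": 0.01}))
```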
Family B had two affected siblings and two unaffected siblings in the second generation. The proband (II-4) had two small unruptured aneurysms, in the right internal carotid artery at the origin of the posterior communicating artery (P-COM) and at the top of the basilar artery, and her older brother (II-2) also had a small internal carotid artery aneurysm at the origin of the P-COM artery. Family C had three affected females (I-1, II-2, and II-4) whose IAs were commonly located in the paraclinoid region of the internal carotid artery.
WES Analysis and Variant Filtering
WES was performed in all living affected individuals and at least one unaffected first-degree relative of the probands. Among the > 90,000 variants initially discovered in the WES, an average of approximately 500 variants for each individual were selected after excluding those with insufficient coverage, a frequency of 0.01 or more in the population, and variants that did not affect the protein (Fig. 2). Through a family-based approach according to Mendelian inheritance patterns, 40, 38, and 27 variants (autosomal dominant) and 222, 76, and 162 variants (autosomal dominant with reduced penetrance) were selected in each family, respectively. There were no variants that showed segregation in an autosomal recessive pattern among the three families. Finally, 11 pathogenic or damaging variants potentially associated with IA were derived through pathogenicity prediction algorithms and knowledge-based prioritization from previous genetic studies.
Potential IA-Associated Genes
All variants were heterozygous and missense variants, except for the nonsense mutation in the C9orf92 gene (Table 2). The GBA and C9orf92 genes have been reported as susceptibility genes associated with brain aneurysm in the GWAS catalog, and the remaining genes were reported to be associated with aneurysm or vascular/connective tissue disorders in OMIM. The genes found in recent NGS studies were not identified in this study [8,20-27]. Of the 11 genes, the PLOD3, NTM, GBA, CHST14, SLC2A10, and C9orf92 genes have been reported to be related to IA or intracranial hemorrhage [28-31]. When assuming complete penetrance of the autosomal dominant variants, one variant of the PLOD3 gene in family A, no variants in family B, and two variants of the SLC2A10 and CHST14 genes in family C remained. Table 3 summarizes the function of all candidate genes.
Chromosomal Microarray
Several chromosomal losses or gains were found in each family, but most of the CNVs were benign or likely benign. One copy number gain of unknown significance was detected in family A, which did not segregate with the phenotype. In addition, no genes were potentially related to IA in the corresponding regions.
DISCUSSION
In this study, WES was performed in three selected FIA families to identify genetic variants associated with IAs. A total of 13 participants were sequenced, of whom 7 had IAs. Among the 11 potential IA-associated variants, we noted three rare, potentially deleterious variants (PLOD3 c.1315G>A, NTM c.968C>T, and CHST14 c.58C>T) after considering gene-phenotype relationships, gene function, co-segregation, and variant pathogenicity. The PLOD3 gene encodes lysyl hydroxylase 3 (LH3), which is involved in the post-translational modification of collagens, including type IV collagen [32,33].
As such, pathogenic variation of this gene can lead to complex connective tissue disorders resembling Stickler syndrome, Ehlers-Danlos syndrome, and epidermolysis bullosa [34-36]. Although vascular complications are rare manifestations of these syndromes, some cases of aneurysms or arterial dissection have been reported [34,35]. In addition, embryonic lethality with intracranial hemorrhage has been reported in LH3-knockout mice [33]. Although the PLOD3 mutation found in family A was a heterozygous variant, it could be a potential IA-associated variant considering the severe variability of the phenotype of PLOD3-related diseases [34]. Neurotrimin (NTM) belongs to the IgLON family of glycosylphosphatidylinositol (GPI)-anchored cell adhesion molecules and has been implicated in the promotion of neurite outgrowth and adhesion [37]. Luukkonen et al. [29] reported that the NTM gene is associated with IA and thoracic aortic aneurysm and suggested that truncations in the NTM gene caused IA and thoracic aortic aneurysm in a family. The 11q25 chromosomal region has been suggested as a susceptibility locus for both IA and aortic aneurysms in several independent linkage studies [38,39]. Although the individual (II-1 in family B) unaffected by IA had a rare PV of the NTM gene, it is still considered a potential IA-associated variant when considering the reduced penetrance or late onset of the aneurysm phenotype. Among the other candidates, the GBA and C9orf92 genes were suggested to be IA-susceptibility genes in a recent GWAS of the Korean population [28]. Biallelic PVs of the GBA gene cause Gaucher disease, and a heterozygous variant is a well-known risk factor for Parkinson's disease [42,43]. In a previous study, rs75822236 in the GBA gene showed the strongest association with the risk of IA formation (odds ratio = 161.46) with sufficient statistical power (P = 1.1 × 10⁻¹⁹), whereas the SNP in the C9orf92 gene was underpowered because of the small sample size [28]. Another candidate gene in family C, SLC2A10, encodes the facilitative glucose transporter GLUT10. Homozygous or compound heterozygous PVs of this gene cause arterial tortuosity syndrome, which is characterized by tortuosity, elongation, stenosis, and aneurysm formation in major arteries [44]. In contrast, heterozygous carriers of this gene variant are asymptomatic and do not show any notable vascular anomalies [45]. The heterozygous carriers (II-2 and II-4 in family C) in our study also did not show any arterial abnormalities that would indicate arterial tortuosity syndrome. In our study, we selected families that would be most genetically enriched for IAs considering the phenotypes, which include common locations of the IA among family members, multiple IAs, early onset, and fewer risk factors. In particular, our study is distinct from other studies in terms of the selection criterion that the affected members in each family should share the same aneurysm location. We assume that the intuition of the physicians who diagnosed and treated the patients played an important role in identifying their genetic predisposition. Many genetic studies have been performed on FIA, and several genetic variations have been identified through linkage studies, GWASs, and NGS; however, these can explain only a small proportion of the total IAs in certain ethnic groups [7].
The current literature suggests that marked genetic heterogeneity may exist in distinct populations, and only two genetic studies, a linkage study and a GWAS, have been performed on the Korean population to date [28,46]. Our study is the first FIA study using NGS in Korea and may serve as a basis for establishing a genetic database for Korean patients with aneurysms. If sufficient data on FIA are accumulated through genetic studies, simple genetic testing using an NGS panel could offer great clinical benefits in terms of risk stratification, treatment decisions, and the screening of unaffected family members. Multiple factors are intricately involved in aneurysm development [3-5]. Gene-environment interaction and phenocopy hinder genetic studies on this matter, especially in patients with multiple risk factors such as hypertension, smoking, old age, and female sex. Therefore, it is difficult to determine whether the genetic variations in our study were entirely responsible for the FIAs. Further validation using replication studies and expression or functional analyses is required to support our results. The limitations of this study are as follows. First, this study suggested several candidate genes, but these have not been fully validated. Further validation studies, such as replication studies in sporadic IA groups or functional analyses of the corresponding genes, are needed. Second, the basic assumption of this study was that there would be some rare variants with strong effects that could explain the IAs of each family. However, the IAs in the families may be caused by environmental factors or common genetic variants, rather than rare variants, even though we selected the most genetically enriched families with FIA in our database. Third, there were no candidate variants that were only found in the affected members of family B, and we thus had to find the most probable candidate (NTM) by assuming reduced penetrance. In addition, although the variants in the PLOD3 and CHST14 genes were segregated in families A and C, the number of affected members may not be sufficient to exclude the possibility of false-positive results. Lastly, some participants only underwent magnetic resonance angiography, which may have produced false-negative or false-positive findings, especially for tiny aneurysms. Despite these limitations, our study presented the use of a methodology for finding rare PVs using WES for IAs, a relatively common multifactorial disease. Further familial studies with more severe phenotypes and more affected members would be able to identify additional candidate genes with higher confidence. In conclusion, we studied three families that were genetically enriched for IA and performed WES to identify possible IA-associated variants. We found that the rare, potentially deleterious variants in PLOD3, NTM, and CHST14 are likely responsible for a subset of FIAs. Our findings may contribute to the understanding of IA pathogenesis, the establishment of an FIA genetic database in Korea, and further validation of IA candidate genes.
Conflicts of Interest
The authors have no potential conflicts of interest to disclose.
Allogeneic Bone Application in Association with Platelet-Rich Plasma for Alveolar Bone Grafting of Cleft Palate Defects
Aim: The aim of this study is to compare allogeneic bone grafts associated with platelet-rich plasma (ALBGs-PRP) to autogenous bone grafts (ATBGs) for alveolar reconstructions in patients with cleft lip and palate (CLP).
Materials and Methods: The setting was the Maxillofacial Surgery Service of the Comprehensive Care Center for CLP (CCCLP) in Curitiba (Paraná, Brazil). Patients: Thirty out of 46 patients, 8–12 years of age with pre- or trans-foramen unilateral clefts, were operated on by the same surgeon. Groups were assigned randomly after a coin toss determined the first surgery to be ALBG-PRP. Interventions: Pre- and post-surgery cleft defect severity was registered by a score system using superimposed digitalized periapical radiographs. The hypothesis that ALBG-PRP would be similar to ATBG was confirmed.
Results: There was no statistically significant difference (P > 0.05) in bone augmentation between the ALBG-PRP group (79.88%) and the ATBG group (79.9%).
Conclusion: ALBG-PRP is indicated as a successful treatment modality to reduce the need for additional donor sites and to reduce morbidity and hospital stay.
Introduction
A collaborative project on the epidemiology of craniofacial anomalies indicated in 2011 that the prevalence of cleft lip and of cleft lip and palate (CLP) was 3.28/10,000 and 6.64/10,000, respectively. A multidisciplinary approach is required to treat these patients from birth to adulthood in order to rehabilitate the missing hard and soft tissues. [1] The reconstruction of the alveolar process favors permanent teeth eruption, movement of teeth through the alveolar process using orthodontic forces, [2] and the reestablishment of esthetics and masticatory function with implant-supported prostheses. [3] Bone grafts stabilize the dental arch, optimize the periodontal support of the teeth adjacent to the cleft, and close the oral-nasal clefts, reducing speech difficulties and food regurgitation into the nasal cavity. [4] The autogenous bone graft (ATBG) is considered the gold standard treatment in the field of alveolar reconstructions for treating the CLP patient. [5] However, the morbidity involved in bone graft harvesting, the length of surgery, the risk of infection, and the limitations on bone graft quantity have been the impetus for the use of alternative methods for bone grafting. Among these techniques, allogeneic bone grafts (ALBGs) obtained from bone banks and the use of platelet-rich plasma (PRP) to optimize the grafting procedure appear promising. [6] This investigation aimed to assess bone augmentation using X-ray analysis after grafting with ALBG-PRP and to compare it to the gold standard treatment of CLP patients, which is the ATBG.
Materials and Methods
Of the 46 patients with CLP reviewed, 30 were included in the research and were offered surgery for alveolar cleft reconstruction in the Maxillofacial Surgery Service of the Comprehensive Care Center for CLP (CCCLP) in Curitiba, Paraná, Brazil.
We obtained institutional review board approval from the CCCLP committee before commencing this study. In addition, all patients included in this study had been informed of the research details and signed the consent form for this research protocol. The consent included that one of the two procedures would be performed and that it would be decided at the time of the surgery. The selection criteria of the patients were according to age, sex, and type of cleft. The surgeon reviewed all patients. Patients showed unilateral, pre- and trans-foramen clefts, according to Spina's classification. [7] The average age in the ATBG (control) group was 15.5 years and varied from 11 to 23 years; 9 patients were male and 6 were female. In the ALBG-PRP (experimental) group, the average age was 14.8 years and varied from 9 to 23 years; coincidentally, 9 patients were male and 6 were female. The patients were selected randomly for the surgical technique to be used. The randomization process used was to include the first case scheduled for surgery in the experimental technique and the second in the control group, and to repeat this pattern until 30 cases were scheduled. The control group consisted of 15 patients receiving ATBG (from the chin and ramus of the mandible) for alveolar cleft grafting. The experimental group received ALBG-PRP. The surgeries were performed under general anesthesia, with nasotracheal intubation contralateral to the fistula. Lidocaine 2% with 1:200,000 diluted epinephrine was used for the anesthesia. A mucoperiosteal flap was used, with two releasing incisions bilaterally in the premolar areas. After suturing the nasal mucosa, the cleft was filled with autogenous bone (control group) or ALBG-PRP (experimental group), according to the randomization indicated above. Next, the flap was rotated mesially and sutured with 5-0 nylon thread. Cephalosporin was applied intravenously while the patient was hospitalized and prescribed orally after the patient was discharged for the following 10 days. In addition, analgesia was controlled with dipyrone (450 mg/day). Four months after surgery, the patients were reexamined clinically and radiographically by means of digital periapical radiographs (Siemens Heliodent 60B, with 60 kV and 10 mA, an exposure time of 0.16 s, and a SENS-A-RAY 2000 system sensor using SUA II648-2 of Regam Medical Systems). The magnification of this periapical radiological system was approximately 2%. In addition, this system optimizes density and sharpness, reduces radiation dosages, and enhances the borders, facilitating the superposition of the graphic images for analysis. All periapical X-rays were obtained using the bisecting technique. The images were processed using Adobe Photoshop 4.5 from ADOS, and a numeric scale was created proportional to the size of the original image at a ratio of 21 pixels/mm. From this scale, parallel lines were drawn to evaluate the degree of density of the augmented bone [Figures 6-8]. The first line was drawn on the cervical region of the teeth adjacent to the cleft, representing less distortion, whereas the other lines were drawn parallel to this one with a 3 mm distance between them. The treatment was considered successful when the concavity format of the graft was detected between lines 1 and 2 (space A). The other graft formats, located between lines 2 and 3 (space B) or between lines 3 and 4 (space C), were considered failures, as illustrated in the sketch below.
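For clarity, the following minimal Python sketch translates this scoring into code: the 21 pixels/mm scale and the 3 mm line spacing are taken from the description above, while the coordinate convention (distance in pixels measured apically from line 1) and all names are illustrative assumptions.

```python
# Radiographic scoring sketch: classify graft level into spaces A, B, or C.
PIXELS_PER_MM = 21
LINE_SPACING_MM = 3
LINE_SPACING_PX = PIXELS_PER_MM * LINE_SPACING_MM  # 63 px between lines

def classify_graft(apical_extent_px: float) -> str:
    """Classify the graft by its distance (in pixels) below reference line 1."""
    if apical_extent_px <= LINE_SPACING_PX:        # between lines 1 and 2
        return "A (success)"
    if apical_extent_px <= 2 * LINE_SPACING_PX:    # between lines 2 and 3
        return "B (failure)"
    return "C (failure, additional surgery)"       # between lines 3 and 4

print(classify_graft(40))    # A (success)
print(classify_graft(100))   # B (failure)
print(classify_graft(150))   # C (failure, additional surgery)
```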
Clinically, only the grafts in space C needed an additional surgical procedure [Figure 8]. The ALBG was obtained from the Bone Bank of the Clinical Hospital of the Federal University of Paraná, in Brazil. We chose to use cortical-cancellous bone, with particle sizes ranging from 0.5 to 1.0 mm, which came in packages containing 5 g. The PRP was obtained in the laboratory where the blood was drawn, through the brachial vein, into 5 ml test tubes containing 0.5 ml of sodium citrate (anticoagulant). Twenty-two milliliters of blood were drawn from each patient to obtain 5 ml of PRP. The blood was centrifuged at 1000 rpm (Biofixette) for 7 min. After centrifugation, the plasma was separated from the red blood cells by pipetting in 0.5 ml increments. A platelet count was performed using a platelet counter before and after the centrifugation. The success and failure of the control and test groups were assessed using a multifactor ANOVA test.
Results
Data from the experimental group (number of charts, sex, and age) and the measurements of the ideal bone augmentations are presented in Table 1. The bone augmentation for each patient of the control group ranged from 34.12% to 100% of the total area planned for the bone fill of the defect. The mean bone augmentation for this group was 79.9%. The individual variation is shown in Table 2. The gender distribution among the patients showed 60% (n = 9) male and 40% (n = 6) female. Regarding the age distribution, only 26.7% (n = 4) of the patients were between 9 and 12 years old, which is the recommended age for surgery, [8] whereas the great majority, 73.3% (n = 11), were operated on at a nonideal age. In the test group, the worst performance was 14.16% and the best was 100% of the total area of bone augmentation, whereas the general mean value of bone augmentation for the group was 79.88%. The result for each patient is expressed in Table 1. The gender distribution shows that 60% (n = 9) of the patients were male and 40% (n = 6) were female. In relation to the age range, 40% (n = 6) were at the recommended age for surgery (9 to 12 years old), and 60% (n = 9) were operated on after that age. The results are registered in Tables 2 and 3. The statistical method applied to verify statistical significance between the independent groups was the Student's t-test. This analysis showed that none of the factors (group: control and experimental; sex; age: ideal and nonideal) had a statistically significant effect on the digital measurement variable at a 95% confidence level.
Discussion
The results demonstrated that age was a fundamental influencing factor, as the success rates of the control (receiving ATBG) and test (receiving ALBG-PRP) groups were higher for the patients operated on at the ideal age. Among the 9 patients from the experimental group who were operated on at a nonideal age, 5 reached total success and 4 reached partial success or failure. In the control group, among the 11 patients operated on at a nonideal age, 6 reached total success and 5 reached partial success or failure.
These data show the importance of the treatment being conducted in the correct age group. The analysis of successes or failures of the bone graft procedures performed in CLP patients depends on the investigation criteria. All patients benefit from the treatments provided; however, according to the criteria used, success should be evaluated individually. Among the success criteria, we can quote:
• Oral-nasal cleft closure;
• Bone support for the adjacent teeth and for impacted teeth;
• Bone bridge formation and stabilization of the maxillary segments; and
• Nasal alar base support and nasolabial contour.
The oral-nasal cleft closure stands out as being the most important outcome. Many authors [9-11] suggest the use of mucoperiosteal flaps when performing these types of ridge augmentation. Studies [9,12] suggest vertical incisions directed to the buccal vestibule and incision of the periosteum at the base of the flap to facilitate graft coverage, optimizing mobility and reducing suture tension. In the present investigation, the mucoperiosteal flap was used to maintain the keratinized gingiva. [11] The dental gingival tissue can also be preserved, reducing the need for free gingival grafts and resulting in better conditions for teeth to erupt in the site where teeth will receive orthodontic force or for prosthetic anchorage using dental implants.
Table 4: Result of the ANOVA analysis for the digital measurement variable. This analysis decomposes the variability of the digital measurement into contributions according to the factors: group, sex, and age. Since type III sums of squares were chosen, the contribution of each factor was measured with the effects of the other factors removed. The probability values tested the statistical significance of each factor. Since the P values in the statistical analysis are greater than 0.05, no factor showed a statistically significant effect on the digital measurement variable at a 95% confidence level.
ALBGs [19] lack growth factors and are therefore not considered an ideal bone to support tooth eruption. The aim of this investigation was to find an alternative bone graft for treating CLP patients that promotes less morbidity with efficiency similar to the ATBG. The excellent amount of newly formed bone that was obtained with the ALBG was confirmed with X-ray analysis, and the efficacy observed was confirmed by the possibility of the canine erupting into the newly formed bone site, in addition to the possibility of using orthodontic forces in the grafted sites. In addition, the difference between the groups in relation to the success rate of the treatments was not considered statistically significant; therefore, the clinical results of both procedures used in this investigation were very similar. The literature [15,16,18] on allogeneic bone grafting procedures states that they show promising results. The results obtained from this investigation are in accordance with the reviewed literature. Negative immune responses to the ALBG [14,16,17] were not observed in the group that received it. Allogeneic bone is osteoconductive, which makes it less efficient when compared to autogenous bone, which is both osteoconductive and osteoinductive. To increase the properties of this graft, we mixed in PRP, which contains growth factors, rendering the allogeneic bone osteoinductive as well. Studies [20-22] show the efficiency and benefit of PRP in the healing process.
The main effect of the PRP is the optimization of the tissue healing processes, mainly through platelet-derived growth factor (PDGF) and the transforming growth factors-β1 and -β2 (TGF-β1 and TGF-β2). Authors conducted a study [22] using polypeptides (growth factors) present in the blood plasma, PDGF and TGF-β1 and -β2, that showed PRP to have essential activity in tissue repair. There are divergences among authors regarding the various techniques used for obtaining PRP. The type of centrifuge to be used, the number of rotations per minute, the need for thrombin use, and the location (in office or laboratory) for conducting these procedures are debatable. PRP has shown success when obtained in a dental office setting. [20,21] However, it is ideal to obtain the PRP in a specialized laboratory, preferably in hospital facilities, to avoid transport and contamination risks. Divergences among authors [20-22] regarding PRP attainment did not alter the final results of the platelet concentrate. The literature shows no consensus on the quantity of PRP to be used for these types of surgical procedures. The authors of one study [23] reached 1,200,000 platelets/ml using a technique with two centrifugations but needed to add thrombin to facilitate blood clotting. In the present study, the mean value obtained for the PRP was 864,000 ± 59,560 platelets/ml using one centrifugation, with better efficiency and without needing thrombin. Therefore, only calcium chloride (3.3%) was used to revert the anticoagulant (calcium citrate, 0.150 M). Investigators [24] analyzed the magnification of conventional X-rays. In their study, they compared panoramic, bitewing, and periapical X-rays. The panoramic X-rays showed a 27% greater magnification. The other two X-rays showed an 8% magnification. This observation demonstrated that panoramic X-rays are not indicated for bone graft follow-ups due to the presence of distortions. The periapical X-ray analysis offers better reliability of the images, in addition to being a better assessment technique. Authors [25] stated that computerized tomography has the advantage of rendering three-dimensional images, which permit evaluation of the volume of the graft. These authors criticize the conventional X-rays because they can only show a difference in results 3 months after the procedure has been done, prolonging the clinical assay. With computerized tomography, images of graft incorporation can be obtained as early as 1 month after the surgical procedure. Equipment availability and cost-benefit should dictate the method used to evaluate the results. The most precise images obtained were from computerized tomography; however, the high cost and excess radiation should be taken into consideration. Among the conventional X-rays, the periapical examination is the most indicated, due to the ease of imaging and the reduced degree of magnification compared to occlusal and panoramic X-rays. However, long-term storage can become a problem. As an alternative, in computerized tomography and in conventional X-rays, the digitalized image can be a very interesting option. In addition, it permits instantaneous visualization of the images.
Some advantages of the digitalized image are storage on floppy discs and CD-ROM; maintaining image quality for a longer period; being a more inexpensive alternative; and submitting the patient to less radiation when compared to the computerized tomography technique. The disadvantage of periapical radiographs is that they produce a two-dimensional image. Considering the possibility of failure of the alveolar cleft grafting procedure, the literature [26,27] showed that oral infections (caries and periodontal disease), nasal infection, suture dehiscence, split-thickness and mucoperiosteal flap dehiscence, deciduous teeth extraction during surgery, insufficient maxilla immobilization, and excessive surgeries at the site result in fibrosis and reduced vascularization. Authors [28] stated that complications with suture dehiscence and bone sequestration increase in proportion to the patients' age. Authors [29] consider cleft size, mesial rotation of the adjacent teeth, and the permanent tooth eruption level to be factors that can contribute to bone graft failure or higher bone graft resorption. An investigator [10] emphasized the importance of dental hygiene before and after the surgical procedure. All these factors can influence the success rate of the treatment. To minimize these risks, it is important to work with a multidisciplinary team for treating CLP patients. Some problems can be avoided if the patients and their parents participate actively in increasing their hygiene level. However, some problems, such as tooth extraction and localized infections in the surgical site, in addition to choosing the correct flap design, depend on the surgeon's judicious evaluation. Other factors that could impose a negative effect on the treatment are cleft size, the need for a greater bone quantity, reduced vascularization of the graft, and excessive surgeries at the site hindering the flap mobility.
Conclusion
The authors concluded that allogeneic bone is an interesting alternative for alveolar cleft grafting procedures, with results similar to those of autogenous bone grafting procedures, in addition to having the advantage of reduced morbidity and surgery length. The PRP was shown to be an important auxiliary in the tissue repair process, optimizing ALBG and adjacent soft tissue healing.
Financial support and sponsorship
The authors did not receive financial support from any institution in order to conduct this study.
Conflicts of interest
There are no conflicts of interest.
On the importance of wind predictions in wake steering optimization
Wake steering is a technique that optimizes the energy production of a wind farm by employing yaw control to misalign upstream turbines with the incoming wind direction. This work highlights the important dependence between wind direction variations and wake steering optimization. The problem is formalized over time as the succession of multiple steady-state yaw control problems interconnected by the rotational constraints of the turbines and the evolution of the wind. Then, this work proposes a reformulation of the yaw optimization problem of each time step by augmenting the objective function with a new heuristic based on a wind prediction. The heuristic acts as a penalization for the optimization, encouraging solutions that will guarantee future energy production. Finally, a synthetic sensitivity analysis of the wind direction variations and wake steering optimization is conducted. Because of the rotational constraints of the turbines, as the magnitude of the wind direction fluctuations increases, the importance of considering wind prediction in a steady-state optimization is empirically demonstrated. The heuristic proposed in this work greatly improves the performance of controllers and significantly reduces the complexity of the original sequential decision problem by decreasing the number of decision variables.
Introduction
As global energy consumption increases, there is a strong willingness and necessity to decarbonize electricity production. Hence, renewable energies are becoming increasingly important (Chu and Majumdar, 2012). Wind energy, particularly, is the focus of considerable research and development, with turbines becoming larger and more numerous within wind farms. Ensuring efficient control as wind turbines operate is necessary to maximize the benefits of wind energy.
In the context of global warming, designing more efficient wind farms is essential. Wake steering is the subject of growing interest within the community to optimize the energy production of wind farms. However, most research regarding wind farm control technologies disregards the relevance of the wind direction variation. This work is motivated by a central question: from what magnitude of wind direction fluctuations is it necessary to consider the wind evolution in a wake steering optimization? To answer this question, this work proposes a new controller based on wind predictions and conducts a synthetic sensitivity analysis of wake steering and wind evolution using steady-state models and artificial wind data.
In wind farm optimization, the use of low-fidelity models (usually based on steady-state models) is favored over higher-fidelity models (usually based on computational fluid dynamics and real-time wake interaction) due to the complexity and computational load associated with solving dynamic equations for every turbine in the farm. Some recent works such as Janssens and Meyers (2024) explore real-time optimal control of wind farms using large-eddy simulations (LESs). However, this research area is still in the early stages, and for large-scale wind farm optimization, steady-state models are still widely used.
In wind farm flow control (WFFC), developing effective closed-loop controllers is essential for scaling to larger wind farms and dealing with unpredictable wind conditions. These controllers dynamically adapt their strategies in real time using continuous sensor feedback to guide their decisions. Model-based, closed-loop controllers, in particular, rely on simulators of the environment to conduct continuous optimization while the farm is in operation. Fast and computationally efficient simulation is crucial for these controllers to react quickly to wind and turbine changes. This work focuses on the optimization process itself, adhering to community standards by using widely accepted, open-source, low-fidelity simulators.
Wake effect
A single wind turbine reaches its maximum power output when fully aligned with the wind. When the wind direction changes, a turbine uses its yaw to rotate its nacelle on a horizontal plane. By using active yaw control, a wind turbine can keep track of the changes in the wind direction and ensure maximum energy production over time by minimizing its misalignment with the wind. This corresponds to greedy control, where a wind turbine solely tries to maximize its power output (Yang et al., 2021).
In the space immediately behind a turbine, the wind speed is slower and more turbulent. Such a phenomenon is called the "wake effect" and is the natural consequence of wind power extraction by the machine. When a wind turbine is located in the wake of another, its power output is reduced (because of a slower wind speed) and its fatigue increased (because of the turbulence). Within a wind farm, depending on the wind direction and the farm layout, most of the turbines can be affected by the wakes of others.
Because of wake effects, greedy control can be suboptimal within a farm. Therefore, instead of keeping every turbine aligned with the wind, yaw control can also be used to voluntarily misalign some turbines in relation to the direction of the wind (Boersma et al., 2017). When a turbine is misaligned with the wind, its wake effect is steered. By intelligently yawing the turbines and steering the wake effects, the wind flow across the turbines can be optimized. Such a method is known as WFFC (Meyers et al., 2022). A simple example of a two-turbine wind farm is given in Fig. 1.
Currently implemented wake steering strategies usually involve lookup tables (LUTs) (Fleming et al., 2017; Siemens Gamesa Renewable Energy, 2019). Wake steering strategies are computed for a finite set of different wind conditions prior to the farm operation. The yaw angles of each turbine are computed with steady-state models, regardless of the wind and turbine dynamics. Because a wake steering strategy creates misalignment with the wind, it is highly dependent on variations in the direction of the wind. The wind direction can change over time, and yaw control is constrained by the limited rotational speed of the nacelles. If the wind varies in directions and frequencies that the yaw actuators cannot easily track, computing adequate wake steering strategies over time can be a challenging task.
Wind direction dynamics
The study of wind direction dynamics is gaining interest within the research community. Wind direction dynamics can be broken down into large-scale drifts and small-scale fluctuations (van Doorn et al., 2000) and can be observed on different scales: the synoptic scale describes long distances and extended time periods, the mesoscale depicts the farm level and time periods from days to weeks, and the microscale corresponds to the turbine level and variations from seconds to minutes. The wind direction is fundamentally nonstationary, and there is incomplete knowledge regarding the physical and statistical characteristics of wind direction fluctuations across specific length scales and timescales that are essential for effective WFFC (Dallas et al., 2024).
As the farm operates, the wind direction varies in both time (at the farm level) and space (at the turbine level). The study by von Brandis et al. (2023) found that spatial wind direction changes relevant to the operation of wind farm clusters in the German Bight exceed 11° in 50% of cases. In this present work, numerical simulations are run with steady-state wake models. Therefore, only variations of the wind at the farm level are studied. When the direction varies over time, this work considers it to affect the whole wind farm.
WFFC is most beneficial at low wind speeds because this is where small changes in the wind speeds can lead to important power output variations. The same wake steering strategy will lead to higher power gains at low speeds compared to higher wind speeds. Because the wind direction variability is higher for low wind speeds (von Brandis et al., 2023; van Doorn et al., 2000; Dallas et al., 2024), the study of the dependence between wind direction variations and yaw control is important. Also, because the impact of climate change on wind dynamics is unknown, designing robust controllers is necessary for long-term operation.
Related works
As tracking the wind direction is essential for wind turbines, the literature is rich in studies seeking better wind direction tracking mechanisms. Song et al. (2018) developed a model predictive control (MPC)-based controller on a finite control set to track the wind directions. Hure et al. (2015) designed a yaw controller based on very short-term wind predictions. But performing WFFC and wake steering is a more complex optimization problem.
LUTs can be adapted for dynamic control with different methods. Usually, a low-pass filter is used to apply control only for high variations of the direction. A sampling method can be used to adjust the yaw control frequency, and hysteresis mechanisms avoid unnecessary yaw control and restrict the yaw actuators (Kanev, 2020a). Simley et al. (2021) improved a traditional LUT by anticipating the wind direction changes ahead of upstream turbines. Kanev (2020b) performed WFFC with a receding horizon using gradient-based optimization and ran tests in large-eddy simulations under realistic variations in wind direction and speed. But the wake steering strategies of an LUT fundamentally do not consider the wind dynamics; only their implementation does.
Regarding machine learning (ML) methods, and more particularly reinforcement learning (RL), which is becoming a source of great interest to the scientific community, wind direction variations are often overlooked. The importance of the wind direction dynamics is clearly pointed out by Saenz-Aguirre et al. (2019) and Saenz-Aguirre et al.
(2020), but most of the studies carried out later only consider static or quasi-static wind directions. Some recent works have started to consider time-varying wind directions in WFFC optimization (Kadoche et al., 2023).
Contributions
The remainder of this paper is structured to mirror the three main contributions. Each contribution forms the basis of an individual section, and Sect. 5 concludes. The contributions and their corresponding sections are as follows.
- This work proposes a discretized formalization of the WFFC problem over time as the succession of multiple steady-state optimization problems interconnected by the rotational constraints of the turbines and the evolution of the wind. Due to the discretization hypothesis and the yaw actuation constraints, the important hypotheses regarding the transition between one steady state and the next are formulated. This formalization is conducted in Sect. 2.
- To develop a prediction-based controller, this work presents a reformulation of the original instantaneous, steady-state sequential decision problem over a future time window. The default objective function is augmented by a new heuristic, computed on a prediction of the wind. The proposed heuristic acts as a penalization for the optimization without increasing its dimension and encourages solutions that will guarantee future energy production. The heuristic and the other studied controllers are detailed in Sect. 3.
- This work conducts a sensitivity analysis of the wind direction variations and wake steering optimization. It empirically demonstrates the importance of wind-prediction-based control when the magnitude of the wind direction fluctuations becomes large. The new proposed heuristic greatly improves the performance of a traditional steady-state wake steering optimization when the variations of the wind direction are important. Numerical simulations using synthetic wind data are conducted in Sect. 4.
Problem formalization
The environment is composed of a wind farm and some exogenous variables related to the wind direction and the wind speed. At a time step t, a turbine i is characterized by its absolute angular position β_t^i ∈ [0, 360]° and its relative orientation or yaw (often used to compute the power output) α_t^i = f_yaw(K_t, β_t^i) = ((K_t − β_t^i + 180) mod 360) − 180. Adding and subtracting 180 ensures that the yaw stays in the range [−180, 180]. As illustrated in Fig. 2, the yaw corresponds to the rotational movement going from the absolute angular position β_t^i to the wind direction K_t, such that (β_t^i + α_t^i) mod 360 = K_t. Positive values of the yaw indicate that the turbine is rotated anticlockwise from the wind direction, and negative values of the yaw indicate that the turbine is rotated clockwise from the wind direction.
At a time step t, the yaw setting u_t^i ∈ [u_min, u_max]° of a turbine i corresponds to the rotational movement of the turbine between time steps t and t + 1. Because of mechanical constraints related to the yaw actuator of the nacelle, the yaw setting is bounded between two consecutive time steps. As illustrated in Fig. 2, the setting is used to update the orientation of the turbine, β_{t+1}^i = f_control(β_t^i, u_t^i) = (β_t^i + u_t^i) mod 360. (2)
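A minimal Python sketch of this angle arithmetic is given below, with all angles in degrees. The names mirror f_yaw and f_control from the formalization above; the numeric example and the [-8, 8] actuator bound are illustrative assumptions.

```python
def f_yaw(wind_dir: float, orientation: float) -> float:
    """Yaw alpha in [-180, 180] such that (orientation + alpha) % 360 == wind_dir."""
    return (wind_dir - orientation + 180.0) % 360.0 - 180.0

def f_control(orientation: float, setting: float) -> float:
    """Update the absolute angular position of the nacelle."""
    return (orientation + setting) % 360.0

def clip_setting(setting: float, u_min: float, u_max: float) -> float:
    """Enforce the rotational constraints of the yaw actuator."""
    return max(u_min, min(u_max, setting))

# A turbine at 260 deg under a 270 deg wind is misaligned by +10 deg; with
# settings bounded to [-8, 8] deg per step, one step cannot fully realign it.
beta, K = 260.0, 270.0
u = clip_setting(f_yaw(K, beta), -8.0, 8.0)  # -> 8.0
beta = f_control(beta, u)                    # -> 268.0
print(u, beta, f_yaw(K, beta))               # 8.0 268.0 2.0
```

The wrap-around through 180 handles wind directions on either side of north, so a nacelle at 350° under a 0° wind correctly yields a yaw of +10° rather than −350°.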
Power
The power curve of a turbine gives the theoretical power output (megawatts, MW; y axis) of the machine as a function of the wind speed ν [m s−1] (x axis), considering no yaw misalignment, such that P(ν) = (1/2) ρ A C_P ν³, with ρ [kg m−3] the air density, A [m²] the rotor blade area, and C_P the power coefficient of the turbine. The theoretical power output is strictly positive if the wind speed is within certain bounds, i.e., ν ∈ [ν_cut-in, ν_cut-out]. At a time step t, the power output of a turbine i considering yaw misalignment [MW] is computed from the power curve and the yaw angle such that P_t^i = f_power(ν_t^i, α_t^i) = P(ν_t^i) · cos(α_t^i)^p · 1{α_t^i ∈ [α_cut-in, α_cut-out]}, with p a parameter accounting for power losses due to misalignment and [α_cut-in, α_cut-out]° a safety bound for the yaw taken into account by the indicator function. Because too much misalignment with the wind can damage the machine, if the yaw is too great, the turbine is shut down and its power output is null.
Figure 2. The variables are the wind direction K_t, the absolute angular position β_t^i of the turbine, and the yaw α_t^i of the turbine. The wind direction indicates where the wind is coming from; e.g., a wind direction of 270° indicates a wind coming from the west. Here, the nacelle is misaligned with the incoming wind direction: the turbine is rotated clockwise from the wind, so α_t^i < 0. The yaw setting u_t^i gives the next orientation of the turbine at time step t + 1.
Policy
A policy π is a function returning the yaw settings (u_t^0, u_t^1, ..., u_t^{N−1}) of all the turbines at a time step t given a state s_t. Each wind farm controller is associated with a specific policy. In this work, the state s_t may be composed of an observation of the current wind (K_t, V_t), a prediction of the wind at time step t + 1 (K_{t+1}, V_{t+1}), a prediction of the wind at time step t + 2 (K_{t+2}, V_{t+2}), and so on until time step t + L, with L the prediction horizon. Therefore, the general form of a state is s_t = {K_t, V_t, K_{t+1}, V_{t+1}, ..., K_{t+L}, V_{t+L}, {β_t^i}_{i∈{0,1,...,N−1}}}. States can be categorized based on two distinct properties: with perfect or imperfect information and with or without foresight knowledge. Depending on the possible combinations, there are four classes of states, listed in Table 1.
System evolution
An episode is defined by H time steps during which turbines are controlled via their yaw. An episode is characterized by time series for the wind directions and the wind speeds as well as initial positions for the nacelles. During an episode, it is assumed that all the states belong to the same class (defined in Table 1) and the policy is presumed to be stationary (it does not change over time).
The full evolution of an episode is described in Algorithm (1). At each time step, the policy returns the yaw settings based on the current state, the system is updated, and the power output of the farm is computed. The yaw setting of a turbine i at the end of time step t is indexed t + 1 (because it has been updated) and it is the one used for the power computation of time step t.
At a time step t, to compute the power output of each turbine, local wind velocities are needed. Such computations rely on complex fluid mechanics, depending on the incoming wind and the updated yaw angles of each turbine. For optimization, performing such complex computations is computationally expensive. Therefore, in this work, these computations are carried out by a steady-state simulator ν_t^i = f_simulation^i(K_t, V_t, {α_{t+1}^j}_{j∈{0,1,...,N−1}}), which is used as a substitute for real-life measurements. The simulation is said to be steady-state because it only depends on the current global wind data and the updated yaw angles. It does not consider previous wind data, previous yaw angles, or time delays in the wake propagation.
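The sketch below gives a minimal Python rendition of this per-step loop, reusing the f_yaw, f_control, and clip_setting helpers from the previous sketch. The power-curve constants, the loss exponent p, and the stubbed-out policy and simulator are illustrative assumptions rather than the values or models used in this work.

```python
import math

RHO, AREA, CP, P_EXP = 1.225, 12469.0, 0.45, 1.88      # hypothetical values
NU_CUT_IN, NU_CUT_OUT = 3.0, 25.0                      # wind speed bounds
ALPHA_CUT_IN, ALPHA_CUT_OUT = -40.0, 40.0              # yaw safety bounds

def power_curve(nu: float) -> float:
    """Theoretical power output (W) for wind speed nu, no misalignment."""
    if not NU_CUT_IN <= nu <= NU_CUT_OUT:
        return 0.0
    return 0.5 * RHO * AREA * CP * nu ** 3

def f_power(nu: float, alpha: float) -> float:
    """Power output with cos^p misalignment losses and yaw safety bounds."""
    if not ALPHA_CUT_IN <= alpha <= ALPHA_CUT_OUT:
        return 0.0
    return power_curve(nu) * math.cos(math.radians(alpha)) ** P_EXP

def run_episode(policy, simulator, winds, betas):
    """Roll out one episode: control, rotate, simulate local speeds, produce."""
    total = 0.0
    for K, V in winds:                         # wind quasi-constant per step
        settings = policy(K, V, betas)         # control policy operation
        betas = [f_control(b, u) for b, u in zip(betas, settings)]
        alphas = [f_yaw(K, b) for b in betas]  # updated yaws, indexed t + 1
        local_nus = simulator(K, V, alphas)    # steady-state wake model
        total += sum(f_power(nu, a) for nu, a in zip(local_nus, alphas))
    return total

# Toy demo: two turbines, a wake-free stand-in simulator, a greedy policy.
greedy = lambda K, V, betas: [clip_setting(f_yaw(K, b), -8.0, 8.0) for b in betas]
no_wake = lambda K, V, alphas: [V] * len(alphas)
winds = [(270.0, 8.0), (275.0, 8.5), (280.0, 9.0)]
print(run_episode(greedy, no_wake, winds, betas=[270.0, 265.0]))
```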
The simulation is said to be steady-state because it only depends on the current global wind data and the updated yaw angles. It does not consider previous wind data, previous yaw angles, or time delays in the wake propagation. The evolution of an episode over time is constrained by the rotational bounds of the turbines and the variations of the wind.

At each time step, during the "control policy" operation, a controller knows the evolution mechanisms of the system; i.e., it can conduct any computations with the f control , f yaw , f i simulation , and f power functions, but based on the wind data provided by the state. Because such data can be noisy, all the computed values can be inexact. For example, at a time step t, if a controller computes the yaw of a turbine i based on its updated orientation β i t+1 , it would be equal to α̂ i t+1 = f yaw (K̂ t , β i t+1 ). Because the observed wind direction K̂ t can be different from the true wind direction K t , the yaw α̂ i t+1 estimated by a controller can be different from its true value α i t+1 .

Transition regime

At a time step t, for a turbine i, and due to the steady-state nature of the simulation, the WFFC problem thus formalized considers a single power output P i t . In reality, during a time step, the wind is time-varying and a turbine takes time to rotate because of mechanical constraints. Therefore, the discretization of the continuous control problem results in the loss of some information and possibly less accurate power outputs. To ensure that the discretized power outputs are good approximations, from one time step to another, a turbine is assumed to rotate immediately and the wind is assumed to be quasi-constant.

The duration of a time step is always considered constant during an episode. At a time step t, when a setting u i t is applied to a turbine i, the rotational time T r (minutes) for the turbine to go from its current orientation β i t to its next orientation β i t+1 is always considered largely inferior to the duration of the time step, i.e., T r ≪ Δt for all u i t ∈ [u min , u max ]. A turbine always rapidly reaches its target position before the end of the time step duration. But during a time step, no other control will be applied to the turbine. For this reason, the rotational constraints [u min , u max ] need to be consistent with the time step duration Δt.

The coherence time T c (minutes) of a wind variable (either the direction or the speed) is the maximum duration during which the variable is quasi-constant. If the coherence time of the wind direction is strictly smaller than the time step duration, a discretized value K t would stretch too far away from its corresponding continuous signal. The same goes for the speed. Therefore, in this work, the coherence time is always equal to the time step duration, i.e., T c = Δt, for both the direction and the speed.

Controllers

At a time step t, the yaw settings (u 0 t , u 1 t , ..., u N−1 t ) are denoted as u t . A controller is defined by its policy π(s t ) with the state s t described in Sect. 2.4. This work compares a naive control, where each turbine is aligned as much as possible with the wind, and three optimized wake steering control strategies. In an episode, at each time step t, during the control policy operation of Algorithm (1), each controller computes the yaw settings such that u t = π(s t ) by maximizing a specific objective function f obj (s t , u t ) with regard to the turbines.
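Before detailing the controllers, here is a minimal sketch of the episode loop of Algorithm (1), reusing the f_yaw and turbine_power helpers sketched earlier. The policy and f_simulation arguments are assumed callables standing in for a controller and for the steady-state simulator; the dictionary state is an illustrative simplification.

```python
import numpy as np

def run_episode(policy, K, V, beta0, H, f_simulation):
    """Episode evolution over H time steps (sketch of Algorithm 1)."""
    beta = np.asarray(beta0, dtype=float)      # initial nacelle positions
    total_power = 0.0
    for t in range(H):
        state = {"K": K[t], "V": V[t], "beta": beta.copy()}
        u = np.asarray(policy(state))          # "control policy" operation
        beta = (beta + u) % 360.0              # f_control: orientation update
        alpha = f_yaw(K[t], beta)              # updated yaws, indexed t + 1
        nu = f_simulation(K[t], V[t], alpha)   # local wind speed per turbine
        total_power += sum(turbine_power(n, a) for n, a in zip(nu, alpha))
    return total_power
```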
Naive controller

The naive controller always tries to keep turbines aligned with the current wind direction as much as possible. It is a weak baseline as it does not conduct any wake steering optimization. It runs with no foresight (i.e., L = 0), as it is only concerned with the current observed wind direction K t . Therefore, the state s t is reduced to {K t , {β i t } i∈{0,1,...,N−1} }. It consists of a greedy control (no wake steering) where the objective function at a time step t minimizes the amplitude of the yaws, i.e.,

f obj (s t , u t ) = − Σ i∈{0,1,...,N−1} |α i t+1 |,

with α i t+1 = f yaw (K t , β i t+1 ),
with β i t+1 = f control (β i t , u i t ).

At a time step t, the rotational movement required for a turbine i to stay aligned with the observed wind direction is equal to f yaw (K t , β i t ). Because of the rotational constraints, this movement is clipped such that it is always an acceptable setting with regard to the yaw actuator, giving a closed-form expression for the solution, defined as

u i t = min(u max , max(u min , f yaw (K t , β i t ))). (11)

Wake steering

Compared to naive control, wake steering is used to optimize the power output of the farm. In this work, two distinct wake steering strategies are used. One is based only on the instantaneous wind data, and one is based on instantaneous and predicted wind data. The instantaneous controller searches for the yaw settings maximizing the instantaneous power output of the farm. The prediction-based controller maximizes the instantaneous and future power outputs. At each time step t, the same Gauss-Seidel (GS) method is used for both controllers, but with different objective functions. In this work, optimization is conducted with a GS method (described in Algorithm A1). A similar approach was first proposed by Fleming et al. (2022) with a serial-refine algorithm.

The GS method works as follows. A first solution is initialized from the naive controller, where each initial yaw setting keeps its turbine aligned as much as possible with the wind. Then, the GS method iterates over each turbine, from upstream to downstream ones. At each iteration, it solves the optimization problem for the current turbine, considering the yaw settings of all others fixed, by conducting a grid search over a discretized solution space S = {u min + l · (u max − u min )/(n y − 1)} for all l ∈ {0, 1, ..., n y − 1}, with n y being a precision parameter. Once optimized, the setting of the current turbine is updated, and it goes to the next one.

Instantaneous controller

The instantaneous controller searches for the yaw settings maximizing the immediate power output of the farm. It always runs under no foresight (i.e., L = 0), as it performs wake steering for the current observed wind data only. Therefore, the state s t is reduced to {K t , V t , {β i t } i∈{0,1,...,N−1} }. It is a steady-state optimization performed on one time step where the objective function at a time step t is the immediate normalized power output, i.e.,

f obj (s t , u t ) = (1/N) Σ i∈{0,1,...,N−1} P i t ,

with ν i t = f i simulation (K t , V t , {α j t+1 } j∈{0,1,...,N−1} ), (13)
with α i t+1 = f yaw (K t , β i t+1 ),
with β i t+1 = f control (β i t , u i t ).

Prediction-based controller

A traditional prediction-based controller searches for the yaw settings of time steps t, t + 1, ..., t + L that maximize the power output over that horizon. It always runs with foresight (i.e., L ≥ 1). The corresponding sequential decision problem over a future time window can be stated under a form usually exploited by the MPC community, defined as

max u t ,u t+1 ,...,u t+L Σ k∈{0,1,...,L} (1/N) Σ i∈{0,1,...,N−1} P i t+k . (16)

The optimization problem thus described multiplies the number of decision variables by L + 1. Also, the computation of the local velocities at a given time step depends on all the previous yaw settings. Therefore, the prediction-based decision problem significantly increases the complexity, and because there is no simple solution, this work proposes a reformulation.
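Before that reformulation, a sketch of the naive closed form and of one GS pass as described above may help; farm_power is an assumed wrapper around the simulator and the power model, and the turbine array is assumed to already be ordered from upstream to downstream.

```python
import numpy as np

def naive_settings(K, beta, u_min=-15.0, u_max=15.0):
    """Closed-form naive control (Eq. 11): clip the wind-tracking
    rotation to the actuator bounds."""
    return np.clip(f_yaw(K, np.asarray(beta, dtype=float)), u_min, u_max)

def gauss_seidel(K, V, beta, farm_power, u_min=-15.0, u_max=15.0, n_y=120):
    """One GS pass: optimize turbines one by one, holding the others fixed.

    `farm_power(K, V, u)` is an assumed callable returning the farm power for
    candidate settings u; in the paper it would wrap the steady-state simulator.
    """
    u = naive_settings(K, beta, u_min, u_max)                 # initial solution
    grid = u_min + np.arange(n_y) * (u_max - u_min) / (n_y - 1)
    for i in range(len(u)):                                   # upstream -> downstream
        best, best_p = u[i], -np.inf
        for cand in grid:                                     # 1-D grid search
            u[i] = cand
            p = farm_power(K, V, u)
            if p > best_p:
                best, best_p = cand, p
        u[i] = best                                           # update, move on
    return u
```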
The objective function given by Eq. (16) can be split between the current time step t and the next ones, from t + 1 to t + L, such that

Σ k∈{0,1,...,L} (1/N) Σ i P i t+k = (1/N) Σ i P i t + Σ k∈{1,...,L} (1/N) Σ i P i t+k . (21)

The first term of Eq. (21) is the normalized power output of the farm for the current time step. It corresponds to the objective function of the instantaneous controller defined in Sect. 3.2.1. It only depends on the current yaw settings u t . Now, focusing on the second term, the closed-form expression of f power is written, giving

Σ k∈{1,...,L} (1/N) Σ i f power (ν i t+k ) · cos p (α i t+k+1 ) · 1{α i t+k+1 ∈ [α cut-in , α cut-out ]}. (22)

The complexity brought by the prediction-based controller comes from the fact that Eq. (22) depends on the local velocities ν i t+k and the updated yaw angles α i t+k+1 corresponding to the optimized yaw settings of each future time step. To decrease the complexity, this work proposes modifying Eq. (22) in the following way.

- Each local velocity ν i t+k is replaced by the corresponding predicted global wind speed V̂ t+k . It reduces the complexity coming from the steady-state simulation by removing the dependence on the updated yaw angles. While this simplification removes future local wind data, it retains some information regarding potential future energy production by relying on the predicted global wind data only.

- Each updated yaw angle α i t+k+1 depending on the optimized yaw setting u i t+k is replaced by the expected yaw angle α̂ i t+k+1 if a naive controller were used instead. It reduces the complexity coming from the optimization, as there is a closed-form expression for the naive controller, as provided by Eq. (10). Replacing the wake steering optimization performed in the future with a naive wind tracking solution reduces the number of optimization variables of the original problem while keeping good solutions. Indeed, proper yaw settings are known to be close to the wind on average.

- The cosine function at power p of each yaw angle is replaced with a simpler penalization for yaw misalignment. The penalization chosen corresponds to 1 minus the normalized absolute value of that yaw angle. It provides linearity and better interpretability.

- The indicator function is removed so that there is no discontinuity. Even if a yaw is too great, it can be of some interest for the optimization to know about the potential power output. The more a turbine is misaligned, the less likely it will be to produce energy and the more it will be penalized.

- The only variables specific to each turbine are the yaw angles updated from a naive controller, which are already normalized. Therefore, it becomes unnecessary to normalize the overall expression by N.

With such modifications, Eq. (22) becomes a new heuristic H t defined as

H t (s t , u t ) = Σ k∈{1,...,L} γ^k · f power (V̂ t+k ) · Σ i∈{0,1,...,N−1} (1 − |α̂ i t+k+1 |/180), (23)

with α̂ i t+k+1 = f yaw (K̂ t+k , β̂ i t+k+1 ),
with β̂ i t+k+1 = f control (β̂ i t+k , û i t+k ), û i t+k computed with a naive controller defined by Eq. (10), (25)
with β̂ i t+1 = f control (β i t , u i t ), u i t computed from a wake steering optimization. (26)

Because this new proposed heuristic depends on neither the future optimized yaw settings (naive control) nor the future local velocities (no simulation), it does not increase the number of optimization variables. The heuristic is a scalar acting as a penalization for the optimization. The final objective function of the prediction-based controller can finally be written as

f obj (s t , u t ) = (1/N) Σ i∈{0,1,...,N−1} P i t + H t (s t , u t ),

with ν i t = f i simulation (K t , V t , {α j t+1 } j∈{0,1,...,N−1} ), (28)
with α i t+1 = f yaw (K t , β i t+1 ),
with β i t+1 = f control (β i t , u i t ),
with H t (s t , u t ) defined by Eq. (23).
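The form of H t written above is a reconstruction (the extracted equations are incomplete), so the rollout below should be read the same way: the γ^k discount indexing and the normalization of the yaw by 180 are assumptions consistent with the surrounding text. It reuses the naive_settings, f_yaw, and power_curve helpers from the earlier sketches.

```python
import numpy as np

GAMMA = 0.99  # discount factor (Sect. 4.2)

def heuristic(u, beta, K_pred, V_pred, gamma=GAMMA):
    """Sketch of H_t: roll the turbines forward with naive control over the
    predicted wind, summing the discounted theoretical power outputs, each
    weighted by one minus the normalized yaw misalignment.

    K_pred[k-1], V_pred[k-1] stand for the predictions at time step t + k.
    """
    beta = (np.asarray(beta, dtype=float) + np.asarray(u)) % 360.0  # after u_t
    h = 0.0
    for k in range(1, len(K_pred) + 1):
        u_naive = naive_settings(K_pred[k - 1], beta)   # wind-tracking settings
        beta = (beta + u_naive) % 360.0                 # expected orientations
        alpha = f_yaw(K_pred[k - 1], beta)              # expected yaw angles
        penalty = np.sum(1.0 - np.abs(alpha) / 180.0)   # misalignment penalization
        h += gamma ** k * power_curve(V_pred[k - 1]) * penalty
    return h
```

The prediction-based objective then simply adds this scalar to the instantaneous normalized power output, without enlarging the decision space.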
The heuristic is the discounted weighted sum of the future theoretical power outputs. By choosing certain optimized yaw settings u t for the current time step, the heuristic uses a naive controller over a future time horizon of L time steps to evaluate how well the turbines will manage to stay aligned with the predicted wind directions. The higher the potential future energy production, the more critical it becomes for the current yaw settings not to rotate the turbines too far away from the predicted wind direction.

For example, if the future expected power outputs are high, the heuristic will encourage yaw settings that will put the turbines in good orientations for the future. The heuristic will penalize the objective function for yaw settings that will prevent turbines from keeping track of the wind. An illustration of the heuristic is given in Fig. 3, describing the first term (wake steering optimization) and the second term (heuristic based on a wind tracking control) of Eq. (27).

Upper bound

To have an upper bound in terms of performance (power output) of a wake steering strategy, the rotational constraints are relaxed. It means that in Eq. (6), the variables u min and u max are equal to −180 and 180°, respectively. Between two consecutive time steps, each turbine is assumed to be capable of reaching any orientation.

From a different point of view, the upper bound corresponds to the wake steering instantaneous controller, but for a complete steady-state version of the system evolution presented in Algorithm (1). All time steps are entirely independent from each other, as there are no longer any rotational constraints for the turbines.

The same objective function of the instantaneous controller, presented in Sect. 3.2.1, is used. It always runs under no foresight (i.e., L = 0), as it performs wake steering for the current wind data only. Therefore, the state s t is reduced to {K t , V t , {β i t } i∈{0,1,...,N−1} }. The yaw settings computed by the upper bound would not be admissible in reality if the corresponding targeted orientations are too far away from the current ones.

Simulations

In Sect. 4.1 the process used to generate wind data is described, and in Sect. 4.2 the experimental setting is given. Finally, the results and the empirical conclusions that can be drawn are explained in Sect. 4.3.

Wind data scenario

The wind data time series are artificially generated with custom Wiener processes. The wind directions {K t } t∈{0,1,...,H+L−1} are computed with Algorithm (2). The wind speeds {V t } t∈{0,1,...,H+L−1} are computed with Algorithm (3). To generate the time series, an initial value is cumulatively incremented at each time step by a variable m t . Each increment m t is independently sampled from a normal distribution of mean 0 and standard deviation σ t such that

σ t = δ X t · √τ, (32)

with τ a normalization variable with regard to the number and range of the generated values and δ X t a variation parameter for the wind variable X (either the direction or the speed).

To maintain the wind directions in the range of valid values, i.e., [0, 360] [°], the modulo operation is sufficient. To maintain the wind speeds in the range of valid values, i.e., [ν min , ν max ] [m s −1 ], a mirrored function as explained in Fig. 4 is proposed. The generated values inside the wind speed bounds are not modified. The generated values outside the bounds are recursively mirrored inside the bounds.
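The generators of Algorithms (2) and (3) can be sketched as follows. The increment law σ t = δ X t √τ follows the reconstruction of Eq. (32) above (itself an assumption, although it is numerically consistent with the mean variations reported in Fig. 7), and the mirror function implements the recursive reflection of Fig. 4.

```python
import numpy as np

def mirror(x, lo, hi):
    """Recursively reflect a value into [lo, hi] (mirrored function, Fig. 4)."""
    while x < lo or x > hi:
        x = 2 * lo - x if x < lo else 2 * hi - x
    return x

def generate_wind(n, x0, delta_min, delta_max, tau, bounds=None, rng=None):
    """Custom Wiener process: m_t ~ N(0, (delta_t * sqrt(tau))^2), with
    delta_t ~ U(delta_min, delta_max) resampled at each step. Directions wrap
    modulo 360 (bounds=None); speeds are mirrored into [nu_min, nu_max]."""
    rng = rng or np.random.default_rng()
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        delta = rng.uniform(delta_min, delta_max)
        nxt = x[t - 1] + rng.normal(0.0, delta * np.sqrt(tau))
        x[t] = nxt % 360.0 if bounds is None else mirror(nxt, *bounds)
    return x

# Directions use tau = 360 / (H + L); speeds use tau = (nu_max - nu_min) / (H + L).
H, L = 144, 10
K = generate_wind(H + L, 270.0, 0.0, 20.0, 360 / (H + L))
V = generate_wind(H + L, 8.0, 1.0, 1.0, 6 / (H + L), bounds=(4.0, 10.0))
```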
The variable δ X t defines the level of variation of the wind variable X time series (either the direction or the speed). When equal to 0, the signal is constant. As δ X t increases, the absolute value of the increments increases on average. At each time step, δ X t is sampled from a uniform distribution defined between δ X min and δ X max . When δ X min and δ X max are equal, all increments {m t } t∈{0,1,...,H+L−1} are independently sampled from the same distribution: the generated time series is stationary with regard to the increments. When δ X min < δ X max , increments are independently sampled from different distributions: the generated time series is nonstationary with regard to the increments. In Fig. 5 the impact of the δ K t variable is shown for the wind direction.

Algorithm 2. Wind direction generator. Input: H + L number of points; K init initial wind direction; δ K min , δ K max bounds for the variation variable; τ = 360/(H + L).

Algorithm 3. Wind speed generator. Input: H + L number of points; V init initial wind speed; ν min , ν max bounds for the wind speed; δ V min , δ V max bounds for the variation variable; τ = (ν max − ν min )/(H + L).

Experimental setting

The function f i simulation (K t , V t , {α j t+1 } j∈{0,1,...,N−1} ) computes the local wind speed in front of a turbine i at a time step t given wind data K t , V t , and the yaw of each turbine {α j t+1 } j∈{0,1,...,N−1} . This function, introduced in Sect. 2.5, is ensured by the low-fidelity, steady-state simulator FLORIS (NREL, 2021). FLORIS is used with a Gauss curl hybrid wake model. The Gaussian velocity model is implemented based on Bastankhah and Porté-Agel (2016) and Niayifar and Porté-Agel (2016). To compute the deflection of the wakes depending on the yaws, the models described by Bastankhah and Porté-Agel (2016) and King et al. (2021) are used. The turbulence model described by Crespo and Hernández (1996) is used. The optional wake modeling options "secondary steering", "yaw added recovery", and "transverse velocities", provided by FLORIS and giving additional features to the f i simulation function, are enabled.
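For the f i simulation role itself, a hedged sketch using the FLORIS Python API is shown below. The calls follow the FLORIS v3 interface, which may differ from the exact release used in the paper, and "gch.yaml" stands in for a Gauss-curl-hybrid input file such as those shipped with the FLORIS examples.

```python
import numpy as np
from floris.tools import FlorisInterface  # FLORIS v3-style API (assumption)

D = 242.24                          # IEA 15 MW rotor diameter [m] (Sect. 4.2)
xs, ys = [0.0, 4 * D], [0.0, 0.0]   # toy two-turbine row, 4 D apart

fi = FlorisInterface("gch.yaml")    # Gauss-curl-hybrid input (placeholder path)
fi.reinitialize(layout_x=xs, layout_y=ys,
                wind_directions=[270.0], wind_speeds=[8.0])

yaw = np.zeros((1, 1, len(xs)))     # shape: (n_wind_dirs, n_wind_speeds, n_turbines)
yaw[0, 0, 0] = 20.0                 # misalign the upstream turbine (wake steering)
fi.calculate_wake(yaw_angles=yaw)
farm_power = fi.get_farm_power()    # steady-state farm power [W]
```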
A wind farm of 34 identical International Energy Agency (IEA) 15 MW (Gaertner et al., 2020) wind turbines is used. It has cut-in and cut-out speeds of ν cut-in = 3 m s −1 and ν cut-out = 25 m s −1 , respectively. Each wind turbine has a rotor diameter of 242.24 m, i.e., a rotor area of 46 087 m 2 . The air density is ρ = 1.225 kg m −3 , and the tunable parameter accounting for the power losses due to misalignment is p = 1.88. WFFC strategies are sensitive to the distances between turbines. To make the numerical simulations more robust to the distances between turbines, a diamond shape is used for the layout. With a diamond shape, there is an identical distance between each machine and its surrounding turbines. Using 34 machines creates a wind farm that is sufficiently large for wake steering to be impactful and sufficiently small for optimization to converge quickly. A FLORIS illustration of the layout used is given in Fig. 6.

Figure 6. Layout in the form of a diamond shape. The farm comprises 34 identical IEA 15 MW wind turbines. There is an identical space equivalent to the diameter of four turbines between a machine and its adjacent turbines. A distance of four turbine diameters is sufficiently small to create detrimental wake effects for the farm, so that the optimization is pertinent, and sufficiently large for the design to be realistic. Here the direction is 287.4°, the wind speed is 8.4 m s −1 , and yaws are computed with the instantaneous wake steering controller.

The limits for the wind speed are ν min = 4 m s −1 and ν max = 10 m s −1 . The interval [4, 10] m s −1 approximately corresponds to the ascending part of the power curve, where wake steering is the most beneficial for the farm. For wind speeds of [10, 25] m s −1 , the power output is constant; if the wind speed is reduced because of wake effects, there will be no power deficit. Because this work conducts a sensitivity analysis of yaw control, the wind speed is kept in the range of [4, 10] m s −1 .

The horizon size is H = 144 and the length of the foresight for the prediction-based controller is L = 10. The initial wind values are K init = 270° and V init = 8 m s −1 . The discount factor used for the heuristic H t is γ = 0.99. The precision parameter for the GS methods is n y = 120, giving the grid search method good precision.

More technical details regarding the simulations and numerical instabilities are given in Appendix B. The time step duration Δt is intentionally undefined, as will be explained in Sect. 4.3.1. Depending on the time step duration value, different interpretations of the same results will be made. For example, if Δt corresponds to 5 min, then the horizon L = 10 means that the prediction-based controller has access to a prediction of the wind of 50 min. In Table 2, a summary of the experimental setting used in this work is given.

Results

To empirically demonstrate the importance of optimizing yaw control over a long-term time horizon, numerical simulations are performed with perfect and imperfect (noisy) wind predictions. In the graphs, for each curve, the center line corresponds to the mean and the colored (shaded) area corresponds to the standard deviation of the results obtained through 11 Monte Carlo trials.
Table 2. Detail of the variables and their values used across the simulations. This configuration is shared by all the numerical simulations. The foresight length is equal to 10 only for the prediction-based controller; otherwise, it is equal to 0. The yaw rotational constraints vary across the simulations, but α cut-in and α cut-out are always equal to u min and u max , respectively.

For one episode, the total farm power output of a controller C given by Algorithm (1) is denoted as P C . The metric used to benchmark a controller C is the power gain [%] between the total farm power output of C and the total farm power output of the naive controller. The power gain is equal to 100 · (P C − P naive )/P naive .

Perfect predictions

The performance of each controller presented in Sect. 3 is tested for increasing values of δ K t . Numerical simulations are run on 21 different values of δ K t , with δ K t ∈ {0, 1, 2, ..., 20}. The wind speed is always generated with δ V t = 1. Because this work explores the impact of wind direction on wake steering, the magnitude of the wind speed fluctuations is kept small. The wind direction and wind speed increments are stationary: δ X t = δ X min = δ X max for all t ∈ {0, 1, ..., 153}. The objective here is to study the impact of the wind direction variations on yaw control. The greater the δ K t value, the stronger the variations. Because the nacelles have a limited rotational speed, the study of the wind direction fluctuations is crucial for yaw control. The standard deviations of the wind direction time series are related to the δ K t parameter in Eq. (32). To better illustrate the wind direction evolution, the time series K̄, defined as

K̄ t = min(|K t+1 − K t |, 360 − |K t+1 − K t |),

is used. Each value of K̄ lies in the range [0, 180] [°]. To study the magnitude of the variations, the absolute values are taken. Figure 7 illustrates the influence of some δ K t time series on the magnitude of the wind direction variations K̄.

In Fig. 8, the power gains of each controller compared to a naive controller are plotted. The yaw limits u min and α cut-in are equal to −15° (a) or −30° (b), and u max and α cut-out are equal to 15° (a) or 30° (b). These yaw constraints offer enough liberty for a wind turbine to rotate between two consecutive time steps and are small enough to limit the induced fatigue. The detailed results are given in Appendix C in Tables C1 and C2.

As the variations of the wind direction increase, the performance of each controller diverges from the others. For small variations of the wind direction, both the instantaneous controller and the prediction-based controller give similar results. When the variations of the wind direction become large, the instantaneous controller struggles to maintain good performance. The heuristic of the prediction-based controller manages to find better yaw control strategies. The gap between the performance of the upper bound and the other controllers shows how strongly wind direction variations, in relation to the rotational constraints of each machine, impact yaw control.

Based on the results given in Fig. 8, several general statements can be drawn. As previously said, the time step duration is intentionally imprecise. The reason is that different values of Δt will lead to different interpretations. The following statements are true for any value of Δt, with respect to the hypotheses of the transition regime described in Sect. 2.6.

- For wind turbines that can rotate from −15 to 15° every Δt minutes, if the wind direction changes by more than 7.34° every Δt minutes, it is important to consider future wind data in a steady-state yaw control optimization.
- For wind turbines that can rotate from −30 to 30° every Δt minutes, if the wind direction changes by more than 12.23° every Δt minutes, it is important to consider future wind data in a steady-state yaw control optimization.

Noisy predictions

In the second set of simulations, the robustness of each controller to noisy predictions is tested. The yaw limits are u min = α cut-in = −15° and u max = α cut-out = 15°. The {K t } t∈{0,1,...,153} time series are computed with δ K min = 0 and δ K max = 20. The time series {V t } t∈{0,1,...,153} are always computed with δ V t = 1. Because δ K min < δ K max , the increments are nonstationary for the wind direction. The corresponding K̄ time series is plotted in Fig. 9.

Only the noise applied to the wind directions strongly impacts the different policies. The prediction-based controller results in a poorer performance than a naive controller from a noise of 8°. The wind speed noise insignificantly affects the performance of the algorithms. This corroborates the fact that yaw control mainly depends on the wind directions. Because the prediction-based controller uses more wind data points, it is more robust than the instantaneous controller.
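The benchmark metric and one plausible reading of the noisy-prediction protocol can be sketched as follows; the paper does not spell out its noise model, so the Gaussian perturbation of the predictions is an illustrative assumption (the 11 Monte Carlo seeds come from Sect. 4.3, and generate_wind is the earlier sketch).

```python
import numpy as np

def power_gain(p_controller, p_naive):
    """Benchmark metric of Sect. 4.3: relative gain [%] over the naive policy."""
    return 100.0 * (p_controller - p_naive) / p_naive

def noisy_forecast(K_true, sigma_deg, rng):
    """Perturb each predicted direction with independent N(0, sigma) noise;
    an illustrative noise model, not necessarily the paper's."""
    return (np.asarray(K_true) + rng.normal(0.0, sigma_deg, len(K_true))) % 360.0

H, L = 144, 10
gains = []
for seed in range(11):  # 11 Monte Carlo trials
    rng = np.random.default_rng(seed)
    K = generate_wind(H + L, 270.0, 0.0, 20.0, 360 / (H + L), rng=rng)
    K_noisy = noisy_forecast(K, sigma_deg=8.0, rng=rng)
    # ... run Algorithm (1) with each controller on the (noisy) series and
    # record power_gain(P_controller, P_naive) ...
```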
Conclusions

As WFFC is becoming more important to increase the energy production of wind farms, this work studies wake steering as a steady-state optimization problem over time. The yaw control problem is formalized as multiple successive steady-state optimization problems interconnected by the rotational constraints of the turbines and the evolution of the wind. Because the function computing the power outputs is steady-state, only the dynamics of a homogeneous global wind and the rotational constraints of the machines are captured. Low-fidelity, steady-state simulators are used because they are not time-consuming and they are suitable for optimization. But future works should perform the same studies with continuous and higher-fidelity simulators such as HAWC2Farm (Liew et al., 2023), which better capture the dynamics of the wake effects from one time step to another. This becomes especially important when the variations of the wind direction become large.

Traditionally, yaw control is optimized in a steady-state manner. Yaw settings are computed so that they maximize the instantaneous power output of the farm. To optimize wake steering over a long-term time horizon, an MPC method is usually used. Such an approach increases the complexity of the optimization problem, making it harder to solve. To overcome such complexity, a reformulation of the steady-state optimization problem is proposed in this work to consider future wind data. The traditional objective function is augmented by a new heuristic estimating the future expected theoretical power outputs of the farm, weighted by how far the turbines will be from the wind if they are controlled by a naive approach. The new prediction-based controller proposed in this paper has the same number of decision variables as an instantaneous optimization.

Lastly, this work conducts a sensitivity analysis of yaw control and the variations of the wind direction. It demonstrates the importance of optimizing yaw control over future wind data when the variations of the wind direction become large. For strong wind variations, the new prediction-based controller greatly improves the performance without increasing complexity. This work shows, for example, that when deploying wind turbines that can rotate from −15 to 15° every Δt minutes, if the wind direction changes by more than 7.34° every Δt minutes, it is important to consider future wind data in a steady-state yaw control optimization.

This study is conducted on synthetic wind data, so future works should explore the same question of the dependence between the wind variations and yaw control on real wind data. Because the hypotheses regarding the transition regime may be far from reality, the proposed heuristic could be combined with low-pass filters and hysteresis mechanisms for more realistic implementations. Future works should incorporate the fatigue in the optimization process, as WFFC can have a major impact on the lifetime of each turbine. For example, the objective function of the prediction-based controller could be augmented by a heuristic taking into account the magnitude of the yaw actuations. However, the results provided by this work also suggest that with wake steering strategies more robust to wind direction variations, it would be possible to reach the same level of performance with fewer yaw actuations.

Appendix A: Gauss-Seidel method

The GS method iterates over each turbine in the direction of the wind, one by one, from upstream turbines to downstream ones. The turbines' default coordinates {cx i , cy i } i∈{0,1,...,N−1} [m] are rotated such that the wind is coming from the west. The initial yaw settings are computed with a naive controller. By doing so, the initial solution is already a good enough solution that keeps turbines as aligned with the wind as possible. At each iteration, it solves the optimization problem by varying the yaw setting of the current turbine.

Appendix C: Detailed results

Table C1. Detailed results of the simulations conducted on perfect predictions in Sect. 4.3.1. Yaw limits are u min = α cut-in = −15° and u max = α cut-out = 15°. For each δ K t the total power output of the farm in 10 4 MW is given for each controller.

Figure 1. Example of WFFC on a two-turbine wind farm with the wind coming from the west. The first (upstream) turbine is misaligned and its wake effect is steered away from the second (downstream) turbine. By letting the wind flow more freely to the second turbine, the misalignment of the first turbine increases the total power output of the farm.

Figure 2. Example of a wind turbine i seen from above at a time step t. The variables are the wind direction K t , the absolute angular position β i t of the turbine, and the yaw α i t of the turbine. The wind direction indicates where the wind is coming from; e.g., a wind direction of 270° indicates a wind coming from the west. Here, the nacelle is misaligned with the incoming wind direction: the turbine is rotated clockwise from the wind, so α i t < 0. The yaw setting u i t gives the next orientation of the turbine at time step t + 1.
Figure 3. Illustration of the heuristic for a turbine i at time step t for two different cases. The horizon is L = 2 and the wind data are the same for cases A and B. The rotation zone represents the range of possible orientations a turbine can take at a given time step after being controlled. First, wake steering optimization is performed to find the setting u i t , which yields a power output of P A MW for case A and P B MW for case B at time step t. Considering P A > P B , case A would be preferred. But then the heuristic computes the future expected yaw angles if a naive wind tracking solution is used. From these expected yaw angles, the heuristic computes the expected power outputs based on the predicted wind speeds. Here, while the yaw setting of case A gives a better immediate solution than the one given by case B, it keeps the turbine further away (i.e., giving greater yaw angles) from the future wind. The solution of case B would then be preferable. The heuristic encourages the choice of yaw settings that may not be the best at the current time step but that ensure future power output.

Figure 4. Toy example of the mirrored function used to keep the generated wind speeds inside specific bounds. Raw data are generated by the process described in Algorithm (3). Raw data points inside the wind speed bounds are not modified: the black and red curves overlap. Data points outside the wind speed bounds are recursively mirrored inside the bounds.

Figure 5. (a) Sine and cosine values for δ K t = 1. (b) Sine and cosine values for δ K t = 4. (c) Sine and cosine values for δ K t = 9. Example of different wind direction signals generated with different δ K t values, considering that δ K t = δ K min = δ K max for all t ∈ {0, 1, ..., 49} and K init = 7°. If δ K t = 0, all the generated points are equal to K init . The sine and cosine values are plotted for illustration convenience (this avoids the discontinuity issue of degrees). Note that the behavior shown in this example is the same for the wind speed, but values are in the range [ν min , ν max ].

Figure 7. (a) The mean value (and standard deviation) of the wind direction variations is given as a function of δ K t . For example, for δ K t = 5, the mean absolute variation of the wind direction is around 6.11°. (b) Example of time series K̄ for different values of δ K t ∈ {0, 4, 8, 12, 16, 20}. Again, as the δ K t parameter increases, the magnitude of the variations of the wind direction increases.

Figure 8. (a) Yaw limits are u min = α cut-in = −15° and u max = α cut-out = 15°. At δ K t = 6, the prediction-based controller increases the power output of a naive approach by 6.23 %. This corresponds to absolute variations of the wind direction of 7.34°, as displayed in Fig. 7. (b) Yaw limits are u min = α cut-in = −30° and u max = α cut-out = 30°. At δ K t = 10, the prediction-based controller increases the power output of a naive approach by 8.93 %. This corresponds to absolute variations of the wind direction of 12.23°, as displayed in Fig. 7.
Considering future wind data in a steady-state yaw control optimization becomes mandatory when δ K t ≥ 6 for yaw constraints of [−15, 15]° and δ K t ≥ 10 for yaw constraints of [−30, 30]°. From these points, the heuristic H t provided by the prediction-based controller greatly improves the performance of a classic instantaneous steady-state optimization.

Figure 9. Plot of the time series K̄ for the 11 different seeds. Wind directions are generated with δ K min = 0 and δ K max = 20. The mean is 12.53° and the standard deviation is 1.12. Here, the increments vary from one time step to another because they are nonstationary.
The Digital Literacy and Social Media Content-Making Training Program for SMEs and Housewives in the Hepi Bandung Community

INTRODUCTION

The emergence of new communication and information technology has triggered various new behaviors. One of these behaviors is how social media-based platforms dominate the time spent accessing media. The average Indonesian spends 3 hours and 23 minutes a day accessing social media. Of Indonesia's total population of 265.4 million, active social media users reached 130 million, a penetration of 49% (Kemp, 2018). The behavior of using new media in Indonesia is an interesting issue because of the high growth rate of internet users in the country. The existence of the internet cannot be separated from the use or choice of new media because, in the current digital era, internet networks act as a connecting line that unites various platforms with abundant sources of information (Cahyono, 2016; Utama & Herawati, 2017).

New media usage has given rise to various negative issues, such as misuse of content, dissemination of information that is not credible, cyberbullying, and several other issues related to the use of new media (Okditazeini & Irwansyah, 2018). However, this is not primarily a matter of appropriate technology or features, but rather of the ability and level of understanding of users in directing their activities in a positive direction. On the other hand, housewives running small to medium enterprises (SMEs) have yet to use new media effectively to promote their products; the promotions they carry out are still very minimal and limited (Silalahi, 2022). Looking at current developments in product promotion, many market players create product advertisements through video content. Housewives, however, often lack the expertise to create such content and videos as a medium for product promotion. There is therefore a need to build expertise in creating video content that can be used as promotional media to increase buyer interest, sales, and income (Dewa & Safitri, 2021; Hadi & Zakiah, 2021).

Although housewives in the HEPI Community (Harmony of Positive Energy for Wives), for example, have easier access to content through new media, they face more complex challenges related to educational background, internet accessibility, infrastructure, economic background, and so on. Among the problems faced is making the most of new media as a channel for product promotion with creative content in the form of videos and photos. Hence, there is a need to improve the skills of the mothers in the community in creating video content easily through applications available on their mobile phones.

Activities in the Community Service (PKM) program use institutional channels to provide the community with direct teaching to enhance knowledge, technology, arts, and culture as part of the Tri Dharma (Juddi et al., 2023). Lecturers from Telkom University's School of Communication and Business conducted community service projects centered on digital literacy and enhancing digital competency, particularly regarding cyberspace ethics. On December 27, 2023, workshops on creating social media content for SMEs and business housewives in the HEPI Bandung community were held in the School of Communication and Business building at Telkom University.
Media Ethical Literacy

This activity was carried out by Moh Faidol Juddi, an academic at Telkom University Bandung whose teaching focuses on digital communications. In addition, Juddi is active on TikTok with the account @juddijoyodiningrat, focusing on literacy related to the application of communication theories and concepts, particularly digital phenomena.

Digital Content Production Workshop

This activity was conducted by Chairunnisa Widya Priastuty, a Telkom University academic focusing on social media strategy studies. Besides being an academic, Chairunnisa has practical experience in the same field. In this session, participants were given the opportunity to practice producing social media content independently.

Sharing Session

In this session, the presenters addressed the questions and shared experiences of the participants in a way that was relevant to their everyday lives. Through direct conversation about the issues participants deal with daily regarding digital technology, the session aimed to deepen the shared understanding developed between the presenters and the target participants.

RESULT AND DISCUSSION

The community service initiative run by lecturers at Telkom University's School of Communication and Business was completed on December 27, 2023. Through this training series, the target community was introduced to various topics related to digital literacy, including digital media literacy and a digital content production workshop. The session began at 9:00 am and lasted around three hours. The activity was divided into two stages: the material dissemination stage (literacy and workshop) and the sharing session, assisted by students and a team of Telkom University PKM lecturers who helped prepare and oversee the implementation of the training program.

Media Ethical Literacy

The media ethics literacy session focused on the issues of sharenting and oversharing. The sharenting phenomenon affects parents with children from infancy to toddler age. Excessive exposure of children on social media (oversharing) can harm children's mental development (Fatmawati & Sholikin, 2019; Wahyudi, 2023). Sharenting can harm children's mental health as they grow up and can foster anxiety later in life. For parents, sharenting can also create pressure to maintain a picture of perfect family life. However, sharenting has some positive impacts as well: parents feel that sharing with others can help build a sense of friendship and community, and it can also provide a channel of communication for families who live far apart. Even though sharenting has become normal in society, it still requires special attention because oversharing has a significant impact.
Furthermore, the media ethics literacy session also conveyed the dangers of cyberbullying and flaming. A lack of digital literacy can result in someone being cyberbullied or drawn into flaming. Cyberbullying is a type of bullying behavior defined as continuously harassing or hurting other people in cyberspace. Flaming, meanwhile, is an online argument in the form of a war of words in cyberspace using language that contains anger, vulgarity, threats, and derogation.

Effective content creation and regular content uploading are crucial for developing personal branding or managing a business on social media. By fostering a sense of connection and trust, personal branding is the most efficient approach to establish a worldwide identity and reach a wider audience (Patel, 2023). Customers nowadays typically purchase products from brands they follow on social media (Williams, 2020). Personal branding on social media can serve various functions, such as generating revenue or upholding a positive reputation. As a result, as social media users we must recognize the significance of our online persona. Any personal or educational material we share can shape others' opinions of us, both positively and negatively (Safiaji, 2020).

The participants' knowledge of applications for managing Instagram content was unexpectedly good; several participants already knew of applications, such as CapCut and Canva, that can help edit video and photo content. In delivering this material, the resource person also dissected several Instagram accounts, including the Instagram account of the HEPI Bandung community. Furthermore, all participants were asked to practice creating interesting content within a short time, namely 15 minutes, and to upload the results to their respective Instagram accounts. The participants were very enthusiastic about creating this content. This session revealed the difference between participants who already understood and were used to creating interesting content and those who had not yet fully mastered it.
CONCLUSION

The increase in internet usage in Indonesia is, unfortunately, not matched by an increase in knowledge related to ethics in cyberspace. Many internet users, especially housewives, engage in sharenting and oversharing. In addition, the dangers of cyberbullying and flaming on the internet also need to be anticipated. Therefore, digital literacy targeting housewives in the HEPI Bandung community is important for effectively building awareness of internet ethics. Furthermore, it is also important for them to understand the technical aspects of branding, both for personal needs and for the businesses they run. This activity aims to help participants, especially housewives, avoid detrimental consequences in the future through digital literacy understanding and digital content production workshops. The activity was carried out in two stages: delivery of material (including a workshop) and sharing sessions. The aim of delivering the material was to increase the participants' knowledge; the goal of the two-way sharing session was to deepen that knowledge and provide suitable solutions to the participants' issues. The activities were evaluated by distributing feedback surveys. The evaluation's findings demonstrate how well the literacy-related activities met participants' needs. Increasing digital literacy is beneficial, especially as we use digital media to become a more informed society. In addition, the participants hope that this type of literacy exercise will continue, to assess and enhance the understanding that has been developed collectively.

FIGURE 2. Media Ethical Literacy by the Speaker. FIGURE 3. Digital Content Production Workshop. TABLE 2. The Feedback Result.
Lemierre's Syndrome: Rare, but Life Threatening—A Case Report with Streptococcus intermedius

Lemierre's syndrome (LS) is a rare but life-threatening complication of an oropharyngeal infection. Combinations of fever, pharyngitis, dysphagia, odynophagia, or oropharyngeal swelling are common presenting symptoms. Infection of the lateral pharyngeal space may result in thrombosis of the internal jugular vein, subsequent metastatic complications (e.g., lung abscesses, septic arthritis), and significant morbidity and mortality. LS is usually caused by the gram-negative anaerobic bacillus Fusobacterium necrophorum, hence also known as necrobacillosis. We present a case of LS caused by Streptococcus intermedius, likely secondary to gingival scraping, in which the presenting complaint was neck pain. The oropharyngeal examination was normal and an initial CT of the neck was done without contrast, which likely resulted in a diagnostic delay. This syndrome can be easily missed in early phases. However, given the potential severity of LS, early recognition and expedient appropriate antimicrobial treatment are critical. S. intermedius is an unusual cause of LS, with only 2 previous cases reported in the literature. Therefore, an awareness of the myriad presentations of this syndrome, which in turn will lead to appropriate and timely diagnostic studies, will result in improved outcomes for LS.

Introduction

Lemierre's syndrome (LS) is a rare but life-threatening complication of an oropharyngeal infection [1]. In the preantibiotic era, Lemierre's syndrome was associated with a case-mortality rate of 32%-90%, with embolic events in 25% and endocarditis in 12.5% of the patients. It is still a potentially life-threatening disease with a reported mortality of up to 17% [2]. In the post-antibiotic era, it was named the "forgotten disease" until recently, when it started presenting more frequently and uniquely. The suggested diagnostic criteria are (1) history of recent oropharyngeal infection, (2) clinical or radiographic evidence of thrombophlebitis of the internal jugular vein (IJV), and (3) isolation of an anaerobic pathogen [1,3]. LS usually presents as a sore throat, and pharyngitis is the entry source for more than 85% of cases, while otitis media or dental infection accounts for <2% of cases [1,4]. As the disease progresses, the soft tissues of the neck are invaded by anaerobic oral pathogens, followed by local invasion of the lateral pharyngeal space and septic thrombophlebitis of the internal jugular vein (IJV). This may lead to septic emboli and metastatic abscesses, especially in the lungs and joints. Complications like meningitis, osteomyelitis, splenic abscesses, cranial nerve involvement, carotid thrombosis, and mediastinitis have been reported [3,5,6]. LS is usually caused by the gram-negative anaerobic bacillus Fusobacterium necrophorum, hence also known as necrobacillosis [1,4]. Other etiological agents like Peptostreptococcus, Group B and C Streptococcus, Staphylococcus, Enterococcus species, and Proteus have also been isolated [1,7,8]. Fusobacterium is a natural colonizer of the oropharynx of healthy adults. However, pharyngitis weakens the mucosal barrier and allows Fusobacterium to enter the bloodstream and cause complications. Early diagnosis with imaging and blood cultures in clinically suspicious patients can prevent mortality and morbidity. S.
intermedius, one of the members of the Streptococcus milleri group, is a microaerophilic commensal found commonly in the upper respiratory and gastrointestinal tracts and is capable of causing pyogenic infections, especially in the liver, brain, and skin [8,9], and, most importantly, the heart valves [10]. To the best of our knowledge, this is the first case report of LS with Streptococcus intermedius in an immunocompetent adult resulting from a gingival procedure with a normal oropharyngeal examination at the time of presentation.

Case Presentation

A middle-aged woman presented to the emergency room (ER) with complaints of severe neck pain and occipital headaches for one week, which were not relieved with analgesics. She denied fevers, sore throat, cough, shortness of breath, or any trauma to the neck. Past medical history was significant for epilepsy and prior episodes of supraventricular tachycardia (SVT). She denied smoking or illicit drug use. She had undergone dental scraping of her left mandibular molars for gingivitis two weeks prior. Vital signs were stable in the ER. Physical exam was positive for neck tenderness and minimal restriction of neck movements. Laboratory data revealed a white blood cell (WBC) count of 11.9 × 10 3 /mm 3 (neutrophils 79%). Computerized tomography (CT) scan of the head and neck without contrast and lumbar puncture were done to rule out subarachnoid hemorrhage. The results did not reveal any abnormalities. Hence, the patient was discharged on muscle relaxants and analgesics. She returned to the ER in 5 days with high-grade fevers, worsening neck pain, and a headache. Temperature was 101 °F, heart rate 162/min, respiratory rate 16/min, blood pressure 152/96 mmHg, and oxygen saturation 92% on room air. Pharyngeal exam showed no erythema, swelling, or exudates. There was no evidence of otitis media or active gingivitis either. No dental caries were noted at that time, and there was no heat or cold intolerance. Percussion tenderness was not present. Neck examination showed restriction in range of movements, and a tender cord-like mass was palpable on the left side of the neck. Cardiopulmonary examination revealed diffuse crackles in both lungs and no cardiac murmurs. Laboratory data showed a WBC count of 33.6 × 10 3 /mm 3 (bands 44%) and an ESR of 98 mm/hr. Complete metabolic profile (CMP), including electrolytes, renal function (BUN, creatinine), and liver enzymes (LFTs), was within the normal range. EKG showed SVT. Chest X-ray showed small bilateral pleural effusions and bilateral pulmonary infiltrates without cavitations. CT scan of the head and neck with contrast demonstrated a thrombus in the left internal jugular vein (IJV) (Figure 1) extending to the left sigmoid sinus (Figure 2) and bilaterally into the cavernous sinus (Figure 3). There was diffuse edema around the soft tissues of the neck. Preliminary blood cultures grew gram-positive cocci in chains. She was started empirically on antibiotics. Blood cultures (4 out of 4) grew Streptococcus intermedius within 48 hours, and the organism was found to be highly susceptible to penicillin (but also susceptible to clindamycin and vancomycin). Subsequently, antibiotics were changed to ampicillin-sulbactam. A blood culture six days after admission also grew the same organism. 2D echocardiogram did not reveal any valvular vegetations.
Anticoagulation was stopped after 10 days due to a drop in hemoglobin (from 10.5 g/dL to 6.9 g/dL), although esophagogastroduodenoscopy (EGD), colonoscopy, and a CT scan of the abdomen and pelvis did not reveal an obvious bleeding source. Serum LDH, reticulocyte count, and haptoglobin were within normal limits, hence arguing against hemolysis. Ampicillin-sulbactam was continued for eight weeks, and the patient had a slow but complete clinical recovery with radiographic resolution of the clot.

Discussion

In summary, we present a middle-aged female who presented with severe neck pain and occipital headaches for a week, who did not have fevers or a sore throat on initial presentation but did provide a history of dental work performed two weeks earlier. A noncontrasted CT of the head and neck failed to reveal any pathology on admission, and she was discharged from the ER, after which she rapidly deteriorated over the next five days and then presented with worsening fevers, headaches, neck pain, and a leukocytosis of 34,000. Five blood cultures performed over a period of six days grew Streptococcus intermedius, and a CT of the head and neck with contrast showed thrombosis of the jugular vein, cavernous sinuses, and left sigmoid sinus. The patient survived and recovered after eight weeks of ampicillin-sulbactam. Lemierre's syndrome is a rare disease, typically caused by the microorganism Fusobacterium necrophorum. Tonsillitis is the most common primary infection (87.1%), followed by mastoiditis (2.7%) and odontogenic infections (1.8%) [11,12]. This is typically followed by invasion of the pharyngeal lateral wall and thrombophlebitis of the internal jugular vein, followed by high-grade bacteremia and septic seeding of vital organs, most commonly the lungs. It is quite likely that our patient developed LS secondary to the gingival scraping that she underwent two weeks before her symptoms started. S. intermedius is a rare causative organism, and only 2 case reports of LS were found with this bacterium [8,13]. Escalona et al. [8] reported a case of LS due to S. intermedius. That patient presented with extensive mandibular swelling due to an infected molar and fevers, along with an edematous floor of the mouth on physical exam. Chemlal et al. [13] reported a patient with LS who had recent pharyngitis, presenting with fever and lower chest pain related to multiple pulmonary abscesses. Pharyngitis is the single most common presentation of LS, as mentioned by Wright et al. [14]. In contrast, our patient's mouth examination was totally benign, and she did not have a history of sore throat in the recent past. Our patient's benign presentation and normal oropharyngeal examination might have delayed her diagnosis. It is tempting to speculate that differences in virulence properties between S. intermedius and F. necrophorum, the usual pathogen responsible for LS, may have contributed to an atypical presentation. However, the previous 2 reported cases of S. intermedius [8,13] did present with oropharyngeal signs. Therefore, it is important to recognize that LS can present without any signs of pharyngitis or an active dental or ear infection and hence can be missed in the early phases of infection [3]. Diagnosis using CT scan of the head and neck with IV contrast is considered superior to a neck ultrasound as it is better at locating the anatomical extension of the thrombus [4,5]. CT scan in the absence of contrast may be of limited utility (as was the case in our patient).
Blood cultures should be obtained from patients with persistent severe pharyngitis and signs of sepsis, and even from patients presenting with fevers and severe neck pain. Penicillin is the drug of choice, but due to recent penicillin-resistant strains of Fusobacterium, drugs like clindamycin or a beta-lactam/beta-lactamase inhibitor combination are preferred [7,14,15]. Therapy should be started as soon as the syndrome is suspected and should be continued for at least 6 weeks [14][15][16]. Surgical drainage of abscesses and IJV ligation may be indicated for patients who fail to respond to antibiotics, as was done in the preantibiotic era, though ligation is now infrequently performed [4,8]. Routine use of anticoagulation is controversial as there are no randomized trials, and sepsis-related thrombocytopenia is often seen in these cases [14,16]. Anticoagulation should be strongly considered if there is clot propagation involving the cavernous sinus or if there are septic emboli [4,5,7]. However, anticoagulation can increase the risk of bleeding and hematoma expansion.

Summary

Lemierre's syndrome usually presents in childhood but may present atypically in middle-aged people, as in our patient. It can happen after pharyngitis, otitis media, odontogenic infections, or dental procedures [4]. The number of reported cases is increasing due to the restricted use of antibiotics for sore throat and tonsillitis [5,7]. High-grade bacteremia with Streptococcus intermedius due to septic thrombosis, without any signs of an oral or pharyngeal infection at the time of presentation, is a unique feature of this case. Further, the ability of other oral flora to be causative agents of Lemierre's syndrome is not as well established and recognized as it is with Fusobacterium necrophorum. This suggests that a benign mouth exam should not exclude the diagnosis of LS. In light of this, we recommend that LS be considered in the differential diagnosis in patients presenting with persistent sore throat, mastoiditis, a recent history of a dental procedure, and/or signs of active gingivitis, accompanied by neck pain and swelling. Blood cultures should be obtained and CT imaging of the neck with IV contrast should be performed. This, in turn, will enable timely diagnosis and improved outcomes.
PROGRAM DESCRIPTION / DESCRIPTION DU PROGRAMME Explaining the Method Behind Our Madness: 3-part Series on Comprehensive Searches for Knowledge Syntheses Introduction The production of knowledge syntheses (KS), including systematic and scoping reviews, has been steadily increasing over the last twenty years. Recent estimates indicate a three-fold increase in the number of published systematic reviews over the last decade [1], and nearly half of all published scoping reviews have appeared within the last six years alone [2]. This trend is evident at the University of Toronto, where graduate students are being encouraged to include a KS component as part of their comprehensive exams or three-article theses. This has led to an increase in the number of one-on-one consultations between librarians and graduate students. Unfortunately, it is often clear during these consultations that these students are not being formally trained in KS search methods, reporting standards, or citation management solutions. Further evidence indicates this is not just happening at our institution [3][4][5][6]. To address this increasing need at the University of Toronto, librarians at the Gerstein Science Information Centre are offering a three-part workshop series designed to teach graduate students how to search for systematic and scoping reviews. In 2016, Sandra Campbell and colleagues at the University of Alberta's John W. Scott Health Sciences Library described what they believed to be the first published curriculum for a three-hour stand-alone KS searching workshop designed for a researcher audience [7]. They observed that while librarians have long been involved in teaching KS search strategies as part of broader systematic review courses, there are few examples in the published literature of distinct KS searching workshops. While there has been recent discussion of how instruction is incorporated into KS service models [8], we remain unaware of other existing librarian-led KS searching workshops for graduate students that deliver advanced content as a three-part series. Description The workshop series, titled Strategies for Systematic, Scoping, or Other Comprehensive Searches of Literature, is composed of three 2.5-hour sessions. Participants are advised to take the sessions in order and to complete the full series, though this is not always the case. Students are required to preregister using the online calendaring platform LibCal, where they can also read the program description, learning objectives, and instructor biographies. We open each session to a maximum of 50 registrants; we can accommodate 40 participants in our electronic classroom. Though each session has always been fully booked, and there have usually been students on the waitlist, we see a relatively small nonattendance rate of only 10-15%. We typically offer the sessions on Tuesday afternoons, three weeks in a row, though sometimes flexibility is required to accommodate our schedules. Eligible participants in the series can earn two credits towards the Graduate Professional Skills (GPS) program, an initiative of the University of Toronto School of Graduate Studies that is designed to prepare graduate students for their future careers. In order to claim credits, participants must be current graduate students at the University of Toronto, attend and participate in all three sessions of the series, and complete a short reflective questionnaire following the final session.
Two librarians are responsible for delivering the content, supported by one student assistant who helps answer questions and keeps students on track during the session. Despite the high student-to-instructor ratio, we encourage an informal atmosphere where students are free to interrupt to ask questions or make comments. Each session uses a combination of lecture slides, individual activities, online polls, and group activities. Course materials are made available to participants on a password-protected LibGuides website. We plan content and activities to meet what we call our "hidden agenda": to empower graduate students with the vocabulary and skills necessary to engage in crucial conversations with their supervisors and colleagues and, ultimately, improve the quality of their review research. Our teaching philosophy is rooted in the firm belief that we need to clearly explain and justify review search methods, that our students ought to learn complex database techniques, and that they are capable of thinking critically about systematic and scoping review search strategy development. We believe in authentic and intentional engagement, a focus on processes rather than tools, and incorporating active learning. Part I: Structured Approach to Searching the Medical Literature for Knowledge Syntheses Our introductory session's objectives are to have students be able to:
- Identify the key differences between systematic reviews, scoping reviews, and literature reviews as they relate to the search
- Incorporate tools and resources for proper reporting and management of their review
- Utilize strategies for turning a research question into a searchable question with inclusion/exclusion criteria (Figure 1)
- Identify databases for their review and explain when to use them
- Practice using an objective, structured method for developing the sensitive search strategies required for knowledge synthesis, utilizing controlled vocabulary, textwords, and advanced techniques
- Apply a structured approach to searching in Ovid Medline
Fig. 1 Search concepts vs inclusion/exclusion criteria
Guided through a combination of lecture and individual activities, students complete a semi-comprehensive search of an example question in Ovid Medline, saved and ready for Part II. After completing a search question activity (Figure 1), we guide students through a process for objective search strategy development. First, we show students how to identify synonyms through various methods beyond brainstorming, including examining MeSH entry terms and interactively scanning known relevant articles [9]. Next, we demonstrate how to discover relevant subject headings by browsing the MeSH hierarchy and using tools such as the Yale MeSH Analyzer [10] and PubReMiner [11]. Finally, we show students how to iteratively test elements of their search strategy (e.g., using the NOT operator to determine optimal proximity operator width), determine whether their search captures previously identified relevant articles, and decide what to do next if it does not; a small scripted version of this capture check is sketched below.
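As an illustration of the "does my search capture the known relevant articles?" test described above, the following is a minimal sketch using Biopython's Entrez utilities against PubMed. The query string, e-mail address, and PMIDs are placeholders rather than workshop materials, and the check is deliberately simplified: the workshop itself runs the real strategy inside Ovid Medline, whose syntax differs from PubMed's.

```python
# Minimal sketch: does a draft strategy retrieve the articles we already
# know are relevant?  Query, e-mail address, and PMIDs are placeholders.
from Bio import Entrez  # pip install biopython

Entrez.email = "your.name@example.edu"  # NCBI asks for a contact address

# Draft strategy written in PubMed syntax for this sketch.
query = '(librarian[tiab] OR librarians[tiab]) AND "systematic review"[tiab]'

# PMIDs of articles previously identified as relevant (placeholders).
known_relevant = {"29339933", "26985343", "24237924"}

handle = Entrez.esearch(db="pubmed", term=query, retmax=10000)
record = Entrez.read(handle)
handle.close()

retrieved = set(record["IdList"])
missed = sorted(known_relevant - retrieved)

print(f"Strategy retrieved {record['Count']} records.")
if missed:
    print("Known relevant articles NOT captured; revise the strategy:", missed)
else:
    print("All known relevant articles captured.")
```

The point of the sketch is only that the capture test is mechanical and repeatable; in class, students perform it interactively inside the database interface.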
Part II: Beyond MEDLINE: Translating Search Strategies for Knowledge Syntheses This session focuses largely on why and how we translate search strategies; we take an active-learning approach [12] in which students will:
- Review the Medline strategy from Part I and prepare it for translation
- Delve deeper into the advanced features of interfaces and databases which allow for editing and refining a search strategy
- Identify potential sources of bias in their search and develop strategies to mitigate them
- Translate and execute structured search strategies using different databases, including Ovid Embase, Ebsco CINAHL, and Cochrane Central (Figure 2)
- Prepare database search strategies and compose search methods so that they can be repeated, ensuring proper reporting
Part II is an innovative session for three reasons: 1) we teach students to justify elements of their search strategy as mitigating potential sources of bias; 2) we spend nearly 1.5 hours leading students through a group database translation activity utilizing short demonstrations, Google Docs (Figure 2), and in-class student presentations; and 3) this is an entirely digital day, with no paper materials used for any of the activities. Part III culminates in a capstone activity [13] designed to reinforce the major learning objectives and hidden agenda of the entire series. Through this activity, students are able to see the relationship between search strategies and the overall quality of the review itself. Outcomes We have offered the series six times, to 291 students, since its pilot run in March 2017. A three-part, 7.5-hour series is a significant time commitment; however, student feedback and reflections, as well as follow-up consultations, indicate that the advanced techniques and content are appreciated and being absorbed. We evaluate the series in three ways: observations of student engagement during activities, ticket-out-the-door evaluation forms, and a short reflection questionnaire. The most valuable assessment method for program development comes from our own in-class observations and conversations with students about their learning. For instance, in the pilot version of our series, we attempted an activity designed to teach students how to translate search strategies. We found that during this activity, students were simply copying and pasting the Medline strategy into Embase. When we asked that they switch to CINAHL, this problem was highlighted as students simply copied the Medline queries (along with the Ovid syntax) into the search bars. These observations made it clear we had spent too much time on database mechanics and not enough time teaching the art and process of translation. We take the last five minutes of each session to have students pair up and fill out a short ticket-out-the-door evaluation form (Figure 3). This helps us gauge which learning outcomes are being met and which require more attention. After each session, we review and summarize the "muddy points", revisiting them at the beginning of the next session. Typically, students note needing more time to practice at home, difficulty keeping up with database syntax, and uncertainty about when to stop searching. We consistently hear positive feedback regarding Part I's lecture material, our overall instruction style, and the meaningful activities in Part II and Part III. Finally, we attempt to gauge whether students are learning from a short reflection-based assignment that is required of all GPS participants.
One week following Part III, we ask eligible participants to respond to the following three prompts: 1) Can better searches improve the quality of research? If yes, how? If no, why not? 2) How will you ensure your searches are reproducible and exhaustive? 3) What question(s) has this workshop raised for you? Inspired by the richness of the responses, we are now pursuing a qualitative research study on graduate students' attitudes and practices in conducting comprehensive searches for systematic and scoping reviews. Discussion In our pilot run, we had three sessions as we do now, but we had combined database translation and grey literature in Part II, and Part III was an advanced EndNote session. Instructor observations and student feedback indicated that there was not enough time in Part II for all of the content, and one student astutely pointed out that it was unfair to teach EndNote, a fee-based software program at the University of Toronto, as part of the GPS program. The EndNote instructor also found that the students were not prepared to learn the advanced functions required for systematic review citation management. We decided to split Part II into the current iterations and to offer two EndNote sessions (one basic, one advanced) shortly after the series, but not as part of the GPS program. We actively promote the EndNote sessions during the series, and these sessions are regularly fully booked. One of the most significant changes we made following the pilot was to extend the length of each session from 2 hours to 2.5 hours. This has allowed us to spend more time teaching textword syntax in Part I, explaining and coordinating the translation activity in Part II, and leaving plenty of time for discussion during the Part III capstone activity. It also gives us more time to cover muddy points from the previous week's session and to answer any questions as they arise. It is important to note that despite extending the series by 1.5 hours and replacing the EndNote session with search instruction, the search-related learning objectives have stayed exactly the same. Moving forward, we are investigating how best to teach grey literature search strategies in a large-group setting to such a diverse group of students, as well as how to discuss emerging review methods. We believe that students can and should learn advanced database techniques, and that they are capable of thinking critically about KS search strategy development. We hope that other librarians will continue to explore strategies for teaching this content to students. To support this goal, we will make our slides and activities available upon request. Statement of Competing Interests No competing interests declared.
Explaining the $b \to s \ell^+ \ell^-$ anomalies in $Z^\prime$ scenarios with top-FCNC couplings Motivated by the recent anomalies in $b \to s \ell^+ \ell^-$ transitions, we explore a minimal $Z^\prime$ scenario, in which the $Z^\prime$ boson has a flavour-changing coupling to charm and top quarks and a flavour-conserving coupling to muons. It is found that such a $Z^\prime$ boson can explain the current $b \to s \ell^+ \ell^-$ anomalies, while satisfying other flavour and collider constraints simultaneously. The $Z^\prime$ boson can be as light as a few hundred GeV. In this case, the $t \to c \mu^+ \mu^-$ decay and the $tZ^\prime$ associated production at the LHC could provide sensitive probes of such a $Z^\prime$ boson. As a special feature, the $Z^\prime$ contributions to all rare $B$- and $K$-meson processes are controlled by one parameter. This results in interesting correlations among these processes, which could provide further insights into this scenario. In addition, an extended scenario, in which the $Z^\prime$ boson interacts with the $SU(2)_L$ fermion doublets with analogous couplings as in the minimal scenario, is also investigated. Introduction The flavour-changing neutral current (FCNC) processes are sensitive to possible contributions from heavy mediators and provide a probe of new physics (NP) complementary to direct searches at the high-energy frontier. While there is so far no direct evidence for NP at the LHC, recent measurements of the rare b → sℓ+ℓ− decays exhibit several interesting discrepancies from the Standard Model (SM) predictions of the branching ratios, the angular distributions, and the lepton flavour universality (LFU) ratios [1][2][3][4]. In this respect, the ratios defined as R_K(*) ≡ B(B → K(*)µ+µ−)/B(B → K(*)e+e−) are of particular interest, because the hadronic uncertainties largely cancel out [5]. Therefore, they provide a sensitive test of the LFU. In the SM, these ratios are predicted to be close to unity up to tiny electromagnetic corrections [5][6][7][8]. Recently, the LHCb collaboration presented an updated measurement of R_K using the full data set of 9 fb−1 [9]: R_K = 0.846 +0.042 −0.039 (stat) +0.013 −0.012 (syst), for 1.1 < q² < 6.0 GeV², where q² denotes the dilepton invariant mass squared. This new result confirms the previous LHCb measurement using a data set of 5 fb−1 [10]. However, the tension with the SM prediction has increased from previously 2.5σ to now 3.1σ, due to reduced experimental uncertainties. LHCb has also reported a measurement of R_K* using the full Run-I data set of 3 fb−1 [11]: R_K* = 0.66 +0.11 −0.07 (stat) ± 0.03 (syst) for 0.045 < q² < 1.1 GeV², and R_K* = 0.69 +0.11 −0.07 (stat) ± 0.05 (syst) for 1.1 < q² < 6.0 GeV², which are found to be about 2.1σ and 2.4σ lower than the SM predictions, respectively. Very recently, the LHCb measurements of R_KS and R_K*+ have also been reported [12]. Previous measurements from the BaBar [13] and Belle [14,15] experiments are also consistent with the LHCb results, although with relatively large uncertainties. All these measurements of R_K and R_K* together provide intriguing hints of LFU violation. Recently, the LHCb collaboration reported the most precise measurement of the branching ratio of the Bs → φµ+µ− decay using the full Run-1 and Run-2 data sets [16], B(Bs → φµ+µ−) = (2.88 ± 0.15 ± 0.05 ± 0.14) × 10−8, for 1.1 < q² < 6.0 GeV², where the uncertainties are, in order, statistical, systematic, and from the branching fraction of the normalization mode.
This measurement is found to lie 3.6σ below the SM prediction [17][18][19][20]. Using the same full data set, LHCb later presented an improved measurement of the branching ratio of the rare Bs → µ+µ− decay [21], which is statistically consistent with the previous world average [22]. This new result is compatible with the SM expectation [23,24] within 1σ. However, after combining all the measurements of the Bs,d → µ+µ− decays from the ATLAS [25], CMS [26], and LHCb [21] experiments, the total discrepancy with the SM is found to be at the level of 2σ [27,28]. In addition, the latest measurements of the B0 → K*0µ+µ− [29] and B+ → K*+µ+µ− [30] decays show tensions in the angular distributions with respect to the SM predictions. The local discrepancies in the angular observables P2 and P5′ are observed to be at 2.5-3.0σ in two q² bins [29,30]. Although none of the individual deviations is statistically significant, and further refinement of the hadronic uncertainties in some observables is still an ongoing theoretical issue [31][32][33], the global tension in the b → sℓ+ℓ− decays has motivated a lot of NP interpretations [1][2][3][4]. In this respect, one of the most popular NP explanations is provided by models with an extra heavy neutral vector boson Z′. In these models, the Z′ boson has couplings to quarks, as well as to either electrons or muons. Depending on the quark couplings involved, these Z′ models can be classified into two categories: (i) the Z′ boson has flavour-violating couplings to b and s quarks, and the b → sℓ+ℓ− transitions receive contributions from tree-level Z′ exchange; (ii) the Z′ boson has flavour-conserving couplings to the top quark and affects the b → sℓ+ℓ− transitions via one-loop penguin diagrams [59,60]. In this paper, based on our previous works [61,62], we will consider another possibility, in which the Z′ boson has flavour-violating couplings to top and charm quarks. This scenario does not suffer from the constraints from Bs−B̄s mixing, and the Z′ boson contributes to the b → sℓ+ℓ− decays at the one-loop level. We will derive constraints on the Z′ mass and couplings from various flavour and collider processes, and study the possibility of explaining the b → sℓ+ℓ− anomalies in such a scenario. Future prospects for searches for such a Z′ boson at the LHC will also be discussed. This paper is organized as follows. In section 2, we introduce the phenomenological Z′ scenarios, which have the desired flavour-changing couplings to explain the current b → sℓ+ℓ− anomalies. In section 3, we recapitulate the theoretical frameworks for various flavour processes and discuss the Z′ effects. In section 4, we give our detailed numerical results and discussions. Our conclusions are presented in section 5. Z′ scenarios with top-quark FCNC couplings We consider two phenomenological scenarios involving a Z′ boson. In the first scenario (denoted as scenario I), the Z′ boson has flavour-changing couplings to c and t quarks and a flavour-conserving coupling to the µ lepton. Their interactions are described by an effective Lagrangian in which P_L = (1 − γ5)/2 and the fermion fields c, t, and µ refer to the mass eigenstates. Generally, the couplings X^L_ct and λ^L_µµ are complex and real, respectively. In this scenario, the Z′ boson affects the b → sµ+µ− transitions via a one-loop penguin diagram and could explain the anomalies observed in B → K(*)ℓ+ℓ− decays. Furthermore, sizable contributions to the s → dµ+µ− transitions could arise from the Z′ penguin diagram.
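The display equation for the scenario-I Lagrangian did not survive extraction. Given the couplings just described (a left-handed tc current with strength X^L_ct and a left-handed muon current with strength λ^L_µµ), it plausibly takes the following form; this is a hedged reconstruction from the surrounding text, not a verbatim copy of the original equation, and signs or normalization conventions may differ:

```latex
\mathcal{L}_{Z'}^{\mathrm{I}} \supset
\left( X^{L}_{ct}\,\bar c\,\gamma^{\mu} P_{L}\, t + \mathrm{h.c.} \right) Z'_{\mu}
+ \lambda^{L}_{\mu\mu}\,\bar\mu\,\gamma^{\mu} P_{L}\,\mu \, Z'_{\mu},
\qquad P_{L} = \frac{1-\gamma_{5}}{2}.
```

The Hermitian conjugate is needed because X^L_ct is allowed to be complex, while the diagonal muon coupling λ^L_µµ is real, consistent with the statement above.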
In the second scenario (denoted as scenario II), the Z′ boson is assumed to interact with the SU(2)_L fermion doublets with similar couplings as in scenario I. In the mass eigenstate basis, the effective Lagrangian takes an analogous form, where Q_L,i and L_L,i denote the left-handed SU(2)_L quark and lepton doublets of the i-th generation, respectively. As in scenario I, the couplings X^L_23 and λ^L_22 are generally complex and real, respectively. In this scenario, the bsZ′ and the tcZ′ interactions have the same coupling strength due to SU(2)_L invariance. As a consequence, the Z′ contribution to b → s processes is mainly induced by the bsZ′ interaction at tree level, and the phenomenology of these processes is similar to that in the so-called minimal Z′ scenario discussed in refs. [34,45]. As in scenario I but contrary to the minimal Z′ scenario, the s → d transitions can receive the Z′ contribution at the one-loop level. Furthermore, the Z′ boson also couples to the muon neutrino due to SU(2)_L invariance, which may affect the b → sνν̄ and s → dνν̄ processes. The Z′ effects in various flavour processes will be discussed in detail in the next section. Alternatively, one can consider a Z′ boson with flavour-changing couplings to u and t instead of c and t quarks in the above scenarios. Then, the Z′ contributions to top-quark production and FCNC decays can be very different from those in scenarios I and II. For the Z′ couplings to leptons, a right-handed µ+µ−Z′ interaction can also be added to simultaneously accommodate the b → sµ+µ− and (g − 2)µ anomalies [64,65]. Similarly, scenarios with an e+e−Z′ instead of a µ+µ−Z′ coupling can also be considered. Such a Z′ boson could be directly produced at e+e− colliders, but loses the possibility of explaining the (g − 2)µ anomaly. We leave all these possibilities for future studies, and consider in this paper only the above two scenarios, which have the couplings required to explain the b → sℓ+ℓ− anomalies. Z′ effects in various flavour processes In this section, we recapitulate the theoretical frameworks for various low-energy flavour processes and discuss the Z′ contributions to them as well as to top-quark physics. The rare decays Bs → µ+µ−, B → K(*)µ+µ−, and Bs → φµ+µ− are induced by the b → sµ+µ− transitions, and provide promising probes of NP effects. With the Z′ contributions taken into account, the effective Hamiltonian for the b → sµ+µ− transitions can be written in the standard form of ref. [66], where explicit definitions of the effective operators O1−8 can also be found. The operators most relevant to our study are, however, the two semi-leptonic operators O9 and O10, defined by O9 = (e²/16π²)(s̄γ_µ P_L b)(ℓ̄γ^µ ℓ) and O10 = (e²/16π²)(s̄γ_µ P_L b)(ℓ̄γ^µ γ5 ℓ), respectively. In the SM, their Wilson coefficients have been calculated including next-to-next-to-leading-order (NNLO) QCD [67][68][69] and next-to-leading-order (NLO) electroweak corrections [70]. In the Z′ scenario I, the tcZ′ vertex can affect the b → sµ+µ− transitions through the Z′-penguin diagram shown in figure 1 (b), with the resulting NP Wilson coefficients C^NP,I_9µ = −C^NP,I_10µ given in refs. [61,62,71,72] in terms of a loop function f(x), obtained from the calculation of similar diagrams with an anomalous tcZ vertex in refs. [61,62]. It is noted that the Z′ contributions are enhanced by the CKM factor V_cs/V_ts. In the Z′ scenario II, the Z′ boson can also affect the b → sµ+µ− transitions at tree level through the diagram in figure 1 (c).
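For orientation, the tree-level matching in scenario II can be sketched with the standard normalization of the effective Hamiltonian. This is a generic textbook-style matching, not the paper's own (lost) equation, and signs or scheme conventions may differ:

```latex
C^{\mathrm{NP,II}}_{9\mu} = -\,C^{\mathrm{NP,II}}_{10\mu}
\simeq -\,\frac{\pi}{\sqrt{2}\,G_F\,\alpha_{\mathrm{em}}\,V_{tb}V_{ts}^{*}}\,
\frac{X^{L}_{23}\,\lambda^{L}_{22}}{m_{Z'}^{2}} .
```

With α_em evaluated near the b-quark scale and CKM inputs from global fits (note V_ts < 0), this expression reproduces, to within input choices, the numerical coefficient C^NP,II_9µ ≈ (585 + 11i) X^L_23 λ^L_22 quoted just below for m_Z′ = 1 TeV, which supports the reconstruction.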
The total contributions to the NP Wilson coefficients are then given by the sum of the tree- and loop-level pieces. Numerically, it is found that the tree-level contribution is dominant and the loop-level contribution can be safely neglected. For m_Z′ = 1 TeV, C^NP,I_9µ = (−4.4 − 0.1i) X^L_ct λ^L_µµ in scenario I, while C^NP,II_9µ = (585 + 11i) X^L_23 λ^L_22 in scenario II. A lot of effort has been put into the theoretical treatment of the B → Kµ+µ−, B → K*µ+µ−, and Bs → φµ+µ− decays [19,33,[73][74][75][76][77]. Besides the LFU ratios R_K(*) introduced in section 1, the angular observables in these decays are also known to provide valuable information on potential NP contributions, and hence have been analyzed in detail in refs. [78,79]. For these observables, the main theoretical uncertainties come from the heavy-to-light transition form factors. During recent years, significant progress has been made in calculating these form factors from lattice QCD [18,80] and light-cone sum rules (LCSR) [19,[81][82][83][84][85]. For a recent review of the b → sµ+µ− decays, we refer to refs. [86][87][88], where the SM calculations, input parameters, form factors, and theoretical uncertainties are discussed in detail. Bs−B̄s mixing The Bs−B̄s mixing is induced by the W-box diagram in the SM, and could receive contributions from the tree and penguin diagrams in the Z′ scenario II, as shown in figure 2. The effective Hamiltonian H^{∆B=2} for Bs−B̄s mixing can be written as in ref. [89], with the effective operator O_VLL = (s̄γ_µ P_L b)(s̄γ^µ P_L b). Analytical expressions for the SM Wilson coefficient C^VLL_SM and the QCD renormalization group evolution (RGE) can be found in refs. [66,90] and [89,91], respectively. In the Z′ scenario II, the Z′-penguin diagram shown in figure 2 (c) is found to be negligible, and the tree-level Z′-exchange diagram gives the dominant contribution. In the Z′ scenario I, the Z′ boson contributes to the Bs−B̄s mixing starting at the two-loop level, and its effects are therefore expected to be small. Here we will take C^VLL_NP,I = 0. With the effective Hamiltonian in eq. (12), the off-diagonal mass matrix element of the mixing, M^s_12, is given in ref. [89] in terms of C_VLL = C^VLL_SM + C^VLL_NP and the hadronic matrix element ⟨B̄s|O_VLL|Bs⟩, for which the most recent lattice calculations can be found in ref. [80]. Then, the mass difference between the two mass eigenstates B^H_s and B^L_s and the CP-violating phase read ∆m_s = 2|M^s_12| and φ_s = arg(M^s_12), respectively [92]. In the case of a complex Z′ coupling X^L_23, the phase φ_s can deviate from the SM prediction and hence affects the CP violation S_ψφ measured in the decay Bs → J/ψφ [92]. b → sνν̄ decays The rare decays B → X_s νν̄ and B → K(*)νν̄ are all induced by the quark-level b → sνν̄ transition. With the Z′ effects taken into account, the effective Hamiltonian governing the b → sνν̄ decays can be written as in refs. [93,94], where ℓ denotes the neutrino flavour. In the SM, the Wilson coefficients are induced by the Z-penguin and W-box diagrams, and are lepton flavour universal. An analytical expression for the Inami-Lim function X(x_t) can be found in refs. [95,96]. Numerically, we obtain C^SM_ℓ(µ_W) = 1.481 ± 0.009 [97] after including the two-loop electroweak corrections [98]. In the Z′ scenario II, the tree-level and the one-loop penguin diagrams shown in figure 3 contribute to the b → sν_µν̄_µ transition, resulting in NP contributions expressed in terms of the loop function f(x_t) defined already by eq. (10). Numerically, the tree-level Z′-exchange contribution dominates over that from the one-loop Z′-penguin diagram, as in the b → sµ+µ− decays.
As there are no couplings of the Z′ boson to the ν_e and ν_τ neutrinos, C^NP,II_e = C^NP,II_τ = 0. In the Z′ scenario I, there is no direct coupling of the Z′ boson to neutrinos. The Z′ boson contributes to the b → sνν̄ transition at least at the two-loop level and hence can be safely neglected, i.e., C_NP,I = 0. The inclusive decay B → X_s νν̄ is the theoretically cleanest rare B-meson decay [99]. With the effective Hamiltonian in eq. (16), its differential decay width can be written as in ref. [93], with an overall factor involving the Källén function λ(x, y, z) = x² + y² + z² − 2(xy + yz + zx), the reduced masses m̂_i = m_i/m_b, and s_b = q²/m²_b, with q² the invariant mass squared of the neutrino pair. The factor κ(0) = 0.83 contains the virtual and bremsstrahlung QCD corrections to the b → sνν̄ matrix element [100,101]. For the B → K*νν̄ decay, the dineutrino invariant mass spectrum can be written in terms of three transversity amplitudes, as given in ref. [93]. Figure 4: Feynman diagrams for the s → dµ+µ− transition, including the selected SM diagrams (a, b) and the Z′ contributions (c, d). After taking into account the Z′ effects, the effective Hamiltonian inducing the short-distance (SD) contribution to the K_L,S → µ+µ− decays can be written as in ref. [107]. In the SM, the function Y = Y(x_t) describes contributions from the penguin diagrams with an internal top quark [69,70,95,96], while Y_NL involves the charm-quark contributions [108]. In both Z′ scenarios I and II, the s → dµ+µ− transition is induced by the penguin diagrams shown in figure 4, which result in NP contributions involving the loop function f(x_t) defined already in eq. (10). Here, the NP contributions in the two Z′ scenarios are of the same magnitude, which is different from what is observed in the b → sµ+µ− case. For the K_L,S → µ+µ− decays, only the SD part of the dispersive contribution can be reliably calculated. The branching ratio of K_L → µ+µ− can be written as in ref. [107], with λ_t = V*_ts V_td, λ_c = V*_cs V_cd, and λ ≈ V_us denoting the Wolfenstein parameter. The factor κ_µ contains the relevant hadronic matrix element, which can be extracted from the K+ → µ+ν_µ decay; numerically, we have κ_µ = (2.009 ± 0.017) × 10−9 (λ/0.225)^8 [107,108]. The charm contribution P_c(Y) is found to be P_c(Y) = 0.115 ± 0.017 at NNLO in QCD [108]. For the K_S → µ+µ− decay, the SD and long-distance contributions add incoherently in the total rate [109][110][111][112]. The SD part of the branching ratio is given in refs. [109][110][111][112], with τ_KS the lifetime of the K_S and f_K the kaon decay constant. s → dνν̄ decays The K+ → π+νν̄ and K_L → π0νν̄ decays are induced by the s → dνν̄ transition. They are both theoretically clean, since the relevant hadronic matrix elements can be extracted, with the help of isospin symmetry, from the leading semi-leptonic K_ℓ3 decays [113]. With the Z′ effects taken into account, the s → dνν̄ decays are governed by the effective Hamiltonian of refs. [95,96]. In the SM, similar to the b → sνν̄ transition, the Z-penguin and W-box diagrams with an internal top quark result in a flavour-universal Wilson coefficient X_SM = X(x_t), with the Inami-Lim function X(x_t) introduced already in eq. (16). The contribution from the internal charm quark is represented by the function X_NL. In the Z′ scenario II, the Z′ boson can only interact with the ν_µ neutrino. Its contributions to the operators with ν_e and ν_τ neutrinos arise at the two-loop level and can therefore be neglected, i.e., X^e_NP,II = X^τ_NP,II = 0.
However, the Z′-penguin diagrams similar to the one shown in figure 4 contribute to the Wilson coefficient X^µ, and we find X^µ_NP,II = −Y^II_NP. In the Z′ scenario I, as in the case of the b → sνν̄ transition, the Z′ effects first arise at the two-loop level and are neglected, i.e., X_NP,I = 0. The rare decay K_L → π0νν̄ proceeds in the SM almost entirely through direct CP violation [119,120]. It is completely dominated by the SD loop diagrams with top-quark exchanges, and the charm contribution can be fully neglected [113]. With the help of isospin symmetry, the branching ratio of the K_L → π0νν̄ decay, after summing over the three neutrino flavours, is given in refs. [113,121]; there, κ_L encodes the hadronic matrix element extracted from the K_ℓ3 data [114,122], with numerically κ_L = (2.231 ± 0.013) × 10−10 (λ/0.225)^8. The parameter δ denotes the indirectly CP-violating contribution, and is highly suppressed by the K0−K̄0 mixing parameter |ε| [120]. t → cµ+µ− decay In the SM, the rare FCNC decay t → cµ+µ− is highly suppressed by the Glashow-Iliopoulos-Maiani mechanism [123], with a branching ratio of O(10−10) [124,125]. However, this process could be significantly enhanced by the Z′ boson through the tree-level diagrams shown in figure 5. The branching ratio of the Z′-mediated t → cµ+µ− decay can be written as the q²-integrated differential decay width normalized to Γ_t, the total width of the top quark. The differential decay width can be calculated from the left tree-level diagram shown in figure 5, with the result in the Z′ scenario I expressed in terms of q², the dilepton invariant mass squared, and Γ_Z′, the finite decay width of the Z′ boson. Since the Z′ boson cannot be on-shell in the case of m_Z′ > m_t, we can safely neglect the finite-width effect of the Z′ boson here. After including the NLO QCD correction, the differential decay width of t → cµ+µ− can be rewritten with a multiplicative factor f_NLO(q²), which can be obtained from the NLO QCD calculation of the t → cZ decay [126][127][128] and is expressed in terms of β = 1 − q²/m²_t. Numerically, the NLO QCD corrections decrease the LO width by 6.4% ∼ 9.1% for m_t < m_Z′ < 1 TeV. Expressions in scenario II can be obtained from the above formulas with the replacement (X^L_ct, λ^L_µµ) → (X^L_23, λ^L_22). As will be shown in subsection 4.2.2, the branching ratio B(t → cµ+µ−) is predicted to be below O(10−5) in the two Z′ scenarios. Therefore, their contributions to the top-quark total width Γ_t can be safely neglected. m_Z′ < m_t In the case of m_Z′ < m_t, the branching ratio of the Z′-mediated t → cµ+µ− decay can also be written as in eqs. (32) and (33). However, since the intermediate Z′ boson can be on-shell in this case, it is necessary to consider the Z′ finite-width effect in eq. (32). When m_W < m_Z′ < m_t, the main Z′ decay modes in scenario I are the Z′ → µ+µ− and Z′ → bc̄W+ decays. Therefore, the total width of the Z′ boson is the sum of these partial widths, where the decay width of Z′ → µ+µ− and the width of the top-quark-mediated Z′ → bc̄W+ decay are expressed in terms of y_Z′ = m²_Z′/m²_t. In scenario II, the Z′ boson can also decay into the additional channels Z′ → ν_µν̄_µ and Z′ → bs̄, and the total width is enlarged accordingly; the decay widths Γ_II(Z′ → µ+µ−) and Γ_II(Z′ → bc̄W+) can be obtained from eqs. (36) and (37) with the replacement (λ^L_µµ, X^L_ct) → (λ^L_22, X^L_23). In the SM, t → bW is the main decay channel of the top quark.
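The partial-width equations referenced above did not survive extraction. For a purely left-handed coupling as in the Lagrangian sketched earlier, the standard leading-order expressions would be as follows; this is a hedged reconstruction (the paper's exact conventions may differ, and the three-body width Z′ → bc̄W+ is omitted here):

```latex
\Gamma(Z' \to \mu^+\mu^-) = \frac{(\lambda^{L}_{\mu\mu})^{2}}{24\pi}\, m_{Z'},
\qquad
\Gamma(t \to c Z') = \frac{|X^{L}_{ct}|^{2}}{32\pi}\,
\frac{m_{t}^{3}}{m_{Z'}^{2}}\,(1-y_{Z'})^{2}(1+2y_{Z'}),
\qquad y_{Z'} = \frac{m_{Z'}^{2}}{m_{t}^{2}} .
```

The two-body t → cZ′ form is the same kinematic structure as the familiar t → bW width with the vertex coupling swapped in, which is why it is proportional to |X^L_ct|², as the text notes below.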
When m_Z′ < m_t, the top quark can also decay into an on-shell Z′, which contributes to the top-quark total width. After including the NLO QCD correction, we can write the decay width of t → bW following refs. [129][130][131]. For the t → cZ′ decay in scenario I, the decay width including the NLO QCD correction can be written as the tree-level result multiplied by the correction factor f_NLO(q²), which is obtained from the calculation of the NLO QCD correction to the t → cZ decay [126][127][128]; its explicit expression has been given in eq. (34). Numerically, we find f_NLO(m²_Z′) ≈ −10.7% ∼ +5.3% when m_Z′ varies in the range 105 GeV < m_Z′ < m_t. Expressions in the Z′ scenario II can be obtained from the above formulas with the replacement (X^L_ct, λ^L_µµ) → (X^L_23, λ^L_22). Since the decay rate of t → cZ′ is proportional to |X^L_ct|² in scenario I, the top-quark width can provide a unique constraint on the tcZ′ coupling in the case of m_Z′ < m_t. Numerical results and discussions In this section, we proceed to present our numerical analysis of the Z′ scenarios presented in section 2. In table 1, we list the main input parameters used throughout this work. Table 2 summarizes the SM predictions and the current experimental data for the various processes discussed in the previous section. Flavour constraints As discussed in section 3, the Z′ contributions to the b → sµ+µ−, s → dµ+µ−, b → sνν̄, and s → dνν̄ transitions are all controlled by the products X^L_ct λ^L_µµ/m²_Z′ in scenario I or X^L_23 λ^L_22/m²_Z′ in scenario II. These two products are complex in general, and constraints on them will be derived in terms of their real and imaginary parts in the following analysis. In the Z′ scenarios, the constraints from the branching ratios B(K_L,S → µ+µ−)_SD are shown in figure 7. As discussed in subsection 3.3, the Z′ boson in scenarios I and II affects the s → dµ+µ− transition via the same penguin diagram. Therefore, the constraints on the Z′ parameters are identical in the two scenarios, as shown in figure 7. It can also be seen that the bounds on the imaginary parts of X^L_ct λ^L_µµ/m²_Z′ and X^L_23 λ^L_22/m²_Z′ are quite weak. In addition, the constraints from the s → dµ+µ− decays, while relatively weaker than those from the b → sµ+µ− decays, are still compatible with the parameter regions required to explain the b → sµ+µ− anomalies shown in figure 6. Since the Z′ boson does not couple directly to the neutrinos in the Z′ scenario I, the b → sνν̄ and s → dνν̄ decays constrain scenario II only. In order to constrain the Z′ parameters, we consider five b → sνν̄ processes (i.e., the decays B → X_s νν̄, B0 → K(*)0νν̄, and B+ → K(*)+νν̄) and two s → dνν̄ processes (i.e., the decays K+ → π+νν̄ and K_L → π0νν̄). We show in figure 8 the allowed regions of the real and imaginary parts of X^L_23 λ^L_22/m²_Z′ from the branching ratios of these decays. It can be seen that the bound from the s → dνν̄ decays is much weaker than that from the b → sνν̄ transitions. This can be understood by the fact that the Z′ contributions to the former are suppressed by O(λ³), while its contributions to the latter do not suffer any CKM suppression. In addition, the constraints from the b → sνν̄ and s → dνν̄ decays are consistent with the best-fit region from the b → sµ+µ− processes. In the Z′ scenario I, as explained in subsection 3.2, the Bs−B̄s mixing cannot provide any relevant constraint on the Z′ parameters.
In the Z′ scenario II, the Z′ contribution is proportional to the product (X^L_23)²/m²_Z′ due to the tree-level Z′ exchange. After taking into account the constraints from the mass difference ∆m_s and the CP-violating observable S_ψφ, we obtain the allowed regions in the plane (Re X^L_23/m_Z′, Im X^L_23/m_Z′), which are shown in figure 9. It can be seen that the mass difference ∆m_s provides a strong bound on Re X^L_23/m_Z′. For Im X^L_23/m_Z′, the bound arises mainly from the CP-violating observable S_ψφ and is weaker than the one on Re X^L_23/m_Z′, due to the currently larger experimental uncertainty of S_ψφ. The phase of the maximum allowed value of |X^L_23|/m_Z′ also matches the CKM phase β_s [134]. Summarizing the numerical analysis made above, we can see that the strongest constraints are provided by the b → sµ+µ− processes in the Z′ scenario I. Therefore, after considering all the flavour processes, we can obtain the combined allowed regions from figure 6 at the 95% CL. However, different from scenario I, the Bs−B̄s mixing also provides an independent bound on the ratio X^L_23/m_Z′. Numerically, the combined constraints shown in figure 9 are obtained at the 95% CL. In the case of λ^L_22/m_Z′ ∼ O(1) TeV−1, the constraints on X^L_23/m_Z′ from the b → sµ+µ− processes and the Bs−B̄s mixing are of the same order. Furthermore, since Re X^L_23 λ^L_22/m²_Z′ is bounded from below by the b → sµ+µ− processes, a lower bound, λ^L_22/m_Z′ > 0.07 TeV−1, can be derived in order to simultaneously satisfy the constraints from the Bs−B̄s mixing. Therefore, the Z′ couplings in scenario II exhibit the hierarchy λ^L_22 ≫ X^L_23. Finally, our numerical analysis has shown that the main constraints in scenario II are obtained from the processes affected by the tree-level Z′ contributions. Therefore, the flavour phenomenology of the Z′ scenario II is almost identical to that of the minimal Z′ scenario discussed in refs. [34,45]. Collider constraints In this subsection, we discuss the collider constraints on the Z′ scenarios in the cases of both m_Z′ > m_t and m_Z′ < m_t. For recent collider studies of similar Z′ scenarios with top-FCNC couplings, we refer to refs. [162][163][164]. m_Z′ > m_t In the case of m_Z′ > m_t, constraints on the Z′ parameters could be obtained from the decay t → cµ+µ−. However, the current LHC searches for the decay t → cµ+µ− have only been performed at the Z peak, with |m_µ+µ− − m_Z| < 15 GeV, and interpreted as a bound on B(t → cZ) [141,[165][166][167]. No dedicated searches for non-resonant (outside the Z peak) t → cµ+µ− decays have been performed yet. In ref. [168], an upper bound on B(t → cµ+µ−) was estimated by using the experimental bounds on B(t → cZ) and B(Z → µ+µ−) = 3.37% [132]. With such an approach, an upper bound B(t → cµ+µ−) < 4.4 × 10−6 can be derived from the current bound B(t → cZ) < 1.3 × 10−4 set by the ATLAS experiment with an integrated luminosity of 139 fb−1 at 13 TeV [141]. In this way, constraints on the Z′ parameters can be derived from the bound B(t → cµ+µ−) < 4.4 × 10−6. However, we find that such a constraint is at least one order of magnitude weaker than those obtained from the low-energy flavour processes discussed in the last subsection. Within the effective field theory (EFT) framework, by using different signal regions of the LHC searches for the rare FCNC decay t → cZ, an improved approach has been developed in ref. [169].
However, in the case when m_Z′ is not far from m_t, a framework with four-fermion effective operators is not appropriate to describe the Z′ contributions to the decay t → cµ+µ−. m_Z′ < m_t In the case of m_Z′ < m_t, the t → cµ+µ− decay involves the resonant Z′ contribution, and is expected to provide strong constraints on the Z′ parameters. However, similar to the case of m_Z′ > m_t, no experimental searches for the t → cµ+µ− decay mediated by the Z′ resonance have been performed yet. A detailed analysis of the signal shape could be used to derive constraints from the current B(t → cZ) bound, as performed in ref. [169]. Especially for |m_Z′ − m_Z| < 15 GeV, the signal regions of the t → cZ and t → cZ′ decays largely overlap, and stringent constraints on the Z′ parameters could therefore be derived. We leave these possibilities for further work, and concentrate on the mass region 105 GeV < m_Z′ < m_t to avoid the potentially strong bound from B(t → cZ) [141]. As discussed in subsection 3.6.2, in the case of m_Z′ < m_t, the decay t → cZ′ can contribute to the top-quark width. Compared to the SM prediction Γ^SM_t = 1.3 GeV [170], the current measurement Γ_t = (1.42 +0.19 −0.15) GeV [132] leaves O(20%) room for the Z′ effects. We show in figure 10 the allowed regions in the plane (|X^L_ct(23)|, m_Z′) in scenario I (II). For scenario I, it is noted that the top-quark width provides a unique bound on the coupling X^L_ct. For scenario II, the bound on X^L_23 is much weaker than that obtained from the Bs−B̄s mixing. Predictions As mentioned in the last subsection, experimental searches for the t → cµ+µ− decay off the Z pole have not been performed yet. Using the allowed parameter space derived in the global fit, we can make predictions for the branching ratio B(t → cµ+µ−) in the two Z′ scenarios as a function of the Z′ mass, as shown in figure 11. Although there are several studies of the expected sensitivities to the t → cZ decay at the LHC [171] and other future colliders [172][173][174], to our knowledge, a detailed analysis of the t → cµ+µ− decay at the LHC has not been performed yet. In order to estimate the current (future) experimental sensitivity to B(t → cµ+µ−), we adopt as a benchmark the product of B(Z → µ+µ−) = 3.37% [132] and the current (future projected) experimental limit on B(t → cZ). With the current ATLAS limit B(t → cZ) < 1.3 × 10−4 based on the full Run-2 data [141] and the future expected sensitivity of 5 × 10−5 at the HL-LHC with 3 ab−1 [171], the current and future sensitivities to B(t → cµ+µ−) are estimated to be 4.4 × 10−6 and 1.7 × 10−6, respectively, as shown by the dashed and dotted lines in figure 11. Here we make the following two observations: • In the case of m_Z′ < m_t, the branching ratio of the t → cµ+µ− decay in the two Z′ scenarios is strongly enhanced by the resonance effect, and is several orders of magnitude higher than in the case of m_Z′ > m_t. In the mass region 105 GeV < m_Z′ < m_t, the predicted B(t → cµ+µ−) in scenario I (II) is higher (lower) than the current and future estimated bounds. Therefore, experimental searches for the t → cµ+µ− decay are expected to probe a Z′ mass window of [105 GeV, m_t]. • In the case of m_Z′ > m_t, the predicted B(t → cµ+µ−) in scenario I is compatible with the estimated sensitivities, while the one in scenario II is several orders of magnitude lower than the estimated future sensitivity.
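For reference, the sensitivity benchmarks quoted above are simple narrow-width products of the quoted t → cZ limits with B(Z → µ+µ−) = 3.37%:

```latex
B(t \to c\mu^+\mu^-)\big|_{\mathrm{now}}
\approx 1.3\times10^{-4} \times 0.0337 \approx 4.4\times10^{-6},
\qquad
B(t \to c\mu^+\mu^-)\big|_{\mathrm{HL\text{-}LHC}}
\approx 5\times10^{-5} \times 0.0337 \approx 1.7\times10^{-6}.
```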
Therefore, direct searches for the decay in scenario II with a heavy Z′ boson could be very challenging for the LHC and its high-luminosity upgrade. Besides the t → cµ+µ− decay, which should be searched for in pp → tt̄ production, the associated production pp → tZ′ could also be considered and may provide more relevant constraints for a heavy Z′ boson. Recently, the associated production of a single top quark with dileptons has been investigated in the EFT framework [175], and a 95% CL bound on the scale of the effective operator (t̄γ_µ P_L c)(µ̄γ^µ P_L µ), Λ_tcµµ = 1.1 (1.5) TeV, has been obtained for the current (expected) LHC sensitivity; a dedicated recast of this search for our Z′ scenarios will be our future work. We also encourage our experimental colleagues to carry out relevant searches for such a Z′ boson. An important feature of our Z′ scenarios is that the Z′ contributions to all the FCNC decays are controlled by the same product, X^L_ct λ^L_µµ/m²_Z′ in scenario I or X^L_23 λ^L_22/m²_Z′ in scenario II. Therefore, all the B- and K-meson processes are strongly correlated. As an illustration, we show in figure 12 the correlations among some observables. From these plots, we make the following two observations: • The branching ratio B(K_L → µ+µ−) is suppressed by the Z′ effects in scenario I. However, its SM value remains almost unchanged in scenario II, due to the strongly constrained Z′ couplings. For the B0 → K*0νν̄ decay, the situation is opposite: the branching ratio is enhanced in scenario II, while remaining unchanged in scenario I. These correlations can be used to distinguish the two Z′ scenarios. In addition, the Z′ effects on the branching ratios of the K_S → µ+µ−, K_L → π0νν̄, and K+ → π+νν̄ decays are all found to be negligible in the two scenarios and will not be shown here. • The correlations between R_K(*) and C_10 are numerically almost the same in the two scenarios. As expected, the Z′ effects make R_K(*), P_5′, and B(Bs → φµ+µ−) move closer to the experimental measurements. Figure 12: Correlations between the observables of B- and K-meson decays in the two Z′ scenarios. The 1σ ranges of the experimental measurements and the SM predictions are shown, except for the data on B(B0 → K*0νν̄), which corresponds to the 90% CL limit. With high-precision measurements at the HL-LHC and Belle II [176], these interesting correlations could provide further insights into our Z′ scenarios. In the case of m_Z′ ≫ m_t, the Z′ contributions to both the t → cµ+µ− and b → sµ+µ− processes are controlled by the same products, X^L_ct λ^L_µµ/m²_Z′ in scenario I or X^L_23 λ^L_22/m²_Z′ in scenario II, since they are described by the effective tcµ+µ− operator. Thus, we show in figure 13 the correlation between R_K* and B(t → cµ+µ−). Figure 13: Correlation between R_K* and B(t → cµ+µ−) in the Z′ scenarios I (left) and II (right) for m_Z′ = 1 TeV with real (gray) and complex (red) couplings. It can be seen that B(t → cµ+µ−) approaches its maximum value as R_K* approaches the experimental data. Conclusions Motivated by the recent anomalies in the b → sℓ+ℓ− transitions, we have considered a phenomenological Z′ scenario (denoted as scenario I), in which a heavy Z′ boson couples only to t̄c and µ+µ− with left-handed chirality. The Z′ effects on the b → sµ+µ− processes automatically induce opposite contributions to the effective operators O9 and O10, which is favored by the model-independent analyses of the b → sℓ+ℓ− anomalies. We have performed a global fit to all relevant experimental data.
It is found that such a minimal Z′ scenario can address the current b → sℓ+ℓ− anomalies, while simultaneously satisfying other flavour and collider constraints. The mass of the Z′ boson can be less than 1 TeV. In the region 105 GeV < m_Z′ < m_t, the t → cµ+µ− decay is significantly enhanced by resonance effects and can serve as a sensitive probe of such a Z′ boson. As an important feature of this scenario, all the low-energy flavour observables are controlled by the product X^L_ct λ^L_µµ/m²_Z′. We have found interesting correlations among the various flavour observables, which could provide further insights into this scenario. We have also considered an extended scenario (denoted as scenario II), in which the Z′ boson interacts with the SU(2)_L fermion doublets with analogous couplings as in scenario I. Due to the tree-level Z′ contributions, the Z′ couplings suffer from severe constraints from the b → sℓ+ℓ− processes and the Bs−B̄s mixing, which pushes the collider signals of the t → cµ+µ− decay below the estimated sensitivity of the HL-LHC. The correlations between flavour observables in scenario II are different from those observed in scenario I, and can therefore be used to distinguish between the two scenarios. Our scenarios can be modified by replacing the tcZ′ coupling with a tuZ′ coupling. Then the rare K-meson decays could become more relevant due to the larger CKM factors involved, and the tZ′ associated production at the LHC may play a crucial role in searching for such a Z′ boson. A right-handed µ+µ−Z′ coupling can also be taken into account to simultaneously accommodate the (g − 2)µ anomaly. In addition, if the µ+µ−Z′ coupling is replaced by an e+e−Z′ coupling, the direct production of the Z′ boson at e+e− colliders should then be taken into account. It is noted that, although our Z′ scenarios are not fully realistic models, our studies have shown that a vector boson with top-quark FCNC interactions can explain the current b → sℓ+ℓ− anomalies and may provide new avenues for model building. We also encourage our experimental colleagues to carry out relevant searches for such a Z′ boson at the LHC and its high-luminosity upgrade.
The Role of Apolipoproteins in the Commonest Cancers: A Review Simple Summary Apolipoproteins (APOs) are crucial components in our blood that are responsible for fat management. Recent scientific discoveries suggest a provocative link between APOs and a range of cancers, including, but not limited to, breast, lung, and prostate cancer. The specific role of some APOs in causing cancer remains enigmatic. In this review, we summarize evidence from the literature supporting the potential involvement of APOs in the onset of cancer. We also highlight promising avenues for treatment through the inhibition of APOs. Abstract Apolipoproteins (APOs) are vital structural components of plasma lipoproteins that are involved in lipid metabolism and transport. Recent studies have reported an association between apolipoprotein dysregulation and the onset of a variety of human cancers; however, the role of certain APOs in cancer development remains unknown. Based on recent work, we hypothesize that APOs might be involved in the onset of cancer, with a focus on the most common cancers, including breast, lung, gynecological, colorectal, thyroid, gastric, pancreatic, hepatic, and prostate cancers. This review will focus on the evidence supporting this hypothesis, the mechanisms linking APOs to the onset of cancer, and the potential clinical relevance of their various inhibitors. Introduction Apolipoproteins (APOs) are proteins that bind to lipids to form lipoproteins, and are mainly synthesized in the liver and intestine [1]. APOs are conventionally classified as either insoluble or soluble [2]. The lipid component of lipoproteins is insoluble in water; insoluble APOs remain permanently attached to the lipoprotein molecule and cannot exist freely in the plasma [2]. However, due to their amphipathic nature, apolipoproteins and other amphipathic molecules can surround lipids to form a water-soluble lipoprotein that can be carried through blood or lymph [2]. Figure 1 illustrates the general structure of apolipoproteins. By acting as lipid carriers, APOs serve as ligands for cell membrane receptors, cofactors for enzymes, and structural components of lipoproteins [3,4]. Based on the densities of the formed lipoproteins, lipoproteins are divided into five types: chylomicrons (CMs), very-low-density lipoprotein (VLDL), low-density lipoprotein (LDL), intermediate-density lipoprotein (IDL), and high-density lipoprotein (HDL) [5,6]. Figure 1. A graphical representation of an apolipoprotein illustrates its unique structure, which confers an amphipathic property to the molecule. This enables it to interact with both the lipids in the core of lipoproteins and the watery plasma environment. As a result, apolipoproteins act as biochemical keys, granting lipoprotein particles access to specific locations for the transportation, reception, or modification of lipids. Additionally, apolipoproteins play a role in stabilizing the structure of lipoproteins.
As shown in Table 1, the levels of APOs in different cancers have been studied to identify them as possible biomarkers. In this review, we recapitulate the current literature on the cellular mechanisms involving APOs in different cancers. We will also discuss the many roles played by APOs in the onset and progression of cancer and their potential to act as possible cancer biomarkers. Functions of Apolipoproteins APOs are physiologically expressed in both normal and malignant cells; consequently, the following sections will discuss the function of APOs in both contexts. Functions of Apolipoproteins in Normal Cells APOs play a vital role in normal human vascular biology, lipoprotein metabolism, and lipid transport [11]. Plasma APOs bind to the lipoprotein surface and stabilize its structure; they also act as cofactors for enzymes, regulate enzyme activity, and induce lipid metabolism [3,4,11]. The configuration and content of APOs influence lipoprotein formation and metabolism. APOs bind to membrane lipoprotein receptors and regulate the cellular uptake of lipoproteins. For example, while APOB-100 and APOE bind to the LDL (APOB/E) receptor, APOE binds to the LDL receptor-related protein (LRP) of the liver and extrahepatic tissues, where they are responsible for the redistribution of cholesterol among cells for use in membrane biosynthesis and as a precursor for steroid production [12,13]. On the other hand, APOC2 binds to triglycerides (TGLs), chylomicrons (CMs), very-low-density lipoproteins (VLDLs), low-density lipoproteins (LDLs), and high-density lipoproteins (HDLs) in plasma [14]. The three main stages of lipid metabolism include the distribution of triglycerides between organs (the fuel transport pathway), the maintenance of the extracellular cholesterol pool (the overflow pathway), and HDL metabolism (reverse cholesterol transport) [11].
Lipoproteins can be metabolized via two pathways: an exogenous and an endogenous pathway (Figure 2). The exogenous pathway delivers triglycerides to the peripheral tissues via chylomicrons and VLDLs [8]. The enterocytes reintegrate triacylglycerols, phospholipids, and cholesterol on APOB-48 into chylomicron particles; these particles are released into the lymph and reach the plasma via the thoracic duct [8]. Precursor chylomicrons primarily consist of APOB-48 in addition to APOA1, APOA2, APOA4, APOA5, APOC1, APOC2, APOC3, and APOE, which form the surface of the chylomicrons [15]. At peripheral tissues, lipoprotein lipase (LPL) on blood vessel walls breaks down chylomicron triacylglycerols into fatty acids and glycerol for tissue absorption and use or storage. After lipid release, chylomicrons shrink, becoming remnants that the liver absorbs via LDL receptors and LRP. APOE and APOB-48 on remnants aid liver uptake, while APOA and APOC return to HDL in the blood. LDL's role is to transport cholesterol to cells, not to metabolize chylomicron fats [16]. Figure 2. The exogenous pathway is a process for delivering triglycerides to peripheral tissues using chylomicrons and VLDLs. Chylomicron particles, consisting of APOB-48 and various other proteins, are formed by enterocytes, released into the lymph, and eventually enter the bloodstream. In peripheral tissues, chylomicron triglycerides are broken down, and the remnants are taken up by the liver, while APOA and APOC return to HDL. In the endogenous pathway, the liver produces triglycerides carried to peripheral tissues by VLDL. These VLDL particles initially contain APOB100 and acquire additional proteins and cholesteryl esters from HDL. In peripheral tissues, VLDL triglycerides are partially broken down into VLDL remnants, which are either absorbed by the liver or converted into LDL.

In contrast, in the endogenous pathway, hepatocytes produce triglycerides de novo, which are transported to the peripheral tissues by the VLDL particles [15]. APOB100 is the only APO component of precursor VLDLs. APOB lipidation is induced by the microsomal triglyceride transfer protein. Once VLDL enters the plasma, it acquires APOs (APOA1, APOA2, APOA4, APOC1, APOC2, APOC3, and APOE) as well as cholesteryl esters from HDL [15]. Like chylomicron triacylglycerols, in the peripheral tissues, VLDL triacylglycerols are partially hydrolyzed by lipoprotein lipase (LPL) to produce VLDL remnants that only bind to APOE and are thus absorbed by the liver or further digested by hepatic triglyceride lipase, transforming them into LDL [17][18][19].
Lastly, reverse cholesterol transport and cholesterol recycling are responsible for the metabolism of HDL and are involved in the removal of cholesterol from peripheral cells to the liver [20][21][22]. The compiled HDL on APOA is covalently modified and converted into precursor HDL, and released as phospholipid-rich, disc-shaped particles by the liver and intestine [15]. Nascent HDL particles absorb free cholesterol from cells by binding to ATP-binding cassette transporter A1 (ABCA1) along with APOA1 and APOA4 [15]. The major APO component of HDL, APOA1, triggers lecithin-cholesterol acyltransferase (LCAT), which induces the esterification of free cholesterol and transforms HDL3 particles to larger particles (HDL2) through the accumulation of apolipoproteins (APOA, APOC, and APOE), cholesteryl ester, and triglycerides [23]. Reverse cholesterol transport can occur via three different pathways. First, HDL2 particles with multiple APOE copies are absorbed into the liver by the LDL receptor [20]. The second pathway involves the scavenger receptor B1-mediated absorption of cholesteryl esters from HDL by the liver [24]. Lastly, the third plausible pathway is the transfer of the cholesteryl esters from HDL to triglyceride-rich lipoproteins by the cholesteryl ester transfer protein [20].
Additionally, macrophages express APOC2, and since macrophages consume large amounts of energy, APOC2 aids in the transport of lipids into macrophage cells. Enhanced STAT1 protein synthesis induces the overexpression of APOC2, as shown in studies on mice [25].

The disruption of APO concentrations, regulating metabolism, enzyme functions, or lipoprotein production, can significantly affect the antiatherogenic properties of HDL and lead to the onset of cardiovascular disease, diabetes mellitus, and obesity [26,27].

Role of Apolipoproteins in Cancer
Lipid metabolism is considered one of the most significantly affected metabolic pathways in cancer [28]. Additionally, several studies have reported a correlation between lipids and APOs in cancer onset and development. The following sections will discuss the presence and roles of different APOs in the most common human cancers.

Role of Apolipoproteins in Breast Cancer
The APOA subtype APOA1 has a tumor-suppressive role in breast cancer and plays a role in inducing apoptosis, thus inhibiting the progression of cancer cells [29]. APOA is an essential apolipoprotein that plays a significant role in reverse cholesterol transport [30]. In cancer cells, the interaction between complement component 1q subcomponent binding protein (C1QBP) and APOA leads to the binding of C1QBP to APOA, inhibiting its expression and weakening APOA's antioxidation ability, leading to carcinogenesis [31]. Additionally, another study established that APOA1 functions as a cofactor for LCAT, a key participant in lipid metabolism. This finding is significant, as lipid metabolism has been extensively linked to cancer in numerous earlier studies. Specifically, the relationship between lipoproteins or lipids and cancer risk has been investigated, and correlations have been established [30]. Studies have shown reduced APOA1 expression in the serum of breast cancer patients, and APOA1 gene mutations are linked with an increased risk of developing breast cancer [32][33][34]. Thus, the normal expression of APOA1 causes cancer cell apoptosis (Figure 3). Nouri et al. [35] conducted a meta-analysis and reported that APOA1 was linked to an increased risk of intraocular metastasis from breast cancer. Contrary to the role of APOA1 as a tumor suppressor in breast cancer, a study by Cedo et al. [36] demonstrated that APOA1-containing HDL triggered breast tumor development in PyMT mice, plausibly due to low oxLDL and 27-hydroxycholesterol levels.
Mutations in the APOB gene, including the 7673 C/T and 12669 G/A variants, are significantly associated with an increased risk of breast cancer and are frequently present in menopausal females [34]. Furthermore, enhanced APOB expression is considered a statistically significant risk factor for the intraocular metastasis of breast cancer [34]. However, in that study, APOA1 proved to be a more effective biomarker than APOB for distinguishing intraocular metastasis. According to a recent study, the total cholesterol to APOB ratio did not distinguish between breast cancer patients and control patients [37]. However, the level of APOB was higher in triple-negative breast cancer patients in comparison to other molecular types [37]. Yet APOB levels were comparable across all stages of breast cancer [37]. Another study revealed that an increase in the APOB to APOA1 ratio is associated with an increase in the severity of breast cancer, although this association was not statistically significant [38].

In another study, plasma levels of APOC1 were reduced in breast cancer patients [39]. The study further revealed that administering an APOC1 peptide to mice with xenografts had an anti-tumor effect, underscoring the significance of APOC1 in breast cancer development [39]. Despite this crucial discovery, which is echoed in human cases, the specific role of APOC1 in human breast cancer development is still not well defined. APOC1 testing could also differentiate between triple-negative and non-triple-negative breast cancer patients [40]. In addition, it has been shown that breast cancer patients have decreased APOC1 and APOC2 but increased APOC3 levels [41].

APOD is considered carcinogenic due to its presence in breast primary cyst fluid, as these cysts increase the risk of developing breast cancer threefold [42]. Serum levels of APOD peak when the tumor is benign but decline with more invasive and metastatic forms of breast tumors [43]. Changes in APOD concentration are influenced by breast cancer metastasis and invasion, but older breast cancer patients also have increased levels compared to younger patients [44]. A recent study concluded that APOD gene expression was significantly reduced in breast cancer patients [45]; APOD displays an anti-tumor effect by suppressing MAPK, leading to a restriction in cell mitosis [46].
Even though the function of APOE in cancer progression is unclear, it can inhibit proliferation due to its high-affinity interaction with proteoglycan and heparin in the cancer tissue [47]. The association between breast cancer and APOE was initially controversial due to a lack of associations between them; however, research has revealed that patients with one or two copies of the e4 allele along with elevated triglyceride levels were at a fourfold risk of developing breast cancer in comparison to those with low triglyceride concentrations [48,49]. Additionally, researchers have documented contradicting reports on APOE polymorphisms and breast cancer risk; while some studies have reported an association [48,[50][51][52][53][54], other studies did not report any link between APOE polymorphism and breast cancer risk [55,56]. A meta-analysis concluded that, among Asians, carriers of the E4E4, E4E3, and E4E2 genotypes were at a higher breast cancer risk compared to carriers of the E3E3 genotype [57]. Likewise, in Taiwan, studies reported a risk of breast cancer in females with the APOE genotype, although neither the APOE2 nor APOE4 alleles showed a notable correlation with markers of cell growth [51,52,58]. On the other hand, in Caucasians, no association was found between breast cancer and APOE2, APOE3, or APOE4 [57]. Moreover, APOE has been positively correlated with breast cancer progression and invasion [47].

APOH, a multifunctional apolipoprotein, has been detected in the sera of breast cancer patients [59,60]. Although it was previously established that APOH is elevated in breast cancer, Chung et al. identified a novel 3808 Da APOH fragment that was elevated in breast cancer sera [59]. Elevated levels of APOJ were also found in breast cancer patients [61,62]; Yom et al. (2009) [62] reported the overexpression of APOJ in early-stage invasive breast cancer, indicating a role of APOJ in the initiation of breast cancer tumorigenesis. Moreover, the study also documented a high expression of APOJ in patients with <T2 stage breast cancer, thus suggesting the immunostaining of APOJ as a predictive tool in addition to its role as a prognostic factor for recurrence [62]. In vitro studies have shown APOJ knockdown to enhance sensitivity to chemotherapeutic drugs, as well as reduce cell proliferation and metastasis [63][64][65].

Although APOLs are involved in various cancers, their role in breast cancer is not well documented [7,[66][67][68][69]. Nevertheless, APOL3 has been found to regulate neuronal calcium sensor 1 (NCS-1), which plays a critical role in promoting the metastasis and survival of breast cancer cells in vitro [70,71].

Although APOM has not been confirmed to be increased or decreased in breast, cervical, and ovarian cancers, some mechanisms explain its plausible inhibition of tumor growth [47]. APOM carries and stabilizes the level of sphingosine-1-phosphate, a molecule responsible for reducing cancer invasion [72], suggesting an underlying mechanism by which APOM reduces breast cancer growth. A recent study illustrated that APOM expression is statistically significantly lower in breast cancer tissue as compared to normal tissue [73]. In vitro data also showed APOM to suppress breast cancer cell proliferation, migration, and invasion [73]. Table 2 provides a concise overview of the functions of different apolipoproteins in relation to breast cancer.
Apolipoproteins in Gynecological Cancers
Studies have reported a role of APOA1 in gynecological cancers. The loss of APOA1, APOA2, and APOA4 was reported in ovarian cancer patients [74]. APOA1 was found to have a suppressive effect on ovarian cancer [74]. In vivo experiments demonstrated that the induction of APOA1 in mice suppressed palpable tumors and metastasis and further improved the survival rate by modulating the immune system and altering the host environment. APOA1 changes the phenotypic expression of macrophages from pro-tumor (M2) to anti-tumor (M1) and blocks tumor-associated angiogenesis by downregulating MMP-9 expression [75]. In ovarian cancer patients, low serum APOA1 levels were detected, suggesting early-stage ovarian cancer, with a sensitivity of 54% and a specificity of 98% [76]. The serum level of APOA4 was also noted to be reduced in patients with ovarian cancer [74].

Additionally, a significant association was reported between increased APOB and high-grade ovarian cancer [77]. This study involved samples from newly diagnosed high-grade ovarian cancer cases and suggested that increased APOB levels might be indicative of a favorable prognosis. Similarly, levels of APOC3 were significantly higher in patients with malignant ovarian cancer in comparison to benign ovarian cyst samples [78]. Despite the limited sample size, the association with APOC3 reached statistical significance, with a p-value of 0.04. APOD is frequently linked with HDL in the plasma and is associated with favorable prognosis in cancer [79,80]. Concordantly, in epithelial ovarian carcinoma, the overall survival of patients was higher for APOD-positive tumors than for APOD-negative tumors [80]. However, APOD levels did not show a significant enough correlation with the presence of ovarian cancer to warrant its use as a diagnostic indicator, given that only 18 out of the 68 tested samples exhibited positive staining [80]. Yet, a notable correlation with prognosis was found for tumors larger than 1 cm in size. Contrary to the role of APOD in ovarian cancer, the overexpression of APOE was found in the sera of ovarian cancer patients in comparison to healthy individuals [81][82][83]; this overexpression was vital for the growth and survival of ovarian cancer [82]. Chen and colleagues [82] inhibited APOE expression in vitro and reported cell cycle arrest in the G2 phase as well as the induction of apoptosis, further supporting the role of APOE in ovarian cancer cell proliferation and survival. In addition, the upregulated expression of APOJ has been noted in the plasma of ovarian cancer patients and indicated as an early diagnostic and predictive marker for adverse outcomes [84]. A large-scale serial analysis of ovarian cancer tumors found that the APOJ gene was upregulated in malignant samples as compared to non-malignant samples [85]. The most recent analysis studied a multiple-biomarker combination including APOA1 and APOA2 for ovarian cancer [86]. The results showed that the most optimal biomarker combination was a panel of five markers: CA 125, HE4, CA 15-3, APOA1, and APOA2, giving a sensitivity of 93.71% and a specificity of 93.63% for detecting ovarian cancer [86].
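Sensitivity and specificity figures such as those reported for the five-marker panel above come from a simple confusion-matrix calculation. The following Python sketch shows that calculation on made-up labels and predictions; the data, the panel calls, and the function name are hypothetical placeholders and are not taken from the cited study [86].

```python
# Minimal sketch: computing sensitivity and specificity for a binary
# biomarker panel classifier. All data here are synthetic placeholders.

def sensitivity_specificity(y_true: list[int], y_pred: list[int]) -> tuple[float, float]:
    """y_true/y_pred are 1 (cancer) or 0 (no cancer)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: 10 patients, with panel calls derived elsewhere.
labels      = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
panel_calls = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(labels, panel_calls)
print(f"sensitivity = {sens:.2%}, specificity = {spec:.2%}")  # 80.00%, 80.00%
```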
In cervical cancer research, APO expression patterns have been identified that mirror those observed in ovarian cancer. An investigation into APOA1 expression among cervical cancer patients, as compared to a control group, noted a significant decline in APOA1 levels in the patients, positioning it as a possible biomarker for cervical cancer [87]. However, this was an initial study with a small sample size, indicating that further investigation with a larger cohort is required for substantiation. Regarding post-treatment prognosis in cervical cancer, APOC2 might be a prospective marker, given the absence of a substantial difference between the control group and asymptomatic patients [88]. Conversely, in cases of cervical cancer leading to mortality, APOC2 levels were significantly lower than in asymptomatic cases [88]. The small scale of the study, which included only 9 controls and 28 cervical cancer cases, underscores the need for broader research efforts. Echoing the ovarian cancer findings, a separate study identified that genes associated with the pathogenesis of invasive cervical cancer, including APOD, were downregulated in affected patients [89].

Research on APOs in endometrial cancer is sparse. One study investigated whether APOA and APOB could be considered independent risk factors for lymphovascular space invasion in type 1 and type 2 endometrial cancer [90]. The findings indicated that APOB is an independent risk factor exclusively for type 1 endometrial cancer [90]. Additionally, a separate study examined APOD expression in endometrial cancer tissues and found that only 34% of the samples exhibited positive APOD expression, leading to the conclusion that there was no significant association [91]. Table 3a (ovarian cancer) and Table 3b (cervical cancer) provide a summary of the clinical impact, action, consequence, control, prognostic significance, and origin of various apolipoproteins in the context of gynecological cancers.

Apolipoproteins in Lung Cancer
Several studies have pointed towards a correlation between APOs and lung cancer. Studies have shown a loss of APOA1 and APOB expression in lung cancer patients [92]. An increased APOB/APOA1 ratio correlates with a higher incidence of lung cancer in both males and females [93,94]. This association was confirmed in another study that was more specific to small-cell lung cancer (SCLC) [95]. It was also found that oxidative stress is elevated in patients with a higher APOB/APOA1 ratio, adding to the speculation that these apolipoproteins can increase the incidence of lung cancer [94,95]. While APOA1 is recognized for its role in cardiovascular disease, a study carried out in mice showed that APOA1 has anti-tumorigenic roles [96]; mice with the human APOA1 transgene (A1Tg) experienced hindered cancer growth and progression [97]. However, the findings of this study are particular to APOA1 and should not be extended to other members of the APOA protein family, including APOA2.
The underlying mechanism responsible for the anti-tumorigenic role of APOA1 is that APOA1 can convert cancer-linked macrophages from the pro-tumor M2 phenotype to the anti-tumor M1 phenotype [97]. APOA1 has been shown to reduce lung cancer through its immunomodulatory mechanisms and anti-inflammatory characteristics by inhibiting the neo-angiogenesis of lung tumors while also reducing enzymes that enable cancer metastasis [75,98]. Contrary to previous findings regarding APOA1 expression in lung cancer, a recent study found that APOA1 levels were increased in patients with idiopathic pulmonary fibrosis-related lung cancer, resulting in dyslipidemia [99]. Another study determined that APOA1 levels were significantly elevated in SCLC, despite not being markedly increased in non-small-cell lung cancer (NSCLC), and found that reduced APOA1 levels correlated with an increased recurrence of SCLC [100]. These contrasting findings highlight the need for a nuanced approach to researching APOA1 in lung cancer. It is evident that APOA1's role is not uniform across different lung cancer contexts and that its function may be influenced by the underlying pathology of the lung condition. Future research must consider these subtleties to fully elucidate APOA1's potential as a biomarker and therapeutic agent in lung cancer.

While there is a substantial number of studies exploring the function of APOA1 in relation to lung cancer, research on the role of APOA2 in lung cancer is scarce. Nevertheless, a study assessing tumor and inflammatory markers for diagnosing lung cancer found that APOA2, an inflammatory marker, significantly differentiated NSCLC patients from controls, with a sensitivity of 89% [101]. When combined with other inflammatory and tumor markers, APOA2 was successful in diagnosing early-stage lung cancer patients, including those with NSCLC [101]. This observation applied to SCLC as well; however, the use of APOA2 as an early diagnostic marker requires further validation. On the other hand, the role of APOA4 in lung cancer is contradictory depending on the lung cancer subtype. While enhanced APOA4 expression was reported in squamous cell carcinomas of the lung [102], in adenocarcinomas, a loss of APOA4 was reported in serum [103], which is comparable with previous findings from similar studies. These contrasting findings around APOA2 and APOA4 within lung cancer subtypes are a testament to the molecular diversity of the disease. They reinforce the need for a stratified approach in cancer research, where the nuances of each subtype can significantly influence the course of diagnosis and treatment. Moreover, these insights into APOA proteins could help to unravel the broader complexities of cancer pathophysiology, potentially guiding targeted therapies and precision medicine. It is a growing area of research that promises to refine our understanding of cancer biology and improve patient outcomes through more personalized diagnostic tools.
The role of APOB in lung cancer is controversial. As previously stated, studies have pointed towards an increased incidence of lung cancer associated with increased APOB serum levels [93]. While this study is notable for its extensive sample size, it carries a set of limitations. These include the potential for reverse-causation bias, the lack of adjustment for lipid-lowering medications, which could ultimately impact apolipoprotein levels, and a cohort of participants that may not be entirely representative of the broader population. However, other studies showed that the downregulation of APOB was correlated with an increased risk of cancer [94,104]. In addition, it was found that APOB has varying genotype expression in different cancers, with some genotypes associated with favorable outcomes and others resulting in inferior survival rates in NSCLC [104]. While the mechanism that links APOB and NSCLC is unclear, it is hypothesized that the association is due to APOB's role in regulating cholesterol transport and metabolism, which modulate the development of NSCLC [104].

Much like the research carried out on the role of APOA1 in SCLC, the expression of APOC3 was significantly decreased in SCLC patients as compared to NSCLC patients and normal lung tissues [100]. The significant loss of APOC3 expression in SCLC tissues suggests the use of APOC3 as a predictive marker for SCLC [100]. Additionally, the expression of APOC3 was remarkably elevated in patients with recurrence [100].

APOE, known to be implicated in cardiovascular and neurological diseases, is involved in tumorigenesis, cancer cell proliferation, and metastasis, and is associated with amplified oxidative stress [105,106]. However, its role in lung cancer remains unclear. In one study, APOE was found to be upregulated by 1.6-fold in patients diagnosed with NSCLC; however, the use of APOE as a candidate biomarker remains insignificant [107]. This finding was supported by an in vivo study in which APOE knockdown inhibited the proliferation and metastasis of lung cancer cells [108]. In addition to the correlation between APOE upregulation and increased lung adenocarcinoma frequency, APOE is associated with a higher incidence of malignant pleural effusions (MPEs), a complication of lung adenocarcinoma, compared to lung cancers without MPEs [109].
APOH is yet another protein with inflammatory effects. A correlation between increased APOH and NSCLC has been established; however, the underlying mechanism remains poorly understood [110]. Likewise, APOH was upregulated in papillary lung adenocarcinomas in mice; however, in mice with atypical adenomatous hyperplasia (AAH) of the lung, APOH was downregulated by twofold. It was discovered that APOH inhibits angiogenesis by suppressing endothelial cell growth. It is therefore suggested that its downregulation in AAH limits its inhibitory effects on angiogenesis, and this plays a role in promoting cancer proliferation [103,111]. The research additionally discovered that APOM expression is suppressed in AAH tissue [103]. The significance of this finding lies in the role of APOM as a primary carrier for sphingosine-1-phosphate, a signaling molecule responsible for inhibiting ceramide; the inhibition of ceramide leads to suppressed apoptosis and increased cell proliferation, leading to cancer [72]. Therefore, blocking the inhibitor of ceramide would ensure that ceramide retains its apoptotic effects, thus reducing the risk of tumor development [103]. APOM could therefore potentially play a role in a tumor-suppressing mechanism, but this requires further study [103]. In contrast to previous studies, a recent study by Zhu and colleagues [112] reported that the upregulation of APOM stimulates cell proliferation, invasion, and tumor development in NSCLC by inducing sphingosine-1-phosphate, leading to the activation of the ERK1/2 and PI3K/AKT signaling pathways.

The function of APOL and its subtypes remains understudied in the current literature. It was established that APOL2 lacks apoptotic potential [113]; indeed, in human bronchial epithelium, APOL2 has exhibited anti-apoptotic ability [114]. In human lung cancer tissue, APOL2 expression was shown to be augmented [114].

Apolipoproteins in Colorectal Cancer (CRC)
Colorectal cancer (CRC) is a complex and heterogeneous disease that arises from the accumulation of genetic and epigenetic alterations in colon epithelial cells. Despite advances in diagnosis and treatment, CRC remains a leading cause of cancer-related death worldwide. Recent studies have suggested that apolipoproteins may also have significant functions in the development and progression of CRC.
A recent retrospective study highlights the multifaceted role of serum APOA1 in colorectal cancer (CRC). The link between reduced serum APOA1 levels and larger tumor sizes, along with more advanced TNM stages [30], is indicative of more extensive cancer spread, which points to APOA1's potential involvement in the disease's progression. Furthermore, the association with biomarkers of systemic inflammation [30] underscores the complexity of APOA1's function, suggesting that it may influence both lipid metabolism and inflammatory pathways in cancer development and progression. These insights suggest that serum APOA1 levels could serve as a biomarker reflecting various aspects of CRC pathogenesis, including lipid dysregulation and inflammation, and might be relevant for understanding the mechanisms of cancer development, offering potential prognostic value and possibly informing therapeutic strategies. Nevertheless, this study was limited to patients from a single cancer center, all of whom were Chinese, so it is subject to selection and sampling bias. The conclusions may therefore not be suitable for extrapolation to Western populations owing to a lack of external validity. A different study, using a prospective cohort design that assessed serum lipoprotein levels at baseline and over repeated assessments, explored the correlation between different apolipoprotein concentrations and tumor subsites in colorectal cancer patients; it observed no statistically significant association between APOA concentration and early-, middle-, or late-onset CRC. Additionally, a negative correlation between APOA and cancer in the hepatic flexure was found, with high APOA associated with a lower risk of hepatic flexure cancer. However, this study was limited by its use of the baseline concentration for its main analysis, although a short-term variation subsample assessment indicated that time-dependent variation in lipids was unlikely to have a substantial impact on the results [115]. These findings suggest a possible anti-tumor impact of APOA, but further research is required to determine the underlying mechanisms. Another recent study reported that APOA1 and APOA1-binding protein suppress CRC cell proliferation and metastasis, creating a synergistic effect against CRC migration and angiogenesis by increasing cholesterol efflux and disrupting invasion [116]. Increased APOA levels are therefore a favorable factor for overall survival in metastatic colorectal cancer (mCRC). Accordingly, APOA mimetic peptides, therapeutic molecules that mimic APOA's structure and function, have been proposed [7]. The APOA mimetic peptide 4F (L-4F) exerts an anti-inflammatory effect by reducing levels of tumor necrosis factor (TNF-α) and interleukin-6, and has been shown to inhibit cancer development both in vitro and in vivo [31].

Studies have also highlighted the significance of APOB in CRC development.
Yang et al. reported a positive correlation between high circulating APOB and CRC risk, particularly in men, which could be linked to APOB's role as a lipid carrier for cholesterol and triglycerides into extrahepatic tissues [30]. Moreover, it was found that the glycated form of APOB was more prevalent in CRC and adenoma tissues than in non-cancerous tissues, suggesting a potential role for APOB in dysplastic and neoplastic development [116]. However, the role of APOB in CRC development is still controversial. It has been proposed that APOB might be downregulated in tumors due to inactivating mutations in the APOB gene [31]. As APOB synthesis and secretion require abundant energy, tumor cells may conserve energy for their proliferation by inactivating the APOB gene. Other studies have found no association between APOB and tumor stage, which further complicates the role of APOB in CRC development. Nonetheless, APOB has found a medical application as a predictor of survival in CRC patients after radical surgery, predicting patient outcomes, and could potentially be used as a therapeutic target [116].

Other findings have revealed that an elevated APOB/APOA1 ratio was associated with worse survival in mCRC patients and was identified as an independent prognostic factor for overall survival in mCRC, with higher ratios corresponding to shorter overall survival. The proposed explanation is that APOA1 is negatively associated with tumor-induced systemic inflammation, while elevated APOB indicates higher systemic inflammation, so a high APOB/APOA1 ratio, which is considered atherogenic, may contribute to tumor necrosis. However, this is merely a proposal; the precise process was not detailed in the study [30]. Another study that assessed this prognostic factor found that a high APOB/APOA1 ratio predicted poorer survival in patients with metastatic CRC to the liver, as well as in patients with advanced rectal cancer [116].

APOC1 is a secretory protein that is commonly associated with VLDLs and LDLs. APOC1 promotes cell proliferation and migration in CRC through the P38-MAPK signaling pathway. Specifically, APOC1 promotes the phosphorylation of P38, leading to an increased proportion of cells in the G2/M phase and a decreased proportion in the G0/G1 phase [117]. These results suggest that APOC1 promotes the progression of the cell cycle and may serve as a predictive marker for clinicopathologically significant events in CRC. The underlying mechanism for this effect remains unknown, and more research is needed to fully elucidate the role of APOC1 in CRC and its potential use as a diagnostic or prognostic marker. According to additional research, APOC1 overexpression aids the progression of liver metastases in colorectal cancer [116].
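Because several of the CRC studies above use the APOB/APOA1 ratio as a prognostic variable, a brief illustration of how such ratio-based stratification works may be useful. The following Python sketch is purely illustrative: the serum values and the cutoff of 0.9 are hypothetical placeholders, not values taken from the cited studies.

```python
# Illustrative APOB/APOA1 ratio stratification. The cutoff (0.9) and the
# patient values below are hypothetical placeholders for demonstration only.
from dataclasses import dataclass

@dataclass
class LipidPanel:
    patient_id: str
    apob_g_per_l: float   # serum APOB, g/L
    apoa1_g_per_l: float  # serum APOA1, g/L

    @property
    def ratio(self) -> float:
        return self.apob_g_per_l / self.apoa1_g_per_l

HYPOTHETICAL_CUTOFF = 0.9  # study-specific cutoffs vary; this one is made up

patients = [
    LipidPanel("P001", apob_g_per_l=1.10, apoa1_g_per_l=1.05),
    LipidPanel("P002", apob_g_per_l=0.80, apoa1_g_per_l=1.40),
]

for p in patients:
    group = "high-ratio" if p.ratio > HYPOTHETICAL_CUTOFF else "low-ratio"
    print(f"{p.patient_id}: APOB/APOA1 = {p.ratio:.2f} -> {group}")
```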
The behavior of APOD is highly unusual. The downregulation of APOD mRNA expression, caused by DNA methylation of its promoter, correlates with decreased protein expression [116]. One observed phenomenon is its response to oxidative stress: the concentration of reactive oxygen species (ROS) rises as cancer progresses through its stages, and this rise is positively correlated with APOD concentration. The conundrum this raises is that, although APOD is inversely connected with tumor advancement, its behavior tracks that of ROS, which have been shown to drive cancer progression. To test this, a paraquat-triggered oxidative stress condition was created, and the exogenous introduction of APOD into CRC cells enhanced tumor suppression through apoptosis [118]. Furthermore, the downregulation of APOD is linked to lymph node metastasis, advanced disease stages, and a worse prognosis [7]. According to a recent study, a reduction in APOD levels was linked to the initial stages of cancer development, specifically stages I and II, but not to later stages [118]. Hence, the findings suggest that APOD levels can be utilized as an early diagnostic marker for cancer initiation rather than a marker for tumor progression after initiation.

The function of APOE has been highly controversial between studies. While some studies associate APOE with tumor progression, another study proposes APOE as a potential protective factor; this study linked the silencing of the APOE gene with an increased susceptibility to inflammation-related tumorigenesis [116]. On the other hand, a different study suggested that APOE activation restricts the immune system's suppression of cancer cell proliferation, thus promoting cancer growth and metastasis. Nevertheless, these studies all showed significant APOE upregulation in CRC. Further patterns have also been identified, such as APOE levels being significantly higher when a tumor has metastasized to the liver. One study specified that this upregulation occurs only in primary CRC, not in stage II CRC [116]. A murine model using wild-type mice showed that APOE upregulation was also associated with enlarged tumor sizes. The proposed mechanism through which APOE accelerates cancer involves intercellular adhesion and junctions, decreasing cell contact inhibition and polarizing normal cells to tumor cells through the PI3K/Akt/mTOR pathway [118]. APOE polymorphism has also been studied. There are three different APOE alleles, APOE-ε2 (Cys112, Cys158), APOE-ε3 (Cys112, Arg158), and APOE-ε4 (Arg112, Arg158), which differ by only two amino acid residues. APOE-ε4 is associated with reduced proximal colorectal neoplasia in the forms of adenoma and carcinoma; however, investigation into distal neoplasms has shown no significant difference [31]. Yet these findings still need further investigation, as another study involving Japanese males could not identify these correlations between APOE-ε4 and proximal adenomas, suggesting that APOE is affected by factors other than genes, such as ethnicity in this case. Other factors that may determine the potential association between APOE genotypes and colonic cancer include racial variation, genetic background, diet, and physical training, which have likely led to a discrepancy in findings on the carcinogenicity of the ε4 allele. APOE-ε3 has also shown an inverse correlation between its concentration and colon cancer; a deficiency in APOE-ε3 leads to colon cancer, which has been especially observed in populations over 50 years of age [116].
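Since the three APOE alleles are defined entirely by the residues at positions 112 and 158, the genotype-to-allele mapping can be expressed as a small lookup function. The Python sketch below simply encodes the allele definitions given above; the function name and error handling are our own and purely illustrative.

```python
# Maps the residues at positions 112 and 158 of APOE to the allele names
# defined in the text: ε2 = Cys112/Cys158, ε3 = Cys112/Arg158,
# ε4 = Arg112/Arg158. Function name and error handling are illustrative.
ALLELE_TABLE = {
    ("Cys", "Cys"): "APOE-ε2",
    ("Cys", "Arg"): "APOE-ε3",
    ("Arg", "Arg"): "APOE-ε4",
}

def apoe_allele(residue_112: str, residue_158: str) -> str:
    key = (residue_112.capitalize(), residue_158.capitalize())
    try:
        return ALLELE_TABLE[key]
    except KeyError:
        raise ValueError(f"No canonical APOE allele for {key}") from None

print(apoe_allele("cys", "arg"))  # APOE-ε3
```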
APOE serum levels have been proposed as a diagnostic marker for metastatic CRC under chemotherapy and bevacizumab treatment. However, further research is needed to fully understand the potential of APOE as a biomarker for clinical outcomes.

In one study, high expression of APOH led to a worse prognosis compared to the low-expression group. Despite this finding, the underlying mechanisms of APOH in CRC remain unknown and require further investigation [119].

Clusterin, also known as apolipoprotein J (APOJ), is a heterodimeric glycoprotein that is essential for the clearance of dead cells and for apoptosis. APOJ is significantly increased in colon cancer and contributes to multistage colorectal carcinogenesis. Additionally, research has demonstrated that APOJ stimulates colon cancer metastasis and tumor invasion via the p38/MAPK/MMP9 pathway. It has been discovered that APOJ is a cytoprotective chaperone protein that promotes the folding of released proteins and can be activated by stress. Its three isoforms take part in both pro- and anti-apoptotic activities [7]. However, despite its significant role in CRC, the mechanisms underlying the pro- and anti-apoptotic activities of APOJ in CRC remain largely unknown.

Studies have proposed conflicting findings regarding APOM expression levels and its underlying mechanisms in CRC. It was reported that APOM expression was significantly reduced in CRC tissue compared to adjacent normal tissue [7]. The authors further investigated the potential role of APOM in regulating the epithelial-mesenchymal transition (EMT), a process involved in tumor invasion and metastasis. The study found that the overexpression of APOM in CRC cells inhibited EMT by decreasing the expression of EMT-related transcription factors and matrix metalloproteinases (MMPs). These results suggested a potential tumor suppressor role for APOM in CRC through the inhibition of EMT. In contrast, a study investigating the effect of APOM on CRC cell proliferation and apoptosis found that the upregulation of APOM was associated with lower apoptosis rates and higher tumor growth in Caco-2 cells [120]. The suggested mechanism involves the increased expression of ribosomal protein S27A (RPS27A), which has been observed to promote cell growth and invasion, regulate the cell cycle, and impede programmed cell death via various pathways both in vivo and in vitro [120]. According to certain reports, RPS27A can interact with the mouse double minute 2 (MDM2) gene, a primary negative regulator of the p53 protein. This interaction suppresses MDM2, which, in turn, results in the activation of p53, inducing cell cycle arrest [120]. In response to ribosomal stress, MDM2 ubiquitinates RPS27A, leading to its proteasomal degradation, thus creating a mutual regulatory loop. These findings suggest that APOM may act as an oncogene in CRC through the RPS27A-MDM2-p53 pathway. These contrasting findings point to a controversial role for APOM in CRC tumorigenesis and progression. APOM expression levels and its underlying mechanisms may differ depending on the stage and grade of CRC. Further investigation is needed to fully elucidate the role of APOM in CRC and its potential use as a therapeutic target. Table 5 details the functions, regulatory pathways, clinical outcomes, prognostic importance, and origins of various apolipoproteins in colorectal cancer.
Apolipoproteins in Pancreatic Cancer
The literature suggests that different subtypes of APOA play a role in the pathogenesis and potential treatment of pancreatic cancer. APOA1 was found to be highly expressed in tumor tissue compared to non-tumor tissue, suggesting its potential use as a sensitive and specific marker for early-stage pancreatic neoplasms [31]. Using advanced mass spectrometry-based techniques, it was shown that a panel of APOA1, APOE, APOL1, and inter-alpha-trypsin inhibitor heavy chain H3 (ITIH3) can provide a sensitivity of 95% and a specificity of 94.1% in the diagnosis of pancreatic cancer [1]. In addition, it was found that low levels of APOA1 can be used to differentiate type 2 diabetes secondary to pancreatic cancer from common type 2 diabetes mellitus [31]. A study used mass spectrometry analysis to identify TRIM15, an E3 ubiquitin ligase and binding partner of APOA1, as a potential treatment target [121,122]. Downregulating the expression of TRIM15 increased the levels of APOA1, which was found to suppress the metastasis of pancreatic cancer. However, this study was focused on genetic and biochemical analysis, and it did not address the potential drawbacks of downregulating TRIM15, considering its important roles in tumor suppression and cell apoptosis.

Patients with pancreatic cancer were found to have a decreased plasma concentration of a specific APOA2 isoform called APOA2-ATQ/AT [123]. A study investigating the effect of chemoradiotherapy (CRT) on the levels of APOA2 revealed that the distribution of APOA2 isoforms in pancreatic ductal adenocarcinoma (PDAC) patients underwent significant changes before and after CRT, which were linked not to treatment effectiveness but rather to alterations in pancreatic morphology. APOA2-ATQ/AT was deemed a potentially useful marker for detecting PDAC, but not for assessing the efficacy of CRT at different stages of PDAC [124]. The findings of a retrospective study indicate that low serum levels of APOA2-ATQ/AT can be a potential biomarker for identifying intraductal papillary mucinous neoplasm (IPMN) patients at high risk of developing PDAC. The study further showed that APOA2-ATQ/AT is more sensitive than the commonly used CA 19-9 serum marker in detecting patients with potentially curable IPMNs [125]. Prospective studies should be performed to validate these findings, especially since there were discrepancies in age between the diseased and control groups. Another study showed that diabetic patients with pancreatic cancer had significantly lower plasma levels of APOA4 compared to cancer-free diabetic patients. In addition, it was reported that the expression of APOA4 RNA did not differ significantly among the various stages of pancreatic cancer. Nonetheless, higher levels of APOA4 in the tumor tissue appeared to be associated with lower overall survival [126]. The mechanism underlying the reduction of APOA4 in diabetic patients with pancreatic cancer was not investigated, and no explanation was provided for the reduced levels in this subset of patients. Further research is required to address these gaps and to explore the relationship between carcinogenesis and APOA4 in non-diabetic pancreatic cancer patients.
While the relationship between APOB itself and pancreatic cancer has not been studied, the apolipoprotein B mRNA-editing enzyme catalytic subunit 3C (APOBEC3C) was found to be the most highly expressed APOBEC enzyme in PDAC. Several studies found that higher levels of APOBEC3C expression were associated with shorter overall survival in PDAC patients. This is because high levels of APOBEC3C expression can result in focal hypermutation and increased tumor plasticity, making tumors more adaptable to chemotherapy and other evolutionary pressures, which increases the likelihood of developing new phenotypes and leads to worse outcomes for patients [127].

Earlier research identified that individuals with a neoplastic pancreatic epithelium have heightened levels of APOC1 expression [7]. APOC2 has been shown to enhance cell growth and invasion in pancreatic cancer cell lines, indicating its potential as a predictor of cell survival [31]. Further research is needed to investigate the role of APOC in the pathogenesis and therapy of pancreatic cancer. APOE was found to contribute to immunosuppression and to the inhibition of apoptosis in malignant pancreatic cells. APOE contributes to immune suppression in pancreatic cancer by activating the NF-κB signaling pathway. This triggers the production of CXCL1, a protein that plays a role in immune suppression by recruiting immune cells that hinder the immune response to the tumor. It was concluded that blocking the production of CXCL1 may reverse APOE-mediated immune suppression. Furthermore, APOE expression was found to be higher in pancreatic cancer tissue than in normal pancreatic tissue, and elevated APOE levels were linked to worse outcomes for pancreatic cancer patients [128,129]. In addition, APOE2 has been found to aid pancreatic cancer cells in avoiding mitochondrial apoptosis by regulating the mitochondrial localization and expression of BCL-2 through the activation of the ERK1/2/CREB signaling cascade [130]. A study found that pancreatic cancer cell proliferation is associated with the increased expression of APOE2-LRP8, a ligand-receptor pair that promotes tumor progression. The APOE2-LRP8 axis appears to be a dominant biological cascade in this process, inducing the expression of p-ERK1/2 and c-Myc, both of which are involved in the cell cycle and promote tumor growth. The study suggests that targeting the APOE2-LRP8 axis could be a promising therapeutic strategy for pancreatic cancer [131]. These findings suggest that APOE2 could be a useful prognostic marker and a potential target for developing novel therapies against pancreatic cancer, yet further research is needed.
APOJ, or clusterin (CLU), is a well-studied apolipoprotein in the context of pancreatic cancer. Multiple intracellular proteins control the expression of CLU at either the mRNA or protein level, either directly or indirectly, to manage cell growth and proliferation [132]. A study highlighted that CLU expression in pancreatic cancer is regulated by HSF1, a stress-induced master regulator known to play a key role in converting fibroblasts into cancer-associated fibroblasts (CAFs) in pancreatic and other cancers [133]. CLU participates in the modulation of various signaling pathways related to cell proliferation, such as ERK, AKT, and NF-κB, and is also involved in receiving and interpreting extracellular signals. A reduction in the level of CLU can lead to cell proliferation, epithelial-to-mesenchymal transition, and decreased sensitivity to gemcitabine chemotherapy, ultimately resulting in the progression of the disease and poor prognosis [134]. Additionally, CLU is linked to an early resistance to MEK inhibitors, a type of cancer treatment targeting the mitogen-activated protein kinase (MAPK) signaling pathway. Therefore, CLU could play a crucial role in the development of a novel therapeutic approach for PDAC [135].

APOL1 appears to play a complex dual role in the development and progression of pancreatic cancer. In vitro, the inhibition of APOL1 significantly reduced cell growth and caused cell cycle arrest and apoptosis, while also decreasing cell proliferation in vivo, as demonstrated by the smaller tumor size in a pancreatic cancer mouse model [136]. However, APOL1 also inhibits cell proliferation by activating the NOTCH1 signaling pathway. When activated, NOTCH1 induces the expression of several genes that are involved in regulating the cell cycle and promoting apoptosis, which in turn inhibits cell proliferation. Therefore, the ability of APOL1 to activate NOTCH1 signaling is thought to play a critical role in regulating cell proliferation and apoptosis in pancreatic cancer cells [136]. Given that this study was performed on mouse models, further research using human cells is required to investigate the effects of APOL1 in the pathogenesis of pancreatic cancer and to address its potential as a biomarker. In addition, there is controversy about whether APOL1 is upregulated or downregulated in pancreatic cancer, and more research is needed. Table 6 summarizes the roles, control mechanisms, clinical impacts, prognostic relevance, and derivation of different apolipoproteins in pancreatic cancer.
Apolipoproteins in Hepatic Cancer
APOA, specifically APOA1, has been highlighted as an important diagnostic biomarker of hepatic cancer. It has been discovered that APOA1 can be used to differentiate between individuals who are healthy and those who have liver disease, particularly cirrhosis and hepatocellular carcinoma (HCC) [137]. APOA1, along with other proteins including ISY1, SYNE1, MTG1, and MMP10, is highly expressed during the early stages of HCC. These proteins form a network that interacts with key regulators of lipid metabolism as well as splicing pathways, suggesting their potential role in the development of HCC [138]. In a study conducted on patients with HCC undergoing trans-arterial chemoembolization (TACE), the neutrophil-to-APOA1 ratio (NAR) was identified as an independent predictor of overall survival. NAR was found to be indicative of the circulating levels of lectin-type oxidized low-density lipoprotein receptor-1-positive polymorphonuclear myeloid-derived suppressor cells (LOX-1+ PMN-MDSCs), immune cells that can suppress the immune response and promote cancer progression [139]. APOA1 is therefore a potential target for treatment. It has been indicated that ellagic acid, a polyphenol present in nuts and fruits, could potentially have therapeutic effects in reducing the risk of hepatic cancer and cardiac disease by regulating the levels of APOA1 [140]. While this has been investigated in vitro, the detailed mechanism underlying the effects of ellagic acid on lipoprotein metabolism was not addressed, suggesting a need for further research.

Hepatic cancer and liver metastasis have been found to be associated with increased levels of APOB, which indicates its potential significance in tumorigenesis and disease management. The development of HCC has been linked to mutations in the APOB gene. A truncation of the APOB protein resulting from such a mutation may increase the risk of HCC, especially in hypocholesterolemia patients [31]. It was found that certain single-nucleotide polymorphisms (SNPs) in the APOBEC3 gene family were associated with the progression of chronic hepatitis B and the development of HCC in a Chinese population [141]. Moreover, the expression of APOBEC3G was significantly higher in liver metastases than in primary liver tumors or non-cancerous liver tissue, suggesting that APOBEC3G could be a potential biomarker for predicting liver tumor metastasis [31]. A study found that the APOB/APOA1 ratio is a specific predictor of liver metastasis in rectal cancer patients [142]. Nonetheless, this study was retrospective and only included patients with locally advanced rectal cancer who received chemoradiotherapy followed by surgery, which may not represent the entire population of rectal cancer patients. APOB was also highlighted as a useful tool to predict resistance to treatment: serum levels of APOB as well as APOA1 were found to be useful in predicting the response of patients with advanced intrahepatic cholangiocarcinoma to PD-1 inhibitor treatment [143].
The prognostic value of APOC has been investigated in multiple studies. A study found that APOC1, APOC2, APOC3, and APOC4 were expressed differently in tumor and non-tumor tissues in hepatocellular carcinoma. APOC1 and APOC4 were associated with overall survival, and APOC3 was associated with both overall survival and recurrence-free survival [144][145][146]. A progressive increase was observed in the expression of APOC1 from normal tissue to primary tumor tissue and liver metastatic tumor tissue in colorectal cancer, suggesting that APOC1 could have a significant role in the pathology of liver metastasis in colorectal cancer. It is possible that it can trigger the transformation of tumor-associated macrophages (TAMs) into M2-like cells, which can subsequently contribute to immune evasion and angiogenesis in tumor cells [146,147].

APOJ has been found to be a more effective biomarker for diagnosing HCC than other commonly used markers, including alpha-fetoprotein (AFP), pCEA, and CD10. When combined with AFP, APOJ further improves diagnostic accuracy. Furthermore, APOJ performed better than both pCEA and CD10 in differentiating liver malignancies from benign hepatocellular masses [148,149]. It has also been found that the combination of the filamin-A and APOJ genes could potentially serve as a useful marker for hepatocellular carcinoma [150]. Nonetheless, this has to be investigated in vitro to determine whether an elevated expression of filamin-A and APOJ is specific to HCC. In terms of its role in pathogenesis, it has been suggested that APOJ may advance the progression of HCV-related HCC by modulating autophagy and could therefore be a promising target for treatment [151]. It was shown that reducing the levels of APOJ in the bloodstream reduced resistance to treatments such as sorafenib/doxorubicin, while increasing the levels of APOJ led to increased metastasis and tumor growth [152]. Table 7 encapsulates the functions, regulatory processes, clinical consequences, prognostic significance, and sources of various apolipoproteins in hepatic cancer.
Apolipoproteins in Prostate Cancer
APOA1 expression is upregulated in prostate cancer. APOA1 is regulated by MYC, an oncogene frequently amplified in late-stage disease; it can therefore help to predict prognosis and recurrence, which would benefit patients at risk of metastasis or neuroendocrine prostate cancer. Since its expression increases with disease progression, it is suggested that the source of APOA1 is the tumor cells themselves [153]. Similarly, APOA2 was shown to be overexpressed in prostate cancer, specifically its 8.9-kDa isoform [154]. One study reported an increase in serum APOC1 protein levels during disease progression, suggesting an association with prostate cancer progression. However, the exact role of APOC1 in prostate cancer pathogenesis remains unclear. An immunohistochemical analysis revealed that APOC1 was predominantly found in the cytoplasm of hormone-refractory cancer cells [155]. APOC1 mediates the cell survival, cell cycle distribution, and apoptosis of prostate cancer via activation of the survivin/Rb/p21/caspase-3 signaling pathway [156]. The malignant transformation of the prostate is associated with an increased expression of APOD. APOD immunoreactivity was observed in areas of high-grade prostatic intraepithelial neoplasia (HGPIN) in 82% of prostatectomy specimens. The expression of APOD in HGPIN suggests its potential role as a cellular marker for HGPIN and prostate cancer [157]. Prostate tumor cells secrete increased amounts of APOE, which binds to TREM2 on neutrophils, inducing senescence. The increased expression of APOE and TREM2 in prostate cancer correlates with poor prognosis. APOE is believed to be produced by prostate tumor cells [158]. Certain single-nucleotide polymorphisms in the APOL3 region on chromosome 22q12 increase susceptibility to hereditary prostate cancer [69]. High levels of APOJ are found in prostate cancer, correlating with tumor grade and potentially contributing to treatment resistance. Accordingly, small interfering RNA (siRNA) oligonucleotides targeting APOJ silence its gene expression, resulting in a significant increase in chemosensitivity [159]. Table 8 provides a compact summary of the activities, control systems, clinical effects, prognostic value, and origins of different apolipoproteins in prostate cancer.

Apolipoproteins in Gastric Cancer
APOA levels in gastric cancer are controversial. Some studies have shown that APOA levels become elevated, especially in the early stages of gastric cancer. A mouse model study showed that high levels of circulating APOA1 were associated with an increased tumor burden. Post-gastrectomy, APOA levels dramatically decreased; hence, it is believed that APOA1 is secreted by the tumor cells. Further research is needed to understand the specific role and mechanism of APOA1 in gastric cancer cells [160]. However, one study suggested that APOA could distinguish between chronic gastritis and gastric cancer; APOA levels were found to be higher than normal in chronic gastritis and lower than normal in gastric cancer [161].

APOA2 can also be used as a prognostic indicator for gastric cancers high in Claudin-6. The APOA2 gene is highly expressed in gastric cancers with high Claudin-6 levels, affecting cholesterol metabolism. Hence, APOA2 is suggested to be an effective prognostic marker for such cancers [162].
Regarding APOC, contradictory results have been found on its role in gastric cancer. Some studies have shown that lower APOC1 and APOC3 levels signify a poorer prognosis [163,164]. Other studies claim that APOC1 is overexpressed in gastric cancer [165]. In gastric cancer patients with peritoneal metastasis, APOC2 was overexpressed and was associated with poor prognosis. This is because APOC2 promotes the CD36-mediated PI3K/AKT/mTOR signaling pathway, and the overactivation of mTOR increases cell survival and cell cycle progression. The inhibition of APOC2 has been shown to delay tumor progression [166]. One study showed that APOA2, APOC1, and fibrinogen α-chain were distinguishing biomarkers that could diagnose gastric cancer; the sources of these biomarkers are unknown [167]. Another study demonstrated that zinc finger protein 460 can promote APOC1 transcription, accelerating the epithelial-mesenchymal transition and the development of gastric cancer [168].

APOE upregulation was associated with shorter survival times for people with gastric cancer. Increased APOE levels are strongly associated with the risk of muscular invasion and could therefore be used to predict gastric tumor invasion [169,170]. In gastric cancers, the overexpression of APOJ is associated with tumor progression and metastasis [171]. Table 9 offers a brief encapsulation of the roles, regulatory frameworks, clinical outcomes, prognostic implications, and sources of various apolipoproteins in gastric cancer.

Apolipoproteins in Thyroid Cancer
Thyroid cancer is associated with apolipoproteins. APOA is a well-documented apolipoprotein in thyroid cancer. APOA1, APOA2, and APOA4 were found to be downregulated in female patients with thyroid cancer [172,173]. However, APOA1 was greater in the subset of patients with papillary thyroid cancer (PTC) metastasis [172]. APOA1 was found to be one of the three most vital genes regulating lipid proteomic profiles in humans [172]. Similarly, APOA1 and APOA4 were implicated in the LXR/RXR activation pathway, which mediates cholesterol metabolism and excretion [172,174]. As such, their reduced levels in thyroid cancer lead to a dysregulated lipid profile [172]. This dysregulated lipid profile is said to alter gut microbiota symbiosis, which increases the risk of thyroid cancer progression [175]. Nevertheless, the significance of lipid breakdown in cancer remains understudied and requires further research to facilitate the development of new cancer therapeutics [172]. One study suggested that HDL-C has anti-inflammatory and antioxidant properties, mediated mostly by APOA1, which could play a possible role in cancer mediation [176]. Further studies confirmed that reduced APOA1 levels were associated with a worse prognosis and aggressive thyroid tumor characteristics [176]. Nevertheless, the previous findings contradict a recent study that found no association between APOA1, APOB, or APOB/APOA1 levels and the risk of thyroid cancer [177]. In fact, it was found that elevated HDL-C and cholesterol levels, which are regulated by APOA1, were associated with a reduced thyroid cancer risk [177]. Other contradicting findings stated that APOA1 and APOA4 were overexpressed in PTC, the most prevalent subtype of thyroid cancer, when a proteome analysis was conducted on PTC cells [178,179].
A recent study found that APOA1 had an inverse relationship with tumor size in male PTC patients, especially in the younger age group after adjusting for age [180]. This study claimed that there was no association between the rate of lymph node metastasis in PTC patients and serum APOA1 [180]. With regard to medullary thyroid carcinoma (MTC), it was discovered that APOA4 expression is increased, possibly identifying it as a biomarker for MTC diagnosis [181]. Moreover, in patients undergoing management for differentiated thyroid cancer (DTC), APOA1/2 ratio levels remained increased in an evident hypothyroid state post-RAI therapy [182]. While APOA2 itself did not show any statistically significant findings in the hypothyroid state following treatment, APOA1 levels decreased [182]. It is important to note that APOA1 and APOA2 returned to their baseline levels with levothyroxine therapy [182]. These findings indicate that in DTC, alterations in thyroid hormones are associated with the modulation of plasma levels of apolipoproteins.

The association between cholesterol levels and thyroid cancer has also been studied [183]. It was found that serum LDL cholesterol and APOB levels were significantly lower in patients with more aggressive tumors [183]. APOB levels were also lower in patients with high-risk PTC tumors, as well as in individuals with poorly differentiated thyroid cancer (PDTC) and anaplastic thyroid cancer (ATC) [183]. This suggests that the role of lipids in aggressive thyroid cancer progression is mediated through APOB. Regardless of the previous findings, another study established that APOB is not associated with tumor size or the rate of lymph node metastasis in male PTC patients [180].

When potential biomarkers were investigated in relation to PTC through a support vector machine, APOC1 and APOC3 were found to be downregulated [184,185]. Through further investigation, it was discovered that these two APOs decrease as the cancer stage increases, further proving their downregulation in PTC. This finding could be utilized as a method of non-invasive diagnosis and staging for PTC in patients. The proposed pathway through which APOC1 and APOC3 are downregulated is through orphan nuclear hormone receptor superfamily receptors [184]. These orphan members bind to hormone response elements (HREs) and dramatically augment or suppress APOC3 activity [184]. Other studies have discovered that retinoid X receptor alpha (RXRalpha) and thyroid hormone receptor beta (T3Rbeta) can inhibit APOC3 when T3 is present [184]. On the other hand, it was established that liver X receptor β (LXRβ) is overexpressed in thyroid cancer [186]. The significance of this finding in relation to apolipoproteins is that APOC1 and APOC2 are transcriptional target genes of LXRβ [186]. Therefore, the upregulation of LXRβ in thyroid cancer results in an increased expression of APOC1 and APOC2. When compared with normal cell lines, APOC1 showed a statistically significant overexpression, validating the previously mentioned findings [186].
While the association between APOD and thyroid cancer is understudied, one study found that APOD was downregulated in DTC [187]. In addition, APOD was associated with a higher risk score and was discovered to be a harmful factor that correlates with the recurrence of DTC [187]. The effect of its downregulation is perhaps related to APOD's regulation by the P53 tumor suppressor family. As such, it is suggested that a decrease in APOD levels increases tumor proliferation, as proven by its ability to suppress tumor growth in other cancers such as breast, prostate, and colorectal cancers [187].

APOE is another well-documented apolipoprotein in thyroid cancer. It has been found that APOE expression is upregulated in thyroid cancer [181,186,188-193]. APOE gene expression was analyzed and revealed to be significantly overexpressed in thyroid carcinomas, notably PTC [188,189,191,192]. Several databases validated this finding, including the TIMER, GEPIA, and Oncomine databases [188,189]. In normal cell lines, immunohistochemistry staining fails to identify APOE. However, the Human Protein Atlas database showed that immunohistochemistry staining was able to detect sufficient levels of APOE in PTC cell lines [188]. In this study, it was found that levels of APOE expression declined with increased age in patients with PTC [188]. In addition to this finding, data showed that decreased APOE expression in PTC is associated with a statistically significant decrease in overall survival, while no correlation with disease-free survival was found [188]. This finding conflicts with other papers that found a statistically significant correlation between APOE and disease-free survival in PTC patients [189,192,194]. Further investigations concluded that mRNA levels of APOE were positively associated with the TNM staging of PTC [189,194]. Again, this conflicts with another paper, which concluded that a decreased expression of APOE was associated with a higher TNM stage [194]. When evaluating the effects of different APOE single-nucleotide polymorphisms (SNPs), it was found that the APOE-rs429358 SNP had a positive correlation with an increased risk of PTC, whereas the SNP APOE-rs7412 had a negative correlation [191]. When stratified for age, APOE-rs429358 had a significant association in females only, while APOE-rs7412 was associated with both males and females [191]. This difference in findings suggests that, in addition to APOE's role in ferroptosis and tumor modulation, the function of APOE-rs429358 specifically includes modulating hormonal balance in female PTC patients. However, it could also be due to the small sample size of male participants in this trial, and as such, further studies are needed to validate these results [191]. The previous findings indicate that the risk of thyroid cancer and its pathogenesis can vary depending on gene polymorphisms of associated proteomic biomarkers.
An analysis of the APOE gene indicated that APOE is mostly involved in regulating cholesterol metabolism and the PPAR signaling pathway [192,195]. However, the role of APOE in PTC primarily revolves around modulating the inflammatory response in this subset of cancer patients [188,189,191]. This finding was confirmed in a further analysis that established a positive correlation between APOE expression and B cells, cytotoxic T lymphocytes, neutrophils, and dendritic cells [188,189]. Likewise, APOE is said to play a role in the activation of several cell pathways associated with cancer progression, including ferroptosis, apoptosis, the DNA damage response, and intracellular signaling pathways such as PI3K/AKT, RTK, and TSC/mTOR [188,191]. APOE is a ferroptosis-related gene [191]. Ferroptosis is a recently discovered type of programmed cell death that is notable for its possible effect on the inflammatory response and its role in tumor suppression [191,196]. As such, APOE is a possible immunotherapy target in PTC patients due to the positive association with immune cell infiltration in PTC. Considering its significant elevation in PTC, APOE could be a prospective biomarker in the diagnosis of PTC [188,189].

Figure 4 illustrates the role of glycolysis in PTC progression [193]. It shows that tumorigenesis of PTC is modulated through several mechanisms, including through the enzyme alpha-ketoglutarate-dependent dioxygenase (FTO) and its target gene, APOE [193,197]. FTO expression inhibits glycolysis in PTC and facilitates N6-methyladenosine (m6A) alteration, which is a common nucleic acid modification that ultimately affects cellular functions [197]. The m6A modification is associated with regulating tumor formation and proliferation [193]. It was established that FTO expression is reduced in PTC [193]. Reduced levels of FTO resulted in an elevation in APOE mRNA m6A alteration, which increased APOE mRNA stability [193]. This led to an increase in APOE expression. APOE promotes glycolysis in PTC through the IL-6/JAK2/STAT3 downstream signaling pathway [193]. As such, FTO suppresses PTC growth through its downstream gene target, APOE [193].

Similarly, cell lines from MTC were investigated to identify possible proteomic changes [181]. Using matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI-MSI), it was found that in MTC, APOE was expressed within the tumor's amyloid components. This finding suggests that APOE could also pose as a new biomarker for the diagnosis of MTC [181].

Despite the previous findings, Ma et al. found that APOE did not correlate with tumor size in male PTC patients [180]. The reason for this contradiction is uncertain and requires further studies to validate these findings. Ma et al. also found that there was no association between the rate of lymph node metastasis in PTC patients and serum lipid biomarkers, including APOE [180]. Similarly, Ito et al. observed that APOE was downregulated in papillary and follicular thyroid carcinomas, whereas it was significantly overexpressed in anaplastic thyroid carcinoma based on immunohistochemical staining [195]. This finding proposes that APOE is an independent biomarker of anaplastic thyroid carcinoma. However, due to the conflicting, more recent findings, the previous conclusion requires further analysis to confirm it.
The APOE gene is co-expressed with the APOC1 and APOC2 genes and is also a transcriptional target gene of LXRβ [192]. As such, similar findings were discovered with regard to LXRβ upregulation in thyroid cancer, which consequently leads to an increase in APOE expression [186].

In patients diagnosed with DTC who received treatment, it was found that APOE levels remained high post-thyroidectomy and radioactive iodine (RAI) treatment, in addition to the hypothyroid state following RAI therapy [182]. These increased APOE levels remained high even following levothyroxine treatment [182]. This confirms that changes in thyroid hormone levels correlate with modifications in APO levels.

APOL1 remains understudied, with its physiological role remaining unclear [67]. It is suggested that APOL1 mediates its effects through apoptosis and autophagy [67]. However, it was reported that APOL1 is upregulated in PTC and ATC cell lines, despite the remaining members of the APOL family remaining unchanged [67]. Table 10 presents a concise overview of the functions, regulatory mechanisms, clinical impact, prognostic importance, and origins of different apolipoproteins in thyroid cancer.

Inhibitors and Mimetic Peptides of Apolipoprotein

As outlined in the preceding sections, it is clear that apolipoproteins play diverse roles in the advancement of cancer. According to previous research and data, apolipoproteins and apolipoprotein mimetic peptides are useful as potential therapeutics due to their functions, particularly their anti-inflammatory and antioxidant properties [198]. Peptides are valued for their reduced toxicity profile and immunogenicity [198], which makes them suitable as potential therapeutics, as shown in Table 11.

APOA1 is known for its possible anti-tumorigenic characteristics. Meanwhile, mimetic peptides of APOA1 are known to remodel HDL, encourage the efflux of cholesterol from cells, and stimulate anti-inflammatory pathways, thus restricting tumor progression and improving survival, both in vitro and in vivo [198-200]. To test the effect of APOA1 mimetic peptides (D-4F, L-4F, and L-5F) in ovarian cancer, cell lines and mouse models were administered the peptides; this study reported a significant decrease in serum lysophosphatidic acid levels and reduced ovarian cancer cell growth and proliferation [199-201]. In addition, in an in vivo study in mice with ovarian cancer, the APOA1 peptide L-5F repressed tumor angiogenesis by inhibiting the vascular endothelial growth factor (VEGF) and basic fibroblast growth factor (bFGF) signaling pathways [202], indicating the use of L-5F as a candidate therapeutic strategy to reduce the size and number of tumor blood vessels. Another study in ovarian cancer analyzed the effect of the APOA1 peptides L-4F and L-5F and reported that, while L-4F repressed hypoxia-inducible factor-1α (HIF-1α) gene expression, L-5F suppressed intracellular levels of HIF-1α [203], indicating the role of the peptides in inhibiting angiogenesis and tumor growth.
Furthermore, L-4F can also suppress the tumorigenicity of cells and inflammation by reducing inflammatory mediators such as interleukins and ROS [204,205], supporting L-4F's role in preventing cancer proliferation. On the other hand, research has been carried out to study the mechanism of action of the APOA1 mimetic peptide D-4F in inhibiting tumor progression [206]. D-4F was found to eliminate oxidized lipids and limit inflammatory responses [206,207], in addition to upregulating manganese superoxide dismutase (MnSOD), thereby inhibiting cancer proliferation [206,208]. These studies indicate a protective role of D-4F against lipid oxidation, and it can be considered a plausible therapeutic agent for inhibiting tumor growth and proliferation [36]. Additionally, another study used a different APOA1 mimetic peptide found in transgenic tomatoes, called 6F (Tg6F). When given to mice, this study found changes in certain oxidized phospholipids, which in turn affected the expression of specific proteins like Notch and osteopontin (Spp1). These changes led to a decrease in a specific type of immune cell, called monocytic myeloid-derived suppressor cells (MDSCs), in the jejunum and lungs of the mice. These alterations were found to reduce the tumor burden in the lung, suggesting the use of oral APOA1 mimetic peptides as therapeutic agents in the intestine-lung axis [209].

Similarly, Mipomersen, an FDA-approved orphan drug for homozygous familial hypercholesterolemia, was developed as an antisense oligonucleotide inhibitor of APOB-100 synthesis to target and complement a specific mRNA sequence involved in coding APOB-100 [210]. The administration of Mipomersen triggers the activation of RNase H and inhibits the microsomal triglyceride transfer protein, thereby decreasing the levels of newly synthesized APOB [210].

Both elevated and deficient levels of APOC2 are associated with several diseases, making it an excellent target for drug development. One study has shown that using a dipeptidyl peptidase-4 inhibitor for eight weeks significantly decreased APOC2 levels [211]. In addition, Anagliptin, a drug used for diabetes, reduced APOC2 mRNA expression in mice [212]. As for APOC2 mimetic peptides, a first-generation peptide, 18A-CII, was proven to restore lipolysis to normal levels in APOC2-deficient patients [213]. Due to the immunogenicity of the first generation, a second-generation mimetic peptide, D6PV, was developed; D6PV has shown a marked decline in triglycerides (TG) in mice [214]. Another mimetic peptide, C-II-a, produced reduced TG levels in APOC2-deficient mice [215].

In addition to the commonly studied APOA1 mimetic peptides, other apolipoprotein mimetic peptides, such as the APOE mimetic peptides COG112 and OP449, have been investigated [216,217]. COG112 and OP449 affected tumorigenesis in cancer cells by reducing cell viability due to their anti-inflammatory functions [216,217]. These peptides also hinder signaling through pathogen recognition receptors (PRRs), which regulate the immune system and are implicated in cancer, in addition to reducing cell cycle progression [216]. Bhattacharjee et al.
(2011) studied the anti-cancer role of APOEdp, a dimer peptide derived from the receptor-binding region of human APOE, in in vitro (HUVEC cells) and in vivo models (mouse and rabbit); the study showed that APOEdp inhibited tumor growth in HUVEC cells and mice and restricted ocular angiogenesis [218], suggesting an anti-cancer role of APOEdp. Findings on APOJ mimetics suggest that they also effectively inhibit tumorigenesis [198]. Despite limited research, studies have shown that mimetic peptides of APOJ can lower lipids that promote tumor growth, thereby potentially slowing down cancer development and progression [198].

Future Research Directions

The role of apolipoproteins in the context of cancer has recently emerged as a subject of increasing interest within the scientific community. Nonetheless, it is imperative to acknowledge that notable gaps persist within the existing literature concerning the precise roles and impacts of apolipoproteins in different cancers. The available data often exhibit discrepancies, particularly with regard to the upregulation or downregulation of specific apolipoproteins, thus warranting further investigation and clarification. A substantial portion of the research dedicated to exploring the involvement of apolipoproteins in cancer has been conducted utilizing murine models or non-human cell lines. Moving forward, a more informative and clinically relevant approach would entail a shift towards investigating the role of apolipoproteins in human cell lines.

The utilization of apolipoproteins as diagnostic tools has been well established in neurovascular and cardiovascular diseases. As demonstrated through the findings presented in this paper, apolipoproteins exhibit substantial promise as potential biomarkers for both early cancer detection and cancer prognosis in the foreseeable future. We propose that further dedicated research in this domain, with a particular focus on the prospective development of a biomarker panel consisting of apolipoproteins, would be highly beneficial. Such a panel could serve as an effective screening method for the early detection of silent cancers.

In recent times, novel applications of apolipoproteins are emerging within the scientific literature. One noteworthy path involves their potential use as therapeutic agents, as previously discussed. This can be achieved by utilizing apolipoprotein mimetic peptides or through the application of recently developed nanoparticle technologies or alternative methods. Given the considerable treatment challenges posed by numerous forms of cancer, the encouraging outcomes associated with apolipoproteins warrant further in-depth exploration. This exploration offers the prospect of uncovering novel treatment modalities or pharmaceutical interventions for cancer.

Conclusions

This review provides a comprehensive understanding of the multifaceted roles of APOs in the most prevalent cancers worldwide. Their inconsistent behavior, either increasing or decreasing in different tumor tissues, hints at a complex and not fully understood relationship with cancer. While the roles of specific APOs like APOC2, APOD, and APOM are still under investigation, continued research promises to deepen our understanding of how the APO family interacts with cancer. This could ultimately lead to the identification of new targets for cancer therapy.
Figure 1. A graphical representation of an apolipoprotein illustrates its unique structure, which confers an amphipathic property to the molecule. This enables it to interact with both the lipids in the core of lipoproteins and the watery plasma environment. As a result, apolipoproteins act as biochemical keys, granting lipoprotein particles access to specific locations for the transportation, reception, or modification of lipids. Additionally, apolipoproteins play a role in stabilizing the structure of lipoproteins.

Figure 2. The exogenous pathway is a process for delivering triglycerides to peripheral tissues using chylomicrons and VLDLs. Chylomicron particles, consisting of APOB-48 and various other proteins, are formed by enterocytes, released into the lymph, and eventually enter the bloodstream. In peripheral tissues, chylomicron triglycerides are broken down, and the remnants are taken up by the liver, while APOA and APOC return to HDL. In the endogenous pathway, the liver produces triglycerides carried to peripheral tissues by VLDL. These VLDL particles initially contain APOB-100 and acquire additional proteins and cholesteryl esters from HDL. In peripheral tissues, VLDL triglycerides are partially broken down into VLDL remnants, which are either absorbed by the liver or converted into LDL.

Figure 3. APOA1 exerts a tumor-suppressive function in breast cancer by promoting apoptosis, thereby impeding the advancement of cancerous cells.

Figure 4. In PTC, glycolysis is crucial for tumor growth. The enzyme FTO regulates PTC growth by affecting the stability of the APOE gene [193,197]. Reduced FTO levels lead to increased APOE expression, which promotes glycolysis through the IL-6/JAK2/STAT3 signaling pathway. Thus, FTO acts as a suppressor of PTC growth [193].

Table 1. This table illustrates the expression of apolipoproteins (APOs) in different human cancers. ↑ indicates an increase in APO expression, while ↓ indicates a decreased expression.

Table 4. Apolipoproteins in lung cancer: a concise outline of the roles, regulatory mechanisms, clinical effects, prognostic value, and sources of different apolipoproteins in lung cancer.

Table 8. Apolipoproteins in prostate cancer.

Table 11. Therapeutic agents of APOs.
Retroperitoneal extension of massive ulcerated testicular seminoma through the inguinal canal: A case report

Summary

The World Health Organization (WHO) classification distinguishes testicular neoplasms into germ cell-derived (95%) and non-germ cell neoplasms (2). The most frequent germ-cell tumours (GCTs) are seminomas (40-50% of cases). In about 80% of cases, seminoma presents in a typical form (4). TTs are often localized (68%) and confined to the testis. Locally advanced tumours usually remain confined to the scrotum. Although rare, extension of the primary tumour to the inguinal canal can be observed, mostly among non-germ cell TTs (NGCTTs) (5). To the best of our knowledge, no previous case of a large seminoma spreading to the retroperitoneum through the inguinal canal has been described. In this study, we report the first case of testicular cancer presenting as a voluminous ulcerated testicular mass.

Introduction: Testicular cancers represent about 5% of all urological malignancies and 1-1.5% of all male neoplasms. Most testicular cancers are localized (68%) at diagnosis. Bulky masses in the scrotum are rare. We present a rare case of bulky testicular cancer with retroperitoneal spread through the inguinal canal.

Case report: A 44-year-old man came to the emergency department referring weakness and the presence of a scrotal mass. At physical examination, a voluminous mass was found, with necrotic phenomena within the scrotum. The abdomen was tense and sore. Abdominal CT scan revealed a bulky testicular mass spreading to the retroperitoneal space through the inguinal canal with node enlargement. The patient underwent orchiectomy with excision of the infiltrated scrotal skin. Histologic diagnosis confirmed a typical-form seminoma. The patient was then treated with cisplatin-based chemotherapy, with a partial response. The patient recently relapsed and is being treated with a new line of chemotherapy and subsequent surgery with or without radiotherapy.

Conclusions: We describe a rare presentation of testicular cancer. This case highlights the importance of a multidisciplinary approach to rare testis tumour presentations and of early diagnosis for testicular cancers.

INTRODUCTION

Testicular tumours (TTs) represent about 5% of all urological malignancies and 1-1.5% of all male neoplasms (1). The incidence of testicular cancer is 3-6 new cases per 100,000 males in Western countries, with an increase observed in the past 30 years (2), probably as a consequence of pollution. These rare tumours are more frequent between 18 and 35 years of age and in Scandinavian countries (1,3). Risk factors include the presence of a tumour in the contralateral testicle, Germ Cell Neoplasia in Situ (GCNIS), Klinefelter's syndrome, cryptorchidism or undescended testicle, and a family history of testicular cancer (2).

CASE REPORT

A 44-year-old man self-referred to the emergency room of our hospital because of a voluminous scrotal mass associated with abdominal and pelvic pain. The patient had no fever, poor nutritional condition, and pale skin. His clinical history included smoking and thyroid goitre. Physical examination showed a voluminous scrotal mass, likely with colliquative necrotic phenomena, and abdominal extension (Figure 1A). The abdomen was tense and slightly painful on deep palpation. Laboratory tests showed anaemia, with a reduction in the red blood cell (RBC) count (3.1 × 10⁶/mm³; normal range 4.5-5.3 × 10⁶/mm³), haemoglobin (Hgb) of 6.9 g/dl (normal range 13-16 g/dl), and Hct of 24% (normal range 37-49%). Tumour markers were elevated: β-hCG was 4873 mIU/ml (normal range 0-5 mIU/ml), α-fetoprotein was 33.4 ng/ml (normal values below 6 ng/ml), and LDH was 9047 U/L (normal range 313-618 U/L). Complete blood tests are shown in Table 1.

The patient underwent an abdominal CT scan, showing a voluminous scrotal sac (28 x 13 x 12 cm) with solid tissue sized 16 x 16 cm, occupying the scrotum with extension to the left inguinal canal and to the retroperitoneal space. Moreover, there was pathological involvement of the left inguinal (11 x 7 cm) and iliac-obturator (10 x 6 cm) lymph nodes, infiltrating the external iliac vein. In addition, pathologic retroperitoneal lymphatic tissue was documented along the abdominal aorta for a longitudinal extension of about 20 cm, resulting in compression of the inferior vena cava and infiltration of the external iliac vein, left renal vein, and left ureter, with signs of post-renal obstructive uropathy. No distant lesions in parenchymal organs were detected (Figure 1D-F). In the context of reduced Hgb, the patient underwent a transfusion and was hospitalized in the urology department.

Unilateral orchiectomy with lymph node dissection was performed (Figure 1B). First, an inguinal incision was made and the enlarged nodes of the left inguinal chain were identified. There was no clear distinction between metastatic lymph nodes and the testicular mass. After cautious isolation, the left inguinal nodes were dissected. Subsequently, the inguinal portion of the tumour was also isolated and, after enlargement of the incision to the scrotum, was removed. Finally, the scrotal portion of the mass was resected alongside the surfacing necrotic skin (Figure 1C). The right testis and penile shaft were preserved (Figure 2).

Histological examination showed a typical seminoma. The neoplasm infiltrated the skin up to ulcerating it and involved lymph nodes (pT4, pN3, pM1, S3). The presence of an intra-tumour phlogistic infiltrate was also revealed. Molecular morphology investigations with immunohistochemical characterization of the tumour showed positivity for octamer-binding transcription factor (OCT) 3/4, placental alkaline phosphatase (PLAP), β-hCG, CD117, leukocyte common antigen (LCA, in the intra-tumour inflammatory component), and CD30.

Following surgery, the patient received four three-weekly cycles of standard BEP (Bleomycin 30 IU IV weekly on days 1, 8, and 15; Etoposide 100 mg/m² IV on days 1-5; Cisplatin 20 mg/m² IV on days 1-5) (6). A CT scan taken one month after the completion of chemotherapy showed a marked reduction in the retroperitoneal lymph node masses (6 x 4 cm current vs 17 x 12 cm prior).
Serum levels of tumour markers also decreased (Table 2). Subsequently, the patient underwent CT scans at 3-month intervals. Abdominal and chest imaging showed stable disease (SD) according to the Response Evaluation Criteria in Solid Tumors (RECIST) (7), with no parenchymal metastases, for one year and a half. Progressive disease (PD) was documented after 18 months. The CT scan showed a new dimensional increase in the left periaortic lymph node tissue (55 x 35 mm current vs 50 x 25 mm prior) and along the left external iliac chains (57 x 44 mm current vs 47 x 37 mm prior), and the appearance of infiltration of the left iliac and psoas muscles by the pathological lymph node tissue (7). Moreover, the patient underwent a PET-CT scan that showed intense metabolic activity corresponding to a voluminous lymph node mass (10 x 13 cm) in the left iliac region, infiltrating the left ileo-psoas muscle. In light of the disease progression, a multidisciplinary meeting was held. As salvage chemotherapy, the patient is being treated with four three-weekly cycles of standard TIP (Paclitaxel 175 mg/m² IV on day 1; Cisplatin 20 mg/m² IV on days 1-5; Ifosfamide 1000 mg/m² IV on days 1-5). In case of mass reduction, a combined surgical retroperitoneal lymph node dissection (RPLND) and radiotherapeutic approach will be evaluated.

DISCUSSION

In this report, we described a rare case of a large seminoma extending to the inguinal canal with diffuse retroperitoneal spread and skin ulceration. Presentation at an advanced stage, or even metastatic disease at diagnosis, is more common for NGCTTs (5). Our case is paradigmatic for several reasons. First of all, the age at diagnosis: our patient presented a primary testicular cancer in the absence of risk factors and at an age older than usual. This highlights the importance of genital examination at every age, even when the probability of a testicular tumour is low. Moreover, our patient had a very unusual presentation. Indeed, while most testis cancers are diagnosed as localized tumours of a few centimetres in diameter, in our case the patient turned to physicians only when symptomatic. When investigating the reasons why the patient delayed the primary intervention, a complex mix of personal, familial, and social causes emerged. Several studies have shown a detrimental effect of low socio-economic and familial status on cancer awareness and intervention timing (8,9). In a recent analysis, Macload et al. showed that socio-economic status was associated with poorer oncological outcomes and more difficult access to primary treatment in patients with testicular cancer (10). Our case corroborates this evidence and suggests the importance of a social fabric that can lead to prompt access to primary care and early diagnosis. It is of note that the Italian healthcare system is a single-payer system; consequently, treatment costs are not one of the major barriers to early diagnosis and treatment. However, even in this context, weaker social strata still exist, and within these strata the population may be more susceptible to worse oncological outcomes. In fact, even after wide surgical excision and associated chemotherapy, as recommended by international guidelines (11), we obtained only a partial response with a subsequent relapse of the disease. Furthermore, our case makes it evident that, even though cisplatin-based regimens are effective against testis cancer, a multidisciplinary approach should be warranted.
Early diagnosis, a multidisciplinary approach, and close follow-up remain mandatory to improve the prognosis of testicular cancer (12,13). After relapse, our patient will undergo four cycles of TIP (14). Surgery and radiotherapy should be considered in the case of mass reduction and resectable masses with a small residual tumour (15). Moreover, close follow-up of all psychological aspects was planned in consideration of the high psychological burden of testis cancer.

CONCLUSIONS

We reported an extremely rare presentation of locally advanced testis cancer, resulting from the combination of cancer- and patient-related conditions. Early diagnosis is fundamental to guarantee a good oncological prognosis for testis cancer, and a multidisciplinary approach is important to achieve a good oncological outcome.

Table 2. Tumour markers in the eighteen months following chemotherapy.
Weighted assignment fusion algorithm of evidence conflict based on Euclidean distance and weighting strategy, and application in the wind turbine system

In the process of intelligent system fault diagnosis and decision making, the multi-source, heterogeneous, complex, and fuzzy characteristics of information give rise to problems of conflict, uncertainty, and validity in information fusion that remain unsolved. In this study, we analyze the credibility and variation of conflict among evidence from the perspective of conflict credibility weights and propose an improved model of multi-source information fusion based on Dempster-Shafer theory (DST). From the perspectives of the weighting strategy and the Euclidean distance strategy, we process the basic probability assignment (BPA) of evidence and assign credible weights to the conflict between evidence, so as to extract credible conflicts and adopt them in the evidence fusion process. The improved algorithm weakens the uncertainty and ambiguity caused by conflicts in information fusion and reduces the impact of information complexity on analysis results. We carry out a practical application with fault diagnosis of a wind turbine system, analyzing the operation status of wind turbines in a wind farm to verify the effectiveness of the proposed algorithm. The results show that, under the improved distance metric of evidence discrepancy and the quantification of credible conflict, the algorithm better captures the conflict and correlation among the evidence. It improves the accuracy of system operation reliability analysis, improves the utilization rate of wind energy resources, and has practical value.

In the information explosion era, information presents massive, multi-source, heterogeneous, multi-dimensional, complex, and fuzzy features, and emerging information technologies have developed rapidly. The development of intelligence has significantly increased the complexity of systems at various levels, confronting them with reliable-operation challenges [20,21]. Under these conditions, the Chinese government actively encourages researchers to organize fundamental research on the reliable operation of important equipment and components in key areas, including new energy, energy conservation, emission reduction, and environmental protection. Data-driven multi-source information fusion technology has become a key concern of system operation reliability research. Building on prior research, the Dempster-Shafer theory (DST) fusion algorithm has achieved good performance in comprehensive system state analysis and decision making. However, it depends strongly and subjectively on the basic probability assignment (BPA) [22] and on the assumed independence of evidence, and correlations between pieces of evidence affect the fusion [23]; distorted and disordered results can even arise in practical applications. Thus, building on previous studies, this study argues that quantifying the correlation between evidence and fairly assigning the fusion weights of evidence features is crucial to the fusion results.

In response to these questions, researchers have studied the DST fusion algorithm from the perspectives of the fusion framework, weight allocation, and method combination. In terms of fusion frameworks, researchers have proposed different framework models, which have improved algorithm effectiveness. Brommer et al.
[24] proposed a modular multi-sensor fusion framework, which is more efficient in dealing with delayed measurement collection, out-of-order updates, and monitoring the health of the sensors themselves in complex systems. Xiao [25] discussed the modeling of uncertainty within the framework of triangular fuzzy numbers for fuzzy complex event processing systems in an uncertain environment, and proposed a fault-tolerant and reliable scheduling strategy. Wang et al. [26] dealt with evidence conflicts in DST under the framework of fuzzy preference relationships, which improved the diagnostic accuracy of hybrid classifier ensembles. These works improved the design and effectiveness of fusion to different degrees through modularity and different allocations of attention.

To deal with the diversity, uncertainty, and conflict of information, researchers have proposed ideas based on feature correlation, difference, different conflict values, and non-similarity measures, improving and integrating algorithms [27-31] from mathematical perspectives such as the mean, combination rules, and entropy. Zhang et al. [32] proposed a method incorporating fuzzy object elements, Monte Carlo simulation, and DST; through weighted averaging and data-deblurring rules, the result provides clear analytical values to represent the final risk level. Xiao [33] combined complex D-S theory and quantum mechanics to express and handle uncertain information in the framework of the complex plane and reduce the interference effects caused by uncertainty. Wu et al. [34] proposed an improved evidence aggregation strategy combining the Dempster-Shafer rule and the weighted average rule; it overcomes the counterintuitive dilemma in high-conflict evidence combination by constructing the BPA under a relevance metric. Jiang et al. [35] used evidence theory to model uncertainty and adopted a weighted average combination method to merge BPAs; the method was finally validated with empirical motor cases under the decision rules. Li et al. [36] proposed a weighted conflicting-evidence combination method based on the Hellinger distance and belief entropy, which uses distance to measure the conflict between evidence and applies belief entropy to quantify the uncertainty of basic belief assignments. Under the Dempster-Shafer framework, Tang et al. [37] proposed a weighted belief entropy based on Deng entropy to quantify the uncertainty in uncertain information and reduce information loss during information processing. Ullah et al. [38] designed a data fusion scheme based on an improved BPA belief entropy, quantifying the uncertainty in information and transforming conflicting data into decision results; the simulation results showed that the proposed scheme had stronger performance in terms of uncertainty, reasoning, and decision accuracy in an intelligent environment. Brumancia et al. [39] proposed an information fusion algorithm for decision making under different information conditions, based on D-S theory and an adaptive neuro-fuzzy inference (DSANFI) system, which has been widely used in robotics, statistics, control, and other fields. As this body of work shows, information fusion algorithms based on Dempster-Shafer theory have remained a hot focus of research, with broad theoretical and practical value. In the current development process, the widespread application of intelligent systems increases the demand for system operation and maintenance.
However, the existing algorithms [40-42] still suffer, to different degrees, from information loss, fusion disorder, and low fusion accuracy in practical applications, and they lack universality [43]. Some studies [44-46] suggested that the main causes of degraded fusion results are an incomplete identification framework of evidence features and the difficulty of calculating the basic reliability probability of evidence completely and accurately, which leads to information loss and disorder. Therefore, in this research, the DST fusion model is improved from the perspectives of the knowledge fusion framework, the quantification of correlations, and the extraction of credible conflicts, to overcome the information loss problem.

Proposed fusion framework

The multi-source information fusion problem in this paper refers to integrating multiple sources of information, where the multiple sources are information originating from different means of monitoring the same part of the same object. Therefore, in our proposed fusion framework, the multi-source information fusion problem [47] is summarized as a ternary problem, as shown in Eq (1), where N_i, <N_i>, and D represent data, features, and decisions, respectively. The type, state, format, and scenario of data lead to its multi-source heterogeneity and complexity in information management. A data set N_i contains an enormous amount of information, which is represented in knowledge form; the data feature set <N_i> is constructed by mining the data for potential features from the perspective of knowledge management. The knowledge is fused with algorithms to improve the recognition framework, and the accuracy and reliability of the algorithms in the fusion process are improved to provide the foundation for management decisions. The relationship between data, features, and decisions is shown in Fig 1.

Data fusion is mainly realized through the fusion of data features: it fuses the features exhibited by multiple homogeneous or heterogeneous data sources in the time or frequency domain in a way that benefits decision making. Considering that different data exhibit different features, and assuming that V_i denotes different perspectives, there is a correspondence across the whole mapping from perspective space to data space and then to data feature space, as shown in Eq (2). When the data features or attributes cannot be fused directly, some kind of consistency processing needs to be performed before fusion.

We study the multi-source information fusion analysis framework from three perspectives — information, algorithm, and decision making — and address the problems of data ambiguity, conflicting evidence, and low fusion degree in the fusion process. Data are represented as knowledge, and information features from different sources are classified. Considering feature similarity and conflict, data features should be quantified and their variation rules identified, to weaken data ambiguity and preserve the potential value of information [48]. Regarding the shortcomings of existing algorithms, we process features for consistency and adopt conflict weight assignment to reduce the impact of evidence association and evidence conflict on the fusion results. According to the feature behaviour, a judgment can be made on the system condition, so that system failure management is rationalized in time and losses are effectively reduced. The fusion analysis framework is shown in Fig 2.
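To make the ternary decomposition concrete, the following minimal Python sketch models the data → features → decision pipeline. All identifiers here (Evidence, extract_features, decide) are illustrative assumptions rather than names from the paper, and the feature vector is a simple stand-in for <N_i>.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Evidence:
    name: str             # e.g. "generator_speed" (hypothetical source name)
    samples: List[float]  # raw time-domain data, the N_i of Eq (1)

def extract_features(ev: Evidence) -> Dict[str, float]:
    """Map a data source N_i to a small feature vector <N_i>."""
    n = len(ev.samples)
    mean = sum(ev.samples) / n
    var = sum((x - mean) ** 2 for x in ev.samples) / n
    rms = (sum(x * x for x in ev.samples) / n) ** 0.5
    return {"mean": mean, "variance": var, "rms": rms}

def decide(feature_sets: List[Dict[str, float]],
           fuse: Callable[[List[Dict[str, float]]], str]) -> str:
    """Decision D produced by a pluggable fusion rule over all feature sets."""
    return fuse(feature_sets)
```

Keeping the fusion rule pluggable mirrors the framework's separation of information, algorithm, and decision-making concerns: the improved conflict-weighted rule developed below can be swapped in without touching the feature-extraction stage.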
Materials and methods

This study divides the algorithm into two stages: a BPA calculation stage and a fusion stage. In the parts "Improved algorithm under the weighting strategy" and "Improved algorithm under Euclidean distance weighting strategy", improvements to the BPA calculation process are proposed; in the part "Fusion algorithm under improved Euclidean distance weighting strategy", improvements to the fusion stage are proposed. The fusion improvements build on the BPA calculation.

Improved algorithm under the weighting strategy

The feature information in different data sources of the same type has a certain similarity, and the feature information in heterogeneous data sources also has a certain similarity. When studying the homogeneity and heterogeneity of data, it is necessary to analyze data similarity during fault analysis, to reduce repetitive computation. Therefore, we define a formal concept of data feature similarity: the degree to which the features in the information resemble each other. Since a data set is composed of multiple data, it can be treated as a matrix, and the features of the data can be denoted as <E_i(N_j)>; the similarity between the corresponding features of two data sets can then be defined on this basis. Pairing the data according to the time domain, and denoting the weight of the q-th pair of data features as w_q, the similarity between the features of the two sets can be expressed by Eq (3), where N_j ∈ N, i ≠ k, and i, j, k, q are non-zero. The weights w_q are assigned according to the importance of the features characterized by the data and need to satisfy w_1 + w_2 + ... + w_i = 1.

There is a similarity between evidence i and j, so the similarity factor S_i is introduced. The weighting strategy is used to quantify the similarity between evidence features, and the specific formula for quantifying the similarity between two pieces of evidence is shown in Eq (4). Let the similarity of the evidence be Sim_z; the similarity of evidence i is then shown in Eq (5). This yields a similarity sequence of length n(n−1)/2, where n > 1. Each piece of evidence forms a series of similarities with the other evidence, and the number of similarity values between evidence i and the other evidence is (n−1). Therefore, the total similarity between evidence i and the other evidence can be expressed by Eq (6), where i is the specified evidence and, with i held fixed, j takes dynamic values, i ≠ j. The weights of the evidence are then assigned as shown in Eq (7).

When the similarity of data features is high, the weight of the evidence is correspondingly high, showing that the support for the occurrence of a certain type of event is high, and more complete evidence data can be used for two evidence factors with high similarity. When the similarity is low, the weight declines; this means that the perspectives from which the data judge event occurrence may differ, not that the evidence is completely untrustworthy. So multiple evidence factors are adopted to mine valuable conflicting information and improve the accuracy of system fault diagnosis. Once the similarity of the evidence characteristics is mastered, a new fusion of the evidence can be performed.
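The similarity-to-weight step can be sketched as follows. Since the exact form of Eq (4) is not reproduced in the extracted text, the pairwise similarity below uses one minus the mean absolute BPA difference as an assumed stand-in; the aggregation over the n(n−1)/2 pairs and the normalization follow the spirit of Eqs (6) and (7).

```python
from itertools import combinations
from typing import Dict, List

BPA = Dict[str, float]  # basic probability assignment over focal elements

def pairwise_similarity(mi: BPA, mj: BPA) -> float:
    """Assumed stand-in for Eq (4): 1 minus the mean absolute BPA difference."""
    keys = set(mi) | set(mj)
    diff = sum(abs(mi.get(a, 0.0) - mj.get(a, 0.0)) for a in keys) / len(keys)
    return 1.0 - diff

def evidence_weights(bpas: List[BPA]) -> List[float]:
    """Aggregate each evidence's similarity to all others (spirit of Eq (6))
    and normalize into fusion weights that sum to 1 (spirit of Eq (7))."""
    n = len(bpas)
    sim = [[0.0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):   # the n(n-1)/2 pairs
        sim[i][j] = sim[j][i] = pairwise_similarity(bpas[i], bpas[j])
    totals = [sum(sim[i]) for i in range(n)] # total similarity of evidence i
    s = sum(totals)
    if s == 0.0:                             # degenerate case: equal weights
        return [1.0 / n] * n
    return [t / s for t in totals]
```

With this weighting, mutually consistent evidence receives larger fusion weights, while dissimilar evidence is down-weighted rather than discarded, matching the intent described above.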
Improved algorithm under Euclidean distance weighting strategy

Degree of evidence variation. In practice, there are conflicts among evidence [49]. Conflict is a kind of information related to the similarity of data features and is likely to have some value. The BPA of the evidence shows the credibility level of the evidence and reflects the consistency with which the evidence assigns basic credibility probability to the focal elements. Therefore, this study performs dynamic extraction of the BPA and, on this basis, adjusts the weight of evidence under conflict conditions, assigning conflict coefficients to different focal elements and reducing the weight of low-confidence evidence in the fusion process, to improve the reliability of the fusion results. According to the relation between the variation in historical data features and the reliability of the system, a reasonable threshold value is set. The frequency with which data features fall into different threshold ranges is monitored and extracted dynamically to obtain the dynamically changing BPA, as shown in Eq (8).

The primary methods to measure the correlation between data include distance measures, the Pearson correlation coefficient, cosine similarity, and deviation measures. Among them, the Pearson correlation coefficient is usually used when data scales are inconsistent or subjective judgment standards differ. The cosine similarity coefficient is suited to sparse data. The deviation measure reflects the difference between the basic credible probability distribution of the focal elements and the average similarity value, but its reliance on the average weakens the measure of the true difference in the data. The Euclidean distance is a simple method to measure the distance between two points in m-dimensional space and has a significant advantage with complete data. Therefore, this paper adopts distance [50] to reflect the difference between two sets of data. Let the difference between two pieces of evidence i and j be d_ij; to ensure that the value is positive, it is computed as the Euclidean distance between the two BPAs over the focal elements:

$d_{ij} = \sqrt{\sum_{A \subseteq \Theta} \left( m_i(A) - m_j(A) \right)^2}$  (9)

When the difference between two pieces of evidence is high, the similarity between them is low and the conflict is high; when the difference is low, the similarity is high and the conflict is low. The total number of pairwise variation values among the evidence is n(n−1)/2, where n > 1. Aggregating the differences between each piece of evidence and the others yields n sets of variation data; the difference between evidence i and all the others that affects the conclusion can then be expressed by Eq (10). Normalizing the difference between evidence i and the others gives the difference of evidence i, as expressed by Eq (11), where n denotes the number of pieces of evidence. The credibility of evidence i is low when it conflicts with the other evidence at a high level.
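A sketch of this distance-based step follows, implementing the Euclidean distance of Eq (9) and the aggregation and normalization of Eqs (10) and (11); modeling BPAs as plain dictionaries is an implementation assumption.

```python
from math import sqrt
from typing import Dict, List

BPA = Dict[str, float]  # basic probability assignment over focal elements

def euclidean_distance(mi: BPA, mj: BPA) -> float:
    """Eq (9): Euclidean distance between two BPAs over all focal elements."""
    keys = set(mi) | set(mj)
    return sqrt(sum((mi.get(a, 0.0) - mj.get(a, 0.0)) ** 2 for a in keys))

def normalized_differences(bpas: List[BPA]) -> List[float]:
    """Spirit of Eqs (10)-(11): total distance of each evidence to all the
    others, then normalized; a high value marks high-conflict, and thus
    low-credibility, evidence."""
    n = len(bpas)
    totals = [sum(euclidean_distance(bpas[i], bpas[j])
                  for j in range(n) if j != i) for i in range(n)]
    s = sum(totals)
    return [t / s if s else 0.0 for t in totals]
```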
Credible weight of evidence. The confidence level of the evidence reflects its credibility, and the similarity of the focal elements reflects the similarity of the evidence, i.e., the consistency with which the evidence assigns basic credibility probability to the focal elements. This is the entry point for adjusting the weights under conflict conditions. Assigning the conflict coefficient K to different focal elements A_i reduces the weight of low-confidence evidence in the fusion process, thus increasing the weight of high-confidence evidence and improving the reliability of the fusion results.

Confidence is the support of the data features for the event results; it is the trustworthiness of the evidence information. The confidence (belief) function on the identification framework can be expressed by Eq (12):

$Bel(A) = \sum_{B \subseteq A} m(B)$  (12)

i.e., the belief function is the sum of the probabilities of event support over all subsets B of event A. The belief function has a certain influence on the reliable transmission of the system. The likelihood (plausibility) function is the degree to which the evidence information does not negate the occurrence of an event: it is the sum of the probabilities of all sets whose intersection with the event is non-empty. On the identification framework it can be expressed by Eq (13):

$Pl(A) = \sum_{B \cap A \neq \emptyset} m(B)$  (13)

The likelihood function contains both credible and implausible information, as shown in Eq (14). Therefore, the credibility of the evidence needs to be analyzed. There is a correlation between the support and the discrepancy of the evidence, as expressed in Eq (15), so the credible weight of evidence for focal element support can be expressed by Eq (16). When the credible weight of evidence for focal element support is high, the support from the other evidence is high. The credible weight corresponds to the belief function on the identification framework, and the product of the credible weight and the belief function is the reliability of that subsystem; the reliability transfer function of the entire system is then the product of the subsystem reliabilities.
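The belief and plausibility computations of Eqs (12) and (13) can be written directly over set-valued focal elements; the frame and mass values in the example below are hypothetical.

```python
from typing import Dict, FrozenSet

Focal = FrozenSet[str]
BPA = Dict[Focal, float]

def belief(m: BPA, a: Focal) -> float:
    """Eq (12): Bel(A) = sum of m(B) over all non-empty subsets B of A."""
    return sum(mass for b, mass in m.items() if b and b <= a)

def plausibility(m: BPA, a: Focal) -> float:
    """Eq (13): Pl(A) = sum of m(B) over all B with non-empty intersection with A."""
    return sum(mass for b, mass in m.items() if b & a)

# Hypothetical masses over a fault frame {F0, F1, F2, F3}:
m = {frozenset({"F2"}): 0.6,
     frozenset({"F1", "F2"}): 0.3,
     frozenset({"F0", "F1", "F2", "F3"}): 0.1}
assert abs(belief(m, frozenset({"F2"})) - 0.6) < 1e-9
assert abs(plausibility(m, frozenset({"F2"})) - 1.0) < 1e-9
```

The gap between Pl(A) and Bel(A) is exactly the uncertain mass the section refers to: plausibility counts both the committed support and the mass that merely does not contradict A.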
It means that the improved BPA function is the fusion calculation of the basic probability assignment of the non-conflicting information under the new support condition and of the credible conflicting information within the conflict. Thus, the improved probability assignment function is the sum of the BPA function of a piece of evidence for a focal element and the support of the other evidence for that focal element under the conflict condition, which contains the credible-conflict extraction treatment under the new weights for the changed evidence. The new probability distribution function is shown in Eq (18), where s ≠ j and i, j, s ≤ n, and the term Cre(m_i) Σ m'_s(A_i) denotes the extent to which the other evidence agrees with evidence j in its support of focal element A_i. The improved probability assignment is normalized to keep the probabilities in the same mapping environment: the conflict weights are reassigned so that the probabilities of all evidence sum to 1. Under the new conditions, we classify the features of credible conflicts into the category of trustworthy features; the pieces of evidence are independent of each other, and the remaining conflicts that are not considered can be discarded. Therefore, the evidence under the new BPA is re-fused, and the fusion rule is shown in Eq (19).
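The reweighting and renormalization steps of Eqs (17)-(18) can be sketched as follows in Python; this is one plausible reading of the equations with illustrative numbers, not the authors' exact implementation.

```python
import numpy as np

def reweight_bpa(bpa, cred):
    """One plausible reading of Eqs (17)-(18): each piece of evidence keeps
    its own mass for a focal element and absorbs the credibility-weighted
    support of the other evidence for that element; rows are then
    renormalized so each evidence's masses sum to 1."""
    n = bpa.shape[0]
    new = np.zeros_like(bpa)
    for j in range(n):
        others = np.delete(bpa, j, axis=0)
        others_cred = np.delete(cred, j)
        # credibility-weighted support of the other evidence, per element
        support = (others_cred[:, None] * others).sum(axis=0)
        new[j] = bpa[j] + support
    return new / new.sum(axis=1, keepdims=True)

# Hypothetical inputs: BPA rows for E1..E3 and their credibility weights.
bpa = np.array([[0.10, 0.15, 0.70, 0.05],
                [0.05, 0.60, 0.25, 0.10],
                [0.55, 0.20, 0.20, 0.05]])
cred = np.array([0.4, 0.3, 0.3])
print(reweight_bpa(bpa, cred))
```

The effect is the one described in the text: evidence that agrees with credible peers gains mass for the shared focal elements, while isolated assignments are diluted before the final fusion of Eq (19).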
Analysis of the improved algorithms in wind turbine operation

Wind power is a mature renewable energy generation technology. China has abundant wind energy resources, especially along the southeast coast, on the Liaodong Peninsula, and in the northeast. Compared with fossil fuels, the use of clean energy such as wind power can reduce carbon dioxide emissions and mitigate global climate change trends. According to a study [51], nearly 80% of power plants in Asia have lost over 30% of their wind energy potential since 1979. This study therefore takes a wind farm in Jilin Province, in northeastern China, as an example to analyze wind turbine operation data, diagnose fault states, improve system reliability, and increase the efficiency of wind energy utilization. According to a preliminary analysis, wind speed is one of the key parameters of wind turbine operation; some data showed consistency in their variation patterns; parameters such as pressure and temperature are more sensitive to environmental changes; changes in voltage and current are associated with other parameters; and the overall fluctuations of different wind turbine operations show some similarity. Therefore, we organize and analyze the data showing clear tendencies and select a representative wind turbine in the wind farm to study parameters such as generator speed, gearbox low-speed bearing temperature, gearbox oil pressure, gearbox inlet oil temperature, and grid current over a certain period; the screening process is not described here. Table 1 shows some of the underlying data sets used in the experiment. When the wind speed is in the steady-state range, the turbine speed in the normal operating state of the system is also in the steady-state range. We therefore analyze the relative change trends of generator speed, gearbox low-speed bearing temperature, gearbox oil pressure, gearbox inlet oil temperature, and grid current during the operation of the wind turbine over a certain time, with the turbine speed as the base reference parameter. We find that the change patterns of some data are correlated, while the variation trends of other data differ; this inconsistency of variation shows that there is conflict between the evidence.

Therefore, according to the differences in the changing patterns of the data features, we judge whether there is a credible part within the evidence conflict, dig deeply into the consistent and conflicting information in the data, extract credible fault features, analyze the system operation status, and diagnose system faults. Let the relative change trends of generator speed, gearbox low-speed bearing temperature, gearbox oil pressure, gearbox inlet oil temperature, and grid current be the evidence E1, E2, E3, E4, and E5, respectively. According to the characterization of the distinct features, we extract valid and representative data periods from the data set, select the basic feature parameters of the evidence in the operating state, and map them into the [0,1] interval to eliminate the influence of data heterogeneity on feature fusion, as shown in Table 2. Table 2 shows that the selected evidence is, overall, well aggregated. From the variance and root mean square, the dispersion of evidence E1 and E2 is higher than that of E3, E4, and E5; from the crest factor, the fluctuation of the evidence is roughly a continuous flat change, showing that the data are fairly stable, so the above parameters can be selected for the next stage of the wind turbine analysis. We divide the mapping of the evidence onto system fault support into four types: normal state, implicit fault, explicit fault, and warning. According to the actual occurrence of faults, we identify the points with a more stable change trend in the evidence, define the distribution of the evidence characteristics in the fault characterization, and determine the intervals of the fault characterization, as shown in Table 3. Since 0 in the mapping interval [0,1] covers both continuous shutdown without starting and shutdown due to a fault, we remove the element 0 from the normal state F0, which excludes the status data at the moment of normal wind turbine start-up. We then organize the wind turbine operation data and analyze the fluctuation of the data characteristics under the different state conditions in the historical data. According to the distribution of the points of the evidence fluctuation intervals in the different states (normal state, implicit fault, explicit fault, and warning), we select a time region with adequate credibility and representativeness and calculate the dynamic BPA of each piece of evidence. Depending on the selected interval of the system, the basic probability distribution changes dynamically. The basic probabilities of the regions selected in this paper are calculated and derived as shown in Table 4. Table 4 shows that there are different levels of conflict among the evidence: evidence E1, E4, and E5 consider the system to have a higher probability of explicit failure; evidence E2 considers the system to have a higher probability of implicit failure; and evidence E3 considers the system to have a higher probability of being in the normal state. Evidence E5 also assigns the system a higher risk of implicit failure alongside the high probability of explicit failure.

Fusion results of the classical algorithm

Based on typical DST, we fuse the above evidence; the fusion results are shown in Table 5.
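For reference, a minimal Python sketch of the classical Dempster combination used for Table 5 is shown below, restricted to singleton hypotheses; the BPA values are illustrative and are not the entries of Table 4.

```python
from functools import reduce

def dempster_combine(m1, m2):
    """Classical Dempster's rule for two mass functions over the same
    singleton hypotheses: agreeing masses multiply, and the accumulated
    conflict mass K is discarded by renormalization."""
    combined = {}
    conflict = 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            if a == b:
                combined[a] = combined.get(a, 0.0) + va * vb
            else:
                conflict += va * vb
    if conflict >= 1.0:
        raise ValueError("total conflict; the rule is undefined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Hypothetical singleton BPAs for the states F0..F3 (illustrative values).
e1 = {"F0": 0.1, "F1": 0.1, "F2": 0.7, "F3": 0.1}
e2 = {"F0": 0.1, "F1": 0.6, "F2": 0.2, "F3": 0.1}
e3 = {"F0": 0.6, "F1": 0.1, "F2": 0.2, "F3": 0.1}
fused = reduce(dempster_combine, [e1, e2, e3])
print(fused)  # the explicit-failure state F2 dominates after fusion
```

The classical rule throws away the whole conflict mass K through renormalization, which is exactly the behavior the improved algorithm revises by retaining the credible part of the conflict.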
From Table 5, we see that after the evidence is fused by the original algorithm, the system has a 69.63% probability of explicit failure, an 18.38% probability of implicit failure, an 11.86% probability of being in the normal state, and a low probability of 0.12% of a warning. Because evidence E1, E4, and E5 consider the system to have a higher probability of explicit failure, the support of evidence E3 for the normal state is significantly weakened, as is, to some extent, the support of evidence E2 and E5 for implicit failure. Because evidence E2 assigns the system a lower probability of being in the warning state, the support of evidence E1 for the warning state is weakened. Trends with the same features strengthen each other, and trends with distinct features weaken each other. The fusion results under the new probability are shown in Table 6, and the reassigned weights in Table 7. Based on the reassigned weights, the new basic probability function values of the evidence are calculated, as shown in Table 8. From Table 8, we see that after a trusting attitude is adopted toward part of the inter-evidence conflict, the probabilities are redistributed: the implicit failure probability of evidence E2 decreases while its explicit failure probability increases; the normal-state probability of evidence E3 decreases while its explicit failure probability increases; the normal-state probability of evidence E5 increases; and the changes in evidence E1 and E4 are smaller. The evidence is then re-fused, and the new fusion results are shown in Table 9.

Comparative analysis of fusion results under different algorithms

This paper introduces the improved weighting strategy and distance strategy to quantify the correlation and conflict between evidence features in the research process. The fusion results of the original and improved algorithms are compared with the actual situation, as shown in Table 10. From Table 10, the evidence fused by the improved algorithm under the weighting and distance strategies reduces the probability of explicit failure of the system by 5.48%~6.03% compared with the original algorithm; it reduces the probability of implicit failure by 1.12%~1.15% and increases the probability of being in the normal state by 6.64%~7.16%; the probability of early warning is 0.12%, consistent with the original algorithm's fusion. The analysis of the fusion results, shown in Fig 3, leads to the following conclusions. 1. The changes in the fusion of evidence E1 and E2 before and after the algorithm improvement are small, showing that the conflict between the two pieces of evidence is small and that the participation of the conflict in the fusion has little impact on the results, as shown in Fig 3(A). 2. In the fusion with evidence E3 and E4, there is a significant change in the judgment that the system is in the F0 state and the F2 state. The improved algorithm discards part of the worthless conflicting information in the evidence and absorbs part of the conflicting information in the two states F0 and F2, leading to a large deviation before and after the improvement, as shown in Fig 3(B) and 3(C). 3. When fused with E5, the probability that the system is in each state shows irregular fluctuations, but overall the probability that the system is in the F3 state keeps decreasing, as shown in Fig 3(D). Fig 3(D) and 3(E) demonstrate the gap between the overall trend of the fusion change and the actual situation.
We can see that the improved fusion algorithm fully considers the conflict. The stability analysis of the changing trend of the fusion results, shown in Fig 4, reveals that the fusion results of the original algorithm fluctuate more around the fitted curve of the actual values, whereas the fluctuations of the fusion results under the weighting strategy and the distance strategy are comparable. Both improvements have a certain effect, but the improved algorithm under the distance strategy is slightly better than the weighting strategy and fits the target values better. The improved algorithm under the distance strategy improves the fit with the actual situation by 9.47% compared with the original algorithm, and the improved algorithm under the weighting strategy improves the fit by 8.37%. Overall, the improved algorithm under the distance strategy produces better results in diagnosing and predicting system faults and is more effective in improving energy utilization efficiency.

Conclusion

In this paper, we propose an improved model of multi-source information fusion under the weighting strategy and the distance strategy, and we check its validity on a case of wind turbine system fault diagnosis in northeastern China. The results show that the improved algorithm under the distance strategy adapts better to, and fits better with, conflicting information; it quantifies the discrepancy of evidence with respect to event support, credibility, and credible conflict weights while considering the fit to reality. The involvement of credible conflicts in the fusion diagnosis resolves some of the uncertainty caused by the loss of credible conflicts and weakens the interference of untrustworthy conflicts on the results. The proposed algorithm improves the accuracy of the calculation model, reduces the relevance and uncertainty in the use of information features, and interprets the practical significance of the evidence factors after readjusting the basic probabilities of the evidence. It also makes system management more scientific and rational, enabling managers to grasp the system's operating status in a timely manner, effectively reducing the operation and maintenance costs and the losses caused by faults, and improving energy utilization efficiency; the method offers certain advantages in the accuracy and timeliness of fault diagnosis. It is applicable not only to wind farm calculations but also to the operational reliability analysis of other energy utilization systems that require comprehensive consideration of multiple factors. Considering the resource utilization efficiency in China and the complexity and uncertainty of the system's operational environment, in the future we will study the operational reliability of complex systems in the context of information technology development, to improve the overall accuracy of the model and realize efficient management of system operation.
Using ecosystem health and welfare assessments to determine impacts of wild collection for public aquariums

Aquatic ecosystems are currently facing a multitude of stressors from anthropogenic impacts, including climate change, pollution, and overfishing. Public aquariums positively contribute to ecosystems through conservation, education, and scientific advancement, but may also negatively detract from these systems through collection of animals from the wild and sourcing from commercial suppliers. Changes within the industry have occurred, although evidence-based assessments of 1) how aquariums collect and maintain their populations, to determine the sustainability of the environments they harvest from; and 2) the welfare of these harvested animals once within aquariums, are still needed. The objectives of this study were to assess the ecosystem health of locations aquariums frequently visit to collect fish from the wild, and then evaluate the wellbeing of fishes at aquariums after extended periods in captivity. Assessments included use of chemical, physical, and biological indicators at field sites, and use of a quantitative welfare assessment at aquariums for comparison to species reared through aquaculture. Anthropogenic pressures at field sites were observed, but no evidence of high degradation or compromised health of animals was found. Welfare assessments of aquarium exhibit tanks produced high-positive scores overall (> 70/84), demonstrating that both wild collected (avg. score 78.8) and aquaculture fishes (avg. score 74.5) were coping appropriately within their environments. Although findings indicated that fish can be taken from the wild at low-moderate rates without any deleterious impact on the environment and cope equally well in aquarium settings, alternatives such as aquaculture should be considered as a strategy to reduce pressure on known stressed aquatic environments or where significant numbers of fishes are being taken.

Introduction

Coastal environments are some of the most vulnerable ecosystems to anthropogenic stressors, yet over 50% of the United States population lives within 50 miles of these areas [1]. Examples of human-imposed stressors include impacts from climate change such as sea level rise, beach erosion, and flooding; point and non-point pollution from agricultural, industrial, and urban runoff; and biodiversity loss of important species through overfishing [2-6]. To better understand the metrics of these pressures, concepts such as ecosystem health have evolved as frameworks to consider these stressors holistically and address complex issues influencing humans, animals, and the environment [7]. Good ecosystem health has been defined as the condition of an entire system that is able to sustain services to humans, maintain resilience to natural and human-imposed pressures, and recognize human, animal, and environmental wellbeing and their interdependence on one another [7]. To determine the ecosystem health of an environment, it is now recognized that biological, physical, and chemical indicators must be employed [8-11]. Reference information is often incorporated for comparison of potentially degraded ecosystems to their historical natural range of variability, with considerations for ecological successes as well as stakeholder inputs [12].
Fish are often used as sentinel species within ecosystem health studies, as they reside in almost all aquatic environments and are of ecological and economic importance due to their roles in the food web, ecotourism, and recreational and commercial activities [3,13]. Examples of other indicators often used in aquatic ecosystem health studies include measurements of water quality and environmental contaminants as chemical indicators, as well as physical indicators such as land use and channel morphology [reviewed by 14]. Commercial fisheries have adopted this ecosystem health approach through cultural and legislative shifts from single-species to ecosystem-based fisheries management, recognizing the services and societal benefits of the industry, as well as the direct and indirect pressures humans impose when harvesting from these environments [15,16]. Although advances in commercial fishing have occurred, recreational fishing restrictions are limited to certain species, and completion of catch surveys is recommended but not required [17]. Further, the public aquarium industry regularly collects fishes from the wild, bringing the potential for additional stress on the vulnerable environments from which animals are collected [18]. Programs such as Species Survival Plans have been developed by accrediting organizations for zoological institutions, including public aquariums, to advance sustainable population management initiatives [19]. To date, most of the developed programs are limited to terrestrial and charismatic aquatic species, leading to little to no collection of these animals from the wild [20]. For aquariums, previous challenges of husbandry requirements for rearing fishes in intensive larval programs have led to most species being collected from the wild or purchased from commercial suppliers [19,21]. Although most freshwater species are now available through commercial aquaculture, around 95% of marine fish species are still sourced from the wild for private home aquariums and public displays at large institutions [22-25]. In 2019, the Association of Zoos and Aquariums (AZA) established the Aquatic Collections Sustainability Committee to develop population guidelines for its over 230 member institutions to "assure thriving, sustainable, aquatic populations" [26]. These institutions have also demonstrated interest in increasing the number of marine species available through aquaculture, although more resources are needed [27,28]. Along with shifts towards sustainable management, zoological institutions now also prioritize animal wellbeing within their institutions, with the goal of becoming the gold standard for welfare-centered care of animals [29]. Members of accrediting organizations are required to perform annual welfare assessments of animals in their collections to maintain accreditation, developed under the influence of previously established animal welfare frameworks [for frameworks see 30-32]. Continuous monitoring of species in aquariums is especially important, as exhibit tanks are often structurally and socially complex in order to be eye-catching, and frequently contain predator and prey animals sharing a new, restricted environment. Previous studies reviewing the literature have identified the lack of animal welfare research on fish species in zoological systems, creating a gap in the literature on how these species cope within captive environments [33-35].
Additionally, concerns for and a lack of understanding regarding animal wellbeing in aquaculture settings have been identified as an area in need of additional consideration [28]. The objectives of this study were to 1) assess the ecosystem health of locations aquariums frequently visit to collect fish from the wild, compared to baseline reference parameters; and then 2) evaluate the wellbeing of the wild collected fishes in exhibit tanks at aquariums compared to similar fish species reared through aquaculture. The hypotheses were: 1) reference water parameters would not differ from field data collected at aquarium sites; 2) species diversity at aquarium collection sites would be lower than baseline reference information; and 3) total welfare assessment scores of aquaculture exhibit tanks would be greater than those of wild collected exhibit tanks.

Materials and methods

Sampling and handling of animals occurred according to acquired South Carolina (5762) and North Carolina (2026400) permits and were approved under The Ohio State University Institutional Animal Care and Use Committee (IACUC) protocol (2021A00000040). Seine nets were used for sampling animals, as detailed below, and no protected species were sampled. Animals selected for health assessments were mildly sedated through immersion in a buffered tricaine methanesulfonate (MS-222) solution, detailed below. Upon completion of assessments, animals were placed in clean water, monitored, and returned to their respective water bodies after returning to homeostasis (maximum time in human care averaged two hours). No euthanasia of animals occurred.

Field site descriptions

Assessments occurred at four sites, two (SC1 and SC2) in Charleston, South Carolina (32.834752, -79.986940 and 32.856259, -79.902133, respectively), and two (NC1 and NC2) in Pine Knoll Shores, North Carolina (34.692179, -76.829056 and 34.701380, -76.832141, respectively), as locations that two public aquariums visit on an annual basis to collect animals from the wild for their institutions (Fig 1). Both locations in Charleston, SC were estuarine ecosystems with public access. SC1 was located at Northbridge Park, established in 2014, on the Ashley River under the Cosgrove Bridge. A long-time fishing location for locals before officially becoming a Division of Transportation managed park, the area now includes a dock, pier, and a kayak and canoe launching point [36]. SC2 was located on Daniel Island, near the Wando River Bridge at Waterfront Park. Urban development from agricultural land began in this area in the 1990s, and the area now includes a shopping area, apartments, and the Charleston boating club [37]. Oyster restoration led by the Division of Natural Resources South Carolina Oyster Recycling and Enhancement Group began in 2010 and is ongoing [38]. Installation of a bulkhead retaining wall at this site occurred in 2022. NC1 was an Atlantic Ocean-facing location at Iron Steamer Public Beach. This site once included a fishing pier, which was damaged by a hurricane, torn down in 2004, and replaced by housing [39]. At the time of sampling, beach replenishment programs were ongoing and included annual stocking of sand and planting of American beach grass (Ammophila breviligulata) [40]. NC2 was located within Bogue Sound in a protected area behind the aquarium, with fishing prohibited except for the North Carolina Aquariums. Active oyster restoration was also observed at this location, led by the North Carolina Coastal Federation [41].
Reference site descriptions

Historical catch-per-unit-effort species data and water chemistry data (1990s-present) were obtained from North and South Carolina agencies to be compared to field site data, when applicable (Table 1). Reference locations were selected for each site based on proximity to the aquarium collection sites. For the North Carolina locations, data were obtained from the North Carolina Division of Marine Fisheries Program 120 Estuarine Otter Trawl (1.83 m x 9.14 m) Surveys. Two stations were selected for comparison to NC2 as similar inlet ecosystems, within Bogue Sound, with consistent historical data. No reference data were appropriate for comparison to NC1. Reference data were grouped into four periods to test for decadal differences in species diversity and differences in water temperature, salinity, and dissolved oxygen (Table 1).

Ecosystem health assessments of field sites

Ecosystem health was measured at field sites and included water and soil chemistry, habitat complexity and quality, population diversity sampling, and health evaluations of fishes. Data collection occurred at North and South Carolina field sites over three periods from spring 2021 to summer 2022 (Table 2). Reference information was used for comparison to field sites instead of a control site due to the inability to sample a true no-take area, free of all commercial and recreational fishing. Data were compared to reference information to test for decadal differences in water parameters and species diversity at these sites as a proxy for environmental change. Indicators of ecosystem health were grouped into chemical, physical, and biological indicators. Chemical indicators of ecosystem health included water parameters and soil chemistry. Water quality data were collected using an In-Situ SmarTROLL™ Multiparameter Handheld Water Quality Meter. Temperature, pH, dissolved oxygen, salinity, and conductivity parameters were collected each time sampling occurred (n = 11 readings per site). Soil chemistries were collected in period three (n = 7 samples per site) and included low-high readings of nitrogen (18 kg, 73 kg, or 145 kg/A/15.24 cm soil), phosphorus (4 kg, 9 kg, or 29 kg/A/15.24 cm soil), and potassium (18 kg, 36 kg, or 73 kg/A/15.24 cm soil). Samples were collected using the grab method and a Soil Quality Test Kit [42,43]. Physical indicators of ecosystem health were ranked 1-5 (poor-excellent) within the categories of habitat complexity and morphology, discharges, recreational and commercial disturbances, and coastline development, using a modified environmental impact assessment based on previously established habitat quality index tools [9,44-46]. Rankings were assigned to indicators based on the observed percentage of impacted area at each site, with the highest achievable total score = 50. Sites were scored each sampling day by the same individual during period three (n = 7 assessments per site). Fishes, sentinel species of ecosystem health, were selected as the biological indicators. Surveying occurred using seine nets (1.22 m x 6.1 m and 1.83 m x 9.14 m) for a total of two hours per site each sampling day (n = 11 sampling days per site). Individuals were identified at the species level, total lengths were measured, and animals were then returned to the water body they were collected from. Animals were grouped into functional guilds based on their place within the water column and the prey species consumed (Table 3) [47,48].
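To illustrate the physical-indicator scoring arithmetic just described, here is a minimal Python sketch; the category names and scores are hypothetical stand-ins, not the study's actual indicator set.

```python
# Hypothetical scorecard for one site visit: each indicator is ranked
# 1 (poor) to 5 (excellent); the full assessment totals a maximum of 50.
scores = {
    "habitat complexity": 3,
    "channel morphology": 3,
    "discharges": 4,
    "recreational disturbance": 4,
    "commercial disturbance": 5,
    "coastline development": 3,
    # ...the remaining indicators of the modified assessment
}

assert all(1 <= s <= 5 for s in scores.values()), "ranks must be 1-5"
total = sum(scores.values())
print(f"partial site score: {total} (max 5 per indicator; full total = 50)")
```

Repeating this on each sampling day by the same scorer, as the authors did, yields the per-site totals that were later compared with an analysis of variance.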
A subpopulation of the surveyed species was selected for full health assessments, including blood collection and skin, fin, and gill sample collection for examination of parasitic loads under an onsite microscope. Animals were selected based on size appropriateness for sample collection. Sedation of fish occurred through immersion in a < 50 mg/L buffered MS-222 solution for skin, fin, and gill sample collection (n = 331 total individuals sampled). Using iris surgical scissors, fin and gill clips were collected, and gill samples totaled 2-3 tips of lamellae per fish. Skin scrapes were obtained by lightly sliding a microscope coverslip along the lateral surface of the fish for collection of the mucus layer. All samples were examined under an onsite field microscope immediately after collection and rated using the 0-3 (none-many) veterinary technique [technique described in 49]. One blood sample (< 0.05 mL), collected at the caudal vertebrae, was obtained from a subset of the individuals selected for skin, fin, and gill collection (n = 200 total individuals sampled). A lateral approach was employed between the scales, and a blood smear was created. Staining of blood smears occurred using standard H&E. Estimates of total white blood cell counts were obtained by taking the sum of 10 high-powered fields and multiplying by 200 [protocol described in 50]. Differential leukocyte counts were obtained by counting 100 white blood cells, and included lymphocytes, neutrophils, basophils, monocytes, and eosinophils [described in 51,52].

Welfare assessments

Wellbeing of fishes within exhibit tanks at aquariums was evaluated semi-annually (n = 16 total assessments) over the same three periods using a modified welfare assessment influenced by the Five Domains Model [welfare assessments described in 32 and 53]. Indicators in the areas of nutrition, physical environment, health, and behavioral interactions were scored from 1-3 (high risk-good). Scores were assigned based on observations of animals and environmental parameters, as well as health and nutrition information obtained from staff members. Assessments included scoring of resource-based indicators, such as appropriate diet, environmental complexity, and preventive care, as well as animal indicators through observed evidence of species-appropriate behaviors, aggression and/or agonistic behaviors, and active avoidance of other individuals within exhibit tanks as well as in areas where interactions with visitors occurred. Behavioral indicators were weighted by two. Indicator scores were summed at the completion of each assessment, and the highest achievable assessment score = 84. Assessments were conducted by the same individual in front of exhibits containing species whose natural ranges were the same as those of the species surveyed at the field collection sites.
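The scoring arithmetic for the welfare assessment and the white blood cell estimate can be made concrete with a short Python sketch; the indicator names, counts, and field counts below are hypothetical, not the study's full instrument.

```python
# Hypothetical welfare scorecard: each indicator is scored 1 (high risk)
# to 3 (good); behavioral indicators are weighted by two. The study's
# full instrument has a maximum total of 84.
nutrition   = [3, 3]        # e.g., diet appropriateness, feeding method
environment = [3, 2, 3]     # e.g., water quality, complexity, refuge
health      = [3, 3, 2]     # e.g., body condition, preventive care
behavior    = [2, 3, 3]     # e.g., species-typical behavior, low aggression

total = sum(nutrition) + sum(environment) + sum(health) + 2 * sum(behavior)
print("welfare score:", total)  # behavioral scores count double

# Estimated total white blood cell count from a smear, as described in
# the methods: the sum of WBCs in 10 high-powered fields times 200.
hpf_counts = [4, 3, 5, 2, 4, 3, 4, 5, 3, 4]  # illustrative field counts
print("estimated total WBC:", sum(hpf_counts) * 200)
```

Weighting the behavioral category by two reflects the assessment's emphasis on animal-based over resource-based indicators.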
Statistical analysis

Data were analyzed using the lme4, mclogit, vegan, and tidyverse packages in R Statistical Software v 4.2.0 [54-57]. Assumptions of the data were verified for each model using residual-vs-fitted and q-q plots. Statistically significant values were those p < 0.05 and biologically significant p < 0.08. Reference data, grouped into four periods, were first analyzed using Linear Mixed-Effects Models to test for decadal differences at the Ashley River (AR), Wando River left (LW), and Bogue Sound (CC) locations, and each period was later compared to data collected at the field study sites (Table 1; detailed below). Period (one-four) was used as the fixed effect and location as the random effect to account for repeated sampling and similarities within sites. Shannon-Wiener diversity indexes, temperature, salinity, and dissolved oxygen were run as the dependent variables in separate models for the AR, LW, and CC sites. Water and soil chemistries were examined using descriptive statistics, and water data for SC1, SC2, and NC2 were compared to reference information using Linear Mixed-Effects Models. Temperature, salinity, and dissolved oxygen were run separately as dependent variables, and type (reference period one, reference period two, reference period three, reference period four, and field data) was the fixed effect. Location was used as the random effect in each model to account for repeated sampling at sites. Separate models were used for each aquarium collection site (SC1, SC2, and NC2). Environmental assessment data were analyzed using an analysis of variance model. Field site scores for each category were summed, and the total score was used as the dependent variable. Location (SC1, SC2, NC1, and NC2) was the independent variable to compare sites to one another. Post-hoc comparisons of sites occurred using the Tukey method. Shannon-Wiener diversity indexes for sites SC1, SC2, and NC2 were calculated and then compared to reference locations using Linear Mixed-Effects Models. Type was used as the fixed effect to test for differences between field sites and their respective reference site periods. Location was the random effect to account for repeated sampling and similarities within sites, and the diversity indexes were the dependent variable, run in separate models for each site. Pearson's chi-squared proportions tests were used to compare the proportions of animals within each functional guild of field and reference site populations. Skin, fin, and gill data were analyzed using multinomial logistic regression. 'None' was used as the baseline category due to being the most frequent rating when examining samples. Skin, fin, and gill parasites were the dependent variables in separate models, and location was used as the fixed effect to compare field sites to one another. Blood data were analyzed using analysis of variance models. Dependent variables included estimated total white blood cell counts and each differential white blood cell type. Each was analyzed using a separate model, and location was used as the fixed effect. For the welfare assessments, weights were assigned to behavioral indicators by multiplying scores within the behavior category by two. Indicator scores were summed, and the totals were used as the dependent variable, with period, aquarium, and exhibit as the fixed effects in an analysis of variance model. A separate model was run to test for an effect of origin (wild collected or aquaculture) on the total welfare score of exhibits at the North Carolina Aquarium at Pine Knoll Shores, due to this institution having both wild collected and aquaculture exhibit tanks.
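The Shannon-Wiener diversity index used throughout these analyses can be sketched as follows; the authors computed it with R's vegan package, so this Python version is only an equivalent illustration with invented counts.

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity H' = -sum(p_i * ln p_i) over the
    species proportions of one sample."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Illustrative seine-haul counts per species at one site (not study data)
print(round(shannon_wiener([40, 25, 20, 10, 5]), 3))  # ≈ 1.415
```

Higher values indicate both more species and a more even spread of individuals among them, which is why the index served as the response variable in the diversity models.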
Ecosystem health assessments

Salinity differed by period at all reference sites (AR p < 0.01, LW p < 0.01, CC p = 0.05). Dissolved oxygen was found to differ by period at the AR sites (p = 0.01) but not at the LW or CC sites (Table 4). No differences in temperature across the reference periods were found. For SC1 and the AR reference periods, no differences in salinity, temperature, or dissolved oxygen were found. Salinity at SC2 differed from LW reference period one (p < 0.01), and temperature differed from LW reference periods one-three (p < 0.01). No differences in dissolved oxygen between SC2 and the reference periods were found. For NC2, salinity differed from CC reference periods one, two, and four (p < 0.01) but not three. No difference in dissolved oxygen or temperature was found between NC2 and the CC reference periods (Table 4). Field site location was found to influence environmental assessment score (p < 0.01). Post-hoc comparisons of field sites to one another resulted in differences between NC2 and all other sites. No other differences in total score between sites were found (Fig 2). Diversity was found to differ across periods at the North Carolina CC reference sites (p < 0.01), while no differences across the AR and LW reference periods were found. For SC1, differences in diversity from all AR reference periods were found (p < 0.01; Fig 3). Differences in diversity from all LW reference periods were also found for SC2 (p < 0.01; Fig 4). No differences in diversity between the NC2 sites and the CC reference periods were found. After grouping species into functional guilds, differences in the proportion of benthivore species between the SC1 and SC2 populations and their respective reference sites were found (Table 5). For SC1, 75% of the sampled population were benthivores, which was more than in the AR reference periods, while for SC2, 23% of animals fell within this niche, which was less than at the LW sites. The proportions of epifaunal crustacivore species at the SC1 and SC2 field sites also differed from their reference sites, although overall the proportion of animals found within this niche was generally low for both. For SC2, over 50% were pelagic planktivore species, greater than in the LW reference periods. Most species (96%) at NC2 fell within the generalist benthivore functional guild, similar to what was found at the CC reference sites (Table 5). For NC1, 83% of the species sampled were pelagic planktivore species. No difference in skin, fin, and gill parasites or in estimated total white blood cell counts across field sites was found (Tables 6 and 7). After analysis of the differentials, a difference by location for eosinophils was found (p < 0.01), although averages at all sites were only about 5% relative to the other differential white blood cells.

Welfare assessments

Aquarium was found to influence total welfare scores (p = 0.01), while no influence of exhibit or period was found (Fig 5). No influence of origin (wild collected vs aquaculture) on average total welfare scores was found when comparing exhibits at the North Carolina Aquarium at Pine Knoll Shores (wild collected mean ± SD = 74.8 ± 1.83, aquaculture mean ± SD = 74.5 ± 2.12).

Discussion

The objectives of this study were met by evaluating the ecosystem health of locations two public aquariums visit on an annual basis to collect fish for their institutions and then assessing the wellbeing of animals at each aquarium. The findings produced comprehensive assessments of each field site through measurement of chemical, physical, and biological indicators of ecosystem health. Evaluation of the wellbeing of fishes was achieved through measurements of indicators within the categories of nutrition, health, physical environment, and behavioral interactions. The hypotheses in this study were not upheld. Anthropogenic pressures at field sites were observed, but no evidence of high degradation or compromised health of animals was found.
Welfare assessments of aquarium exhibit tanks produced high-positive scores overall, demonstrating that both wild collected and aquaculture fishes were coping appropriately within their environments. In order to appropriately assess the ecosystem health of each field site, use of environmental and animal indicators was necessary for analysis of the influence of anthropogenic impacts, including public aquarium wild collection, on the functioning of each system [7,14]. Measurement of water and sediment chemistries was an essential component of data collection, as coastal ecosystems are sinks for industrial and agricultural pollution, and land use changes from tidal marsh to urbanized environments have occurred [2,3,58]. Differences between water parameters and reference periods were observed for some of the field sites but not others, thereby rejecting the first hypothesis (Table 4). Changes in salinity across reference periods were expected due to hydrological cycles and freshwater inputs, with dry years having higher salinities and lower salinities in years with increased rainfall [59]. Although differences were observed, all water parameters fell within the normal ranges for coastal environments, indicating no immediate concern for animal or human health [60,61]. For soil chemistries, potassium levels were found to be high at NC2 (73 kg/A/15.24 cm soil), while all other parameters at field sites fell within low-medium ranges (18 kg-36 kg/A/15.24 cm soil). Previous studies have attributed high levels of potassium to wastewater runoff from agriculture, with excessive quantities leading to leaching in soil that prohibits nutrient uptake in plants [62]. Oyster restoration programs have been employed in many Mid-Atlantic coastal and inlet environments in efforts to restore degraded areas, although more data are needed to quantify the effect of increased abundance on the functioning of the entire system [63]. Incorporation of environmental assessments of each location provided key information about the biotic and abiotic characteristics of each habitat as well as the presence and intensity of human activities. NC2 scored highest of all field sites due to this site having high complexity as well as being a protected area with no commercial or recreational pressures (Fig 1). Notable categories that scored 3/5 during assessments were morphology and complexity, with evidence of erosion, instability, and moderate amounts of vegetative patches at the SC1, SC2, and NC1 sites. Although these challenges are being met with active restoration programs, the impacts of land use change within these systems should continue to be monitored. Previous studies have demonstrated that differences in the intensity of human pressures influence the abundance and diversity of the animal functional guilds living within these systems [8]. Use of fish as sentinel species in this study was important not only because they are targeted by aquariums, but because of their recreational and commercial role for humans, as well as their ecological importance within the coastal systems they inhabit [3,13]. Greater species diversity was found for both SC sites compared to their reference periods, which led to rejection of the second hypothesis (Figs 3 and 4). One limitation of this study was the difference in gear type used to sample fishes.
Seine nets were chosen as the sampling method at field sites to mimic the sampling methods of both collaborating aquariums. Trammel nets, with larger mesh sizes capable of sampling larger size classes of fishes, were used at the AR and LW reference sites, while seine nets, for sampling juveniles, were used at all field sites [64]. For the CC reference sites, otter trawls targeted juvenile fishes, which may have influenced the finding of no differences in diversity between NC2 and the CC reference sites. Additionally, when comparing reference periods only, AR and LW showed no decadal differences in diversity, while diversity at the CC reference sites increased over time (CC reference period one mean diversity ± SD = 0.99 ± 0.39, reference period two = 0.88 ± 0.33, reference period three = 1.06 ± 0.33, reference period four = 1.18 ± 0.33). Findings in this study also supported previous claims that communities within estuarine ecosystems contain a few species with high abundances and many species with low abundances [65-67]. Grouping species into functional guilds allowed for comparisons of the proportions of niches occupied within each system and considerations for how anthropogenic impacts may influence assemblages over time. For the SC1 and NC2 field sites, most species within the sampled populations were benthivores, suggesting that increased inputs of runoff and pollution could cause substantial damage to the functioning of both ecosystems (Table 5) [9,58]. For SC2 and NC1, most species were pelagic planktivores, serving an important role as prey for larger species. High abundances within this functional guild suggest that predator populations may be declining, which could be attributed to increased anthropogenic stresses such as overfishing [47]. Additional research is needed to support these statements, as potential shifts in functional guild assemblages across reference site periods were not overwhelmingly clear, apart from increases of pelagic species in AR reference period four (Table 5). Health assessments of fishes were the final key component for use of these species as sentinels of ecosystem health at each field site, as both a proxy for and an outcome of environmental health. The findings provided clear, timely reflections of how animals were responding to environmental changes resulting from anthropogenic impacts [68]. Parasitic loads at all field sites were extremely low, with gill parasites being the most frequently observed (Table 6; SC1 avg. rating = 0.25, SC2 avg. rating = 0.08, NC1 avg. rating = 0.05, NC2 avg. rating = 0.06). Estimated total white blood cell counts showed no evidence of disease or compromised health in the animals sampled, and differential counts fell within normal ranges (Table 7) [28,69]. All these biotic and abiotic factors allowed us to draw a metric-based conclusion that local harvesting pressures, including those of the aquariums assessed in this study, were not having a detrimental effect on localized ecosystem health. Semi-annual welfare assessments at each collaborating aquarium allowed for evaluation of how wild collected fishes were coping within captive environments long-term.
Previous assessments of the wellbeing of wild fishes shortly after arrival to aquariums, while in quarantine, showed the animals' ability to cope within newly restricted environments, although instances of aggression were found in tanks containing multiple species [28]. Both institutions scored high overall, although aggression was observed while assessments were performed, demonstrating that some species may not be able to thrive in restricted mixed-species environments. Evidence of aggression was the only behavioral indicator that scored high risk (1) consistently throughout all three periods of assessments, although aggression was not observed in the same exhibit each time. The similarity of welfare scores between aquaculture and wild collected fishes demonstrated that aquaculture can be used to supplement wild collection of species. Further, the high scores assigned to both groups suggested that animals can cope in public aquarium settings long-term. This is a crucial finding, as previous assessments showed acute stress in commercial aquaculture fishes, demonstrating a need for additional understanding of animal welfare in intensive systems and of how animals purchased from this industry adapt in aquarium settings [28]. Additionally, sourcing from aquaculture brings the benefit of purchasing animals raised in controlled environments that are accustomed to being fed by humans and that have received preventive care, including parasite management, prior to arrival at the aquarium and/or introduction to established tanks, where they could cause devastation if harboring a communicable disease [70]. The collection practices of the collaborating institutions in this study proved to be of low-moderate risk and did not detract from the functioning of the environment when compared to historical data. Due to the inability to separate aquarium pressures from other anthropogenic stressors at each field site, inclusion of animal wellbeing in aquariums was pivotal for considering a future shift toward greater use of aquaculture instead of primarily collecting animals from the wild for these institutions. More evaluations of aquarium collection practices, including of larger institutions sourcing animals from fewer areas, are needed to determine whether the findings in this study reflect public aquarium sustainability overall.

Conclusion

Measuring ecosystem health is complex, but this study demonstrated that the South Carolina Aquarium and the North Carolina Aquarium at Pine Knoll Shores are sourcing species from resilient field sites that can withstand the pressures of their current wild collection practices. Water and sediment chemistries were within normal ranges for coastal environments, environmental challenges were being met with active restoration programs, and the populations sampled showed low parasitic loads and estimated white blood cell counts. Further, the welfare of wild caught and aquaculture fishes was both positive and comparable. This study suggests that, if done in low-moderate numbers, sustainable harvests can occur that do not detrimentally impact the environment or the health and wellbeing of the collected animals, as compared to those sourced through aquaculture. Field site monitoring should be continued, as anthropogenic expansion and modification of coastal environments have the potential to decrease system resilience.
Additional research is needed to assess the collection practices of other public aquariums, as larger institutions with greater species numbers might impose greater stress on the systems they collect from. As institutions that pride themselves on conservation, education, and animal wellbeing, public aquariums are in a position to be leaders and influencers in improving the understanding of fish wellbeing in captive environments and in advancing marine species aquaculture to take pressure away from stressed aquatic ecosystems. To achieve this, more research into the species available through aquaculture is needed. Additionally, as many visitors to aquariums are themselves home hobbyists, these institutions hold a responsibility to advocate for the importance of sustainable sourcing of species, including purchasing through aquaculture. Through this study, we hope to encourage additional assessments at other aquarium collection sites and increased resources for marine species aquaculture in the future.
Evidence for competition and cannibalism in wormlions

Trap-building predators, such as web-building spiders and pit-building antlions, construct traps to capture their prey. These predators compete over sites that either enable the construction of suitable traps, are prey rich, or simply satisfy their abiotic requirements. We examined the effect of intraspecific competition over suitable space in pit-building wormlions. As expected, the ability of wormlions to select their favorable microhabitats (shaded or deep sand over lit or shallow sand) decreased with increasing density. Favorable microhabitats were populated more frequently by large than by small individuals, and the density of individuals in the favorable microhabitat decreased as their body mass increased. The advantage of large individuals in populating favorable microhabitats is nevertheless not absolute: both size categories constructed smaller pits when competing over a limited space compared to those constructed in isolation. The outcome of competition also depends on the type of habitat: deep sand is more important for large wormlions than for small ones, while shade is similarly important for both size classes. Finally, in contrast to previous reports, cannibalism is shown here to be possible in wormlions. Its prevalence, however, is much lower than that documented in other trap-building predators. Our findings show that the advantage of large individuals over small ones should not be taken for granted, as it can depend on the environmental context. We present suggestions for the relative lack of competitive advantage of large wormlion individuals compared to other trap-building predators, which may stem from the absence of obvious weaponry, such as sharp mandibles.

Sit-and-wait predators ambush their prey [28-30]. Trap-building (hereafter, TB) predators belong to a sub-group of sit-and-wait predators that construct a trap to catch their prey [31,32]. In addition to considerations related to abiotic conditions and prey abundance, TB predators should choose sites that require the least investment of energy in the construction and maintenance of their traps [33,34]. Relocation after initial trap construction is undesirable owing to the cost of constructing a new trap [35,36]. Furthermore, trap relocation is often dangerous, making the relocating predator vulnerable to other predators and harmful abiotic conditions [37,38]. Consequently, there is competition among TB predators over the most suitable sites for trap construction, and in such competition large individuals are superior to small ones [39-41]. In addition to this direct competition over ambush sites, a TB predator can block the way of potential prey from reaching the traps of other predators. This process, termed "shadow competition", assumes that sites in the periphery of the TB predators' cluster receive more prey than sites in the center [42-44]. Finally, many TB predators, especially spiders and antlions, prey on whatever is caught in their trap, including related species and conspecifics [45-48]. The outcome of such cannibalistic and intra-guild predation attempts strongly depends on body size, with larger individuals preying on smaller ones [22,49,50]. Wormlions are fly larvae that dig pit-traps and ambush their prey [51] (Fig. 1a).
Owing to their pit-trap construction, their foraging strategy is similar to that of pit-building antlions, the two taxa together presenting an example of convergent evolution [52,53], although several differences exist between them. For example, antlions use spiral digging, while wormlions use central digging, which is less efficient, and antlions can catch larger prey than can wormlions of the same size [54,55]. Furthermore, while antlions are known to be cannibalistic [22,45,46], there is only a single anecdotal report to date, suggesting that cannibalism does not exist in wormlions [56]. Previous studies have demonstrated wormlion preference for fine-textured loose soil, deep and dry sand, and shade [57-59]. Shade is preferred in order to avoid exposure to high temperature and desiccation [60], while deep sand enables the construction of larger pits, which in turn improve prey capture [61]. Similar to other TB predators, wormlions compete over suitable sites in which to construct their traps, and the outcome of the competition for the loser is either to settle in an inferior microhabitat or to construct a smaller pit-trap [52,58]. We examined here competition over favorable microhabitats between wormlions of equal and of different sizes. We predicted that the proportion of wormlions settling in inferior microhabitats would increase with their density. We also predicted that the competitively superior large wormlions would occupy the more favorable microhabitats, while the competitively inferior small individuals would construct their traps in the less preferred microhabitats. If a priority effect exists, this pattern is expected to be weaker when the smaller individuals arrive at the favorable microhabitat before the larger ones. Finally, we predicted that smaller wormlions would be more affected by competition under limited space than larger ones, i.e., they would reduce their pit size to a greater extent, as competition under such conditions becomes asymmetrical according to the different sizes of the competitors.

Materials and methods

Wormlions (Vermileo sp.) were collected from loose soil patches located next to buildings providing cover, at Tel Aviv University, Tel Aviv, Israel (Fig. 1a). We focused on shade and deep sand in the following experiments, both of which are preferred by wormlions in choice settings [57,58]. The experimental arena in Experiments 1-4 was an aluminum tray, 15 × 15 cm, filled with either 2 cm of sand (deep sand) or 0.5 cm of sand (shallow sand). The wormlions were placed in the middle of a rectangular area (7.5 × 3.5 cm) adjacent to one of the tray edges (Fig. 1b). (Figure 1: (a) an adult wormlion female on the wall, perhaps before or after oviposition (photo taken by the first author); (b) a scheme of the tray used for Experiments 1-4, with wormlions always placed in the center of the rectangular area (marked with an arrow); the rectangular area (gray) in most cases contained better conditions than its surroundings (white), either shaded vs. lit sand or deep vs. shallow sand.) The abiotic conditions in this rectangle differed from those of the surroundings, as explained below. The experimental arena in Experiments 5-6 was a cup with a diameter of 5.5 cm, filled with 2 cm of sand. The sand used in all experiments was of particle size < 250 µm, reflecting the wormlion preference for fine-textured sand.
The goal of Experiments 1 and 3 was to examine the effect of density on the ability of wormlions to settle in their preferred microhabitats (shaded over lit and deep over shallow microhabitats). We also tested the effect of density in homogeneous microhabitats (shaded, lit, deep, or shallow; treatments 2 and 3 in each experiment) as a reference, to examine the tendency of wormlions to remain in their initial placement location when only density changes.

Experiments 1-2: competition over shade. Experiment 1: competition among wormlions of a similar size. We collected 265 wormlions and weighed them (accuracy of 0.1 mg; 8.9 ± 5.4 mg, mean body mass ± 1 SD). We first sorted the collected individuals according to body mass and then divided them into groups of 1, 2, 3, 4, or 5 individuals, so that the within-group variances in body mass were minimal. We randomly allocated individuals to one of the following three treatments: (1) the rectangular area was shaded, while the rest of the tray was lit (hereafter, the shade choice treatment); (2) the entire tray was shaded; and (3) the entire tray was lit. Each of the 15 treatment combinations (treatment × density) was replicated 4-8 times (5.7 ± 1.1, mean number of trays ± 1 SD). In all cases, the sand depth was 2 cm. We photographed the tray after 24 h, documented the number and location of the constructed pits, and then measured their area using ImageJ [62].

Experiment 2: competition between small and large wormlions. We collected 288 wormlions and weighed them (7.2 ± 5.8 mg, mean body mass ± 1 SD). We first sorted the collected individuals according to body mass and assigned individuals to three different treatments, such that the body mass differences among treatments were minimized. Then, we sorted the individuals again according to treatment and body mass. We cut the mass-sorted list in the middle, resulting in two groups per treatment: small individuals (first half) and large individuals (second half). We matched wormlions in groups of four (two small and two large individuals), in ascending order (e.g., the two smallest individuals of the half dataset of smaller individuals were matched with the two smallest individuals of the half dataset of larger individuals). The difference in body mass between the two large and the two small larvae was 9.0 ± 3.8 mg (mean ± 1 SD), meaning that they most likely belonged to different instar stages. The area below a rectangular cover was shaded, while the rest of the tray was lit (similar to treatment 1 in Experiment 1). The three treatments were: (1) all four wormlions were placed simultaneously under the shaded area, (2) the two small ones were placed 2 h before the large ones under the shaded area, and (3) the two large ones were placed 2 h before the small ones under the shaded area. Each of the three treatments was replicated 24 times (3 treatments × 24 replications = 72 experimental trays). After 24 h, we documented the wormlions' location (shade/light) and their identity (small/large), and measured their pit area. Not all wormlions constructed pits, and larger individuals constructed pits more frequently than smaller individuals. Therefore, we referred only to the identity of the individuals constructing pits and calculated for each tray the expected number of large individuals under shade assuming no difference according to size ('null expectation'). We compared this expected number to the observed number of large individuals under shade. For example, let us assume that three individuals constructed pits, of which two were large individuals and the third was a small individual. Of these three pits, two were constructed in the shaded area, by large individuals. Therefore, the 'null expectation', assuming no difference according to size, is that 2/3 of the pits under the shaded area would be constructed by large wormlions. However, the observed proportion of large wormlions constructing their pits in the shaded area is one. See the Supplementary Material for a description of all possible cases.
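A minimal Python sketch of this expected-vs-observed calculation, mirroring the worked example above, might look as follows (the pit list is the example's data, not the experiment's).

```python
def shade_proportions(pits):
    """pits: list of (size, location) tuples with size in {'large', 'small'}
    and location in {'shade', 'light'}. Returns the expected and observed
    proportions of large individuals among pits built in the shade, where
    the expectation assumes size has no effect (the 'null expectation')."""
    n_large = sum(1 for size, _ in pits if size == "large")
    expected = n_large / len(pits)
    shaded = [size for size, loc in pits if loc == "shade"]
    observed = sum(1 for size in shaded if size == "large") / len(shaded)
    return expected, observed

# The worked example from the text: three pits, two built by large
# individuals, and both shaded pits built by large individuals.
pits = [("large", "shade"), ("large", "shade"), ("small", "light")]
print(shade_proportions(pits))  # ≈ (0.667, 1.0)
```

Applying this per tray yields the paired expected and observed values that were later compared with repeated-measures ANOVAs.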
For example, let us assume that three individuals constructed pits, of which two were large individuals and the third was a small individual. Of these three pits, two were constructed in the shaded area, by large individuals. Therefore, the 'null expectation', assuming no difference according to size, is that 2/3 of the pits in the shaded area would be constructed by large wormlions. However, the observed proportion of large wormlions constructing their pits in the shaded area is one. See the Supplementary Material for a description of all possible cases.

Experiments 3-4: competition over deep sand. Experiment 3: competition among wormlions of a similar size. We collected 252 wormlions and weighed them (6.5 ± 4.5 mg, mean body mass ± 1 SD). We first sorted the collected individuals according to body mass and then divided them into groups of 1, 2, 3, 4, or 5 individuals, so that the within-group variances in body mass were minimal. We randomly allocated individuals to one of the following three treatments: (1) the rectangular area contained deep sand (2 cm), while the rest of the tray contained shallow sand (0.5 cm; hereafter, the depth choice treatment); (2) the whole tray contained deep sand; and (3) the whole tray contained shallow sand. Each of the 15 treatment combinations (treatment × density) was replicated 4-6 times (5.5 ± 0.6, mean number of trays ± 1 SD). The trays were placed under shade, 12:12 L:D. After 24 h, we documented the wormlions' location (deep/shallow sand) and measured their pit area.

Experiment 4: competition between small and large wormlions. We collected 288 wormlions and weighed them (8.5 ± 6.5 mg, mean body mass ± 1 SD). We sorted them into sizes and assigned them to treatments as in Experiment 2. The difference in body mass between the two large and the two small larvae was 10.3 ± 3.5 mg (mean ± 1 SD). The rectangular area contained deep sand, while the rest of the tray contained shallow sand (similar to treatment 1 in Experiment 3). We randomly assigned the groups to one of the following three treatments: (1) all four wormlions were placed simultaneously on the deep sand, (2) the two small ones were placed 2 h earlier than the large ones, all on deep sand, and (3) the two large ones were placed 2 h earlier than the small ones, all on deep sand. Each of the three treatments was replicated 24 times (3 treatments × 24 replications = 72 experimental trays). After 24 h we documented the wormlions' location, their identity (small/large), and measured their pit area. We calculated the proportion of large wormlions constructing their pits in the deep sand out of the total number of individuals that constructed pits in the deep sand and compared this proportion to the expected probability according to the total number of pits constructed (see Experiment 2).

Experiment 5: competition between small and large wormlions over a limited area. We collected 100 wormlions and weighed them (10.1 ± 7.5 mg, mean body mass ± 1 SD). We then sorted the collected individuals according to body mass and allocated pairs of one small and one large individual to an experimental cup (N = 50 pairs). The difference in body mass between the large and small wormlion in each cup was 12.6 ± 3.3 mg (mean ± 1 SD). Half of the pairs were placed together, one pair per cup, while the members of the other pairs were separated and placed in two individual cups. After 24 h, we measured the area of the pits constructed.
The next day, we switched between the treatments: the pairs that had shared a cup were separated, and the pairs that had been kept separated were placed together in the same cup. The pit area was measured, as before, after 24 h. The procedure yielded two measurements per individual: the pit area when constructed alone and when paired with a competitor of a different size (a small competitor if it was a large wormlion and a large competitor if it was a small wormlion).

Experiment 6: cannibalism. In Experiment 5, we observed two cases of cannibalism, in which the large wormlion preyed on the smaller one (mass differences between the cannibal and the victim of 7.5 mg and 11.0 mg). Because previous studies on wormlions had doubted the existence of cannibalism, we extended the number of pairs placed together in the same experimental cup. In total, we obtained data for 139 pairs. The body mass of small larvae ranged from 0.1 to 8.9 mg (2.9 ± 2.2 mg, mean body mass ± 1 SD) and that of large larvae from 4.1 to 31.9 mg (13.8 ± 5.7 mg, mean body mass ± 1 SD). The body mass difference between paired wormlions ranged from 3.7 to 23.2 mg (10.9 ± 3.8 mg; mean ± 1 SD). After 24 h, we documented the prevalence of cannibalism by searching for corpses under a stereomicroscope and weighing the remaining larva to verify that it had gained mass.

Statistical analysis. To analyze the number of pits constructed in Experiments 1 and 3, we employed a hierarchical generalized linear model, using a Poisson distribution and a log link function. The number of pits constructed was treated as the response variable, and the number of wormlions in the experimental tray, mean body mass per tray, and treatment were included in the statistical model as explanatory variables. The categorical variable treatment was converted into a dummy variable. Since it comprised three levels, its inclusion in the statistical model required generating two binary indicator variables, representing two of the three treatment levels, with each of them being compared with the reference level, i.e., the third treatment group. The pit area in Experiments 1-5 was analyzed employing a hierarchical generalized linear model, using a Normal distribution and an identity link function. The test is "hierarchical" because the tray/cup was included in the statistical model as a random factor (i.e., accounting for the dependency of individuals within a tray/cup). Pit area was treated as the response variable. In Experiments 1 and 3, wormlion density, mean body mass, treatment, and location were treated as the explanatory variables. In Experiments 2 and 4, treatment (individuals placed simultaneously or not), location, and size (large/small) were treated as explanatory variables. In these two experiments, we also used repeated-measures ANOVAs to compare the observed and expected proportion (arcsine-transformed) of large wormlions constructing their pits in the initial location (as the response variable), with treatment as the between-subject factor. In Experiment 5, size (large/small), treatment (alone/together), and order of treatments (first alone or together) were treated as the explanatory variables. Experiment 6 was analyzed using two logistic regressions, with the occurrence of cannibalism as a binary response variable, the body mass difference between the paired wormlions as the explanatory variable, and either the mass of the small individual or the mass of the large individual as another explanatory variable. A schematic sketch of these model types is given below.
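The models were fitted in Stata (see below); purely for illustration, here is a schematic Python analogue of three of the model types, assuming a hypothetical tidy data file and column names. The Poisson and logistic sketches omit the random tray/cup effect, so this is a simplified outline rather than a faithful reproduction of the hierarchical fits.

```python
# Schematic analogue of the main model types (assumed columns).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("wormlion_trials.csv")  # hypothetical data file

# Poisson regression (log link) for the number of pits constructed
# in the initial placement location (random tray effect omitted here).
pit_counts = smf.glm(
    "n_pits_initial ~ density + mean_mass + C(treatment)",
    data=df, family=sm.families.Poisson(),
).fit()

# Linear mixed model for pit area, with tray as a random factor
# (Normal distribution, identity link).
pit_area = smf.mixedlm(
    "pit_area ~ density + mean_mass + C(treatment) + C(location)",
    data=df, groups=df["tray"],
).fit()

# Logistic regression for the occurrence of cannibalism (Experiment 6).
cannibalism = smf.logit(
    "cannibalized ~ mass_difference + mass_small", data=df,
).fit()

print(pit_counts.summary())
print(pit_area.summary())
print(cannibalism.summary())
```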
When an interaction was not significant, it was removed, and the test was redone. All analyses were conducted using STATA 15 (2017; StataCorp, College Station, TX).

Results

Experiments 1-2: competition over shade. Experiment 1: competition among wormlions of a similar size. The increase in the number of pits constructed in the initial placement location, as a function of the total number of pits in the tray, was stronger in the shade choice treatment (z = 2.37, P = 0.018) and in the lit tray treatment (z = 2.11, P = 0.035) than in the shaded tray treatment (Fig. 2a). Increased body mass led to a decrease in the number of pits constructed in the initial placement location (z = −2.43, P = 0.015). Pit area increased with body mass (z = 5.70, P < 0.001), but did not differ among the three treatments (P > 0.080). The interaction between pit location and treatment was not significant (P > 0.070 in both comparisons).

Experiment 2: competition between small and large wormlions. Large wormlions constructed pits in the preferred shaded area more frequently than small wormlions (F1,69 = 24.88, P < 0.001; Fig. 2b). There was no effect of treatment (placing the small wormlions 2 h earlier or later vs. all simultaneously) on the outcome (F2,69 = 0.485, P = 0.618). Regarding pit area, large wormlions constructed larger pits (z = 16.25, P < 0.001; Fig. 2c). Additionally, pit area was larger in the shaded area of the tray (z = 4.02, P < 0.001; Fig. 2d). There was no difference in pit area when small wormlions were placed earlier versus all placed simultaneously, or when large wormlions were placed earlier versus all placed simultaneously (z = −0.83 and 0.48, P = 0.407 and P = 0.628, respectively). None of the two-way interactions was significant, and they were hence removed (P > 0.062).

Experiments 3-4: competition over deep sand. Experiment 3: competition among wormlions of a similar size. As expected, the number of pits constructed in the initial placement location was positively correlated with the total number of pits in the tray (z = 2.23, P = 0.026). However, there were no differences in the number of pits constructed in the initial placement location between the deep sand and the two other treatments (P > 0.835 in both cases), and none of the two-way interaction terms was significant (P > 0.660 in both cases). Increased body mass led to a decrease in the number of pits constructed in the initial placement location (z = −2.40, P = 0.016). Pits constructed in the initial placement location were larger than those constructed elsewhere in the tray (z = 3.43, P = 0.001), and this pattern was more pronounced in the depth choice treatment (z = 2.28, P = 0.022; Fig. 3a). Pits constructed in the deep sand treatment were larger than those constructed in the depth choice treatment (z = −2.33, P = 0.020), and also tended to be larger than those constructed in the shallow sand treatment (z = −1.89, P = 0.058). The effect of pit location on pit area did not differ between the deep and shallow sand treatments (z = −1.83, P = 0.067). As expected, pit area increased with body mass (z = 9.02, P < 0.001).

Experiment 4: competition between small and large wormlions. Large wormlions constructed pits in the preferred deep sand more frequently than small wormlions (F1,69 = 34.76, P < 0.001; Fig. 3b). There was no significant effect of treatment (placing the small wormlions 2 h earlier or later vs. all simultaneously) on the outcome (F1,69 = 0.12, P = 0.889).
Regarding pit area, large wormlions constructed larger pits, with this pattern being more pronounced in deep sand (body size × sand depth interaction; z = 6.68, P < 0.001; Fig. 3c). Size as a main effect was significant as well (z = 3.22, P = 0.001), while sand depth was not (z = 0.43, P = 0.668). Treatment interacted with sand depth to affect pit area (z = 2.03, P = 0.042): pits in deep sand were larger when large wormlions arrived 2 h before the small wormlions, compared to a simultaneous arrival (Fig. 3d). All other two-way interactions were not significant and hence removed (P > 0.131).

Discussion

Our study provides multiple lines of evidence for competition over space in wormlions, especially over favorable sites. As expected, large individuals were superior to small ones: they occupied the favorable sites, forcing the small individuals to relocate further away, with no "priority effect" evident. In other words, allowing small individuals to arrive earlier did not moderate the advantage of large wormlions in populating superior sites. The advantage of large individuals, however, was not absolute: they did not prevent neighboring small wormlions from constructing pits, and consequently had to reduce their own pit dimensions. Wormlion cannibalism occurred, although it was rare and required a large body mass difference between the cannibal and its victim. Overall, although large wormlions demonstrated superiority over small ones, we conclude that large individuals have only a moderate negative influence on small ones regarding habitat selection, compared to other TB predators, such as antlions and spiders.

At low densities, wormlions first inhabit the favorable sites, while sites of a lower quality are occupied only when density increases. This process of habitat selection is common in many other animals 9,63,64. The superiority of shaded over lit microhabitats from the wormlions' perspective is demonstrated in the greater number of pits built in the initial placement location under shade, when the surrounding microhabitat was lit, compared with a completely shaded microhabitat. In other words, elevating the quality of the favorable microhabitat (shaded and close by) over the inferior microhabitat (lit and more distant) leads to higher densities occupying the former, as demonstrated in pairwise comparisons of three or more habitats of different quality 65,66. While this pattern is known, the effect of body size/mass has been less often studied. Here, we demonstrate that fewer wormlions remained in the favorable microhabitat as body size increased, as evidence of intensifying competition for space among larger individuals. This finding highlights the importance of referring to body size/mass in the process of habitat selection and might explain the differences obtained among studies examining individuals of the same species but differing in their sizes. We present evidence of the superior competitive ability of large individuals over smaller ones and demonstrate its consequence for habitat selection. Larger individuals showed a higher probability of occupying the favorable microhabitats, similar to findings in other studies 40,67-69, and supporting unequal competitor models of habitat selection (reviewed in 10).
Because density in the favorable microhabitat decreases with body mass, and large individuals populate this microhabitat more often, favorable microhabitats might seem less populated than expected, or the inferior microhabitats more populated than expected. This pattern is common in other animals too, with several explanations having been suggested, such as interference or perceptual limitations 70,71. In some systems of TB predators, large individuals are located in the cluster's center, and smaller individuals move to the periphery 72-74. It has been suggested that since prey arrives from the periphery, the exterior positions receive more prey, which is prevented from accessing the cluster's center, a process called "shadow competition" 42-44. Large individuals may nonetheless remain in the center more often than small ones because they are less strongly affected by shadow competition than are small individuals 40. However, here we present a different mechanism behind the occupation of the center by large individuals: if the habitat is not homogeneous, and its center is of a higher quality than the periphery, large individuals will aggregate there. The assumption that the center often provides better conditions than the periphery is supported in other systems too, because the periphery is more susceptible to various biotic and abiotic types of interference 75,76.

We expected that the earlier arrival of smaller individuals at the favorable area would moderate the advantage larger individuals have in occupying such superior positions. We also expected that the earlier arrival of larger individuals would strengthen the advantage of larger individuals over smaller ones. Neither expectation held true, and large individuals occupied the superior positions in similar proportions, independent of the order of arrival. This finding is not in accord with the phenomenon of the "priority effect", according to which early arrival allows specific animals to occupy the best sites, while late arrivals compromise on inferior sites, with consequences for reproduction and survival 25,77. However, when large individuals were placed before the smaller ones, the pits constructed in deep sand were moderately larger. Since most pits in deep sand were constructed by large wormlions, this result may also be interpreted as weak, partial support for the priority effect from the perspective of large wormlions: when large ones are introduced first, they can construct larger pits in deep sand, compared to the two other scenarios (either a simultaneous arrival or arrival after the small wormlions).

When competing over a limited area, both large and small individuals constructed smaller pits compared to the pits they dug in isolation. The competitive ability of large wormlions is thus limited. This is especially true compared to other TB predators. For example, when forced together in a limited area, large and small individuals of the antlion Macroleon quinquemaculatus constructed larger and smaller pits than expected, respectively 39. Small colonial spiders of the species Metepeira incrassata postpone the construction of their webs and allow large individuals to construct their webs first, to prevent potential conflict 73. We present here the first evidence of cannibalism in wormlions, in contrast to a previous suggestion that wormlions are not cannibalistic 56. Cannibalism rates were nevertheless low: ~4% of pairs of heterogeneous sizes.
In comparison, other TB predators demonstrate much higher cannibalism rates. For example, pairing individuals of different instar stages of the antlion Myrmeleon hyalinus resulted in cannibalism in up to 75% of the cases 8, and 20% of the diet of two Pardosa spider species consists of conspecifics 78. This result fits well with our current finding of only a limited superiority of large individuals. One explanation for this could be the lack of obvious weaponry in wormlions, in contrast to the mandibles or chelicerae of antlions and spiders. Wormlions display an atypical predatory lifestyle compared to other fly larvae. Although predatory fly larvae in other families are known, such larvae prey on the soft larvae or eggs of other insects (or snails), and not on well-defended prey, such as ants, which are common prey of wormlions 79. Indeed, antlions can subdue larger ants than can wormlions of the same size 54, which may also explain why large wormlions "cope" less well with small conspecifics than other TB predators. This is perhaps the best wormlions can do given their morphology. Note that the propensity for cannibalism we report under laboratory settings is almost certainly exaggerated, owing to high density and lack of refuge. It is necessary to evaluate cannibalism rates in the field in order to determine how common it really is.

There were several clear differences between the two sets of treatments with shaded/lit vs. deep/shallow sand microhabitats. First, while density had a similar effect on the final location in all sand depth treatments, more wormlions remained in their initial placement location in the shade choice treatment than when placed under full shade or light conditions. Regarding the pit area, the final location in deep or shallow sand had a stronger effect than the treatment per se, while the shade/light treatments were more important for the pit area than the final location. This fits previous studies indicating that while both deep sand and shade are preferred, sand depth affects the pit area more strongly 80. The reason is probably that while deep sand is preferred in order to construct larger pits, shade is preferred in order to avoid exposure to high temperatures and desiccation 58. Second, large wormlions remained either under shade or in deep sand, while small ones relocated more frequently to lit or shallow sand areas. The advantage deep sand provides for large individuals is much greater than for small individuals 61, as demonstrated in the interactive effect of wormlion size and sand depth on pit area (Fig. 3c). In contrast, shade should be similarly important for large and small wormlions alike. Third, when large individuals were placed before the smaller ones, the pits constructed in deep sand were larger (a significant sand depth × treatment interaction; Fig. 3d), suggesting once more that deep sand is more important for large wormlions than for small ones.

In summary, we have demonstrated here evidence of competition in wormlions over favorable ambush sites. While large wormlions possess an advantage over small individuals, this advantage is weaker compared to that in other TB predators. Future studies should examine not only intraspecific competition but also competition with other insects, such as antlions, occupying ambush sites of a similar nature. Pit-building antlions and wormlions sometimes co-occur 53,54. However, antlions are usually rare within wormlion clusters in the Mediterranean area (57; Scharf I, pers.
obs.), and antlions are probably superior competitors in direct interactions with wormlions. An intriguing question thus arises: what makes antlions unable to invade the wormlions' typical habitats? The answer could be related to an abiotic constraint, or perhaps to the ability of wormlions to settle for lower prey availability or prey of smaller size.
Genomic surveillance of SARS-CoV-2 strains circulating in Iran during six waves of the pandemic

Abstract

Background: SARS-CoV-2 genomic surveillance is necessary for the detection, monitoring, and evaluation of virus variants, which can have increased transmissibility, disease severity, or other adverse effects. We sequenced 330 SARS-CoV-2 genomes during the sixth wave of the COVID-19 pandemic in Iran and compared them with those of the five previous waves, in order to identify SARS-CoV-2 variants, follow the genomic behavior of the virus, and understand its characteristics.

Methods: After viral RNA extraction from clinical samples collected during the COVID-19 pandemic, next generation sequencing was performed using the NextSeq and Nanopore platforms. The sequencing data were analyzed and compared with reference sequences.

Results: In Iran, during the first wave, the V and L clades were detected. The second wave was recognized by the G, GH, and GR clades. The circulating clades during the third wave were GH and GR. In the fourth wave, GRY (Alpha variant), GK (Delta variant), and one GH clade (Beta variant) were detected. All viruses in the fifth wave were in the GK clade (Delta variant). In the sixth wave, the Omicron variant (GRA clade) was circulating.

Conclusions: Genome sequencing, a key strategy in genomic surveillance systems, helps to detect and monitor the prevalence of SARS-CoV-2 variants, monitor the viral evolution of SARS-CoV-2, and identify new variants for disease prevention, control, and treatment, and it also provides information for conducting public health measures in this area. With this system, Iran could be ready for the surveillance of other respiratory virus diseases besides influenza and SARS-CoV-2.

The accessory proteins are not vital for viral replication; however, they are involved in pathogenesis.6,7 Coronaviruses are biologically diverse and mutate rapidly.8 The virus's properties are largely unaffected by most of these changes. However, some changes may have an impact on the properties of the virus, such as how easily it spreads, the severity of the disease it causes, or how well vaccines, therapeutic medicines, diagnostic tools, or other social and public health measures work.9 Since the beginning of the COVID-19 pandemic, different genetic lineages of SARS-CoV-2 have emerged and spread worldwide.9 The SARS-CoV-2 variants that may pose an increased risk to public health have been divided by WHO into the following three groups: variants under monitoring (VUMs), variants of interest (VOIs), and variants of concern (VOCs). A VUM is a variant with genetic changes that are thought to affect the characteristics of the virus and for which there are some indications that it may pose a threat to public health and safety in the future. Variants that have been found to cause community spread in multiple cases, clusters, or countries are defined as VOIs. A VOC is defined by an increase in transmissibility and virulence or a decrease in the efficacy of current public health, social, and therapeutic measures.10 There are currently 11 clades in the GISAID nomenclature system, which is based on shared marker mutations. The L and S clades formed early in the pandemic, before L split into V and G. The GR, GH, GV, and GK clades split from the base clade G. GR evolved into GRY, which later developed into GRA, the currently dominant clade. The O clade contains all sequences that have not been classified.11 The gold standard for monitoring and identifying new variants of SARS-CoV-2 is whole genome sequencing (WGS) using next generation sequencing (NGS).
All SARS-CoV-2 genes can be sequenced using this method, including those encoding non-structural proteins and other regions.12 It is essential to maintain constant monitoring of the genetic diversity of SARS-CoV-2 in order to (a) ensure that vaccines and immune-based diagnostic or therapeutic interventions are effective, (b) offer a treatment that is much more stable, and (c) observe the pattern of the virus's geographic spread during the ongoing pandemic.13,14

2 | METHODS

Data analysis: All the reads were mapped to the SARS-CoV-2 reference genome assembly for data investigation. The assembled viral genome was of high quality and contained no unknown nucleotides. The gathered genomes were studied with the CoVsurver mutation analysis application in GISAID and aligned using the sequence alignment program BioEdit. Finally, all sequences were submitted to GISAID.

3 | RESULTS

In this study, 330 COVID-19 confirmed cases from the sixth wave of COVID-19 in Iran were subjected to NGS. These specimens were oropharyngeal swabs collected from all over the country. The variants and amino acid changes in structural, non-structural, and accessory proteins were compared with those of the SARS-CoV-2 strains circulating in Iran during the first five waves, which were evaluated in our previous study.3,15 Amino acid changes in structural proteins are listed in Table 1, and those related to nonstructural proteins in Table 2. It should be noted that amino acid substitutions in accessory proteins were detected in a limited number of strains in the sixth wave. The highest rate of substitution in these proteins was 1.3% among BA.1 and BA.2 variants, as follows: 1.3% of BA.1 strains had the NS7a-P99S substitution, and 1.3% of BA.2 strains had the NS3-H78Y substitution.

[Table 1 caption: Amino acid changes detected in structural proteins of SARS-CoV-2 strains circulating in the sixth wave of the pandemic, detected in more than 70% of each variant, compared to shared amino acid changes during the first five waves in Iran. Columns: Genes, 6th wave, Shared changes with previous waves.]

As an RNA virus, SARS-CoV-2 has a high rate of mutations, resulting in ongoing evolution over time that could affect replication, infectivity, transmissibility, virulence, and immunogenicity.16 Increasing transmissibility, pathogenicity, and the capacity to evade natural or vaccine-induced immunity are all potential outcomes of emerging variants.17 Analysis of whole-genome sequences is essential for monitoring its increased transmissibility and virulence-altering potential. In this study, we reported the circulation of distinct lineages of SARS-CoV-2, the virus that emerged in late 2019.

[Table 2 fragment: nonstructural protein changes NSP3-A1892T, NSP3-K38R, NSP3-L1266I, NSP3-S1265del.]

BA.1 quickly became the most common variant worldwide after being discovered, and it has since developed into several other lineages.23 The spike protein of BA.1 has more than 30 mutations that make it less sensitive to vaccine-induced antibody neutralization.25 [...] severe.34,37,38 Late in January 2020, the spike protein's D614G mutation was occasionally observed in both Europe and China. This mutation first spread through Europe and then gradually spread worldwide. It is still the predominant spike substitution globally.37,39 In our study, the D614G mutation was continuously identified from the second wave to the sixth (with more than 70% frequency), and it was the major substitution. In the last wave, BA.1, BA.2, and mixed lineages (ML) showed this mutation at a high percentage.
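Mutation frequencies such as the D614G percentages reported above are obtained by tallying lineage-defining amino acid changes across the set of consensus genomes. The following minimal sketch illustrates the idea for three spike marker substitutions, assuming the spike protein sequences have already been extracted and aligned to the same coordinates as the reference; the file names and the use of Biopython are our assumptions, not the pipeline used in this study.

```python
# Sketch: tally spike marker substitutions (e.g., D614G, N501Y, P681H)
# across aligned spike protein sequences, relative to the reference.
from Bio import SeqIO

MARKERS = [("D", 614, "G"), ("N", 501, "Y"), ("P", 681, "H")]

ref = str(next(SeqIO.parse("spike_reference.fasta", "fasta")).seq)
samples = list(SeqIO.parse("spike_samples_aligned.fasta", "fasta"))

counts = {f"{a}{pos}{b}": 0 for a, pos, b in MARKERS}
for record in samples:
    seq = str(record.seq)
    for ref_aa, pos, alt_aa in MARKERS:
        # pos is 1-based; assumes sample shares the reference coordinates
        if ref[pos - 1] == ref_aa and seq[pos - 1] == alt_aa:
            counts[f"{ref_aa}{pos}{alt_aa}"] += 1

for mutation, n in counts.items():
    print(f"{mutation}: {100 * n / len(samples):.1f}% of {len(samples)} genomes")
```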
The N501Y mutation is localized to the RBD and helps in achieving higher binding affinity to host cells, potentially leading to increased transmissibility. P681H, at the S1/S2 spike cleavage site, is thought to increase furin cleavage, potentially affecting viral cell entry.42 It is believed that the P681H mutation in Omicron's spike protein increases spike protein cleavage and contributes to Omicron's rapid transmission.43 The presence of this substitution in Omicron raised concern, as it may be associated with higher virulence and infectivity.44 In Iran, P681H was detected in the sixth and fourth waves. As in the last wave, this mutation was found in BA.1, BA.2, and mixed lineages with high frequencies. Omicron has been described as a highly mutated variant with an "unusual constellation of mutations."45 Free energy perturbation and computational mutagenesis confirmed that the Omicron RBD binds ACE2 2.5 times more strongly than the prototype SARS-CoV-2. Notably, three substitutions, T478K, Q493K, and Q498R, nearly doubled the electrostatic potential (ELE) of the Omicron RBD-ACE2 complex and made a significant contribution to the binding energies.46 Moreover, the Omicron variant and other VOCs share the T478K and E484A mutations, which have been found to increase neutralizing antibody resistance and to be associated with immune escape.47 In this study, the T478K substitution was identified in the last two waves (fifth and sixth). In the sixth wave, this mutation was present in all BA.1, BA.2, and mixed lineage samples. The Omicron VOC is also described by the four-point mutation in [...].

It is important to note that the nonstructural proteins of SARS-CoV-2 (NSPs) primarily affect the innate immune responses of humans, facilitating immune escape. NSP3 performs cleavage operations on nsps, via the PLpro domain, including self-cleavage of NSP3.49 This nonstructural protein has several immune escape mechanisms that make it easier for viruses to reproduce, such as hindering ISG15 modification and inhibiting IFN production.50-52 In our study, an NSP3 mutation was detected in BA.1, BA.2, and the mixed groups in the sixth wave, but was not present in previous waves. The transmembrane proteins nsp3, nsp4, and nsp6 hijack and rearrange the membranes of the host endoplasmic reticulum, subsequently inducing the formation of double membrane vesicles (DMVs).53 We observed nsp4 substitutions in the fourth, fifth, and sixth waves, but nsp6 mutations were identified only in the sixth wave, in the BA.1, BA.2, and mixed groups. NSP5 is the major protease (Mpro) of SARS-CoV-2. NSP5 likewise cleaves NLRP12 and TAB1, as well as processing long viral polypeptides. This protein is essential for viral infection.54

ACKNOWLEDGMENTS

We would like to thank all the patients who kindly participated in our study. We also owe many thanks to the staff of the NIC located at [...].

CONFLICT OF INTEREST STATEMENT

The authors declare no conflicts of interest.

DATA AVAILABILITY STATEMENT

Data are openly available in a public repository that issues datasets with DOIs.
ACM bundles on cubic fourfolds containing a plane

We study ACM bundles on cubic fourfolds containing a plane exploiting the geometry of the associated quadric fibration and Kuznetsov's treatment of their bounded derived categories of coherent sheaves. More precisely, we recover the K3 surface naturally associated to the fourfold as a moduli space of Gieseker stable ACM bundles of rank four.

Introduction

Fourier-Mukai techniques to study stable vector bundles on surfaces have been an extremely useful tool for more than 30 years. In this paper, we use a construction by Kuznetsov to generalize this circle of ideas and study Arithmetically Cohen-Macaulay (ACM) stable vector bundles on smooth projective cubic hypersurfaces. The basic idea is to use a semiorthogonal decomposition of the derived category of coherent sheaves to "reduce dimension". The disadvantage of this approach is that we have to consider complexes and a notion of stability for them; this forces us to restrict to the cubic threefold case and to special examples in the fourfold case. The advantage is that this may lead to a general approach to study ACM stable bundles in higher dimensions.

ACM bundles and semiorthogonal decompositions. Let Y ⊂ P n+1 be a smooth complex cubic n-fold, and let O Y (H) denote the corresponding very ample line bundle. A vector bundle F on Y is called Arithmetically Cohen-Macaulay if dim H i (Y, F (jH)) = 0, for all i = 1, . . . , n − 1 and all j ∈ Z. In algebraic geometry, the interest in studying stable ACM bundles (and their moduli spaces) on projective varieties arose from the papers [12,26,31,32,48]. In fact, in [26] it is proved that the moduli space of rank 2 instanton sheaves on a cubic threefold is isomorphic to the blow-up of the intermediate Jacobian in (minus) the Fano surface of lines. The intermediate Jacobian can be used both to control the isomorphism type of the cubic, via the Clemens-Griffiths/Tyurin Torelli Theorem, and to prove the non-rationality of the cubic (see [25]). From a more algebraic viewpoint, ACM bundles correspond to Maximal Cohen-Macaulay (MCM) modules over the graded ring associated to the projectively embedded variety, and as such they have been extensively studied in the past years (see, e.g., [58]).

In a different direction, Kuznetsov studied in [35] semiorthogonal decompositions of the derived category of a cubic hypersurface. In fact, as we review in Section 1.1, there exists a non-trivial triangulated subcategory T Y ⊂ D b (Y ), which might encode the birational information of the cubic. For example, in the case of a cubic threefold Y , it is proven in [14] that the isomorphism class of Y can be recovered directly from T Y as a sort of "categorical version" of the Clemens-Griffiths/Tyurin Torelli Theorem. In [36] it is conjectured that a cubic fourfold is rational if and only if the category T Y is equivalent to the derived category of a K3 surface. In the fourfold case, a relation between this conjecture and the classical Hodge theoretical approach to rationality appears in [2]. For the interpretation of T Y as a category of matrix factorizations, we refer to [51]. For the interpretation as a summand of the Chow motive of Y , we refer to [13]. For cubic threefolds and fourfolds containing a plane, a different description of T Y is available, via Kuznetsov's semiorthogonal decomposition of the derived category of a quadric fibration (see [37]).
Indeed, as we review in Section 1.3, T Y is equivalent to a full subcategory of the derived category of sheaves on P 2 with the action of a sheaf of Clifford algebras B 0 (determined by fixing a structure of quadric fibration on the cubic). We denote the induced fully-faithful functor Ξ : T Y ↪ D b (P 2 , B 0 ) (for fourfolds, Ξ is an equivalence). The key observation (which is not surprising if we think of ACM bundles as MCM modules, see [21, Section 2] and [51]) is the following: given a stable ACM bundle F on Y , a certain twist of F by the very ample line bundle O Y (H) belongs to T Y (this is Lemma 1.6). Hence, the idea is to study basic properties of ACM bundles on Y (e.g., existence, irreducibility of the moduli spaces, etc.) by using the functor Ξ, and so by considering them as complexes of B 0 -modules on P 2 . The principle is that, since D b (P 2 , B 0 ) has dimension 2, although it is not intrinsic to the cubic, it should still lead to several simplifications. The main question now becomes whether there exists a notion of stability for objects in D b (P 2 , B 0 ) which corresponds to the usual stability for ACM bundles. In this paper we suggest, both for cubic threefolds and fourfolds containing a plane, that such a notion of stability in D b (P 2 , B 0 ) should be Bridgeland stability [18].

Cubic threefolds. Let Y be a cubic threefold. By fixing a line l 0 in Y , the projection from l 0 to P 2 gives a structure of a conic fibration on (a blow-up of) Y . The sheaf of algebras B 0 on P 2 mentioned before is nothing but the sheaf of even parts of the Clifford algebras associated to this conic fibration (see [36]). Denote by Coh(P 2 , B 0 ) the abelian category of coherent B 0 -modules, and by D b (P 2 , B 0 ) the corresponding bounded derived category. As a first step in the study of ACM bundles on Y , we consider the moduli spaces M d of Gieseker stable B 0 -modules in Coh(P 2 , B 0 ) with Chern character (0, 2d, −2d), for any d ≥ 1. These moduli spaces are tightly related to the geometry of Y , and the first general result we can prove is the following (see Theorem 2.12).

Theorem A. The moduli space M d is irreducible with a morphism Υ : M d → |O P 2 (d)| whose fiber over a general smooth curve C in |O P 2 (d)| is the disjoint union of 2^(5d−1) copies of the Jacobian of C. Moreover, the stable locus M s d is smooth of dimension d^2 + 1.

The geometry of M 1 and M 2 can be understood more explicitly. Indeed, it turns out that M 1 is the Fano variety of lines in Y blown-up at the line l 0 (see Proposition 2.13). On the other hand, M 2 is a birational model of the intermediate Jacobian of Y (see Theorem 3.10 for a more detailed statement). Both results are obtained via wall-crossing in the space of Bridgeland stability conditions on the triangulated category D b (P 2 , B 0 ). As a corollary, one gets that the moduli space of instanton sheaves on Y (of charge 2) is isomorphic to a moduli space of Bridgeland stable objects in D b (P 2 , B 0 ) with prescribed Chern character (see Theorem 3.10).

As T Y can be naturally identified with a full subcategory of D b (P 2 , B 0 ), via the functor Ξ, one may want to consider objects of M d which are contained in T Y . These generically correspond to ACM bundles on Y . This is the way we can achieve the following theorem, which generalizes one of the main results in [22].

Theorem B. Let Y be a cubic threefold. Then, for any r ≥ 2, the moduli space of stable rank r Ulrich bundles is non-empty and smooth of dimension r^2 + 1.
Recall that an Ulrich bundle E is an ACM bundle whose graded module ⊕_{m∈Z} H 0 (Y, E(m)) has 3 rk(E) generators in degree 1 (see Section 2.5 for a discussion about the normalization chosen). If compared to the first part of [22, Thm. 1.2], our result removes the genericity assumption. We believe that Theorem A will also be useful in studying the irreducibility of the moduli space of stable Ulrich bundles. In fact, we expect the functor Ξ to map all stable Ulrich bundles on Y into Bridgeland stable objects in D b (P 2 , B 0 ), thus generalizing Theorem 3.10 to the case r > 2. It is perhaps worth pointing out that the proof of Theorem B, which is contained in Section 2.5, is based upon the same deformation argument as in [22]. The main difference is that, by using our categorical approach and the moduli spaces M d , we can make it work also for small rank (r = 2, 3). Indeed, the argument in [22] relies on the existence of an ACM curve on Y of degree 12 and genus 10, proved by Geiß and Schreyer in the appendix to [22], only for a generic cubic threefold, using Macaulay2. Moreover, although we have focused on cubic threefolds, we believe that our approach might work for any quadric fibration. In particular, other interesting Fano threefolds of Picard rank 1 are the intersection of three quadrics in P 6 , the quartic hypersurface containing a double line, or the double covering of P 3 ramified along a quartic with an ordinary double point (see [10]).

Cubic fourfolds containing a plane. We consider now the case of a cubic fourfold Y containing a plane P . The plane can be used, as in the threefold case, to give the structure of a quadric fibration π over P 2 . The corresponding functor Ξ : T Y → D b (P 2 , B 0 ) is now an equivalence (see [36]). The category D b (P 2 , B 0 ) can be described more geometrically. Indeed, the singular quadrics of the fibration π lie over a sextic plane curve, which we denote by C. Generically, the sextic will be smooth. In such a case, we let S be the smooth projective K3 surface obtained as a double cover of P 2 ramified along C. By [36], we have then an equivalence D b (P 2 , B 0 ) ≅ D b (S, A 0 ), where A 0 is a sheaf of Azumaya algebras over S. The K3 surface S plays an important role in the study of moduli spaces of ACM bundles on cubic fourfolds, as explained in the following result.

Theorem C. Let Y be a cubic fourfold in P 5 containing a plane P . Then the K3 surface S is the closure of a component of the moduli space of Gieseker stable ACM bundles over Y with class (4, −2H, −P, l, 1/4), where l is the class of a line in Y .

The theorem can be deduced from a rather long computation carried out throughout Section 4. As far as we know, this is the first example of a 2-dimensional family of stable ACM bundles of rank 4 on a cubic fourfold. Moreover, we observe that the smooth locus of any moduli space of slope-stable ACM vector bundles on Y carries a symplectic form (see Remark 1.8). A way to rephrase Theorem C is that the embedding of T Y into D b (Y ) can be realized as a fully faithful Fourier-Mukai functor whose kernel is a shift of a sheaf: namely, the universal family in the moduli problem in Theorem C.

Related works. The idea of using semiorthogonal decompositions to study ACM bundles by reducing dimension is influenced by [39]. More precisely, in loc. cit., Kuznetsov proposes to understand the geometry of moduli spaces of instanton bundles (of any charge) on cubic threefolds via the category D b (P 2 , B 0 ) and the functor Ξ.
There have been many studies about ACM bundles of rank 2 in dimension 2 and 3. Besides the already mentioned results on instanton bundles on cubic threefolds, some papers in this direction are [6,16,23,24,46]. The higher rank case has been investigated in [5,6,47]. The papers [50] and [53] give a few examples of indecomposable ACM bundles of arbitrarily high rank. The already mentioned papers [21,22] contain a systematic study of stable ACM bundles in higher rank on cubic surfaces and threefolds. A general existence result for Ulrich bundles on hypersurfaces is in [28]. Regarding preservation of stability via the functor Ξ, the papers [14,45] study the case of ideal sheaves of lines on a cubic threefold or fourfold containing a plane.

Plan of the paper. The paper is organized as follows. Section 1 collects basic facts about semiorthogonal decompositions and general results about ACM bundles on cubic hypersurfaces. In particular, we show that stable ACM bundles are objects of T Y (up to twists) and state a simple cohomological criterion for a coherent sheaf in T Y to be ACM (see Lemmas 1.6 and 1.9). In Section 1.3 we review Kuznetsov's work on quadric fibrations. Section 2 concerns the case of cubic threefolds, where the first two results mentioned above are proved. The argument is based on a detailed description of the easiest case M 1 , which involves Bridgeland stability conditions (see Section 2.2). Some background on the latter subject is provided in the same section. In Sections 2.4 and 2.5 we prove Theorems A and B respectively. The geometric applications to some simple wall-crossing phenomena are described in detail in Section 3, where we study the geometry of M 2 and its relation to instanton bundles. Section 4 is devoted to the proof of Theorem C.

Notation. Throughout this paper we work over the complex numbers. For a smooth projective variety X, we denote by D b (X) the bounded derived category of coherent sheaves on X. We refer to [29] for basics on derived categories. If X is not smooth, we denote by X reg the regular part of X. We set hom i (−, −) := dim Hom i (−, −), where Hom i (−, −) is computed in an abelian or triangulated category which will be specified each time. This paper assumes some familiarity with basic constructions and definitions about moduli spaces of stable bundles. For example, we do not define explicitly the notions of slope and Gieseker stability, or of Harder-Narasimhan (HN) and Jordan-Hölder (JH) factors of a (semistable) vector bundle. For this, we refer to [30]. The same book is our main reference for the standard construction of moduli spaces of stable sheaves. For the twisted versions of them we refer directly to [54,42,59]. In the following, we will use the short-hand notation (semi)stable to refer to stable (respectively, semistable). Gieseker stability will be simply called stability, while slope stability will be called µ-stability.

1. The derived category of a cubic hypersurface

In this section we show that, on a smooth cubic hypersurface Y , all stable ACM bundles are well behaved with respect to Kuznetsov's semiorthogonal decomposition of the derived category. In particular, after recalling the notion of semiorthogonal decomposition of a derived category, we show that stable ACM bundles on Y belong to the non-trivial component T Y of D b (Y ), up to twist by line bundles.
We also introduce one of the basic tools to study the derived category of cubic threefolds and fourfolds containing a plane: Kuznetsov's description of the derived category of a quadric fibration.

1.1. Semiorthogonal decompositions. Let X be a smooth projective variety and let D b (X) be its bounded derived category of coherent sheaves. A semiorthogonal decomposition of D b (X) is a sequence of full triangulated subcategories T 1 , . . . , T m such that Hom(T i , T j ) = 0 for i > j and such that the smallest triangulated subcategory of D b (X) containing T 1 , . . . , T m coincides with D b (X). We will denote such a decomposition by D b (X) = ⟨T 1 , . . . , T m ⟩. An object F ∈ D b (X) is exceptional if Hom p (F, F ) = 0 for all p ≠ 0 and Hom(F, F ) ≅ C. A collection {F 1 , . . . , F m } of objects in D b (X) is called an exceptional collection if F i is an exceptional object, for all i, and Hom p D b (X) (F i , F j ) = 0, for all p and all i > j. An exceptional collection gives a semiorthogonal decomposition where, by abuse of notation, we denote by F i the triangulated subcategory generated by F i (equivalent to the bounded derived category of finite dimensional vector spaces). Moreover, one can define ⟨F 1 , . . . , F m ⟩⊥ = {G ∈ T : Hom p (F i , G) = 0, for all p and i}. Similarly, one can define ⊥⟨F 1 , . . . , F m ⟩ = {G ∈ T : Hom p (G, F i ) = 0, for all p and i}.

Let F ∈ D b (X) be an exceptional object. Consider the two functors, respectively left and right mutation, L F (G) := cone(ev : RHom(F, G) ⊗ F → G) and R F (G) := cone(G → RHom(G, F )^∨ ⊗ F )[−1]. More intrinsically, let ι ⊥F and ι F⊥ be the full embeddings of ⊥F and F⊥ into D b (X). Denote by ι * ⊥F and ι ! ⊥F the left and right adjoints of ι ⊥F , and by ι * F⊥ and ι ! F⊥ the left and right adjoints of ι F⊥ . The mutation functors can then be expressed in terms of these adjoints (see, e.g., [38, Sect. 2]). The main property of mutations is that, given a semiorthogonal decomposition of D b (X), we can produce two new semiorthogonal decompositions by mutating its components.

Let us make precise the relation between left and right mutations that will be used throughout the paper. Denote by S X = (−) ⊗ ω X [dim(X)] the Serre functor of X. We have the following lemma (which actually works more generally for any admissible subcategory in D b (X)).

Proof. This follows from the remark that ⊥(S X (F )) = F⊥ and using adjunction between the functors ι * D , ι D and ι ! D for D equal to ⊥F or to F⊥.

The following lemmas show that the category T Y and stable ACM bundles are closely related.

Remark 1.7. The previous lemma is slightly more general. Indeed, the same proof works for a balanced ACM bundle of rank greater than one, if it is µ-semistable and Hom(F, O Y (−H)) = 0.

Remark 1.8. When n = 4, the Serre functor of the subcategory T Y is isomorphic to the shift by 2 (see [36, Thm. 4.3]). Thus, as an application of the result above and [40, Thm. 4.3], one gets that the smooth locus of any moduli space of µ-stable ACM vector bundles on Y carries a closed symplectic form.

Finally, for later use, we recall how to construct autoequivalences of T Y (not fixing Coh(Y ) ∩ T Y ).

Lemma 1.10. Let Y ⊂ P n+1 be a smooth cubic n-fold. Then, for any F ∈ T Y , the object Θ(F ) belongs to T Y , and the inverse of Θ is given by an explicit exact functor.

Let us revise a classical example under a slightly different perspective.

Example 1.11. This can be obtained by using the following approach. First of all, observe that the ideal sheaves of lines satisfy I l ∈ T Y , for all lines l ⊂ Y , and F (Y ) is the moduli space of these sheaves. By applying Θ[−1] (see Lemma 1.10), we get an exact sequence in Coh(Y ) defining sheaves F l . In particular, all F l are torsion-free sheaves with Chern character v. By (1.2.8) and Lemma 1.9, we deduce that the F l are all ACM bundles. Since they belong to T Y , we have H 0 (Y, F l ) = 0 and, as they have rank 2, this shows that they are µ-stable.

Remark 1.12. Let us remark that the original proof in [15] of the result in Example 1.11 relies on the so-called Serre construction, which we briefly recall in a more general form (e.g. [4]). Let X be a smooth projective manifold of dimension at least 3 and let E be a rank r vector bundle on X which is spanned by its global sections.
The dependency locus of r − 1 general sections s 1 , . . . , s r−1 of E is a locally complete intersection subscheme V of codimension 2 in X. If L = det(E), then the twisted canonical bundle K V ⊗ L −1 is generated by r − 1 sections. Conversely, let V be a codimension 2 locally complete intersection subscheme of X and let L be a line bundle on X such that H 2 (X, L −1 ) = 0. If K V ⊗ L −1 is generated by r − 1 global sections, then V can be obtained as the dependency locus of r − 1 sections of E. This construction is pervasive in the literature and it has been extensively used in various works to produce examples of stable ACM bundles.

1.3. Quadric fibrations. The results of [37] on the structure of the derived category of coherent sheaves on a fibration in quadrics will be the basic tools to study the derived category of cubic threefolds and fourfolds (containing a plane). We briefly summarize them here. Consider a smooth algebraic variety S and a vector bundle E of rank n on S. We consider the projectivization q : P S (E) → S of E on S endowed with the line bundle O P S (E)/S (1). Given a line bundle L on S and an inclusion of vector bundles σ : L → Sym 2 E ∨ , we denote by α : X ↪ P S (E) the zero locus of σ and by π : X → S the restriction of q to X. It is not difficult to prove that π is a flat quadric fibration of relative dimension n − 2.

The quadric fibration π : X → S carries a sheaf B σ of Clifford algebras. In fact, B σ is the relative sheafified version of the classical Clifford algebra associated to a quadric on a vector space (more details can be found in [37, Sect. 3]). As in the absolute case, B σ has an even part B 0 and an odd part B 1 , whose descriptions as O S -modules are given in [37, Sect. 3]. We write Coh(S, B 0 ) for the abelian category of coherent B 0 -modules on S and D b (S, B 0 ) for its derived category.

Theorem ([37]). If π : X → S is a quadric fibration as above, then there exists a semiorthogonal decomposition D b (X) = ⟨D b (S, B 0 ), D b (S) 1 , . . . , D b (S) n−2 ⟩, where D b (S, B 0 ) is the derived category of coherent sheaves of B 0 -modules on S.

In order to make this result precise, we need to give the definition of the fully faithful functor D b (S, B 0 ) → D b (X) providing the embedding in the above semiorthogonal decomposition. The exact functor Φ is defined by a kernel E ′ ∈ Coh(X), a rank 2^(n−2) vector bundle on X with a natural structure of flat left π * B 0 -module, defined by a short exact sequence given in the notation of [37, Lemma 4.5]. A second kernel E ∈ Coh(X), another rank 2^(n−2) vector bundle with a natural structure of right π * B 0 -module, appears in the same construction (see again [37, Sect. 4]). The analogous presentation of E is given, in the notation of [37, Lemma 4.5], by E = E −1,0 .

The category of B 0 -modules may be hard to work with directly. In some cases, we can reduce to a category of modules over a sheaf of Azumaya algebras, which is easier to deal with. We conclude this section by recalling this interpretation (see [37, Sections 3.5 & 3.6]). We define S 1 ⊂ S to be the degeneracy locus of π, namely the subscheme parameterizing singular quadrics, and S 2 ⊂ S 1 the locus of singular quadrics of corank 2. We have to consider two separate cases, according to n being even or odd.

n even. Let f : S̃ → S be the double cover of S ramified at S 1 . Then there exists a sheaf of algebras A 0 on S̃ such that the natural functor from Coh(S̃, A 0 ) to Coh(S, B 0 ) is an equivalence of categories. Moreover, the restriction of A 0 to the complement of S̃ 2 = f −1 (S 2 ) in S̃ is a sheaf of Azumaya algebras. This will be the case for cubic fourfolds containing a plane. If the cubic is generic, then S 1 is smooth and S 2 is empty.
n odd. Let f : S̃ → S be the stack of 2nd roots of O S (S 1 ) along the section S 1 . An object of this stack over T → S is a triple (L, φ, δ), where L is a line bundle over T , φ is an isomorphism of L^2 with the pullback of O S (S 1 ) to T , and δ is a section of L such that φ(δ^2 ) = S 1 (see [3,19]). Locally over S, the category of coherent sheaves on S̃ can be identified with the category of coherent sheaves on the double covering of S ramified along S 1 which are Z/2Z-equivariant with respect to the involution of the double covering (which only exists locally). That is, in other words, the category of coherent sheaves on the quotient stack of the double cover by the involution. Kuznetsov calls the noncommutative variety S̃ "S with a Z/2Z-stack structure along S 1 " (see [37, Ex. 2.2]). Then there exists a sheaf of algebras A 0 on S̃ such that the natural functor from Coh(S̃, A 0 ) to Coh(S, B 0 ) is an equivalence of categories. Moreover, the restriction of A 0 to the complement of S̃ 2 = f −1 (S 2 ) in S̃ is a sheaf of Azumaya algebras. This will be the case for any cubic threefold. In fact, since we assume from the beginning that a cubic threefold is smooth and the projection line is generic, S 1 is smooth and S 2 is empty.

2. Cubic threefolds

This section contains the proofs of our main results on ACM bundles on cubic threefolds. The goal is to generalize a result of Casanellas-Hartshorne on Ulrich bundles. As explained in the introduction, the idea is to use Kuznetsov's results on quadric fibrations to reduce the problem of studying ACM bundles on a cubic threefold to the study of complexes of sheaves on P 2 with the action of a sheaf of Clifford algebras B 0 . The main technical parts are Sections 2.2 and 2.3; there we prove some results on moduli spaces of objects in D b (P 2 , B 0 ) which are stable with respect to a Bridgeland stability condition. We come back to Ulrich bundles on cubic threefolds in Section 2.5.

2.1. The setting. Let Y ⊂ P 4 be a cubic threefold. Let l 0 ⊆ Y be a general line and consider the blow-up P̃ of P 4 along l 0 . By "general" we mean that, if l is any other line meeting l 0 , then the plane containing them intersects the cubic in three distinct lines (we just avoid the lines of second type, see [25, Def. 6.6]). We set q : P̃ → P 2 to be the P 2 -bundle induced by the projection from l 0 onto a plane, and we denote by Ỹ the strict transform of Y via this blow-up. The restriction of q to Ỹ induces a conic fibration π : Ỹ → P 2 . In particular, the vector bundle E on S = P 2 introduced in Section 1.3 is now O_{P 2}^{⊕2} ⊕ O P 2 (−h). Set D ⊂ Ỹ to be the exceptional divisor of the blow-up σ : Ỹ → Y . We denote by h both the class of a line in P 2 and its pull-backs to P̃ and Ỹ . We call H both the class of a hyperplane in P 4 and its pull-backs to Y , P̃, and Ỹ . The sheaf of even (resp. odd) parts of the Clifford algebra corresponding to π, from Section 1.3, specializes in the case of cubic threefolds to an explicit sheaf of O P 2 -algebras; this is obtained by thinking of Ỹ as the blow-up of Y along l 0 and using the main result in [52]. One then shows that we get a fully faithful embedding Ξ : T Y ↪ D b (P 2 , B 0 ).

2.2. B 0 -modules and stability. Our first goal is to study moduli spaces of stable B 0 -modules. In this section we present how the usual notion of stability extends to our more general situation. We define the numerical Grothendieck group N (P 2 , B 0 ) as the quotient of K(P 2 , B 0 ) by numerically trivial classes. Given K ∈ D b (P 2 , B 0 ), we define its Chern character as ch(Forg(K)), where Forg : D b (P 2 , B 0 ) → D b (P 2 ) is the functor forgetting the B 0 -action.
By linearity the Chern character extends to K(P 2 , B 0 ); it factors through N (P 2 , B 0 ). (ii) If l ⊆ Y is a line and I l is its ideal sheaf, then, by [14, Ex. 2.11], the class of the corresponding object in N (P 2 , B 0 ) can be written down explicitly. We can compute the Euler characteristic as a B 0 -module with the formulas of [14, Prop. 2.9]. We define the Hilbert polynomial of a B 0 -module G as the Hilbert polynomial of Forg(G) with respect to O P 2 (h). Then the notion of Gieseker (semi)stability is defined in the usual way. Moduli spaces of semistable B 0 -modules have been constructed by Simpson in [54, Thm. 4.7]. We can also consider slope stability for torsion-free sheaves in Coh(P 2 , B 0 ). Indeed, we have two natural functions, rank and degree, on N (P 2 , B 0 ). Given K ∈ Coh(P 2 , B 0 ) with rk(K) ≠ 0, we can define the slope µ(K) := deg(K)/rk(K) and the notion of µ-(semi)stability in the usual way. When we say that K is either torsion-free or torsion of dimension d, we always mean that Forg(K) has this property.

Remark 2.3. As the rank of B 0 and B 1 is 4, a consequence of [14, Lemma 2.13(i)] is that these two objects are µ-stable. Moreover, all morphisms B 0 → B 1 are injective.

Lemma 2.4. Assume one of the following two conditions is satisfied: either A and B are torsion-free µ-semistable sheaves, or A and B are torsion sheaves pure of dimension 1 and semistable. Then χ(A, B) ≤ hom(A, B); in particular, χ(A, A) ≤ 1 when A is µ-stable.

Proof. The first claim follows directly from Serre duality: by Remark 2.2, (vi), we get the required vanishing of Hom 2 (A, B). For the second claim, one argues similarly.

2.2.2. Bridgeland stability. We will need to study stability for objects in D b (P 2 , B 0 ) which are not necessarily sheaves. To this end, we briefly recall the concept of a Bridgeland stability condition. For all details we refer to [18,34]. A stability condition σ = (Z, A) consists of a group homomorphism Z : N (P 2 , B 0 ) → C (the central charge) and the heart A of a bounded t-structure, satisfying the following compatibilities: (a) for all 0 ≠ G ∈ A, Z(G) lies in the semi-closed upper half-plane (so that the phase below is well defined); (b) Harder-Narasimhan filtrations exist with respect to σ-stability, namely for any 0 ≠ G ∈ A, there is a filtration in A with σ-semistable quotients of decreasing phase; (c) the support property holds. In the previous definition, we used the following notation: by (a), any 0 ≠ G ∈ A has a phase φ(G) := (1/π) arg(Z(G)) ∈ (0, 1]. The notion of σ-stability in (b) is then given with respect to the phase: G is σ-(semi)stable if φ(G′) < (≤) φ(G) for all proper non-trivial subobjects G′ ⊂ G in A. The support property is necessary to deform stability conditions and for the existence of a well-behaved wall and chamber structure (this is [17, Sect. 9]; the general statement we need is [7, Prop. 3.3]).

We will only need a special family of stability conditions on D b (P 2 , B 0 ), whose central charges Z m can be written down by the explicit computations in Remark 2.2. To define an abelian category which is the heart of a bounded t-structure on D b (P 2 , B 0 ), let T, F ⊆ Coh(P 2 , B 0 ) be the following two full additive subcategories. The non-trivial objects in T are the sheaves A ∈ Coh(P 2 , B 0 ) such that their torsion-free part has Harder-Narasimhan factors (with respect to µ-stability) of slope µ > −1. A non-trivial sheaf A ∈ Coh(P 2 , B 0 ) is an object in F if A is torsion-free and every µ-semistable Harder-Narasimhan factor of A has slope µ ≤ −1. It is easy to see that (T, F) is a torsion theory and, following [17], we define the heart of the induced t-structure as the abelian category A := ⟨F[1], T⟩.

Proof. This follows exactly in the same way as in [...]. The only non-standard fact that we need is a Bogomolov-Gieseker inequality for torsion-free µ-stable sheaves. This is precisely Lemma 2.4: for A ∈ Coh(P 2 , B 0 ) torsion-free and µ-stable, χ(A, A) ≤ 1 gives us the desired inequality. By proceeding as in [57, Sect. 3], to prove the lemma we only have to show property (a) in the definition of a stability condition. Let A be a torsion-free µ-stable sheaf.
Assume further that µ(A) = −1, and so Im(Z_m([A])) = 0. By (2.2.2) and the fact that r > 0, we are reduced to proving the inequality Re(Z_m([A])) < 0, which follows from Lemma 2.4. We also observe that all the arguments in [56] generalize to the non-commutative setting (see also [42, 43]). In particular, for all m > 1/4, it makes sense to speak about moduli spaces of σ_m-semistable objects in A as Artin stacks (of finite type over C, if we fix the numerical class), and about moduli spaces of σ_m-stable objects as algebraic spaces.

Example 2.10. Take a curve C′ ⊂ Y not intersecting l_0 and suppose that, if we let C := j(C′), the morphism j|_{C′} is birational. As C′ and l_0 do not intersect, we can argue exactly as in [14, Ex. 2.4]. In particular, using that Ψ(O_Ỹ(mh)) = 0 for all integers m, we conclude that the associated sheaf F_d belongs to T_Y. The d = 1 case is treated in Example 2.11 below. We will also use this example for d = 2 and d = 3; in such cases, there always exists a curve C′ ⊂ Y with the above properties.

Example 2.11. We can specialize the previous example to the case in which C′ ⊂ Y is a line l which does not intersect l_0, namely d = 1. In such a case, we have F_d ≅ I_l, and the image Ξ_3(I_l) can be computed explicitly. Moreover, we have an isomorphism as O_{P^2}-modules, from which we deduce that a = 0, as we wanted.

It is a standard fact (it follows, e.g., as in [8, Example 9.5]) that the assignment sending a sheaf to its support extends to a morphism Υ : M_d → |O_{P^2}(d)| which is well-defined everywhere. Theorem A then becomes the following statement:

Theorem 2.12. The moduli space M_d is irreducible and, for a general smooth curve C ∈ |O_{P^2}(d)|, the fiber Υ^{-1}(C) is a disjoint union of 2^{5d−1} copies of JC, where JC = {L ∈ Pic(C) : L is algebraically equivalent to O_C} is the Jacobian of C. Moreover, the stable locus M^s_d is smooth of dimension d^2 + 1.

Before proceeding with the general proof, which is carried out in the next section, we examine the easy case d = 1:

Proposition 2.13. The moduli space M_1 = M^s_1 is isomorphic to the Fano surface of lines F(Y) blown up at the point corresponding to the line l_0. In particular, M_1 is smooth and irreducible.

To prove Proposition 2.13, we use wall-crossing techniques from [8] for the family of Bridgeland stability conditions σ_m of Lemma 2.7. The precise result we need is the following lemma (Lemma 2.14), whose proof is exactly the same as [45, Lemma 5.7]: F is σ_m-stable for m > m_0 := √5/8, and F becomes σ_m-semistable for m = m_0, with an explicit Jordan–Hölder filtration. By [14, Example 2.11], the object Ξ_3(I_{l_0}) sits in a distinguished triangle with factors B_1 and B_0[1], which gives the Harder–Narasimhan filtration of Ξ_3(I_{l_0}) for m > m_0.

2.4. Proof of Theorem 2.12.

Step 1: Deformation theory. For any G ∈ M_d, we have χ(G, G) = −d^2. Hence, to prove that M^s_d is smooth of dimension d^2 + 1, it is enough to show that it is non-empty and that Ext^2(G, G) = 0 for stable G.

Step 2: Recall that the conic fibration π degenerates along a smooth quintic ∆ ⊂ P^2. We denote ∆|_C by Σ_{i=1}^{5d} p_i (the points are possibly non-distinct) and, abusing notation, we write (1/2)p_i for the section corresponding to the second root of p_i. As in Proposition 1.15, we can consider the stack P̃^2 over P^2 of second roots of O_{P^2}(∆) along the section ∆. We denote by ψ : P̃^2 → P^2 the natural projection. We then have an equivalence of abelian categories between Coh(P^2, B_0) and the A_0-modules on P̃^2. Given a smooth curve C ⊂ P^2, we can restrict this construction to ψ : C̃ → C, where C̃ is a twisted curve (the stack of second roots of (C, ∆|_C)). The restriction A_0|_{C̃} is a sheaf of (trivial) Azumaya algebras, i.e., there exists a rank 2 vector bundle E_{C,0} ∈ Coh(C̃) such that A_0|_{C̃} = End(E_{C,0}) (see, for example, [37, Cor. 3.16]), and the induced functor is an equivalence of categories. In particular, sheaves in M_d supported on C correspond to sheaves on C̃. Since E_{C,0} is determined up to tensorization by line bundles, we can assume directly that ψ_*E^∨_{C,0} ∈ M_d.
As E_{C,0} is a rank two vector bundle on C̃, it is clear that the fiber of Υ over the smooth curve C consists of line bundles on C̃. By [19, Cor. 3.1.2], an invertible sheaf on C̃ is of the form ψ^*L(Σ_{i∈I}(1/2)p_i) for a line bundle L on C and a subset I ⊆ {1, ..., 5d}. On the other hand, as E^∨_{C,0} has rank 2, only subsets of even cardinality occur. Let J be the set of all subsets of {1, ..., 5d} of even cardinality and, for I ∈ J, set τ_I to be the cardinality of I. Then the above discussion shows that the fiber of Υ over C splits into connected components indexed by J, each isomorphic to JC; this is precisely (2.4.1), because J has cardinality 2^{5d−1}.

Step 3: M_d is irreducible. To prove the irreducibility of M_d, we follow the same strategy as in [33]. We first prove that M_d is connected, by simply following the same argument as in the proof of [33, Thm. 4.4]. Indeed, by Proposition 2.13, we know that M_1 is connected, and the arguments of [33, §4.3] show that we can essentially assume that there is a universal family F ∈ Coh(X × M_d) with the two projections p and q. Hence a computation of local Exts shows that the relevant complexes behave well under restriction (more generally, this holds for any base change S → X). It turns out that the point F ∈ X is the degeneracy locus of the map α (see [33, Lemma 4.3]). Thus, blowing up X at F, we get f : Z → X providing, as in (2.4.2), a new complex of O_Z-modules. Let D be the exceptional divisor in Z and let W′ be the middle cohomology of the pulled-back complex. As the ranks of W and W′ are smaller than dim(X), one gets a contradiction, as the Kuranishi map is trivial. Following the notation of [33, §2.7], the quadratic part µ of the Kuranishi map is also trivial, and the analysis of the null-fibre goes through as in [33]. This finally concludes the proof of Theorem 2.12.

Remark 2.15. When d = 1, the map Υ has a very natural and well-known geometric interpretation. In fact, given a B_0-module F supported on a general line l ⊂ P^2, we can consider all the lines l′ in Y such that Ξ_3(I_{l′}) ≅ F. By Proposition 2.13, we have to count the number of lines l′ that map to l via the projection from l_0 (where the lines that intersect l_0 are mapped to the projection of the tangent space at the intersection point). The lines that intersect l_0 form an Abel–Prym curve in F(Y), so they do not dominate |O_{P^2}(1)|. Hence we need only count the lines skew to l_0 that map to l. The preimage of l via the projection is a cubic surface, so it contains 27 lines. The line l intersects the degeneration quintic ∆ in 5 points, which give us 5 coplanar pairs of lines intersecting l_0. Hence we have 27 − 10 − 1 = 2^4 lines skew to l_0 that project to l. Indeed, if Bl_{l_0}F(Y) is the blow-up of F(Y) at l_0, we have a finite morphism Bl_{l_0}F(Y) → |O_{P^2}(1)| which is 2^4 : 1 (see, e.g., [11, Proof of Thm. 4]).

For applications to stable sheaves on cubic threefolds, as in Lemma 2.14, we consider a distinguished subset N_d ⊂ M_d. Proof. This is a well-known general fact. The proof we give here mimics [9, Theorem 2.15]. We first recall that, as proved in Step 3 of the proof of Theorem 2.12, the relevant maps exist, and the claim follows by considering them.

We refer to [22, Sect. 1] for the basic properties of Ulrich bundles on projective varieties. In particular, we recall the following presentation of stable Ulrich bundles, due to the Hartshorne–Serre construction.

Lemma 2.19. A stable Ulrich bundle F of rank r on a cubic threefold Y admits a presentation as an extension involving the ideal sheaf I_C of a smooth connected curve C of degree (3r^2 − r)/2. Since h^1(Y, I_C) = 0, C is connected. By [21, Prop. 3.7], we have deg C = (3r^2 − r)/2, and by Riemann–Roch we get p_a(C) = r^3 − 2r^2 + 1.
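Two elementary counts used in this section can be verified directly; the displays below only re-derive numbers already quoted above:

\[
\#J \;=\; \sum_{k\ \mathrm{even}}\binom{5d}{k} \;=\; \tfrac{1}{2}\cdot 2^{5d} \;=\; 2^{5d-1}
\qquad\text{(even-cardinality subsets of }\{1,\dots,5d\}\text{)},
\]
\[
\underbrace{27}_{\text{lines on the cubic surface}}\;-\;\underbrace{2\cdot 5}_{\text{5 coplanar pairs meeting }l_0}\;-\;\underbrace{1}_{l_0\ \text{itself}}\;=\;16\;=\;2^{4},
\]

the second count matching the degree of the finite morphism Bl_{l_0}F(Y) → |O_{P^2}(1)| in Remark 2.15.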
From Lemma 2.19 it is standard to compute the Chern character of an Ulrich bundle F of rank r by using Hirzebruch–Riemann–Roch. Denote by M^{sU}_r the moduli space of stable Ulrich bundles of rank r ≥ 2. It is smooth of dimension r^2 + 1, since for any such bundle E we have dim Ext^1(E, E) = r^2 + 1 while dim Ext^2(E, E) = 0. To prove that M^{sU}_r is non-empty, the strategy is to show the existence of low-rank Ulrich bundles (r = 2, 3) and then use a "standard" deformation argument [22, Thm. 5.7]. The existence of rank 2 Ulrich bundles is well known [26, 48]; they usually appear in the literature as instanton bundles (see the forthcoming Section 3). In [22] the authors construct rank 3 Ulrich bundles, relying on the existence of an ACM curve on Y of degree 12 and genus 10 (see Lemma 2.19). The existence of such curves is proved, using Macaulay2, by Geiß and Schreyer in the appendix, and only for a generic cubic threefold. Our approach to constructing Ulrich bundles of rank 3 is different (for completeness we also construct rank 2 Ulrich bundles). In particular, we do not use the Hartshorne–Serre construction (see Lemma 2.19), but the structure of conic fibration on a blow-up of Y. We have computed the image in D^b(P^2, B_0) of the ideal sheaves of lines in Y in Example 2.11. We can therefore consider extensions of them, and use deformation theory to cover the subset N_d ⊂ M_d (for d = 2, 3). If G is a general sheaf in N_d, then the object Ξ_3^{−1}(G) will be a stable ACM bundle of rank d, which will automatically be Ulrich. The advantage of our approach is that, by using the category T_Y, we are able to reduce all computations to the category D^b(P^2, B_0) via the functor Ξ_3. Thus, the existence result needed goes back to Theorem 2.12. Given G ∈ N_d, we want to study Ξ_3^{−1}(G) ∈ T_Y. In order to show that it is an ACM bundle, we want to see how the vanishings in Lemma 1.9 can be checked in D^b(P^2, B_0).

Lemma 2.21. There are natural isomorphisms computing H^•(Y, Ξ_3^{−1}(G) ⊗ O_Y(−2H)) in terms of Hom-groups from Ω_{P^2}(2h) to G, and H^•(Y, Ξ_3^{−1}(G) ⊗ O_Y(H)) in terms of the cohomology of Forg(G ⊗_{B_0} B_1).

Proof. As for the first series of isomorphisms, we start with a chain of natural isomorphisms which follows directly from the definitions. By the definition of Ψ (1.3.2), we have two exact triangles in D^b(P^2, B_0). Therefore, combining (2.5.3) and (2.5.4), and using the identification of the relevant cone (up to shift [1]), the first row of isomorphisms follows. It remains to prove the isomorphisms in the second line of the lemma. We start again with a chain of natural isomorphisms following directly from the definitions; the last isomorphism is an easy computation, and the lemma follows.

Now we are ready to give a geometric interpretation of the objects of N_d (recall (2.4)).

Proposition 2.22. If d = 2, 3 and G is a general sheaf in N_d, then the object Ξ_3^{−1}(G) is a stable ACM bundle of rank d.

Proof. The argument can be divided into a few parts.

Step 1: Ξ_3^{−1}(G) is a coherent sheaf. By Example 2.10, the sheaf F_d is in T_Y and Ξ_3(F_d) ∈ N_d. By semi-continuity, for G ∈ N_d general, the object Ξ_3^{−1}(G) has to be a sheaf.

Step 2: First vanishing. We want to show that H^i(Y, Ξ_3^{−1}(G) ⊗ O_Y(−2H)) = 0 for i = 1, 2. By Lemma 2.21, we need to prove that Hom^0_{D^b(P^2)}(Ω_{P^2}(2h), G) = 0. Before that, by Example 2.11, we observe that the Hom-groups involving Ξ_3(I_l) can be computed explicitly, and all other Hom-groups are trivial. The extension of d sheaves Ξ_3(I_l) (with different l) lies in M^s_d. By semicontinuity and induction on d, the vanishing follows for general G. Notice that here we are implicitly using that χ(B_0, G) = d; indeed, this follows from (2.2.3).

Case 2: Rank d = 3.
Let G be supported on C ∈ |O_{P^2}(3)|, with C smooth and intersecting ∆ transversally. Note that Ω_{P^2}(2h)|_C = F is an Atiyah bundle of degree 3, so we have to show that H^0(C, G ⊗ F^∨) = 0. By (2.5.6) and semi-continuity, G has, as an O_{P^2}-module, only three possibilities: the push-forward under the embedding j_C of an Atiyah bundle of degree 3, or of one of two configurations built from generic line bundles L_i of degree i on C (2.5.9).

Step 3: Second vanishing and stability. We want to show that H^1(Y, Ξ_3^{−1}(G) ⊗ O_Y(H)) = 0. By the second part of Lemma 2.21, we need to prove that H^1(P^2, Forg(G ⊗_{B_0} B_1)) = 0. Again we can argue as in Step 1, by semi-continuity, using that G = Ξ_3(F_d) satisfies the vanishing. This concludes the proof.

Remark 2.23. Note that we have also reproven that M^{sU}_r is smooth of dimension r^2 + 1. Indeed, the computations dim Ext^1(F, F) = r^2 + 1 and dim Ext^2(F, F) = 0 have already been done in Step 1 of Theorem 2.12.

Remark 2.24. The above proof fails for the case d = 1, essentially only in Step 2; more precisely, the restriction Ω_{P^2}(2h)|_C to a line C ⊂ P^2 is not semistable.

3. The d = 2 case and the instanton bundles on cubic threefolds

In this section we describe the wall-crossing phenomenon that links the space M_2 to the moduli space of semistable instanton sheaves on Y. As in Section 2.2, the approach follows closely the discussion in [45, Sect. 5], but since the corresponding numerical class is not primitive, we need some extra arguments. The argument is a bit involved, and thus we prefer to sketch it here for the convenience of the reader. First of all, we need to analyze how stability and semistability of special objects in D^b(P^2, B_0) vary in the family of stability conditions described in Lemma 2.7 (see Section 3.1). This is conceptually rather standard but computationally a bit involved. Once this is settled, one can consider instanton sheaves E and look at their images under the functor Ξ_3. It turns out that they are all stable B_0-modules if E is locally free (see Lemma 3.9). On the other hand, special attention has to be paid to instanton sheaves E which are not locally free. The most delicate cases are when they are extensions of ideal sheaves of two lines, one of which is the line of projection l_0. Having the toy model of M_1 in mind, it is rather clear that all this leads naturally to a wall-crossing phenomenon. This is described in Theorem 3.10, where again we combine the classical description of the moduli space of semistable instanton sheaves [26] and the machinery of (Bridgeland) stability conditions from Section 3.1.

In order to σ_m-destabilize F, we need that the imaginary part of the central charge of the destabilizing object take one of finitely many values. Summarizing, if F ∈ N_2 is stable, then it is σ_m-stable for all m > 1/4. If F ∈ N_2 is properly semistable, then its two JH-factors are σ_m-stable for all m > 1/4. Otherwise, F admits a morphism from B_1 and also a morphism to B_0[1]; hence the only possibilities are J_m(A) ∈ {m, 3m}. We study these two cases more precisely. If J_m(A) = m, then we claim there exist the exact sequences (3.1.2) in A. The second exact sequence is obtained from the first one using χ(B_1, F) = 0, so hom_A(F, B_0[1]) ≠ 0. Indeed, it remains to prove that F → B_0[1] needs to be surjective in A. If not, let L be the cokernel. Clearly H^0(L) = 0. Note that H^0(T) is a torsion sheaf. Then, if L′ ≠ 0, B_0 → L′ needs to be injective and H^{−1}(T) = 0. Therefore T ∈ T and it is a quotient of F as a B_0-module. We know that F is a Gieseker semistable B_0-module and c_1(T) ≥ 2.
This implies c_1(L′) ≥ −3, which contradicts L′ ∈ F, since by [14, Lemma 2.13(i)] rk(L′) ≥ 4. Equivalently, if we are in the case J_m(A) = 3m, then we claim that we get again the exact sequences (3.1.2). Indeed, now the first exact sequence is obtained from the second one by using χ(B_1, F) = 0, so hom_A(B_1, F) ≠ 0. In that case we need to prove that B_1 → F is injective in A. If not, let K be the kernel. Clearly H^{−1}(K) = 0. Then K is a B_0-module in T. If K ≠ 0, then K → B_1 needs to be injective. Hence T := Im_A(B_1 → F) ∈ T and it is a subobject of F as a B_0-module. We know that F is a Gieseker semistable B_0-module and c_1(T) ≥ 2. This implies c_1(K) ≤ −5, which contradicts K ∈ T, since by [14, Lemma 2.13(i)] rk(K) ≤ 4. In both cases we can summarize the situation in a commutative diagram of exact sequences of σ_{m_0}-semistable objects in A.

Proof. Assume, for a contradiction, that G is not σ_m-semistable (resp. σ_m-stable) at m ≥ m_0. Then we have an exact sequence in A where A ≠ 0 is σ_m-stable and Re(Z_m(A)) < 0 (resp. ≤ 0). Let ch(A) = (r, c_1, ch_2). The same argument as in [45, Lemma 5.8] shows that r = 0 and Im(Z_m(A)) ∈ {m, 2m, 3m}. But then the same case-by-case analysis as in Lemma 3.1 shows that this can only happen when m < m_0 (resp. m < m_0, or m = m_0 and G ∈ B_1^⊥).

Now, let G ∈ M_{σ_{m_1}}(P^2, B_0; w) be a (semi)stable object for m_1 ≥ m_0. By Lemma 3.2, we have two possibilities: either G is σ_m-(semi)stable for all m ≥ m_0, or m_1 = m_0 and it either stabilizes or destabilizes for all m > m_0. We will see in Lemma 3.5 that it destabilizes or stabilizes depending on whether hom(B_1, G) is maximal in its S-equivalence class or not.

Proof. We argue as in [45, Lemma 5.9] to deduce that G is a pure B_0-module of dimension 1. If A is a stable B_0-module that destabilizes G, then Re(Z_m(A)) < 0 (resp. ≤ 0), so G would not be σ_m-semistable.

As a straightforward consequence of the previous lemmas, we get the following. Finally, we study in general the S-equivalence classes in M_{σ_{m_0}}(P^2, B_0; w) which contain objects outside B_1^⊥. In particular, we will study the S-equivalence classes of the objects F ∈ M_2 which become σ_{m_0}-semistable with JH-factors as in cases (c.iii) and (c.iv) of Lemma 3.1. The following lemma will be useful in the next section to prove Theorem 3.10. The relevant S-equivalence classes contain:
(a.i) Gieseker semistable B_0-modules in M_2, parametrized by a P^2;
(a.ii) Gieseker properly semistable B_0-modules in M_2 \ N_2 that are parametrized by a P^1 contained in the P^2 above; in the complement of this P^1 inside the P^2, the B_0-modules are Gieseker stable;
(a.iii) an extension of Ξ_3(I_{l_0}) and Ξ_3(I_l), which lies in B_1^⊥.
These are the only S-equivalence classes that contain σ_{m_0}-semistable objects that get properly destabilized both for m > m_0 and m < m_0.

Suppose we are in case (a) and let G be a representative of the S-equivalence class such that hom(B_1, G) ≠ 0. Note that an element of Hom^1(B_0[1], B_1) corresponds to an element of the projective line P^1 which is the exceptional locus of the map M_1 → F(Y) described in Proposition 2.13. Taking the unique non-trivial extension of one of these B_0-modules with Ξ_3(I_l), we obtain a P^1 of properly semistable B_0-modules (this is (a.ii)). Now we start with an element of Hom^1(Ξ_3(I_l), B_1); by Example 2.11, the relevant extension groups can be computed. Let C′ be the extension corresponding to a class in Hom^1(Ξ_3(I_l), B_1). Clearly C′ ∈ Coh(P^2, B_0), since Ξ_3(I_l) and B_1 are also in Coh(P^2, B_0). Note that hom^1(B_0[1], C′) = 3, by the computations above. Let G be the extension corresponding to a class in Hom^1(B_0[1], C′); we want to see that G is a B_0-module.
Since B_0 is torsion-free and rk(C′) = rk(B_0), either B_0 → C′ is zero or H^{−1}(G) = 0. Hence the non-trivial extensions between B_0[1] and C′ are B_0-modules, and they are parametrized by a P^2. When the first extension is trivial, i.e., C′ = B_1 ⊕ Ξ_3(I_l), we recover the previous case. Finally, we want to see that these extensions G are Gieseker semistable B_0-modules. Since G is σ_{m_0}-semistable, up to choosing ε small enough, G is σ_m-semistable for all m ∈ (m_0, m_0 + ε). Indeed, if not, by [17, Prop. 9.3], the HN-factors of G in the stability condition σ_m, for m ∈ (m_0, m_0 + ε), would survive in the stability condition σ_{m_0}; this would contradict the σ_{m_0}-semistability of G. Since we have seen that M_2 = M_{σ_m}(P^2, B_0; w) for all m > m_0, we get that G ∈ M_2, and thus (a.i). If G is properly semistable, then G is the extension of two stable B_0-modules G_1 and G_2. Since hom(B_1, G) ≠ 0, we can suppose that G_1 ∈ N_1 and G_2 ∈ M_1 \ N_1, and we are in the aforementioned P^1. Now suppose we are in case (a) and let G be a representative of the S-equivalence class lying in B_1^⊥; this gives (a.iii).

Suppose we are in case (b) and let G be a representative of the S-equivalence class such that hom(B_1, G) = 2. By the same argument as before, an extension C of B_1 with itself needs to be a subobject of G in A, while an extension C′ of B_0[1] with itself is a quotient of G in A. Note that necessarily C = B_1^{⊕2} and C′ = B_0^{⊕2}[1]. Hence we consider an element G ∈ Hom^1(B_0^{⊕2}[1], B_1^{⊕2}). Equivalently, we can construct G as the extension of two sheaves G_1 and G_2 in the exceptional locus of the map M_1 → F(Y) described in Proposition 2.13. Each of them is parametrized by a P^1. But since the roles of G_1 and G_2 are symmetric, we obtain that the S-equivalence classes of the G's as objects in M_2 are parametrized by P^1 × P^1 quotiented by the natural involution. Thus, the S-equivalence classes of the G's are parametrized by a P^2, and we obtain (b.i). Next, let G be in B_1^⊥ and as in (b). Again, G is obtained from an element of Hom^1(B_1^{⊕2}, B_0^{⊕2}[1]). Equivalently, we can construct G as the extension of the two unique non-trivial extensions in Hom^1(B_1, B_0[1]); each of them is Ξ_3(I_{l_0}), and Ext^1(Ξ_3(I_{l_0}), Ξ_3(I_{l_0})) can be computed directly. The remaining indecomposable objects G in case (b) have hom(B_1, G) = 1 (as in (b.iii)), and the last statement of the lemma follows from the fact that these are the only S-equivalence classes that contain objects G such that hom(B_1, G) ≠ 0 and objects G′ such that hom(G′, B_1) ≠ 0.

Definition 3.7. We say that E ∈ Coh(Y) is an instanton sheaf if E is a Gieseker semistable sheaf of rank 2 with Chern classes c_1(E) = 0 and c_2(E) = 2. When E is locally free, we call it an instanton bundle.

An instanton sheaf according to the above definition would be called an instanton sheaf of charge 2 in the existing literature. In general, an instanton bundle of charge s ≥ 2 is a locally free sheaf E of rank 2, with Chern classes c_1(E) = 0 and c_2(E) = s, and such that H^1(Y, E(−1)) = 0 (see, for example, [35, Def. 2.4]). It is easy to show that if the charge is minimal (i.e., c_2(E) = 2), then the condition H^1(Y, E(−1)) = 0 is automatically satisfied (see [39, Cor. 3]). Three cases occur: (1) E is a stable instanton bundle. (2) E is stable but not locally free; in this case, E is obtained by the construction in Example 2.10, and in fact these are the only stable instanton sheaves that are not locally free. (3) E is properly semistable; in this situation, E is an extension of two ideal sheaves of lines in Y.
Moreover, given a stable instanton bundle E, the twist E(1) is globally generated [26, Thm. 2.4], so E is an Ulrich bundle. Indeed, E is associated to a non-degenerate smooth elliptic quintic C via the Serre construction (see [26, Cor. 2.6] and compare it with Lemma 2.19). The following will be crucial in our analysis.

Lemma 3.9. Let E be a stable instanton bundle. Then Ξ_3(E) is a stable B_0-module.

Proof. Observe that H^{−1}(Ψ(σ^*I_C(H))) ⊆ B_2 is a non-trivial torsion-free sheaf of rank 4. Hence the map g is either injective or zero.

Step 1: Assume that the associated elliptic quintic C does not intersect l_0. Since C ∩ l_0 = ∅, we have Ψ(σ^*O_C(H)) = F_C[1], where F_C is a rank 2 vector bundle supported on π(σ^{−1}(C)) ⊂ P^2. Hence (3.2.2) becomes the corresponding triangle (3.2.3). On the one hand, note that f could be surjective, or zero, or have cokernel supported on points. We claim that g in (3.2.3) is injective. Assume, by contradiction, that g is zero. Then we have an exact sequence. If H^0(Ψ(σ^*I_C(H))) is supported at most in dimension 0, then we have 0 ≠ Hom^2(B_1, B_{−1}) ↪ Hom^2(B_1, H^0(Ψ(F))). Note that we have an exact triangle landing in B_1^⊥, and we get a contradiction. If f and g are zero, then H^{−1}(Ψ(σ^*F)) = B_2, and we get a contradiction from the Hom-group Hom^0(B_1, H^{−1}(Ψ(σ^*F))). The case when f is surjective and g is trivial can be excluded by a similar argument, as we would have B_{−1} ≅ H^0(Ψ(σ^*F)). Therefore g is injective and Ψ(σ^*F) is a torsion sheaf with class 2[B_1] − 2[B_0].

Step 2: Assume that the associated elliptic quintic C intersects l_0 transversally in a point. Since C ∩ l_0 = {p}, we have Ψ(σ^*O_C(H)) = Ψ(O_{C′∪γ}(H)), where by abuse of notation we denote by C′ the strict transform of C, and γ ⊂ D is the line σ^{−1}(p). Hence (3.2.2) becomes the corresponding triangle. To characterize Ψ(O_{C′∪γ}(H)) better, consider the exact sequence in Coh(Ỹ). On the one hand, we need to compute Ψ(O_γ(H)). As γ ⊂ D, it makes sense to consider the ideal sheaf I_{γ,D}, which can be described explicitly. By [14, Ex. 2.11] and applying the functor Ψ, this provides an exact triangle involving Ψ(O_γ(H − h)). By construction, we know that Ψ(O_γ(H − h)) is a torsion sheaf in degree −1, and we have the corresponding exact sequence. Hence, if we tensor (3.2.6) by O_Ỹ(h) and apply Ψ again, we get that Ψ(O_γ(H)) is a torsion sheaf in degree −1, together with the corresponding exact sequence. Note that Hom(B_2, H^{−1}(Ψ(O_γ(H)))) ≅ C. If we are in Case (b.2), exactly the same arguments as in Case (b.1) show that Ψ(σ^*F) is a torsion sheaf with class 2[B_1] − 2[B_0]. So we can suppose that we are in Case (a.2), where we have a commutative diagram. By the Horseshoe Lemma we have an exact sequence, so that K′ ≅ B_1 and L′ ≅ H^{−1}(Ψ(O_{(m−1)γ}(H))), and we get a sequence in which f_3 could be either zero, or surjective, or have cokernel supported on points.

We denote by F̃(Y) the strict transform of F(Y) under f. Since we have chosen l_0 general (i.e., such that for any other line l meeting l_0, the plane containing them intersects the cubic in three distinct lines), F̃(Y) ∩ (−F̃(Y)) is the Abel–Prym curve C_{l_0} ⊂ J(Y) consisting of all lines inside Y that intersect l_0 (see, e.g., [41, Sect. 5]). Note that F̃(Y) parametrizes properly semistable instanton sheaves that fall under case (3).

Proof. Let E be a semistable instanton sheaf. We claim that if Ξ_3(E) ∈ Coh(P^2, B_0), then Ξ_3(E) ∈ M_2 (i.e., it is semistable) and, by Lemma 3.1, Ξ_3(E) ∈ M_{σ_m}(P^2, B_0; w) for all m > m_0. One possibility is that E is a stable instanton bundle.
In this case, Ξ_3(E) ∈ M_2 follows from Lemma 3.9. Another possibility is that E is a stable instanton sheaf which is not locally free. In that case, E can be associated to a smooth conic via the Serre construction. If the conic does not intersect the line of projection, then Ξ_3(E) ∈ M_2 follows from Example 2.10. If the conic intersects the line of projection in one point or two points (even tangentially), then the same computations as in Steps 2, 3, and 4 of the proof of Lemma 3.9 show again that Ξ_3(E) ∈ Coh(P^2, B_0). By Step 5 of the proof of Lemma 3.9, Ξ_3(E) is stable, hence in M_2. Finally, the last possibility is that E is a properly semistable sheaf whose two JH-factors are I_l and I_{l′}, where possibly l = l′, but in any case l, l′ ≠ l_0. Then Ξ_3(E) ∈ M_2 follows from a direct computation based on the fact that Ξ_3(I_l) and Ξ_3(I_{l′}) are in M^s_1 (see Lemma 2.14). Hence, by [14, Ex. 2.11], the only cases in which Ξ_3(E) ∉ Coh(P^2, B_0) appear when E is a properly semistable sheaf and I_{l_0} is a JH-factor. Indeed, this is the only case where Ξ_3(E) ∉ M_2, and we need to push our analysis a bit further. When I_{l_0} and I_l with l ≠ l_0 are the JH-factors of E, then hom(Ξ_3(E), B_1) = 1. Hence, the HN-filtration of Ξ_3(E) for m > m_0 is given by exact sequences in the abelian category A, which is the heart of the bounded t-structure of the stability condition in Lemma 2.7. If the two JH-factors of E are both isomorphic to I_{l_0}, then hom(Ξ_3(E), B_1) = 2, and the HN-filtration of Ξ_3(E) for m > m_0 is the analogous one. In both cases, this means that Ξ_3(E) is σ_{m_0}-semistable. As a consequence, up to choosing ε small enough, Ξ_3(E) is σ_m-semistable for all m ∈ (m_0 − ε, m_0). Indeed, if not, by [17, Prop. 9.3], the HN-factors of Ξ_3(E) in the stability condition σ_m, for m ∈ (m_0 − ε, m_0), would survive in the stability condition σ_{m_0}; this would contradict the σ_{m_0}-semistability of Ξ_3(E). Since the quotient Ξ_3(E) → B_1 σ_m-destabilizes Ξ_3(E) for m > m_0, we have that Ξ_3(E) is not σ_m-semistable past the wall.

We claim that M^{inst}_Y = M_{σ_m}(P^2, B_0; w) for all m ∈ (m_0 − ε, m_0] and ε > 0 sufficiently small. More precisely, we need to show that, for m ∈ (m_0 − ε, m_0), the objects Ξ_3(E) are the only σ_m-semistable objects in A with class w, when E is a semistable instanton sheaf. First observe that if G ∈ A is a σ_m-semistable object, for some m ∈ (m_0 − ε, m_0) and with class w, then G ∈ B_1^⊥. By [17, Prop. 9.3], up to replacing ε, we can assume that all such objects G are σ_{m_0}-semistable. By Lemma 3.2, we have two possibilities: either G is σ_m-semistable for all m ≥ m_0, or G is properly σ_{m_0}-semistable and destabilizes for all m > m_0. In the first case, G is a (semi)stable element of N_2 by Lemma 3.3 and the discussion preceding it. Thus, by the proof of Proposition 2.22, Ξ_3^{−1}(G) is either a balanced ACM bundle of rank 2 (i.e., an instanton bundle) or as in case (2). As a corollary of the previous proof, one also obtains a result of independent interest.

4. Cubic fourfolds containing a plane

In this section we prove Theorem C by constructing a family of stable ACM bundles which are parametrized by the K3 surface naturally associated to a cubic fourfold containing a plane.

4.1. Geometry of cubic fourfolds with a plane. In this section, we let Y ⊂ P^5 be a cubic fourfold containing a plane P. Consider the blow-up P̃ of P^5 along P.
We set q : P̃ → P^2 to be the P^3-bundle induced by the projection from P onto a plane, and we denote by Ỹ the strict transform of Y via this blow-up. The restriction of q to Ỹ induces a quadric fibration π : Ỹ → P^2. In particular, the discussion in Section 1.3 applies to Ỹ (note that the vector bundle E on S = P^2 is now O_{P^2}^{⊕3} ⊕ O_{P^2}(−h)). The fibres of π degenerate along a sextic C ⊂ P^2. The curve C has at most ordinary double points. Over the smooth points of C the fibres are cones with one singular point, while over the singular points of C the fibre is the union of two planes intersecting along a line. For the general cubic fourfold containing a plane, the sextic C is smooth. The double cover f : S → P^2 ramified along C is a projective K3 surface (singular over the singular points of C). The geometric picture is summarized by the maps σ : Ỹ → Y, π : Ỹ → P^2 and f : S → P^2. We let D ⊂ Ỹ be the exceptional divisor of the blow-up σ : Ỹ → Y. We denote by h both the class of a line in P^2 and its pull-backs to P̃ and Ỹ and, accordingly, we call H both the class of a hyperplane in P^5 and its pull-backs to Y, P̃, and Ỹ. We recall that the sheaf of even (resp. odd) parts of the Clifford algebra corresponding to π (see Section 1.3) specializes in this case to explicit sheaves B_0 (resp. B_1) on P^2, and that there is an equivalence between D^b(P^2, B_0) and T_Y, where T_Y is the full subcategory in (1.2.1). The way this equivalence is attained is by performing a precise sequence of mutations which allow Kuznetsov to compare the semiorthogonal decomposition in Theorem 1.13 and the one obtained by thinking of Ỹ as the blow-up of Y along P and using [52]. The details will not be needed in the rest of this paper, but we just recall that the equivalence is built from Φ, where Φ is the embedding described in Section 1.3 and defined in terms of E′. For later use we denote by Ξ_4 the resulting functor.

When C is smooth, we can describe the category Coh(S, A_0) in terms of twisted sheaves. More precisely, there exist α ∈ Br(S) in the Brauer group of S, with α^2 = id, and an α-twisted vector bundle E_α ∈ Coh(S, α) of rank 2, such that A_0 = End(E_α), and the functor given by tensoring with E_α^∨ is an equivalence of categories. When C is singular, the vector bundle E_α still exists étale-locally on smooth points. Let x ∈ S_reg. Consider L_x := f_*(C(x) ⊗ E_α^∨) ∈ Coh(P^2, B_0). As an O_{P^2}-module it is isomorphic to V ⊗_C C(f(x)), where V is a 2-dimensional C-vector space.

Lemma 4.1. As a B_0-module, L_x takes the form V ⊗_C C(f(x)), and B_0 acts on V via one of the two projections. Combining with the above, we get the following.

Proposition 4.2. For all x ∈ S_reg, the object M_x is a coherent sheaf with class ch(M_x) = (4, −2H, −P, l, 1/4) ∈ H^*(Y, Q).

The proof will be carried out in the rest of this section, divided into several steps. Moreover, in Section 4.4 we will show that M_x is actually a stable ACM bundle. We denote by Q_{f(x)} the fibre of π : Ỹ → P^2 over f(x), and by l_x ⊂ Q_{f(x)} a line in the ruling corresponding to x ∈ S_reg. Recall that the points of S_reg parametrize rulings of lines in the quadric fibration π. When it is clear from the context, we will denote Q_{f(x)} simply by Q and l_x by l.

Step 1: Kuznetsov's embedding. We first want to prove that Φ(L_x) is a sheaf, and to identify it explicitly. Since π is flat and E′ is π^*B_0-flat, we have Φ(L_x) ∈ Coh(Ỹ). As α is a closed immersion, α_*Φ(L_x) is also a sheaf, i.e., α_*Φ(L_x) ∈ Coh(P̃). We have a chain of identifications, where we used the Projection Formula for the first isomorphism and the identity π = q ∘ α for the second one.
Since α_*Φ(L_x) is a sheaf, if we tensor the exact sequence (1.3.1) by q^*L_x, we get an exact sequence. The first term in this exact sequence is given by the "matrix factorization" of the quadric Q_{f(x)} = π^{−1}(f(x)). Therefore, Φ(L_x) is the cokernel of the matrix factorization map, namely the (twisted) ideal sheaf of the line l in Q. This is well explained in [1].

Step 2: the right mutation. We want to show that Φ′(L_x) sits in the distinguished triangle (4.3.5). By Step 1, we have the description above. Now we observe that the relevant Ext-groups can be computed, where the first isomorphism follows from Serre duality and the fact that K_Ỹ = −2H − h, while for the second one we use that O_P̃(H) is a relatively ample line bundle. Since l is a line in the quadric Q, we obtain the required vanishing. Hence we get the distinguished triangle (4.3.5).

Step 3: the left mutation. We want to show that Φ′′(L_x)[−1] is isomorphic to a sheaf sitting in a non-split short exact sequence, where K̃_x is defined as the kernel of the evaluation map. We have this by definition. By Step 2, we need to compute the remaining Hom-groups. On the one hand, we have a chain of natural isomorphisms, where the first follows from Serre duality and the fact that K_Ỹ = −2H + h; for the third we use that D is the exceptional divisor of σ. On the other hand, a complementary computation applies. Taking cohomology and using the results of Step 2, we get the required groups. Note that the evaluation map O_Ỹ(h−H)^{⊕2} → I_{l,Q} is surjective and non-split. Hence H^0(Φ′′(L_x)) = 0, and Φ′′(L_x) sits in the non-split triangle.

Step 4: the blow-up. Finally we prove that M_x = Ξ_4^{−1}(L_x)[−1] is isomorphic to a sheaf sitting in the non-split short exact sequence (4.3.7), where K_x is defined as the kernel of the evaluation map I_{P,Y}^{⊕2} → I_{l,Q}. Here P ⊂ Y is the plane contained in Y, and (by abuse of notation) l_x and Q_{f(x)} are the images of l_x and Q_{f(x)} under σ. Indeed, we know that Φ′′(L_x) is an element of σ^*T_Y. Hence, by the Projection Formula, M_x is a sheaf. We only need to study σ_*K̃_x, because σ_*O_Ỹ(−h) ≅ O_Y(−H). The fact that M_x ∈ Coh(Y) implies that σ_*K̃_x is also a sheaf, which we denote by K_x. Since σ|_Q is an isomorphism, the restriction behaves well. For later use, we give two different descriptions of the sheaf K_x. Given the quadric Q_{f(x)} and a line l_x in it, we denote by l′_x any line in the second ruling (the ruling not containing l_x). When Q_{f(x)} is a cone, we set l′_x = l_x.

Lemma 4.3. The sheaf K_x sits in two (non-split) short exact sequences. Considering the first column of the corresponding diagram, we can also see K_x as an extension.

4.4. A family of stable ACM vector bundles. The aim of this section is to prove the following proposition.

Proposition 4.4. For all x ∈ S_reg, the sheaf M_x is a Gieseker stable ACM bundle on Y.

The proof is divided into two steps.

Step 1: ACM bundle. In order to prove that M_x is an ACM bundle, we want to apply Lemma 1.9. Hence we need the corresponding vanishings.

Proof. From (4.3.7), by tensoring with O_Y(mH) and taking cohomology, we get H^p(Y, M_x(mH)) = H^p(Y, K_x(mH)) for p = 1, 2 and for all m ∈ Z. To conclude the proof of the lemma, we only need to show that H^2(Y, K_x(−mH)) = 0. Since K_x is defined as the kernel of the evaluation map I_{P,Y}^{⊕2} → I_{l,Q}, we have an inclusion H^2(Y, K_x(−mH)) ↪ H^2(Y, I_{P,Y}(−mH))^{⊕2}, and H^2(Y, I_{P,Y}(−mH)) = 0.

In order to prove the crucial vanishing H^3(Y, M_x(−3H)) = 0, we fix notation as in (4.4.2). Now we want to compute the left mutation of the middle term in (4.4.2) with respect to O_Y(2H). To this end, we compute the left mutations of the first and third terms in the same distinguished triangle.
Now, an easy computation shows that L_{O_Y(2H)}(O_Y(−H+h)[3]) ≅ O_Y(−H+h)[3]. On the other hand, the vector space Ext^p(O_Y(2H), O_Y(3H)) is 6-dimensional if p = 0 and trivial otherwise. Thus we get a distinguished triangle. Hence L_{O_Y(2H)}(O_Y(3H))[−1] ≅ σ^*Ω_{P^5}(3H)|_Y and, putting everything together, we get the desired conclusion. Finally we can prove the following.

Proof. Using adjunction (see, in particular, Lemma 1.4), we get an identification of the relevant Hom-groups, where N is defined as above. Thus, to prove that α is injective, it is enough to show that β is. By construction, β = H^2(f)^{⊕2}, where H^2(f) is the morphism induced on cohomology by the map f sitting in a Koszul exact sequence on Q. The cokernel of f is T_{P^3}(−3H) ⊗ O_Q and the kernel of g is Ω_{P^3}(H) ⊗ O_Q. Chasing through the associated long exact sequence in cohomology, we get the required vanishing of H^1 on Q, and H^2(f) is identified with a base change of the evaluation map. Hence H^2(f) is injective.

Step 2: Gieseker stability. We now finish the proof of Proposition 4.4 by showing that, again for all x ∈ S_reg, the ACM bundle M_x is stable. Note that µ(K_x) = 0 and µ(M_x) = −1/2 (indeed, by Proposition 4.2, M_x has rank 4 and c_1 = −2H), and by (4.3.9), K_x is µ-semistable. Suppose M_x is not stable. Since M_x is a vector bundle, there exist a semistable destabilizing reflexive sheaf F and a sheaf G sitting in a short exact sequence 0 → F → M_x → G → 0 with µ(F) ≥ −1/2. Now rk(F) = 1, 2, 3, and the three cases need to be analysed separately.

Case A: rk(F) = 1. Then c_1(F) ≥ 0 and F is a line bundle. Moreover, we have a commutative diagram. Since F is torsion-free, the composition φ can only vanish or be an injection. If φ is trivial, then it factors through O_Y(−H)^{⊕2}, which is semistable with µ = −1; thus we get a contradiction. So assume that φ is injective and consider the corresponding commutative diagram. Again, F can inject neither into I_{P∪Q,Y} nor into I_{P∪l′,Y}, and we get a contradiction. This completes the proof of Proposition 4.4. In particular, we have proved that S is the closure of a component of the moduli space of stable ACM bundles with Chern character (4, −2H, −P, l, 1/4).

4.5. Universal family. In this section we assume that S is smooth. Then the above discussion can be summarized by saying that there exists a twisted universal family M ∈ Coh(S × Y, p_1^*α) inducing a Fourier–Mukai functor with the expected properties; here p_Y : Y × S → Y is the natural projection, p_1 : S × S → S is the projection onto the first factor, ∆_S ⊂ S × S is the diagonal, and E′ is defined in (1.3.1). From (4.3.2) we have an explicit description. Then the universal family M over Y × S such that M|_{Y×{x}} ≅ M_x can be described as a composition of mutations applied to a Fourier–Mukai kernel; we let L and R denote the corresponding left and right mutations. The fact that M ∈ D^b(S × Y, p_1^*α) is actually a locally free sheaf follows from the fact that M|_{{x}×Y} ≅ M_x is locally free for all x ∈ S. This was observed in Section 4.4.
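For orientation, the Fourier–Mukai functor attached to the twisted universal family M has the standard shape below; the precise normalization (shifts, twists by line bundles) is an assumption made only for illustration:

\[
\Phi_{\mathcal{M}}\colon D^b(S,\alpha)\longrightarrow D^b(Y),
\qquad
\Phi_{\mathcal{M}}(-)\;:=\;Rp_{Y*}\big(p_S^{*}(-)\otimes\mathcal{M}\big),
\]

with p_S and p_Y the two projections from S × Y; by construction, such a functor sends the skyscraper sheaf C(x) of a point x ∈ S to the ACM bundle M_x.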
Evaluation of predictors for anatomical success in macular hole surgery in Indian population

Purpose: The aim was to evaluate outcomes and predictors for anatomical success in macular hole (MH) surgery. Materials and Methods: This was a prospective cohort study of patients operated for idiopathic MH of stage II, III or IV. Patients underwent pars plana vitrectomy with internal limiting membrane (ILM) peeling, internal gas tamponade, and postoperative face-down positioning. The primary outcome measure was anatomical closure of the MH, while the secondary outcome measure was postoperative external limiting membrane (ELM) continuity. The effect of MH size, duration of MH, size of ILM peel, type of gas tamponade (SF6 vs. C3F8) and macular hole index (MHI) on anatomical MH closure was also evaluated. Results: Of the 62 eyes operated, anatomical closure of the MH was achieved in 55 eyes (88.7%). The median duration of follow-up was 8 months (range: 6–15 months). Mean BCVA improved from 0.94 ± 0.26 logMAR at baseline to 0.40 ± 0.23 logMAR at last follow-up (P = 0.01). There was a statistically significant association between size of ILM peel and anatomical closure of MH (P = 0.04). Duration of symptoms, size of MH, type of gas tamponade and MHI had no effect on anatomical closure (P = 0.22, 0.28 and 0.40, respectively; Chi-square test). Postoperative continuity of the ELM was significantly associated with a shorter symptom duration (<6 months) before surgery. Conclusion: Acceptable anatomical closure could be attained with the defined technique. Size of ILM peel is a new predictor of anatomical success, while symptom duration affects postoperative ELM continuity.

Introduction

Idiopathic macular holes (MHs) result from tangential and antero-posterior traction on the fovea by prefoveal cortical vitreous. [1,2,5-7] Kelly and Wendel described the role of vitrectomy and posterior hyaloid peeling in relieving macular traction and reported an anatomic MH closure rate of 58%. [8] In the last decade, the success rate of MH closure after primary surgery has increased dramatically. Success rates of up to 70% have been recorded after vitrectomy, SF6 gas tamponade and selective epiretinal membrane (ERM) peeling. [9,12-18] Previously described predictors of surgical outcome include size and duration of MH, ILM and ERM peeling, type of gas tamponade used and duration of face-down positioning. In the current study, we assessed the outcomes and predictors of MH surgery in cases of idiopathic MH in our practice.

Materials and Methods

A prospective cohort study was conducted at a tertiary care center between June 2011 and June 2012. The study was approved by the institutional review board, and written informed consent was taken from all subjects. Consecutive patients with idiopathic full-thickness MH (stage II and above) and a baseline best-corrected visual acuity (BCVA) >6/60 were included. Exclusion criteria were age-related macular degeneration, diabetic retinopathy, high myopia (>6 D), vitrectomized eyes, glaucoma, previous intraocular surgery (except uncomplicated phacoemulsification surgery), traumatic MHs, inability to maintain postoperative prone positioning, and any systemic contraindication for surgery.
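Visual acuities in this study are analyzed in logMAR units converted from Snellen fractions. As a worked illustration, a minimal sketch follows; it assumes the standard identity logMAR = log10(denominator/numerator) for a 6/x Snellen fraction, and the patient values below are hypothetical examples, not study data (which the paper reports only as cohort means):

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """Convert a Snellen fraction (e.g. 6/60) to logMAR.

    logMAR = log10(1 / decimal acuity) = log10(denominator / numerator).
    """
    return math.log10(denominator / numerator)

# Hypothetical single patient improving from 6/60 to 6/15.
baseline = snellen_to_logmar(6, 60)      # 1.00 logMAR
final = snellen_to_logmar(6, 15)         # ~0.40 logMAR
lines_gained = (baseline - final) / 0.1  # one Snellen line ~ 0.1 logMAR
print(f"baseline {baseline:.2f}, final {final:.2f}, lines gained {lines_gained:.0f}")
```

For this hypothetical patient the script prints a change from 1.00 to 0.40 logMAR (six lines), matching the magnitude of the mean improvement reported for the cohort.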
All surgeries were done by a single surgeon (A.K.). Surgeries were performed under peribulbar block with 5 ml of 2% lignocaine hydrochloride and 125 IU hyaluronidase mixed with 5 ml of 0.5% bupivacaine. Standard 23/25-gauge pars plana vitrectomy was conducted using the Constellation vitrectomy system (Alcon Labs Inc., Fort Worth, TX). Core vitrectomy was performed; triamcinolone-assisted (Retilone, Cipla, Mumbai, India) posterior vitreous separation was done using the vitrector. ERM, if present, was visualized using triamcinolone and peeled using ILM forceps (D.O.R.C., Zuidland, Netherlands). Brilliant blue G (0.5%) dye-assisted ILM peeling was initiated using intravitreal ILM forceps (Grieshaber, Alcon, USA). The area of ILM peeling was approximately measured using optic disc size as a reference [Fig. 1]. A peel of <2 disc diameters (DD) was referred to as a partial peel and >2 DD as a complete peel. At the time of ILM peeling, precautions were taken to avoid contact with the base of the MH to prevent photoreceptor damage. Peripheral vitrectomy was completed, and fluid–air exchange was performed. Eighteen percent perfluoropropane (C3F8) or 25% sulfur hexafluoride (SF6) gas was used as a postoperative tamponade. The choice of gas was as per patient preference, after explaining the advantages and disadvantages of both. Surgery was completed by removal of the entry-site alignment cannulas without suturing the conjunctiva and sclera. Sclerotomy sites were sutured with a single 7-0 vicryl suture, after performing a limited peritomy, if any leakage was noted. Patients were prescribed strict postoperative face-down positioning for 18 h daily for 3 days. Postoperatively, topical antibiotics, cycloplegics, steroids and oral analgesics were prescribed and gradually tapered.

Duration of MH was assessed on the basis of patient history regarding onset of symptoms and previous records (if any). Preoperative ocular examination included BCVA, lens status evaluation and biomicroscopic examination of the fovea and vitreous. Any other anterior or posterior segment abnormalities were noted, and intraocular pressure was recorded. BCVA was measured using a standard wall-mounted Snellen chart with spectacle correction. Optical coherence tomography (OCT) of the macula (Cirrus HD-OCT, Carl Zeiss Meditec) was done in all eyes to confirm the diagnosis and measure the height and base of the MH. The macular hole index (MHI) was calculated as the ratio of hole height to base diameter. [19] The maximum diameter of the ILM peel was noted during each surgery. Patients were followed up on day 1, day 7, day 14 and monthly thereafter for a minimum duration of 6 months. BCVA, lens status, status of the MH and intraocular pressure were assessed at each follow-up. OCT was repeated after the gas bubble receded from the posterior pole, in order to document hole closure and to assess the continuity of the external limiting membrane (ELM).

The primary outcome measure was anatomical closure of the MH. The secondary outcome measure was postoperative ELM continuity. Predictors studied for anatomical closure and ELM continuity were size of the hole (≤400 μm vs. >400 μm); duration of the hole (≤6 months vs. >6 months); size of ILM peel (≤2 DD, partial peel vs. >2 DD, complete peel); stage of MH; and postoperative gas tamponade used (25% SF6 vs. 18% C3F8). Anatomical closure and ELM continuity were also correlated with functional success (two-line improvement of vision).
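Since the MHI is defined above as the ratio of hole height to base diameter measured on OCT, the computation is a one-liner; the OCT readings in this sketch are hypothetical placeholders (the paper does not report per-patient measurements):

```python
def macular_hole_index(height_um: float, base_um: float) -> float:
    """MHI = hole height / base diameter, both measured on OCT (microns)."""
    return height_um / base_um

# Hypothetical OCT readings for one eye.
mhi = macular_hole_index(height_um=420.0, base_um=760.0)
# An MHI > 0.5 has been proposed as a favourable prognostic cut-off [19].
print(f"MHI = {mhi:.2f}; favourable (>0.5): {mhi > 0.5}")
```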
Statistical analysis was performed using SPSS advanced statistical software, version 17.0 (SPSS Inc., Chicago, IL) for Windows. BCVA was converted to logMAR units for analysis. Qualitative data were expressed as percentages and quantitative data as mean ± standard deviation. The Student's t-test and Fisher's exact test were used to analyze quantitative and categorical data, respectively. Odds ratios (OR) with 95% confidence intervals (CI) were calculated for each variable. P ≤ 0.05 was considered significant.

Results

Anatomical closure of the MH was achieved in 55 eyes (88.7%) after surgery. In these eyes, BCVA improved from 0.94 ± 0.26 logMAR preoperatively to 0.40 ± 0.23 logMAR at last follow-up (P = 0.01). None of the patients developed retinal tear or detachment; two patients developed raised intraocular pressure on follow-up, which was managed with topical antiglaucoma therapy. Twelve phakic eyes (40%) developed cataract or showed progression of nuclear sclerosis during this time and eventually required cataract surgery.

Size of the peeled ILM was found to be significantly associated with anatomical closure of the MH [Table 1]. In 38 eyes (61.3%), ILM peeling of more than 2 DD was achieved during surgery; in the remaining 24 eyes (38.7%), it was partial. Anatomical hole closure was significantly associated with a large ILM peel (OR: 6.00, CI: 1.09–32.77, P = 0.04) [Table 1]. All other evaluated parameters, such as duration of MH, size, stage, MHI and type of gas used, were not significantly associated with anatomical closure [Table 1].

Continuity of the ELM postoperatively was significantly associated with duration of MH [Table 2]. A duration of MH of more than 6 months (23 eyes, 37.1%) carried a lower chance of ELM continuity than a duration of less than 6 months (39 eyes, 62.9%) (OR: 0.27, CI: 0.08–0.88, P = 0.03). Other predictors, such as stage of MH, size, MHI, size of ILM peel and type of gas tamponade, had no effect on postoperative ELM continuity [Table 2].

Functional success, defined as a two-line improvement in Snellen visual acuity, was achieved in all patients in whom anatomical closure was documented. Vision improvement was not seen in eyes where the hole failed to close. In eyes with ELM continuity, vision gradually improved over 6 months compared with those that had a non-continuous ELM postoperatively [Fig. 2].

Discussion

Modern-day MH surgery is associated with good anatomical success and few complications. [19,20] In this study, we evaluated all known predictors of anatomical hole closure, along with ELM continuity, in a strictly North Indian population. We achieved overall anatomical hole closure in 88.7% of our cases, which is comparable to past reports. [10,14,15,17,18] During the last decade, there has been increasing focus on ILM peeling during surgery for MH. The ILM is a basement membrane supporting cellular proliferation of Müller cells. Contraction of the ILM has been implicated in traction, which might act as a contributing factor in the pathogenesis of MH. ILM peeling has been postulated to relieve this traction and aid surgical success. Some investigators have also suggested that ILM peeling, by virtue of relieving tangential traction, may decrease the need for prolonged prone positioning. [20,21] We demonstrated a significant association between anatomic hole closure and the size of the ILM peel (>2 DD) in our study. This is a novel observation, as the size of the ILM peel as a factor in MH closure has not previously been described. This result adds to the existing evidence that a greater extent of peeled ILM helps relieve tangential traction and attain anatomical closure of the MH without prolonged postoperative positioning. [20,21]
We do not have a complete explanation for the relation between the size of the ILM peel and a higher MH closure rate. A large ILM peel may prevent reopening of the MH, since the ILM acts as a scaffold for the formation of ERM, the contraction of which is implicated in MH formation. However, ERM formation or reopening of a closed hole was not observed in eyes where partial ILM peeling was performed.

Preoperative documentation of the size of the MH using OCT provides a prognostic factor for postoperative visual outcome and the anatomical success rate of MH surgery. An MHI > 0.5, reflecting a smaller basal diameter relative to the vertical height, may be used to predict better anatomical and surgical outcomes after surgery. [19] In our series, however, MHI was not a significant predictor of anatomical hole closure.

Intraocular gas tamponade is used in MH surgery because the high surface tension and buoyancy of the gas aid hole closure. Intraocular gas and postoperative prone positioning for between 3 and 7 days are advocated by most authors. A recent study shows that MH surgery with SF6 gas achieves results similar to C2F6 tamponade, and SF6 is absorbed faster, thus allowing quicker visual rehabilitation for the patient. [22] Rahman et al. also described surgical success in MH without any postoperative positioning. [22] We likewise found that the type of gas tamponade in our series did not affect anatomical outcomes; however, we advised postoperative prone positioning even in cases where C3F8 tamponade was used.

Newer-generation high-resolution OCT machines have substantially improved visualization of the foveal microstructure. Recent reports have demonstrated that the postoperative status of the inner segment–outer segment (IS–OS) junction correlates significantly with the visual outcome after MH surgery; any disruption of this layer may be associated with poorer visual outcomes. The ELM appears as another hyper-reflective landmark in the outer retina. Although less prominent, the ELM line is distinctive, located just above the IS–OS junction hyper-reflective line. Photoreceptor cell bodies containing the nuclei and the apical processes of Müller cells are connected by a row of zonulae adherentes that collectively form the ELM. The integrity of the ELM appears to have a critical role in the restoration of the photoreceptor microstructures. [25] Madreperla et al. demonstrated the sealing of a break in the ELM by Müller cell processes in an eye with a stage III MH. [26] We postulate that successful re-formation of zonulae adherentes between photoreceptor inner segments and Müller cells, as evidenced by a continuous ELM line on OCT, is critical for the restoration of the photoreceptor layer as well as for a better visual outcome following MH repair. We found significant visual improvement in cases that achieved a continuous ELM postoperatively. Though duration and size of MH did not significantly affect the anatomical outcome, they had a positive association with postoperative continuity of the ELM and thus may indirectly affect visual outcomes in these patients.
Conclusion

The size of ILM peeling modulated anatomical closure of the MH, which is a hitherto unknown finding. ELM continuity was found to be a good prognostic predictor of the final visual outcome. ELM continuity also had an inverse association with the duration of visual complaints. A limitation of the study is the inability to measure the size of the peeled internal limiting membrane using OCT; the exact relationship between the size of the peeled membrane and MH closure could be elicited if size documentation is attempted in future work. Nevertheless, this series helps justify a larger extent of ILM peeling for the closure of MH, and advocates earlier surgery in such cases to help preserve the continuity of the ELM.

Figure 1: Intraoperative photographs of a patient during macular hole surgery with partial (<2 disc diameter [DD]) internal limiting membrane peel (a) and complete (>2 DD) peel (b). Note that the partial peel is eccentric (a) in this case.

Figure 2: Correlation between postoperative external limiting membrane status and best-corrected visual acuity over 6 months of follow-up.

Table 1: Study predictors for anatomical closure of macular hole. Columns: Predictor; Number of eyes (n=62) (%); Closed holes; Open holes; Odds ratio (CI); P (Fisher exact test). Abbreviations: ILM: internal limiting membrane; CI: confidence interval; DD: disc diameter; NA: not applicable.
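For readers wishing to reproduce a Table 1-style analysis, a minimal sketch follows. The 2×2 cell counts below are hypothetical placeholders consistent only with the reported marginal totals (38 complete vs. 24 partial peels; 55 closed vs. 7 open holes) — the paper does not report the individual cells — and the Woolf log-odds interval is one common CI choice, not necessarily the method SPSS applied here:

```python
import math
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = ILM peel (>2 DD, <=2 DD), cols = (closed, open).
a, b = 36, 2   # complete peel: closed, open (placeholder counts)
c, d = 19, 5   # partial peel:  closed, open (placeholder counts)

odds_ratio, p_value = fisher_exact([[a, b], [c, d]])

# 95% CI for the OR via the Woolf (log-odds) method.
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f}), Fisher P = {p_value:.3f}")
```

With these placeholder counts the script illustrates the pipeline (OR near 4.7) rather than reproducing the published OR of 6.00.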
Torus fibrations, gerbes, and duality

Let X be a smooth elliptic fibration over a smooth base B. Under mild assumptions, we establish a Fourier-Mukai equivalence between the derived categories of two objects, each of which is an O^*-gerbe over a genus one fibration which is a twisted form of X. The roles of the gerbe and the twist are interchanged by our duality. We state a general conjecture extending this to allow singular fibers, and we prove the conjecture when X is a surface. The duality extends to an action of the full modular group. This duality is related to the Strominger-Yau-Zaslow version of mirror symmetry, to twisted sheaves, and to non-commutative geometry.

In this paper we are concerned with categories of sheaves on varieties fibered by genus one curves. For an elliptic fibration on X, by which we always mean a genus one fibration π : X → B admitting a holomorphic section σ : B → X, there is by now a well-understood theory of the Fourier-Mukai transform [Muk81, BBRP98, Bri98, BM02]. The basic result is: Let π : X → B be an elliptic fibration with smooth total space. Then the integral transform (Fourier-Mukai transform) induced by the Poincare sheaf P → X ×_B X is an auto-equivalence of the bounded derived category D^b(X) of coherent sheaves on X. An important feature of FM is that it transforms geometric objects in an interesting way: bundle data (vector bundles on X, semistable of degree zero on the generic fiber of π) correspond under FM to spectral data (sheaves on X with the numerics of a line bundle on a 'spectral' divisor C ⊂ X, finite over B). This spectral construction was used to study general compactifications of heterotic string theory and their moduli, and especially the duality with F-theory [FMW97, BJPS97, Don97, AD98, FMW98, Don99, FMW99]. It was used also to construct special bundles on elliptic Calabi-Yau manifolds which lead to more-or-less realistic compactified theories [DOPW01a, DOPW01b].

1.2 Partial duality for genus one fibrations

In many applications (see e.g. [DOPW01b, DOPW01a]) one is also interested in constructing bundles on genus one fibrations π : Y → B which do not necessarily admit a section. From the viewpoint of the spectral construction one expects that vector bundles on Y should correspond to spectral data supported on a divisor C ⊂ X := Pic^0(Y/B), where Pic^0(Y/B) is the compactified relative Jacobian of π : Y → B. However, it is unrealistic to expect that this spectral data should again be a sheaf on C. One problem is that X is not a fine moduli space of objects on Y, and so we do not have a Poincare sheaf on Y ×_B X. Another problem is that in the passage from Y to X we seem to be losing information. Indeed, there can be many different Y's with the same Jacobian X, and there is no obvious way to recover Y from spectral data consisting of a divisor on X and an ordinary sheaf supported on this divisor. The resolution of this problem is suggested by string theory. Namely, in the transition from Y to X one should add an extra piece of data corresponding to the physicists' B-field. Mathematically, the holomorphic version of this data is encoded in an O^×-gerbe on X. A detailed discussion of O^×-gerbes and their geometric properties can be found in [Gir71, Bry93, Bre94, Hit01] and in our section 2.1. A baby version of our result is:

• There is a gerby Fourier-Mukai transform exchanging bundle data (vector bundles on Y, semistable of degree zero on the generic fiber of Y → B) with spectral data (sheaves on the gerbe {}_Y X with the numerics of a line bundle on a 'spectral' divisor ({}_Y X) ×_X C, with C ⊂ X finite over B).

This statement is asymmetric — we only consider vector bundles on the variety Y, and the spectral data appears only for the gerbe {}_Y X. The symmetry can be restored by extending this gerby spectral construction to a full Fourier-Mukai equivalence of derived categories. One peculiarity of O^×-gerbes is that the categories of coherent sheaves on them, and hence also the derived categories, admit an orthogonal decomposition into subcategories labeled by the characters of O^×, i.e. by the integers (see section 2.1). For any k ∈ Z, we will write D^b_k({}_Y X) for the k-th summand, and we will call D^b_k({}_Y X) the derived category of weight k sheaves on {}_Y X. It may be helpful to note here that when {}_Y X comes from an Azumaya algebra, the weight k corresponds to the central character of the action of this algebra. Related partial dualities were considered previously in [Cȃl02, Cȃl01] in the context of Fourier-Mukai transforms, and in [DG02] in the context of the spectral construction. A detailed analysis of the corresponding moduli spaces and the duality transformation between them was recently carried out, for the particular case of Hopf-like surfaces, in [BM03b, BM03a].

1.3 Main results: duality for genus one fibrations

For any elliptic fibration π : X → B with a section σ : B → X, the twisted versions of X are parameterized by the analytic Tate-Shafarevich group Ш_an(X) (see section 2.2). For a given β ∈ Ш_an(X), let π_β : X_β → B denote the corresponding genus one fibration. On the other hand, for any analytic space X, the analytic O^×_X-gerbes on X are parameterized by the analytic Brauer group Br′_an(X) = H^2_an(O^×_X). For any α ∈ Br′_an(X) we denote the corresponding gerbe by {}_α X and the bounded derived category of coherent sheaves on {}_α X by D^b({}_α X). The latter decomposes naturally as the orthogonal direct sum of "pure weight" subcategories D^b_k({}_α X) indexed by characters of O^×_X, i.e. by integers k ∈ Z.
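Two displays may help fix the notation just introduced; the kernel formula for FM is the standard integral transform with the Poincare sheaf, stated here as an assumption on normalization:

\[
D^b({}_{\alpha}X)\;=\;\bigoplus_{k\in\mathbb{Z}} D^b_k({}_{\alpha}X),
\qquad
\mathrm{FM}(F)\;=\;R\pi_{2*}\big(\pi_1^{*}F\otimes\mathcal{P}\big),
\]

where π_1, π_2 are the two projections of X ×_B X onto its factors.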
O O F M Spectral data: sheaves on the gerbe Y X with the numerics of a line bundle on a 'spectral' divisor ( Y X) × X C, with C ⊂ X finite over B. This statement is asymmetric -we only consider vector bundles on the variety Y , and the spectral data appears only for the gerbe Y X. The symmetry can be restored by extending this gerby spectral construction to a full Fourier-Mukai equivalence of derived categories. One peculiarity of O × -gerbes is that the categories of coherent sheaves on them, and hence also the derived categories, admit an orthogonal decomposition by subcategories labeled by the characters of O × , i.e. by the integers (see section 2.1). For any k ∈ Z, we will write for the k-th summand and we will call D b k ( Y X) the derived category of weight k sheaves on Y X. It may be helpful to note here that when Y X comes from an Azumaya algebra, the weight k corresponds to the central character of the action of this algebra. Related partial dualities were considered previously in [Cȃl02,Cȃl01] in the context of Fourier-Mukai transforms and in [DG02] in the context of the spectral construction. A detailed analysis of the corresponding moduli spaces and the duality transformation between them was recently carried out, for the particular case of Hopf-like surfaces, in [BM03b,BM03a]. 1.3 Main results: duality for genus one fibrations For any elliptic fibration X π / / B σ h h , the twisted versions of X are parameterized by the analytic Tate-Shafarevich group X an (X) (see section 2.2). For a given β ∈ X an (X), let π β : X β → B denote the corresponding genus one fibration. On the other hand for any analytic space X, the analytic O × X -gerbes on X are parameterized by the analytic Brauer group Br ′ an (X) = H 2 an (O × X ). For any α ∈ Br ′ an (X) we denote the corresponding gerbe by α X and the bounded derived category of coherent sheaves on α X by D b ( α X). The latter decomposes naturally as the orthogonal direct sum of "pure weight" subcategories D b k ( Y X) indexed by characters of O × X , i.e. by integers k ∈ Z. For all α ∈ Br ′ an (X) and all k ∈ Z there is a canonical equivalence D b k ( α X) ∼ = D b 1 ( kα X). This follows immediately by comparing representations of α X of pure weight k with representations of kα X of pure weight 1 or by the appropriate idenitfications with the category of twisted sheaves on X (see section 2.1.1). Consider first the case when X is a surface. We will see in section 2.3 that to any α, β ∈ X an (X) we can associate an O × -gerbe α X β over X β . The notation α X β generalizes our previous usage: when α = 0 we get the trivial gerbe 0 X β on X β , and when β = 0 we get the gerbe α X on X = X 0 . Our main result in the case of surfaces, proved in section 4, is: be a non-isotrivial elliptic fibration on a smooth complex surface X. Assume that π has I 1 fibers at worst. Let α, β ∈ X an (X) be two elements such that β is torsion. Then there is an equivalence F M : D b 1 ( α X β ) → D b −1 ( β X α ) of the derived category of weight 1 coherent sheaves on the gerbe α X β and the derived category of weight (−1) coherent sheaves on the gerbe β X α . Equivalently F M can be thought of as an equivalence of the derived categories D b 1 ( α X β ) and D b 1 ( −β X α ). We will see in section 2.3 that for a higher dimensional X, the gerbes α X β may not make sense for arbitrary choices of α, β ∈ X an (X). 
We will see in section 2.3 that for a higher dimensional X, the gerbes _αX_β may not make sense for arbitrary choices of α, β ∈ Ш_an(X). However, in section 2.3 we show that there exists a natural pairing

⟨•, •⟩ : Ш_an(X) ⊗_Z Ш_an(X) → H^3_an(B, O^×_B)

and that _αX_β can be defined whenever α, β are complementary, i.e. ⟨α, β⟩ = 0. The natural generalization of Theorem A is the following (see Conjecture 2.19):

Main Conjecture For any complementary pair α, β ∈ Ш_an(X), there exists an equivalence of the bounded derived categories of sheaves of weights ±1 on _αX_β and _βX_α respectively.

We cannot prove this in full generality, mostly due to our inability to handle the general singular fibers. We are able to settle the conjecture in the non-singular case, under the somewhat more restrictive condition that α is m-divisible and β is m-torsion for some integer m. This of course implies that α, β are complementary, so _αX_β is well defined. For a smooth π our main result is:

Theorem B Let π : X → B be a smooth elliptic fibration on an algebraic variety X over a smooth algebraic base B. Assume that Br′_an(B) = 0. Fix a positive integer m and let α, β ∈ Ш_an(X) be two elements such that α is m-divisible and β is m-torsion. Then there is an equivalence of the derived category of weight 1 coherent sheaves on the gerbe _αX_β with the derived category of weight (−1) coherent sheaves on the gerbe _βX_α. Equivalently, FM can be thought of as an equivalence of D^b_1(_αX_β) and D^b_1(_{−β}X_α).

In fact, the proof of Theorem B is quite a bit easier than that of Theorem A, so we give it first, in section 3. The proof is based on the construction of two explicit presentations: the lifting presentation of _αX_β, in section 3.1.1, and the extension presentation of _βX_α, in section 3.1.2, together with the construction of an explicit Fourier-Mukai duality between them, in section 3.4.

Our two main theorems and their proofs have fairly straightforward analogues asserting the equivalence of the derived categories of quasi-coherent sheaves and, in the algebraic case, asserting the equivalence of appropriate categories of algebraic coherent sheaves. Indeed, if the class α happens to be torsion as well, then the spaces X_α and X_β are algebraic and the gerbes _αX_β and _βX_α are algebraic stacks in the sense of Artin. We will see in the proofs of the two theorems that the gerby Fourier-Mukai transform in this case will correspond to a kernel object which is algebraic and so will give rise to an equivalence of the derived categories of weight one algebraic coherent sheaves.

1.4 Duality for commutative group stacks

As was pointed out by Arinkin, our Theorem B (but not Theorem A) fits very naturally in the context of commutative group stacks (cgs). The O^×_X-gerbe _αX is a family of cgs over B which is an extension

0 → BG_m → _αX → X → 0

of X by the classifying stack of G_m. The torsor X_β, on the other hand, is not a cgs over B; but it does determine one, namely an extension of Z by X, where X_β is recovered as the inverse image of 1 ∈ Z. Similarly, the gerbe _αX_β which we construct, using either the lifting presentation (when β is m-torsion) or the extension presentation (when α is m-torsion), determines a cgs X = _αX_β which has a two-step filtration, with sub BG_m, middle subquotient X, and quotient Z. This can be considered either as an O^×_X-gerbe over X_β, or dually as a torsor over _αX. In particular, the derived category D^b(_αX_β) is graded by Z × Z. Now quite generally, such a cgs X has a dual cgs X^∨ which has a similar two-step filtration with the roles of the sub and the quotient interchanged.
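Schematically, and only as a mnemonic for the statements above, the filtration of X = _αX_β and of its dual can be displayed as

\[
B\mathbb{G}_m \subset \mathcal{X}, \quad \mathrm{gr}_{-1}\mathcal{X} = X, \quad \mathcal{X}/W_{-1} = \mathbb{Z};
\qquad\qquad
B\mathbb{G}_m \subset \mathcal{X}^{\vee}, \quad \mathrm{gr}_{-1}\mathcal{X}^{\vee} = X, \quad \mathcal{X}^{\vee}/W_{-1} = \mathbb{Z},
\]

where the sub BG_m of X^∨ arises as the dual of the quotient Z of X, and the quotient Z of X^∨ as the dual of the sub BG_m of X (using Hom(Z, BG_m) ≅ BG_m and Hom(BG_m, BG_m) ≅ Z), while the middle piece X is self-dual via the Poincaré sheaf.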
There is a Poincaré sheaf P which is a biextension of X^∨ × X by G_m, and it induces a Fourier-Mukai transform which is an equivalence of categories D^b(X^∨) ≃ D^b(X) interchanging the two Z-gradings. Our Theorem B can therefore be interpreted as saying that the cgs dual to _αX_β is _βX_α; the previous version is recovered by restricting the equivalence to the piece of bidegree (1, −1). This is explained in some more detail in Appendix A, which D. Arinkin kindly wrote for us.

This duality picture has a straightforward extension to more general cgs X over B: these are again endowed with a two-step filtration W_{−2}X ⊂ W_{−1}X ⊂ W_0X = X, where gr_{−2} = W_{−2}X = BT for some affine torus bundle T → B, the middle subquotient gr_{−1} is some abelian scheme A → B, and the last quotient gr_0 is some bundle Λ → B of finite rank free abelian groups over B. The dual cgs X^∨ is again the stack of homomorphisms Hom_cgs(X, BG_m), and one expects that in good cases the duality gives rise to an equivalence of the appropriate categories of representations.

Arinkin calls the cgs described in the previous paragraph 'Beilinson's one motives', since they were considered by Beilinson (unpublished) in the context of the theory of mixed motives. These cgs are formally very similar to the classical one motives studied by Deligne in [Del74]. The one motives of [Del74] can be viewed either as certain mixed Hodge structures of level ≤ 1 or as cgs M defined over C. As a commutative group stack, every Deligne one motive M is equipped with a two-step filtration W_{−2}M ⊂ W_{−1}M ⊂ W_0M = M, for which gr_{−2} = T for some affine torus T, gr_{−1} = A is a polarized abelian variety, and gr_0 = BΛ for some free abelian group Λ of finite rank. If we now look at families of Deligne's one motives defined over some base B, we arrive at cgs over B which are of essentially the same shape as Beilinson's one motives, but with the stackiness appearing at a different subquotient of the filtration. Furthermore, as explained in [Del74], the dual of a Deligne one motive M is the cgs M^∨ := Hom_cgs(M, BG_m), which is again a one motive of the same type with gr_{−2}M^∨ = Hom(Λ, G_m), gr_{−1}M^∨ = A^∨ (the dual abelian variety to A), and gr_0M^∨ = B Hom(T, G_m). In fact, we can view Hom_cgs(•, BG_m) as a transformation acting on commutative group stacks, which preserves the two natural families of Deligne's and Beilinson's one motives and induces a duality on each of these families. Moreover, since in both cases the duality is realized in terms of suitable biextensions of X × X^∨ and M × M^∨, one expects that the duality of cgs will give rise to an equivalence of the corresponding categories of representations of cgs. For the specific stacks _αX_β this is precisely the content of our Theorem B.

1.5 The non-commutative aspect

Results having the general shape of Theorem A were anticipated in the physics literature. In fact, Ganor-Mihailov-Saulina have conjectured in [GMS00] that when Y is a genus one fibered K3 surface, there should exist a non-commutative deformation _YX of X = Pic^0(Y/B) and a categorical equivalence between instantons on _YX and spectral data on Y. This is a special case of Theorem A. This statement admits an intriguing interpretation in terms of non-commutative geometry, a topic currently of high interest to physicists [NS98, KKO01].
According to the general yoga of deformation quantization (see e.g. [Kon01]), any symplectic (or Poisson) structure on X is the first term in a non-commutative deformation of its structure sheaf. In a suitable algebro-geometric context, e.g. on a K3, a symplectic structure θ has three incarnations: as a real 2-form θ_R, a holomorphic 2-form θ^{2,0}, or an antiholomorphic 2-form θ^{0,2}. Then θ_R determines a "non-commutative four-manifold", and θ^{2,0} determines a "non-commutative K3". The third incarnation, θ^{0,2}, gives both the element X_θ in the Tate-Shafarevich group Ш(X) and the O^×_X-gerbe _θX. In this sense, Theorem A can be viewed as an affirmative answer to, and a generalization of, the [GMS00] conjecture.

1.6 Modified T-duality and the SYZ conjecture

The celebrated work of Strominger, Yau and Zaslow [SYZ96] interprets mirror symmetry of Calabi-Yau spaces in terms of special Lagrangian (SLAG) torus fibrations. If a CY manifold X (with "large complex structure") has mirror X′, [SYZ96] conjecture the existence of fibrations π : X → B and π′ : X′ → B whose generic fibers are SLAG tori dual to each other: each parameterizes U(1) flat connections on the other. In particular, each of these fibrations admits a SLAG zero-section, corresponding to the trivial connection on the dual fibers. The analogy with the theorem of [BM02] is clear: the SLAG torus fibration on the Calabi-Yau threefold replaces the elliptic fibration on the surface, and mirror symmetry (interchanging D-branes of type B with D-branes of type A) replaces the Fourier-Mukai transform (which interchanges vector bundles with spectral data). Our work suggests that the SYZ conjecture should be extended to a SLAG analogue of Theorem A or of the Main Conjecture, in which the physical B-fields α ∈ H^2(X, R/Z) play the role of our gerbes. This extension leads to an integrable system structure on the moduli space underlying mirror symmetry. We give an informal discussion of these matters in section 5, and we hope to return to them in future work.

1.7 Modularity

As often happens in physics, the Fourier-Mukai functor FM : D^b_1(_αX_β) → D^b_1(_{−β}X_α) is just one particular element of a whole collection of dualities. For simplicity consider only the case of a projective elliptic surface π : X → B. In this case, our Fourier-Mukai duality works for any pair of elements (α, β) ∈ Ш(X) × Ш(X) in the algebraic Tate-Shafarevich group. Thus the Fourier-Mukai functor corresponds to the action of the matrix (0 −1; 1 0) on the Cartesian square Ш(X)^{×2} of the abelian group Ш(X). Moreover, one can show (see e.g. section 2.3) that for surfaces the natural map T_β : Ш(X) → Br′(X_β), used to define our gerbes, has kernel generated by the element β ∈ Ш(X). In particular, T_β(α + β) = T_β(α), and so the gerbes _{α+β}X_β and _αX_β are isomorphic. A choice of such an isomorphism gives rise to an equivalence D^b_1(_{α+β}X_β) ≅ D^b_1(_αX_β), which corresponds to the action of the matrix (1 1; 0 1) on Ш(X)^{×2}. Since these two matrices generate SL(2, Z), it will be very interesting to investigate which braid group extension of SL(2, Z) acts on ⊕_{α,β∈Ш(X)} D^b_1(_αX_β), lifting the action on Ш(X)^{×2}. In the case when the Mordell-Weil group of X is trivial we expect this extension to be central and to be related to the extensions appearing in [Pol96, Pol02], [Orl02] and [ST01]. We do not discuss this question here but hope to return to it in a future work.
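Explicitly, the two functors above act on pairs as follows (the matrices are our reconstruction from the equivalences as just stated; the relations recorded here are the standard ones in SL(2, Z)):

\[
S=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}:\ (\alpha,\beta)\mapsto(-\beta,\alpha),
\qquad
T=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}:\ (\alpha,\beta)\mapsto(\alpha+\beta,\beta),
\]

and S, T satisfy S^4 = 1 and (ST)^6 = 1, the standard presentation showing that they generate SL(2, Z).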
1.8 Twisted sheaves

Another context in which Theorem A turns out to be relevant is the theory of twisted sheaves on a complex space which admits a genus one fibration. Recall [Căl00] that for any O^×-valued Čech 2-cocycle α on a complex space X, one can consider the abelian category of α-twisted sheaves on X and its derived category D^b(X, α). By definition, an α-twisted sheaf on X is a collection of coherent sheaves defined over open sets in X, together with gluing data on overlaps which satisfy the α-twisted cocycle condition on triple overlaps (see [Căl00] or our section 2.1.1 for details). Refining the open covering or changing the cocycle by a coboundary results in an equivalent category of twisted sheaves.

Twisted sheaves on Calabi-Yau manifolds, and in particular on genus one fibered Calabi-Yau manifolds, were recently studied by A. Căldăraru [Căl00, Căl02]. In particular, he observed [Căl01] that in the case of a K3 surface, the derived category of α-twisted sheaves possesses certain natural Fourier-Mukai partners. The starting point of his analysis is the observation that if X is a smooth projective K3 surface, then every element α ∈ H^2_et(X, O^×) can be interpreted as a homomorphism α : T_X → Q/Z, where T_X denotes the transcendental lattice of X (see [Căl01] or section 2.1.1). This interpretation suggests the following:

Căldăraru's Conjecture Let X and Y be two projective K3 surfaces and let α ∈ H^2_et(X, O^×) and β ∈ H^2_et(Y, O^×). Then the derived categories D^b(X, α) and D^b(Y, β) are equivalent as triangulated categories iff the lattices ker(α) ⊂ T_X and ker(β) ⊂ T_Y are Hodge isometric.

When both α and β are zero, the conjecture is true in view of a theorem of D. Orlov [Orl97] asserting that two smooth projective K3 surfaces have equivalent derived categories iff their transcendental lattices are Hodge isometric. This has been extended by Căldăraru, who used Mukai's quasi-universal sheaves for non-fine moduli spaces of sheaves on K3 surfaces to deduce that the conjecture holds whenever one of the classes, say β, is trivial.

The algebraic case of our Theorem A proves Căldăraru's Conjecture in a series of new cases, with both α and β non-zero. Indeed, if π : X → B is an elliptic K3 surface and if α, β ∈ Ш(X) are two elements in the algebraic Tate-Shafarevich group, then the natural identification Ш(X) = H^2_et(X, O^×) coming from the Leray spectral sequence allows us to view both α and β as homomorphisms T_X → Q/Z. Using this interpretation one checks immediately that the transcendental lattices of the K3 surfaces X, X_α and X_β satisfy T_{X_α} = ker(α) ⊂ T_X and T_{X_β} = ker(β) ⊂ T_X, where it is understood that all the equalities are Hodge isometries. Let [_αX_β] ∈ H^2_et(X_β, O^×) denote the class of the gerbe _αX_β. Assuming that α and β are in general position in H^2_et(X, O^×), i.e. that the cyclic subgroups generated by α and β intersect only at zero, we have natural identifications of Hodge lattices

ker([_αX_β] : T_{X_β} → Q/Z) = ker(α) ∩ ker(β) = ker([_βX_α] : T_{X_α} → Q/Z).

In other words, the hypothesis of Theorem A implies the hypothesis of Căldăraru's conjecture. Combined with the remark that the weight one sheaves on _αX_β are the same thing as [_αX_β]-twisted sheaves on X_β (see section 2.1.1), this shows that Theorem A implies an interesting new case of Căldăraru's conjecture (see Corollary 4.6 for a slightly more general statement). Note that the condition (required in the statement of Theorem A) that β should be torsion is vacuous in this case, since for a smooth projective surface X, both the cohomological Brauer group H^2_et(X, O^×) and the Tate-Shafarevich group Ш(X) are torsion groups.
The paper is organized as follows. In Section 2 we recall some standard facts about the geometry of O^×-gerbes and genus one fibrations. We also derive the compatibility condition between two Tate-Shafarevich classes and state a general conjecture on the equivalence of derived categories for gerbes over genus one fibrations. In Section 3 we introduce the main characters appearing in the proofs of the two theorems stated above. Working in the setup of Theorem B, we define two geometric presentations, the lifting and extension presentations, for the gerbes _αX_β and _βX_α. Furthermore, we construct an integral transform between the corresponding atlases. We show that this integral transform sends descent data to descent data and gives rise to an equivalence of the derived categories of the gerbes, thus proving Theorem B. Section 4 deals with the case of surfaces. We show how, in the case of a surface, one can extend the lifting and extension presentations across the singular fibers and produce a Fourier-Mukai transform between the corresponding gerbes. We again check that this transform is an equivalence, which proves Theorem A. Finally, in Section 5 we discuss the analogy between algebraic gerbes over genus one fibrations and flat gerbes over SLAG 3-torus fibrations on Calabi-Yau threefolds. We describe a conjectural picture which amends the Strominger-Yau-Zaslow version of mirror symmetry to incorporate non-trivial B-fields on both sides of the mirror correspondence.

2 The Brauer group and the Tate-Shafarevich group

We need some basic facts relating elements of the Brauer group to elements of the Tate-Shafarevich group of an elliptic fibration. We discuss O^×-gerbes and the Brauer groups which classify them in section 2.1, then genus one fibrations and the Tate-Shafarevich group which classifies them in section 2.2. For an elliptic fibration there is a simple, direct relation between these two groups. The extension to genus one fibrations, though, is more delicate, and is defined only when a certain alternating pairing vanishes. This is discussed in section 2.3.

2.1 Brauer groups and O^×-gerbes

In this section we review the notions of O^×-gerbe and presentation, and discuss the relationship between O^×-gerbes and elements in the Brauer group.

2.1.1 H-gerbes

Let H be a sheaf of abelian groups on a topological space (or a site) X. The case of main interest for us is when (X, O_X) is a ringed space and H = O^×_X is the sheaf of invertible elements in the structure sheaf. In fact, most of the time we will have H = O^×_X in either the etale or the analytic topology on a complex scheme (or an algebraic or analytic space) X. In section 5 we will be interested also in the case when H is the sheaf of germs of smooth maps from a C^∞ manifold X to the circle S^1.

An H-gerbe on X is a global structure on X which "locally looks like the quotient of X by the trivial action of H". More precisely, "the quotient of X by the trivial action of H" is the classifying object BH. For example, in case H is the sheaf of holomorphic maps from X to a fixed group H, BH is the sheaf of sections of X × BH over X, where BH denotes the topological classifying space of the group H. In the general case, BH can be interpreted either as a topological space over X (defined up to homotopy) or as a stack in groupoids over X (see [LMB00, §3] for the definition). We adopt the second approach and treat BH as a stack (= 'sheaf of categories'): over any open set V, the objects of BH(V) are the H-torsors on V and the morphisms are the isomorphisms of torsors. In particular, the automorphisms of the trivial torsor 1_V are given by elements in H(V).
Note that BH is in fact a commutative group stack over X, with group structure given by convolution of H-torsors. Explicitly, for any two H-torsors A′ and A″ over V, the convolution A′ ⊗ A″ is defined as the H-torsor (A′ ×_V A″)/H, where H acts anti-diagonally by h · (a′, a″) = (a′h, a″h^{−1}).

Definition 2.1 An H-gerbe on X is a BH-torsor, i.e. a stack of groupoids _αX over X which is equipped with a principal homogeneous action of BH.

Remark 2.2
• Explicitly, a stack of groupoids _αX → X is an H-gerbe if for any open V ⊂ X and any object s of _αX(V) we have chosen isomorphisms H(V) ≅ Aut_{_αX(V)}(s), compatible with pullbacks.
• In the literature [Gir71], [Bre90, Bre94], one encounters a more general notion of an H-gerbe, namely a stack T of groupoids on X which is locally isomorphic to BH. These more general gerbes are classified by the first cohomology of X with coefficients in the 1-truncated simplicial abelian group H → Aut(H) [Bre90]. They are intimately related to the forms of H, i.e. to sheaves of groups on X which are only locally isomorphic to H. This relationship is based on the identification Out(H) = Aut_X(BH): to any T → X which is an H-gerbe in this more general sense, one naturally associates an Out(H)-torsor band(T) := Isom_X(T, BH), the band of the gerbe T [Gir71], [Bre90]. A gerbe T is said to be banded by H if it is equipped with a trivialization of the torsor band(T). When H is abelian, this condition is equivalent to requiring that for any open V and any s ∈ T we have chosen isomorphisms H(V) ≅ Aut_T(s) in a way compatible with pullbacks. In other words, the more restrictive notion of an H-gerbe that we have adopted in this paper is the same as the standard notion of an H-banded gerbe (at least for an abelian H). We will casually ignore this distinction and will call all our gerbes simply H-gerbes.

In case H = O^×_X (in the relevant topology), the classifying stack BO^×_X is the sheaf of Picard categories Pic(X): for an open U, the objects of Pic(X)(U) are by definition the line bundles on U, and for two objects L, M ∈ ob(Pic(X)(U)) the set Hom_{Pic(X)}(L, M) is defined to be Isom(L, M). An O^×_X-gerbe _αX assigns to each open U a Pic(X)(U)-torsor, denoted Pic_α(U), with a compatibility of the assignments to different U's. We can thus think of a section of an O^×_X-gerbe as a twisting of the notion of a line bundle on X. More generally, the sections in an H-gerbe are twistings of the notion of an H-torsor on X: simply replace in the previous discussion each appearance of Pic with Tors^H, the stack of H-torsors. This interpretation suggests that the group classifying H-gerbes should be H^2(X, H). When H is abelian this statement can be made precise via the standard cohomological machinery [Mil80, IV.2] or [Gir71], [Bre90] (but keep in mind that our H-gerbes are the H-banded gerbes of loc. cit.).

In more down to earth terms, the interpretation of the elements in H^2(X, H) as equivalence classes of gerbes can be seen as follows. Assume that we are in the good situation when the cohomology of H can be computed in Čech terms. Let {α_ijk} be an H-valued Čech 2-cocycle w.r.t. an open cover {U_i} of X. An object L of Tors^H_α is defined to be an assignment of an H(U_i)-torsor L(U_i) to each U_i, together with transition isomorphisms φ_ij : L(U_j)|_{U_ij} → L(U_i)|_{U_ij} satisfying the twisted cocycle condition

φ_ij ∘ φ_jk = α_ijk · φ_ik

on triple intersections. A morphism between two α-twisted H-torsors L′ and L″ is given by a compatible collection of isomorphisms f_i : L′(U_i) → L″(U_i), i.e. one satisfying f_i ∘ φ′_ij = φ″_ij ∘ f_j on U_ij. Similarly we define the category Tors^H_α(U) for any open U.
The resulting sheaf of categories (= stack) Tors^H_α on X is by definition a torsor over BH = Tors^H_1, i.e. an H-gerbe, which we denote by _αX. Clearly two cocycles which represent the same cohomology class in Ȟ^2(X, H) define isomorphic gerbes. Conversely, if sheaf cohomology on X can be computed in Čech terms, any H-gerbe arises this way from some α w.r.t. a sufficiently refined cover [Gir71], [Bre90, Bre94].

Notation:
• Given an H-gerbe T over X we write [T] ∈ H^2(X, H) for the element that classifies it.
• The base space X for an H-gerbe T → X is called the coarse moduli space of T. This terminology reflects the fact that X represents the sheaf of sets π_0(T), i.e. the sheaf of isomorphism classes of sections in T.

Basic construction: Starting with an algebraic (or analytic) space X and a short exact sequence of sheaves of groups

1 → H → G → K → 1,

with H abelian, we get a coboundary map δ : H^1(X, K) → H^2(X, H). This admits the following lift on the level of torsors and gerbes: a K-torsor C with class [C] ∈ H^1(X, K) determines an H-gerbe δ(C) with class δ([C]) ∈ H^2(X, H). Explicitly, for an open U, δ(C)(U) is the category of pairs (D, ι), where D is a G-torsor on U and ι : D ×_G K → C is an isomorphism of K-torsors on U. A familiar special case involves the sequence

1 → O^×_X → GL_n(O_X) → PGL_n(O_X) → 1.

It says that every projective bundle on X gives rise to an O^×_X-gerbe which is trivial if and only if the projective bundle is a projectivization of a vector bundle.

If (X, O_X) is a nice ringed space for which cohomology can be computed in Čech terms, then the choice of α ∈ H^2(X, O^×_X) gives rise to the notion of α-twisted sheaves on X. More precisely, let U = {U_i} be an open cover of X (in the topology under consideration) and let {α_ijk} be a 2-cocycle representing α ∈ H^2(X, O^×_X). One defines an α-twisted sheaf on X as a collection {F_i} of sheaves F_i → U_i of O_X-modules, together with a collection of gluing isomorphisms φ_ij : F_j|_{U_ij} → F_i|_{U_ij} satisfying φ_ij ∘ φ_jk = α_ijk · φ_ik on triple overlaps U_ijk. Composition is defined in an obvious way and so we obtain a category of α-twisted sheaves, which depends both on the cover U and on the cocycle α. It can be checked [Căl00, Section 1.2] that the operations of passing to a refinement U′ of U and of replacing α by a cohomologous cocycle α′ give rise to an equivalent category of α′-twisted sheaves. Thus for any α ∈ H^2(X, O^×_X) we get a category (O_X, α)-mod of α-twisted sheaves on X (defined only up to a non-canonical equivalence). An α-twisted sheaf F on X is called quasi-coherent (respectively coherent) if each F_i is quasi-coherent (respectively coherent). We will write QCoh(X, α) and Coh(X, α) for the categories of quasi-coherent and coherent α-twisted sheaves. Note that (O_X, α)-mod, QCoh(X, α) and Coh(X, α) are all abelian categories.

More intrinsically, the α-twisted sheaves on X can be interpreted as weight one sheaves on _αX, where a sheaf on _αX is understood as a representation of the sheaf of groupoids _αX → X. To spell out what this means, let us denote by QCoh_X the stack of quasi-coherent sheaves on the space X. Let 𝒳 → X be any fibered category over X. Recall (see e.g. [Del90, Section 3.3] or [LMB00, Definition 13.3.3]) that a representation of 𝒳 is a morphism F : 𝒳 → QCoh_X of fibered categories defined over X. Explicitly, this means that for any algebraic (or analytic) space T → X we are given a functor F_T : 𝒳(T) → QCoh(T), so that F_T is compatible with base changes. In particular, if _αX → X is an O^×_X-gerbe, then a representation of _αX is an X-functor F : _αX → QCoh_X.
Given an integer n, we say that F is a pure _αX-representation of weight n if for any open U ⊂ X and any section L ∈ _αX(U), the natural sheaf homomorphism Aut_U(L) → Aut_U(F(L)) induced by F factors as

Aut_U(L) ≅ O^×_X(U) → O^×_X(U) ⊂ Aut_U(F(L)),

where the middle map is the raising to the n-th power. It is instructive to point out that when we are dealing with the trivial gerbe _0X on X, a representation of _0X is nothing but a quasi-coherent sheaf F on X equipped with a direct sum decomposition F = ⊕_{n∈Z} F_n into quasi-coherent sheaves F_n, so that a locally defined function f ∈ O^×_X acts on F as multiplication by f^n on F_n. The reader can check as an exercise that the category of representations of _αX of pure weight one is equivalent to the category QCoh(X, α), and that the category of representations of _αX of pure weight n is equivalent to the category QCoh(X, nα).

2.1.2 Geometric gerbes and their presentations

In this section we recall a more geometric approach to H-gerbes which involves gluing of certain good local models. This exploits the standard idea that various geometric objects can be conveniently presented in terms of an atlas modulo certain gluing relations on it. For example, for a manifold X, an atlas U can be taken to be the disjoint union U = ⊔_i U_i of coordinate charts, and the gluing can be specified by a closed subset of relations R ⊂ U × U, which comes together with two maps s, t : R → U (corresponding to the two projections of U × U onto U), each of which is a local diffeomorphism. Analogously, presentations can be used to define schemes, algebraic spaces and analytic spaces.

Formally, a presentation by objects in a (fibered) category C (or a groupoid in C) consists of the following data:

• objects R and U of C;
• source and target morphisms s, t : R → U;
• a composition morphism m : R ×_{t,U,s} R → R, an inverse morphism i : R → R, and a unit morphism e : U → R.

These data are subject to the obvious analogues of the group axioms, applied to the maps m, i and e. Note that any morphism γ : U → X in C determines a presentation (R, U, m, i, e) in C, where R := U ×_X U; the maps s, t are the two projections; the composition map m sends (a, b) × (b, c) to (a, c); i sends (a, b) to (b, a); and e is the diagonal map. In this situation we identify X with the quotient U/R and we will say that R ⇉ U is generated by γ.

Let now γ : U → X be a morphism of complex schemes and let R ⇉ U be the presentation of X generated by γ. Let p_1, p_2, m : R ×_U R → R denote the two projections and the multiplication map respectively. Let H be an abelian group scheme over X, H → X its sheaf of sections, H̃ its pullback to R via γ ∘ s = γ ∘ t, and let π : R̃ → R be an H̃-torsor over R, so that we have a diagram

(2.1) R̃ → R ⇉ U → X.

In order for R̃ ⇉ U to be a groupoid we need a biextension isomorphism

j : p_1^* R̃ ⊗ p_2^* R̃ → m^* R̃

on R ×_U R. We claim that this presentation determines an H-gerbe [U/R̃] on X, which we can interpret as the (stacky) quotient of U by R̃.

Remark 2.3 The necessity of taking stackification in the above construction is dictated by the subtlety of the conditions required to have a 'sheaf of categories'. Let 𝒳 → X be a category fibered in groupoids. Recall that there are two types of sheaf-like conditions one can impose on 𝒳:

(1) For any open V ⊂ X and any two objects ξ, η ∈ 𝒳(V), the presheaf of sets (open W ⊂ V) ↦ Hom_{𝒳(W)}(ξ|_W, η|_W) is required to be a sheaf.

(2) If V = ∪_i W_i is an open covering of V and we have
• ξ_i ∈ ob(𝒳(W_i));
• isomorphisms φ_ij : ξ_j|_{W_ij} → ξ_i|_{W_ij} satisfying the cocycle condition;
then we require the existence of an object ξ ∈ ob(𝒳(V)) together with isomorphisms ξ|_{W_i} ≅ ξ_i compatible with the φ_ij.

Now if 𝒳 satisfies (1) we say that 𝒳 is a prestack over X, and if it satisfies (1) + (2), then we say that 𝒳 is a stack.
Given any prestack 𝒳 one shows (see [LMB00, §3] for details) that there is a unique (up to equivalence) stack 𝒳^a → X together with a map 𝒳 → 𝒳^a which is fully faithful and locally on X essentially bijective. The stack 𝒳^a is called the stackification of 𝒳 and is completely analogous to the sheaf one associates with a presheaf of sets.

The stacks 𝒳 → X over X which admit a smooth or etale groupoid presentation (lifting a presentation for X) are the stacks which are closest to schemes and on which one can do geometry in essentially the same way as on spaces. In fact, if 𝒳 is a stack which admits a smooth (respectively etale) presentation, then 𝒳 is called an Artin algebraic stack (respectively a Deligne-Mumford stack) and is the main object of study in the algebraic geometry of stacks [LMB00]. We consider only stacks for which the diagonal map 𝒳 → 𝒳 ×_X 𝒳 is affine.

Remark 2.4 (i) Note that for an H-gerbe _αX → X, the condition of having an affine diagonal is equivalent to H → X being an affine group scheme. In particular, _αX is an algebraic stack (in the sense of Artin) if and only if _αX has a groupoid presentation.
(ii) The case of main interest for us is when H = G_m, so H = O^×_X. In this case R̃ is the total space of a (punctured) line bundle on R, and j, j_0 are isomorphisms of line bundles.
(iii) The above discussion has an obvious analogue where schemes are replaced by (algebraic or analytic) spaces or manifolds.
(iv) Not every presentation of X will lift to a presentation of a given gerbe _αX. For example, only the trivial gerbe _0X = BH → X can be presented by a lift of the trivial presentation X ⇉ X. Slightly more generally, the same reasoning shows that for any map of complex spaces γ : U → X and any α ∈ Ȟ^2(X, H), the presentation of X generated by γ can be lifted to a presentation for the H-gerbe _αX if and only if γ^*α = 0 ∈ Ȟ^2(U, γ^*H).

Basic construction, continued: Assume we are given a short exact sequence 1 → H → G → K → 1 of sheaves of groups on an algebraic (analytic) space X. Suppose that H is commutative and that the sheaves H, G and K are represented by group schemes H, G and K respectively. In section 2.1.1 we associated to every K-torsor T an H-gerbe δ(T). In this situation the gerbe δ(T) comes with a natural presentation, constructed as follows. Here γ : U → X is the scheme representing T and R̃ is defined as follows. The presentation of X generated by γ has R = U ×_X U = U ×_X K, since U is a K-torsor. Furthermore, under the identification R = U ×_X K, the structure maps s and t become the projection on U and the action of K respectively. In other words, R ⇉ U is the transformation groupoid for the action of K on U, and X = U/R = U/K. To get the presentation R̃ ⇉ U of δ(T) we can just take the transformation groupoid for the action of G on U (where H acts trivially), i.e. take R̃ := U ×_X G, with s and t being again the projection and the action maps. Equivalently, we may take R̃ → R to be the trivial G-torsor and check that it satisfies the biextension and trivialization conditions. In particular we get that δ(T) is a quotient gerbe: it is identified as the quotient of the space U by the group scheme G, where G acts with a stabilizer H at each point.
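In one display, the quotient description just obtained reads as follows (a summary of the construction above, in the same notation):

\[
1 \to H \to G \to K \to 1, \quad U \text{ a } K\text{-torsor over } X
\;\Longrightarrow\;
\delta(T) \,\simeq\, [\,U / G\,], \qquad \tilde R = U \times_X G \rightrightarrows U,
\]

where G acts on U through K, so that every point of U has stabilizer H.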
Example 2.5 As a special case of the above we obtain the Azumaya presentation of an O^×_X-gerbe. Let P be a P^{n−1}-bundle on a scheme X and let P denote the corresponding sheaf of sections. The bundle P is associated to a unique PGL_n-bundle U → X (the frame bundle of P), whose sheaf of sections U is naturally a torsor over PGL_n(O_X). The image of U under the coboundary map for the sequence 1 → O^×_X → GL_n(O_X) → PGL_n(O_X) → 1 is an O^×_X-gerbe _PX on X, which comes together with a right Azumaya presentation of _PX, with atlas U and relations the transformation groupoid U ×_X GL_n(O_X) ⇉ U. Alternatively, one may consider the sheaf A_P of Azumaya algebras corresponding to P. The subsheaf A^×_P ⊂ A_P of invertible elements in A_P is representable by an affine group scheme A^×_P → X which acts simply transitively on the left on the frame bundle U → X. Using this group scheme we get a left Azumaya presentation of _PX, with atlas U and relations the transformation groupoid A^×_P ×_X U ⇉ U. The same gerbe _PX has yet another presentation, called the Brauer-Severi presentation. Here the atlas is P itself and the relations are the total space of the punctured line bundle O(1, −1)^× on P ×_X P.

It is often useful to describe the sheaves on a gerbe as cartesian sheaves on the simplicial space generated by a presentation, or equivalently as descent data for a presentation. Concretely, given a flat presentation R̃ ⇉ U of an H-gerbe _αX, a sheaf on _αX can be described as a pair (F, j), where F is a sheaf on U and j is an isomorphism between the pullbacks of F along the source and target maps, satisfying a cocycle condition. To write this condition one uses the natural identifications of the two pullbacks of j on R ×_{t,U,s} R and the normalization e^*(j) = id_F on U.

Assume now that H = H(O_X) for some complex reductive abelian group H. In this situation we can recast the above description of sheaves on _αX in terms of the presentation R ⇉ U generated by γ : U → X and the H-torsor π : R̃ → R. Given a sheaf of modules (F, j) on _αX, we can use the map π : R̃ → R to push the isomorphism j down to R. Decomposing according to the characters Ĥ of H, we see that π_*(j) corresponds to a family {j_χ}_{χ∈Ĥ} of isomorphisms, where j_χ identifies s^*F with t^*F twisted by the χ-eigensheaf of π_*O_{R̃}. The category of sheaves of modules on _αX is therefore "graded" by the character group Ĥ. A sheaf of weight 0 ∈ Ĥ is just a sheaf of modules on X. In case H = G_m we have Ĥ = Z, and the sheaves of weight n are precisely the sheaves of weight n in the sense of section 2.1.1. In particular, the sheaves of weight 1 are the α-twisted sheaves on X.

This observation leads to a very concrete description of the weight one sheaves on a G_m-gerbe. Starting with a presentation (2.1) of a G_m-gerbe _αX on X, write L → R = U ×_X U for the line bundle associated to the G_m-torsor R̃ → R via the tautological character id : G_m → G_m. The groupoid condition on the presentation (2.1) gives us a biextension isomorphism p_12^* L ⊗ p_23^* L = p_13^* L on U ×_X U ×_X U, and so a sheaf on _αX is the same thing as a sheaf F on U equipped with an L-twisted descent datum on U ×_X U, i.e. with an isomorphism

(2.2) j : p_2^* F ⊗ L → p_1^* F

of sheaves on U ×_X U, satisfying the cocycle condition

(2.3) p_13^*(j) = p_12^*(j) ∘ p_23^*(j)

on U ×_X U ×_X U. Note that in writing (2.3) we had to use the biextension isomorphism for L.

Example 2.6 Specializing the previous discussion to the case of the Azumaya gerbe _PX of Example 2.5, we get a natural identification of the category D^b_n(_PX) with the derived category of complexes of quasi-coherent sheaves on X equipped with an action of the Azumaya algebra A_P, and such that the center O^×_X of A^×_P acts on the cohomology sheaves with character n.
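For later use, note that the same mechanism describes the sheaves of an arbitrary weight n: under the identifications above, the only change is that L gets replaced by its n-th tensor power (a routine extension of the displays (2.2) and (2.3), recorded here for convenience):

\[
j_n : p_2^{*}F \otimes L^{\otimes n} \longrightarrow p_1^{*}F,
\qquad
p_{13}^{*}(j_n) = p_{12}^{*}(j_n)\circ p_{23}^{*}(j_n),
\]

matching the identification of weight n representations with QCoh(X, nα) from section 2.1.1.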
2.1.3 Brauer groups

Since the O^×-gerbes are naturally classified by elements in cohomological Brauer groups, it will be helpful to have an overview of the different variants of the Brauer group of a complex space before discussing properties of individual gerbes. Below we are going to discuss three versions of the Brauer group of a ringed space Z: the Azumaya Brauer group Br(Z), the geometric Brauer group Br(Z)_geom, and the cohomological Brauer group Br′(Z). Each of these makes sense in either the etale or the analytic topology on Z. In particular, for a complex algebraic space Z we have a diagram of natural maps

Br(Z) → Br(Z)_geom → Br′(Z)
  ↓           ↓            ↓
Br_an(Z) → Br_an(Z)_geom → Br′_an(Z).

When Z is smooth, the following facts are known:
(i) all the maps in this diagram are injective;
(ii) Br′(Z) is torsion by the purity theorem from [Gro68c];
(iii) Br′(Z) coincides (see [Mil80]) with the torsion subgroup of Br′_an(Z).

Grothendieck has conjectured that the inclusion Br(Z) ↪ Br′(Z) is an isomorphism for all smooth quasi-projective schemes. This may hold also for separated normal Z. The validity of the conjecture was established in the algebraic setting in [Gab81, Hoo82, Sch01] for arbitrary curves, for normal separated algebraic surfaces, for abelian varieties, for smooth toric varieties and for separated unions of two affine varieties. The analogous conjecture in the analytic case is virtually unexplored. The only general result to date [HS03] concerns analytic K3 surfaces and asserts that every torsion class in Br′_an(X) of an analytic K3 surface X comes from an Azumaya algebra on X.

Remark 2.7 As a corollary of fact (i) and Grothendieck's conjecture, we get that Br(Z) = Br(Z)_geom for a smooth Z. This corollary is known to hold [EHKV01] in many cases in which the Grothendieck conjecture is still unknown. In fact, for a normal Noetherian scheme, the result of [EHKV01, Theorem 3.6] characterizes the image of Br(Z) in Br(Z)_geom as the algebraic-geometric gerbes for which one can find a flat presentation with a projective structure map γ : U → Z, or equivalently as the classes of G_m-gerbes of quotient type (i.e. a quotient of an algebraic space by an affine algebraic group). In general this characterization seems to be optimal, since there are examples of quotient gerbes on non-separated surfaces whose isomorphism class is not represented by an Azumaya algebra, and examples of infinite order elements in Br′(Z) for a normal separated Z which are represented by algebraic-geometric gerbes but are not quotient gerbes [EHKV01, Examples 2.21 and 3.12].

If Z is a complex scheme, then the Azumaya Brauer group Br(Z) is defined [Gro68a] as the group of Morita equivalence classes of sheaves of Azumaya algebras on Z. Recall [Gro68a] that an Azumaya algebra on Z is a coherent sheaf of algebras which locally in the etale topology on Z is isomorphic to the endomorphism algebra of an algebraic vector bundle on Z. Two Azumaya algebras A and B are called Morita equivalent if etale locally on Z we can find vector bundles E and F so that the sheaves of algebras A ⊗ End(E) and B ⊗ End(F) are isomorphic. Morita equivalence classes of Azumaya algebras form a commutative group under the operation of tensoring over O_Z; the inverse is given by the opposite algebra. The Skolem-Noether theorem [Mil80, Proposition 2.3] implies that the Azumaya algebras of rank n^2 are classified by elements in H^1_et(Z, PGL(n)). The short exact sequence of group schemes over Z

1 → G_m → GL(n) → PGL(n) → 1

gives rise to a coboundary map

(2.4) δ_n : H^1_et(Z, PGL(n)) → H^2_et(Z, G_m).

The image of a ∈ H^1_et(Z, PGL(n)) under this coboundary map is an n-torsion class in H^2_et(Z, G_m), which is the obstruction to representing a by the endomorphism algebra of a rank n vector bundle. In particular, the maps (2.4) induce a homomorphism

(2.5) Br(Z) → H^2_et(Z, G_m).

When Z is smooth, the homomorphism (2.5) is known to be injective [Mil80, Theorem IV.2.5]. This suggests that the Brauer classes are intimately related to elements in H^2_et(Z, G_m), and so one defines the algebraic cohomological Brauer group Br′(Z) := H^2_et(Z, G_m). Recall that by fact (ii) the group H^2_et(Z, G_m) is purely torsion.
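The n-torsion statement above has a standard one-line explanation, which we record for convenience (the usual argument, not taken from the text): comparing with the sequence 1 → μ_n → SL(n) → PGL(n) → 1, one factors δ_n through a finite coefficient group,

\[
H^1_{et}(Z,\mathrm{PGL}(n)) \longrightarrow H^2_{et}(Z,\mu_n) \longrightarrow H^2_{et}(Z,\mathbb{G}_m),
\]

and H^2_{et}(Z, μ_n) is killed by n, so δ_n(a) is n-torsion.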
As explained in section 2.1.2, Azumaya algebras give rise to groupoid presentations of G_m-gerbes on Z. In other words, for a smooth Z the inclusion (2.5) can be refined to a sequence of inclusions

Br(Z) ⊂ Br(Z)_geom ⊂ Br′(Z),

where Br(Z)_geom denotes the group of equivalence classes of algebraic-geometric G_m-gerbes on Z. Recall that a G_m-gerbe is algebraic-geometric if it is an algebraic stack in the sense of Artin, i.e. if it admits a flat (equivalently, a smooth) groupoid presentation [Art74]. By analogy we define the analytic Azumaya Brauer group Br_an(Z) and the analytic geometric Brauer group Br_an(Z)_geom of an analytic space Z as the groups of Morita equivalence classes of analytic Azumaya algebras on Z and of isomorphism classes of analytic geometric O^×_{Z_an}-gerbes respectively. The isomorphism type of an O^×_{Z_an}-gerbe is determined by a class in the analytic cohomological Brauer group Br′_an(Z) := H^2_an(Z, O^×_Z). In other words, the classifying map again gives inclusions Br_an(Z) ⊂ Br_an(Z)_geom ⊂ Br′_an(Z).

The analytic cohomological Brauer group can be studied via the exponential sequence

0 → Z → O_Z → O^×_Z → 0.

The corresponding cohomology sequence gives

H^2_an(Z, O_Z)/im H^2(Z, Z) ↪ Br′_an(Z) → H^3(Z, Z).

This of course is analogous to the usual description of the Picard group: when Z is compact and Kähler, the connected component Pic^0(Z) is the quotient H^1_an(Z, O_Z)/H^1(Z, Z) and Pic(Z)/Pic^0(Z) injects into H^2(Z, Z). In addition, if Z is compact and Kähler, the Hodge theorem implies that H^1(Z, Z) ⊂ H^1_an(Z, O_Z) is a discrete subgroup of maximal rank. Hence we can identify the connected component of Pic(Z) with the quotient of its tangent space H^1_an(Z, O_Z) by H^1(Z, Z). In the case of Br′_an(Z), there is still a 'tangent space' H^2_an(Z, O_Z), but it is divided by the typically non-discrete subgroup given by the image of H^2(Z, Z), and so there is no good (= separated) topology on Br′_an(Z).

In the special case when Z is a K3 surface, we get that Br_an(Z)_geom = Br′_an(Z) is the quotient of the one dimensional vector space H^2_an(Z, O_Z) by the lattice dual to the transcendental lattice of Z, i.e. by H^2(Z, Z)/H^{1,1}_Z(Z). Notice that for a very general analytic K3 this lattice has rank 22, and for a very general algebraic K3 it has rank 21. More precisely, one defines the transcendental lattice T_Z of a K3 surface Z by the short exact sequence

0 → T_Z → H^2(Z, Z) → Hom_Z(H^{1,1}_Z(Z), Z),

the last map being induced by the intersection pairing. In other words, T_Z is the sublattice of H^2(Z, Z) consisting of classes perpendicular to all classes of curves in Z. The dual sequence reads

H^{1,1}_Z(Z) → H^2(Z, Z) → Hom_Z(T_Z, Z),

and we have a natural map H^2_an(Z, O_Z) → Hom_Z(T_Z, R). This leads to a commutative diagram with exact rows and columns, whose bottom row explicates Br_an(Z)_geom = Br′_an(Z) as the quotient of the real torus Hom_Z(T_Z, R/Z) by the vector space H^{1,1}_R(Z)/(H^{1,1}_Z(Z) ⊗ R), embedded in it as a (usually dense) subgroup. Note that this vector space does not contain any torsion points of the torus. Equivalently, the restricted map Hom_Z(T_Z, Q/Z) → Br_an(Z)_tor is an isomorphism. When Z happens to be an algebraic K3 surface we have a natural identification Br(Z) = Br_an(Z)_torsion, and so we recover the standard interpretation of elements of the algebraic Brauer group of Z as homomorphisms from the transcendental lattice of Z to Q/Z (see e.g. [Căl00, Lemma 5.4.1] or [Căl01]).
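Summarizing the K3 discussion in two displays (this is only a restatement of the identifications above):

\[
Br'_{an}(Z) \;\cong\; \frac{\mathrm{Hom}_{\mathbb{Z}}(T_Z, \mathbb{R}/\mathbb{Z})}{\,H^{1,1}_{\mathbb{R}}(Z) / (H^{1,1}_{\mathbb{Z}}(Z)\otimes\mathbb{R})\,},
\qquad
\mathrm{Hom}_{\mathbb{Z}}(T_Z, \mathbb{Q}/\mathbb{Z}) \;\xrightarrow{\ \sim\ }\; Br'_{an}(Z)_{tors}.
\]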
2.2 Tate-Shafarevich groups and genus one fibrations

In this section we review some basic facts about twisted forms of a given elliptic fibration over an analytic space B. For more details the reader is referred to the excellent references [DG94] and [Nak01]. First we recall some terminology and set up the notation. For us, a genus one fibration will always mean a holomorphic map π : X → B between normal analytic varieties whose generic fiber is a smooth curve of genus one. We define an elliptic fibration to be a genus one fibration equipped with a holomorphic section σ : B → X of π. Note that this is slightly more restrictive than the conventional notion of an elliptic fibration used in, say, [DG94], [Nak01], where only the existence of a meromorphic section of π is required. A genus one fibration will be called (relatively) minimal if X has at most terminal singularities and if the canonical class K_X is π-nef.

Let now X and B be normal analytic varieties and let π : X → B be a relatively minimal genus one fibration. Sometimes we may need to require the additional genericity assumption that X is smooth and that π : X → B is Weierstrass.

Remark 2.8 (i) When X is a surface, the genericity assumption implies in particular that all the singular fibers of π are of Kodaira types I_1 or II, i.e. they are nodes and cusps.
(ii) In this paper we will always deal with a situation in which X is smooth and either π is smooth, or X is a surface and π has at worst I_1 fibers. We have included in the present discussion the more general case of an arbitrary Weierstrass π with a smooth total space because of the potential applications of our duality construction to genus one fibered Calabi-Yau manifolds of arbitrary dimension. This however goes beyond the scope of the present work and will be the subject of a future paper.

Let X^♯ ⊂ X denote the regular locus of π, viewed as an abelian group scheme over B. Denote by X_an the corresponding sheaf of abelian groups in the analytic topology on B. When B and X happen to underlie complex algebraic varieties, we will write X for the etale sheaf of sections of X^♯ → B.

The analytic Weil-Châtelet group WC_an(X) of X is the group of bimeromorphism classes of analytic genus one fibrations Y → B such that:
• Y → B is bimeromorphic to a smooth genus one fibration;
• the relative Jacobian fibration Pic^0(Y/B) is bimeromorphic to X^♯ (and hence to X).

Note that this definition makes sense, since for a suitably chosen dense open subset U ⊂ B the (sheafification of the) presheaf Pic^0(Y/U) of relative Picard groups along the fibers of Y ×_B U → U is representable by an analytic space.

The analytic Tate-Shafarevich group Ш_an(X) of X is the subgroup of WC_an(X) consisting of elements α ∈ WC_an(X) such that for any representative Y → B of α and any point b ∈ B one can find an analytic neighborhood b ∈ U ⊂ B so that Y ×_B U → U has a meromorphic section. This implies that Y → B has no multiple fibers in codimension one.

The group Ш_an(X) can be described cohomologically [Nak01] as follows. Assume that X^oo is a smooth space. Then by [Nak01, Proposition 5.5.1], the natural classifying map

(2.6) Ш_an(X) → H^1_an(B, j^oo_* j^{oo*} X_an)

is injective. Furthermore, if B^oo = B, or if π is Weierstrass with a smooth total space, then the map (2.6) is an isomorphism [Nak01, Proposition 5.5.1]. In addition one knows (see e.g. [Nak01, Theorem 5.4.9]) that under the same assumptions, the sheaf j^oo_* j^{oo*} X_an fits in a short exact sequence

0 → X_an → j^oo_* j^{oo*} X_an → (R^2 π_* Z_X / ı_* ı^! R^1 π_* O^×_X)_torsion → 0.

Since by definition the sheaf (R^2 π_* Z_X / ı_* ı^! R^1 π_* O^×_X)_torsion is supported on the multiple fiber sublocus of D, it follows that in the absence of multiple fibers, i.e. under our definition of an elliptic fibration, we have an isomorphism

(2.7) Ш_an(X) ≅ H^1_an(B, X_an).

In the remainder of this paper we will always assume tacitly that the isomorphism (2.7) holds; in fact we will assume that either π is smooth or that X is a surface. Because of this cohomological interpretation we can view the elements in Ш_an(X) simply as X_an-torsors.
This definition of Ш_an(X) is consistent with the usual definition of the algebraic Tate-Shafarevich group [Gro68a, Gro68b, Gro68c] and [DG94]. When X is a surface we can also interpret (by compactifying a genus one fibration and then choosing a smooth Weierstrass model over B) the elements in Ш_an(X) as smooth analytic surfaces equipped with a genus one fibration over B.

The algebraic Weil-Châtelet and Tate-Shafarevich groups WC(X) and Ш(X) are defined in a similar manner [Gro68a, Gro68b, Gro68c] and [DG94], with the etale topology replacing the analytic one. Furthermore, the analysis carried out in [DG94, Section 1] implies that, under the assumption that X and B are both smooth and that π has a regular section, the algebraic Tate-Shafarevich group can be interpreted cohomologically as

Ш(X) ≅ H^1_et(B, X),

i.e. the elements in Ш(X) can be viewed as algebraic spaces Y → B which are X-torsors.

Given an element α ∈ Ш_an(X) (or α ∈ Ш(X)) we denote by X^♯_α the analytic (or algebraic) space representing the torsor α and by π^♯_α : X^♯_α → B the corresponding projection. Following [DG94], we say that a morphism of analytic (algebraic) spaces Y → B is a good model for α if Y → B is bimeromorphic to X^♯_α → B, Y is smooth and the map Y → B is proper and flat.

Remark 2.9 Note that when π is smooth, X^♯_α is itself a good model for α, and when X is an arbitrary smooth surface we always have a preferred good model for α, namely the relatively minimal model of a compactification of X^♯_α. When X is of dimension three, the good models of elements in Ш(X) have been analyzed in detail; see e.g. [Mir83, Gra91, DG94]. In this case the good model exists (possibly after blowing up B at finitely many points) but is not unique. However, all good models of a given α are related by flops and in particular have equivalent derived categories of coherent sheaves (see e.g. [BO95, Bri02, Kaw02]).

In the cases when π : X → B is smooth or X is a surface, we write

π_α : X_α → B

for the canonical good model of α. In particular, if π : X → B ≅ P^1 is an elliptic K3 surface, then X_α is a well defined analytic (respectively algebraic) K3 surface for any element α ∈ Ш_an(X) (respectively α ∈ Ш(X)).

The meromorphic action of the analytic group space X^♯ → B on X_α induces a natural meromorphic action map

a : X^♯ ×_B X_α ⇢ X_α.

Furthermore, given a positive integer n we can consider the sheaf of groups X_an[n] → B consisting of the n-torsion points in X_an. The sheaf X_an[n] is represented by a group space X^♯[n] which is quasi-finite over B. We will write X[n] for the closure of X^♯[n] in X, and by an abuse of notation we will denote the induced meromorphic map X[n] ×_B X_α ⇢ X_α again by a. Since X^♯[n] is finite over a dense open set in B, we can form the quotient X_α/X^♯[n], which as an analytic space is well defined up to a bimeromorphism respecting the genus one fibration. Moreover, X_α/X^♯[n] is naturally an X_an-torsor at the general point and so represents an element in Ш_an(X). It is not hard to calculate this element in terms of α and n only. In fact, it is clear that X_α/X^♯[n] is tautologically the same as the quotient X_α^{×_B n}/K of the n-fold fiber product of X_α, which by definition represents the element nα ∈ Ш_an(X). Here K ⊂ (X^♯)^{×_B n} is the kernel of the natural product map (X^♯)^{×_B n} → X^♯ corresponding to the group law on X^♯, and the action of K is induced from the component-wise action of (X^♯)^{×_B n} on X_α^{×_B n}. In particular, we have a bimeromorphism X_α/X^♯[n] ≅ X_{nα}, which is unique up to an auto-bimeromorphism of X_{nα} compatible with the genus one fibration.
However, as one can see from the proof of [Nak01, Lemma 5.3.3], if we assume that π is relatively minimal with a smooth total space, then all such auto-bimeromorphisms are holomorphic and are translations by sections in X_an. So, under this genericity assumption, we will have a bimeromorphic identification X_{nα} = X_α/X^♯[n] and hence a well defined meromorphic map

q^n_α : X_α ⇢ X_{nα}.

If in addition we assume that the fibration π : X → B has a trivial Mordell-Weil group, then the meromorphic map q^n_α is canonical and does not depend on any choices.

The index of an element α ∈ Ш_an(X) is defined to be the minimal degree of a global multisection of π_α. We will denote the index by ind(α). Assume now that B and X are quasi-projective. Since the element 0 ∈ Ш_an(X) is represented by the algebraic elliptic fibration π : X → B, it follows that for each α of finite index the space X_α admits a dominant meromorphic map to the algebraic variety X. In fact, Nakayama [Nak01, Proposition 5.5.4] proves that such an X_α is bimeromorphic to an algebraic variety and so must be an algebraic space. Furthermore, in the case of surfaces Kodaira shows [Kod63] that X_α is quasi-projective if and only if α is torsion in Ш_an(X).

2.3 Complementary fibrations

Let X be smooth and let π : X → B, with section σ, be a relatively minimal elliptic fibration. Consider an element α ∈ Ш_an(X) and a good representative π_α : X_α → B for α. Our goal in this section is to describe the cohomological Brauer group Br′_an(X_α) in terms of the Tate-Shafarevich group Ш_an(X). For this we need to analyze the relationship between the sheaf X_an and the relative Picard sheaf of π_α. If all the fibers of π are integral, then Pic(X_α/B) is representable and we have a short exact sequence of abelian sheaves in the analytic topology

(2.8) 0 → X_an → Pic(X_α/B) → Z_B → 0,

where the last map deg_α assigns to each L ∈ Pic(π_α^{−1}(U))/π_α^* Pic(U) its degree along a smooth fiber.

Remark 2.10 If we want to allow non-integral fibers for π, then Pic(X_α/B) becomes non-representable, but it has a maximal representable quotient Q_α, as shown in e.g. [Ray70] and [DG94] in the algebraic case and [Nak01] in the analytic case. The sheaf of groups Q_α is defined as

Q_α := Pic(X_α/B)/E_α,

where E_α ⊂ Pic(X_α/B) is a subsheaf generated by local components of the preimage π_α^{−1}(D) of the discriminant D ⊂ B (see [DG94, Proposition 1.13] for the precise statement). Note that when all fibers of π are integral we have E_α = 0. In this generality, the short exact sequence (2.8) is replaced by a commutative diagram with exact rows and columns relating X_an, Pic(X_α/B), Q_α and R^2π_{α*}Z/E_α. Note also that (R^2 π_{α*} Z/E_α)_torsion is supported on D, and that its fibers at smooth points of a component of D parameterizing Kodaira fibers of type I_n are isomorphic to Z/n.

Fix now an element α ∈ Ш_an(X). Under some mild assumptions on α we will construct a natural map T_α : Ш_an(X) → Br′_an(X_α), which will allow us to compare the Tate-Shafarevich and Brauer groups. The existence of T_α is established in the following lemma.

Lemma 2.11 Assume that X is smooth, π has integral fibers and Br′_an(B) = 0. Assume further that

(2.9) ker[H^3_an(B, O^×_B) → H^3_an(X_α, O^×_{X_α})] = 0.

Then there is a canonical homomorphism T_α which fits in an exact sequence of abelian groups

0 → Z/ind(α)·Z → Ш_an(X) → Br′_an(X_α) → H^1_an(B, Z).

Proof. The long exact sequence of (2.8) gives

H^0(B, Z_B) → H^1_an(B, X_an) → H^1_an(B, Pic(X_α/B)) → H^1_an(B, Z_B).

Consider now the Leray spectral sequence for π_α : X_α → B and the sheaf O^×_{X_α}, which has only two non-zero rows, so it also becomes a long exact sequence. The assumption ker H^3 = 0 thus immediately implies the identification

(2.10) Br′_an(X_α) ≅ H^1_an(B, Pic(X_α/B)),

and so the lemma is proven. ✷
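For the reader's convenience, here is the relevant portion of that long exact sequence (the standard sequence for a two-row Leray spectral sequence, with π_{α*}O^× = O^×_B and R^1π_{α*}O^× = Pic(X_α/B); our display):

\[
\underbrace{H^2_{an}(B, \mathcal{O}^\times_B)}_{=\,Br'_{an}(B)\,=\,0} \to Br'_{an}(X_\alpha) \to H^1_{an}(B, \mathrm{Pic}(X_\alpha/B)) \to H^3_{an}(B, \mathcal{O}^\times_B) \to H^3_{an}(X_\alpha, \mathcal{O}^\times),
\]

so the hypothesis (2.9) yields exactly the identification (2.10).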
The lemma has the following immediate corollary:

Corollary 2.12 Assume that X is a smooth projective surface and π has integral fibers. Then the map T_α : Ш_an(X) → Br′_an(X_α) exists for all α ∈ Ш_an(X).

Proof. The existence of T_α is an immediate consequence of Lemma 2.11, since in this case B is a smooth curve and so H^2_an(B, O^×_B) = H^3_an(B, O^×_B) = 0. ✷

If the vanishing assumption (2.9) does not hold, we can still construct a variant of the map T_α which is defined only on a part of the group Ш_an(X):

Lemma 2.13 Assume that X is smooth, π has integral fibers and Br′_an(B) = 0. Then:
(i) if α is m-torsion in Ш_an(X), then there is a group homomorphism (compatible with T_α when the latter exists) mШ_an(X) → Br′_an(X_α), from the subgroup mШ_an(X) ⊂ Ш_an(X) of m-divisible elements in Ш_an(X) to the cohomological Brauer group of X_α;
(ii) if α is m-divisible in Ш_an(X), then there is a group homomorphism (compatible with T_α when the latter exists) Ш_an(X)[m] → Br′_an(X_α), from the subgroup Ш_an(X)[m] ⊂ Ш_an(X) of m-torsion elements in Ш_an(X) to the cohomological Brauer group of X_α.

Proof. For any given α ∈ Ш_an(X) we have a composition map

d_α : H^1_an(B, X_an) → H^1_an(B, Pic(X_α/B)) → H^3_an(B, O^×_B).

By the Leray sequence, Br′_an(X_α) = ker[H^1_an(B, Pic(X_α/B)) → H^3_an(B, O^×_B)], so a class ξ lands in Br′_an(X_α) precisely when d_α(ξ) = 0. The assignment α ↦ d_α gives rise to a group homomorphism

(2.11) d : H^1_an(B, X) → Hom_Z(H^1_an(B, X), H^3_an(B, O^×_B)).

If α is m-torsion, then d_α(m·η) = d_{m·α}(η) = 0 for any η, and so the image of mШ_an(X) in H^1_an(B, Pic(X_α/B)) must be contained in Br′_an(X_α). This proves (i). Similarly, if α = m·φ is m-divisible, then for any ξ we have d_α(ξ) = d_{m·φ}(ξ) = d_φ(m·ξ), and so d_α vanishes identically on Ш_an(X)[m]. Thus the image of Ш_an(X)[m] in H^1_an(B, Pic(X_α/B)) must be contained in Br′_an(X_α), which completes the proof of (ii) and the lemma. ✷

We will denote the maps in items (i) and (ii) of Lemma 2.13 again by T_α. Since by construction these maps are compatible with the map T_α from Lemma 2.11 whenever the latter exists, this abuse of notation can not lead to any confusion.

Let us examine in more detail the map d : H^1_an(B, X) → Hom_Z(H^1_an(B, X), H^3_an(B, O^×_B)) given in (2.11). This map can be rewritten as a bilinear pairing

⟨•, •⟩ : H^1_an(B, X) ⊗_Z H^1_an(B, X) → H^3_an(B, O^×_B), ⟨α, ξ⟩ := d_α(ξ).

The proof of Lemma 2.13 shows that for every α ∈ Ш_an(X) we have a well defined homomorphism T_α : α^⊥ → Br′_an(X_α), where α^⊥ ⊂ Ш_an(X) is the orthogonal complement of α with respect to ⟨•, •⟩.

Definition 2.14 Two genus one fibrations α, β ∈ Ш_an(X) will be called complementary if ⟨α, β⟩ = 0. We will call α and β m-compatible if one of them is m-divisible and the other one is m-torsion.

Note that, using the pairing ⟨•, •⟩, Lemma 2.13 follows from the obvious observation that every m-compatible pair α, β is complementary. For future reference we spell out the special case when α = 0:

Corollary 2.15 Assume that X is smooth, the fibers of π are integral, and Br′_an(B) = 0. Then we have an isomorphism H^1_an(B, Pic(X/B)) ≅ Br′_an(X), and we have an exact sequence of abelian groups

0 → Ш_an(X) → Br′_an(X) → H^1(B, Z).

Proof. Since σ : B → X is a section of π, the line bundle O_X(σ) has relative degree one, and so the degree map H^0(B, Pic(X/B)) → H^0(B, Z_B) is surjective; hence the coboundary map H^0(B, Z_B) → H^1_an(B, X_an) vanishes. Combined with the fact that ind(0) = 1, this gives the short exact sequence of groups above. The corollary is proven. ✷

Our pairing ⟨•, •⟩ can be explicitly described as follows. Every element α ∈ Ш_an(X) = H^1_an(B, X) has two different incarnations:

• α can be interpreted as a group extension of Z_B by X. Concretely, this is just the sheaf of groups Pic(X_α/B) as it fits in the extension (2.8), viewed as an element e(α) ∈ Ext^1_{Z_B}(Z_B, X).

• α can be interpreted as an extension of X by O^×_B[1]. Concretely, this is the amplitude one object α_X in the derived category of abelian sheaves on B which is the pullback, under the inclusion X ⊂ Pic(X_α/B), of the extension class of the truncation τ_{≤1} Rπ_{α*} O^×_{X_α}. Alternatively, α_X can be thought of as a sheaf of commutative group stacks on B which is just the sheaf of all maps from B to the O^×-gerbe on X whose characteristic class is T_0(α). Note that this gerbe is well defined in view of Corollary 2.15. We will write g(α) ∈ Ext^1(X, O^×_B[1]) for the extension class of α_X. For more on the relevance of commutative group stacks see Section A.1.

With this notation it is now clear that ⟨α, β⟩ is just the Yoneda product g(β) ∘ e(α).
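As a sanity check on degrees (our bookkeeping, using only the definitions above), the Yoneda product lands exactly where the pairing should:

\[
e(\alpha) \in \mathrm{Ext}^1_B(\mathbb{Z}_B, X), \quad
g(\beta) \in \mathrm{Ext}^1_B(X, \mathcal{O}^\times_B[1]) = \mathrm{Ext}^2_B(X, \mathcal{O}^\times_B)
\;\Longrightarrow\;
g(\beta)\circ e(\alpha) \in \mathrm{Ext}^3_B(\mathbb{Z}_B, \mathcal{O}^\times_B) = H^3_{an}(B, \mathcal{O}^\times_B).
\]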
Lemma 2.16 The bilinear pairing ⟨•, •⟩ : H^1_an(B, X) ⊗_Z H^1_an(B, X) → H^3_an(B, O^×_B) is skew-symmetric.

Proof. The Poincaré sheaf P → X ×_B X satisfies the biextension property, and so can be interpreted functorially (see [SGA7-I, Exposé VII, Corollary 3.6.5]) as an object L(P) ∈ ob D^b(Z_B-mod) in the derived category of abelian sheaves on B which is an extension of X ⊗^L X by O^×_B[1]. In other words, L(P) fits in a distinguished triangle

O^×_B[1] → L(P) → X ⊗^L X

of complexes of abelian sheaves. Let p ∈ Ext^1(X ⊗^L X, O^×_B[1]) be the corresponding extension class. From the definition of the homomorphisms one can easily check that g(α) can be identified with the composition p ∘ (e(α) ⊗ id_X). Indeed, observe that both g(α) and p ∘ (e(α) ⊗ id_X) can naturally be interpreted as amplitude one objects in the derived category of abelian sheaves on B. Since any amplitude one object in D^b(Z_B-mod) can be viewed as a stack over B, it suffices to show the equivalence of the categories fibered in groupoids corresponding to g(α) and p ∘ (e(α) ⊗ id_X) respectively. Let, as before, α_X → B denote the fibered category corresponding to g(α). Since by construction α_X comes from the push-forward Rπ_{α*} O^×_{X_α}, we can identify explicitly the groupoid of sections of α_X over an open set U in B as the groupoid of all line bundles L on (X_α ×_B X)|_U having the property that for any point b ∈ U and any x ∈ X_b the restriction L|_{(X_α)_b × {x}} has degree zero. Finally, using the description of the complex L(P) in terms of fibered categories given in [SGA7-I, Exposé VII], we see immediately that this groupoid is precisely the groupoid of sections over U of the fibered category corresponding to p ∘ (e(α) ⊗ id_X).

Now, taking into account that g(α) = p ∘ (e(α) ⊗ id_X), we see that for any two elements α, β ∈ H^1_an(B, X) the product ⟨α, β⟩ ∈ H^3_an(B, O^×_B) can be rewritten as the Yoneda product p ∘ (β ⋒ α), where β ⋒ α is the external cup product of α and β. To understand the symmetry properties of ⟨•, •⟩ it only remains to notice that

sw(β ⋒ α) = (−1)^{|α|·|β|} α ⋒ β,

where sw : X ⊗^L X → X ⊗^L X is the involution switching the two factors. However, recall that P is a normalized Poincaré bundle, and so can be explicitly described as the rank one divisorial sheaf

P = O_{X ×_B X}(Δ − σ ×_B X − X ×_B σ) ⊗ ϖ^* N_{σ/X},

where ϖ : X ×_B X → B is the natural projection and N_{σ/X} is the normal bundle to the section σ ⊂ X. In particular sw^*(P) = P, and so P → X ×_B X is a symmetric biextension. This shows that p ∘ sw = p. Combined with the fact that sw(β ⋒ α) = (−1)^{|α|·|β|} α ⋒ β = −α ⋒ β, we conclude that ⟨β, α⟩ = −⟨α, β⟩. The lemma is proven. ✷

An immediate corollary of the skew-symmetry of ⟨•, •⟩ is that for any complementary pair α, β both T_α(β) ∈ Br′_an(X_α) and T_β(α) ∈ Br′_an(X_β) are well defined. In the case of surfaces we get the following:

Corollary 2.17 Suppose that X is a smooth surface and that π is non-isotrivial with all fibers integral. Then Ш_an(X) is infinitely divisible, and so any α ∈ Ш_an(X) is m-compatible with all elements in Ш_an(X)[m].

Proof.
To show that X(X) is infinitely divisible, note that since B is a curve we can apply Corollary 2.15 to conclude that the map T_0 fits in a short exact sequence

0 → X_an(X) → Br′_an(X) → H^1(B, Z),

where the last map is the composition of the identification Br′_an(X) ≅ H^1_an(B, Pic(X/B)) coming from the Leray spectral sequence and the map H^1_an(B, Pic(X/B)) → H^1(B, Z) corresponding to the degree morphism deg : Pic(X/B) → Z_B. Since by assumption π has only integral fibers, we have natural identifications Pic(X/B) = R^1π_*O^×_X and Z_B = R^2π_*Z_X, under which the degree map deg : Pic(X/B) → Z_B becomes the coboundary homomorphism δ : R^1π_*O^×_X → R^2π_*Z_X in the long exact sequence of higher direct images associated to the exponential sequence and the map π : X → B. In particular, this implies that the map Br′_an(X) → H^1(B, Z) fits in a commutative diagram (the displayed diagram did not survive extraction). Here the maps θ and η between the third and second rows come from the Leray spectral sequences for the map π : X → B and the sheaves O^×_X and Z_X, which give the required vanishings: the relevant H^2 term is zero since B is one dimensional, and H^2(B, R^1π_*Z_X) = 0 by the irreducibility of the monodromy. This implies that X_an(X) = ker[Br′_an(X) → H^1(B, Z)], and so X(X) is divisible. The corollary is proven. ✷

Definition 2.18 For any complementary pair α, β ∈ X_an(X) we denote the O^×-gerbe on X_β classified by T_β(α) by α X β.

Conjecture 2.19 For any complementary pair α, β ∈ X_an(X), there exists an equivalence of the bounded derived categories of sheaves of pure weights ±1 on α X β and β X α respectively.

In section 3.4 we will prove this conjecture in any dimension under the additional assumptions that π is smooth and that α and β are m-compatible. In section 4 we will prove it unconditionally when X is a surface.

Smooth genus one fibrations

In this section we will consider smooth genus one fibrations over smooth bases of arbitrary dimension and O^×-gerbes over them.

O^×-gerbes

In this section we work with a fixed smooth elliptic fibration π : X → B and two genus one fibrations X_α, X_β corresponding to two m-compatible elements α, β ∈ X_an(X). Recall that m-compatibility means that one of the elements, say β, is actually algebraic, i.e. β is a torsion element of some order m in X_an(X), while α is an m-divisible element. Choose an element ϕ ∈ X_an(X) such that mϕ = α. We will use this data to construct presentations for gerbes β E α over X_α and α L β over X_β. Different choices of the root ϕ give rise to different but Morita equivalent presentations of the same gerbes.

The lifting presentation

Recall from Section 2 that a gerbe presentation over a variety X is a diagram (lost in extraction). We define a gerbe α L β on X_β via the lifting presentation (3.1), whose displayed diagram was likewise lost: its atlas is α LU β = X_ϕ ×_B X_β and its relations are given by the total space of the line bundle P_{1−2, m·3} = p^*_{1−2, m·3}P, the pullback of the Poincare bundle on X ×_B X. As usual we denote the natural projection X ×_B X → B by ϖ. The required biextension property for P_{1−2, m·3} follows immediately from the see-saw principle.
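The displayed definition of P_{1−2, m·3} survives only in its subscript. Read literally (an assumption, but one consistent with the restriction P_{1−2, m·3}|_{Y^o} = p^*_{1−2, m·3}P recorded in Section 4.3 and with the "multiplication by m" map q_β : X_β → X used throughout), the subscript encodes the map

\[
p_{1-2,\ m\cdot 3}\colon X_{\varphi}\times_B X_{\varphi}\times_B X_{\beta}\ \longrightarrow\ X\times_B X,
\qquad
(x,y,z)\ \longmapsto\ \big(x-y,\ q_{\beta}(z)\big),
\qquad
P_{1-2,\ m\cdot 3} := p_{1-2,\ m\cdot 3}^{*}\,P ,
\]

where the difference x − y makes sense because X_ϕ is a torsor under X along the fibers over B.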
For future reference we note that under the obvious identification the lifting presentation (3.1) can be rewritten as The extension presentation Similarly, we define a gerbe β E α on X α via the extension presentation: T T T T T T T T T T T T T T T T T T T T T T Here Φ β could be taken as the line bundle d is the difference map and M is any line bundle on X[m] whose punctured total space gives a group extension: We will explain below how to construct a relative line bundle Σ β ∈ Γ(B, Pic m (X β /B)) and a global line bundle M β → X[m], determined by the condition that its punctured total space is the theta group G β : The simplest choice would be to take M := M β . However, we will see later that in order to achieve duality with the lifting gerbe, the correct choice is to take M := M β ⊗ M −1 0 , where M 0 is determined by the condition that its punctured total space is the theta group corresponding to the similarly defined relative line bundle Σ 0 ∈ Γ(B, Pic m (X/B)). To define Σ β , consider first two genus one curves E ′ and E ′′ with the same Jacobian E. Let q : E ′ → E ′′ be a map which induces the multiplication by m map E → E. For any two points a, b ∈ E ′ such that q(a) = q(b), we have that This determines a map E ′′ → Pic m (E ′ ). Applying this to our map q β : X β → X we get a well defined morphism X → Pic m (X β /B). The image of σ ⊂ X is our relative line bundle pair (x, λ), where x is a local section of X[m] → B and λ : t * x Σ → Σ is an isomorphism. Applying these constructions to our relative line bundles Σ β and Σ 0 produces the desired line bundles M β and M 0 and theta groups G β and G 0 . For future reference we note that under the obvious isomorphism d × p 2 : X ϕ × Xα X ϕ →X[m] × B X ϕ the extension presentation (3.3) can be rewritten in the equivalent form: where a ϕ : X[m] × B X ϕ → X ϕ denotes the action and p 2 denotes the second projection. Coboundary realizations Although we constructed α L β and β E α directly, it is worth noting that the lifting and extension presentations are both special cases of the coboundary construction described in Sections 2.1.1 and 2.1.2. Recall that the input for the coboundary construction for a gerbe presentation on a variety Y consists of a short exact sequence of group schemes over Y The lifting presentation is obtained from the short exact sequence 1 → G m → tot(p * 1,m·2 P × ) → π * β (X) → 1 and the π * β (X) = X × B X β -torsor X ϕ × B X β . Note that the group structure on tot(p * 1,m·2 P × ) in the above sequence comes from the biextension property of the Poincare bundle (for the group law on π * β (X)). The extension presentation is obtained from the short exact sequence In other words, we can write α L β and β E α as quotient gerbes: The class of the lifting gerbe In this section we continue to assume that π : X → B is smooth, β ∈ X an (X) is of finite order m and α ∈ X an (X) is m-divisible. Theorem 3.1 The class [ α L β ] of the lifting gerbe equals T β (α). In other words α L β is a model for α X β . Proof. The proof is in two steps. In step (1) we show that a cocycle representing the class T β (α) which defines α X β becomes a coboundary δ(c) when pulled back to L := α LU β . In step (2) we check that the line bundle defined by c on L × X β L coincides with the Poincare bundle α LR β . We need to show the isomorphism of two gerbes on the smooth space X β . We will be working with the Cartesian product Recall from section 2.1 that the class of α X β is T β (α) ∈ H 2 (X β , O × ). 
By the Leray spectral sequence for π β : X β → B, this group equals H 1 (B, Pic(X β /B)). Explicitly, T β (α) is the class of the Pic(X β /B)-torsor induced from the Pic 0 (X β /B) = X-torsor X β . Similarly, the Leray spectral sequence for λ B : L → B gives an injection The λ β -pullback of T β (α) is in the image of (3.5) and is the Pic(L/B)-torsor induced from X β via λ * β : Pic 0 (X β /B) → Pic(L/B). For step (1), consider the short exact sequence of sheaves of groups on B: Here ev sends a line bundle on L to the family of its restrictions on pt × B X ϕ , and it is surjective because π β : X β → B is smooth. We are claiming that λ * β (T β (α)) = 0, so we need to show that T β (α) is in the image of the coboundary In H 0 (B, H om B (X β , Pic(X ϕ /B))) we have a natural element depending on the choice of a trivialization Σ β of X mβ . We will see that ∂(q) = T β (α). Let U = {U i } i∈I be an analytic open cover of B for which we have trivializations s i : U i → X ϕ of the X-torsor X ϕ . In order to calculate ∂(q) we first lift q to an element c ∈ C 0 (U, Pic(L/B)). This lift is given in terms of the map where P is the standard Poincare bundle on X × B X. TheČech differential δ(c) ∈ Z 1 (U, Pic(L/B)) is given by {c i ⊗ c −1 j } i,j∈I . It comes from Z 1 (U, Pic(X β /B)), and is represented there by {O π −1 β (U ij ) (m(s j − s i ))} i,j∈I . On the other hand, ms i can be interpreted as a section of X α = X mϕ over U i , so this cocycle represents our T β (α). For step (2), consider the cochain ¿From the discussion in section 3.1 we know that this cochain is in fact a global section L of Pic(L × X β L/B). We need to show that L = P 1−2, m·3 . As usual we identify L × X β L with X ϕ × B X ϕ × B X β , so p 1 and p 2 become p 13 and p 23 . It suffices to show the equality L |λ −1 for each open set U i . This follows by the theorem of the cube from the identifications: This finishes the proof of the theorem. ✷ The class of the extension gerbe In this section we again assume that π : X → B is a smooth elliptic fibration, that Br ′ an (B) = 0 and that α, β ∈ X an (X) are m-compatible with β of order m. Theorem 3.2 The class [ β E α ] of the extension gerbe equals T α (β). In other words β E α is a model for β X α . Proof. Recall that the assumption Br ′ an (B) = 0 together with the Leray spectral sequence for π α : X α → B give us an injection In terms of this inclusion, T α (β) can be identified with the isomorphism class of the Pic(X α /B)-torsor associated to the Pic 0 (X α /B) = X torsor X β . In order to show that [ β E α ] = T α (β) we must first check that T α (β) pulls back to the trivial element in Recall from Section 3.1.2 that the atlas β EU α for the extension presentation is defined by fixing an element ϕ ∈ X an (X) such that m · ϕ = α and then taking β EU α := X ϕ . With this definition the structure morphism β EU α → X α is identified with the map q := q m ϕ : X ϕ → X α of multiplication by m along the fibers. The pullback via q of relative line bundles defined on the fibers of π α : X α → B gives rise to a morphism of sheaves of groups Q : Pic(X α /B) −→ Pic(X ϕ /B). Since q corresponds to multiplication by m, it follows that Q fits in a commutative diagram Also, since π α • q = π ϕ , it follows that the pullback map is compatible with the Leray spectral sequences for π α and π ϕ , and so fits in a commutative diagram Thus we can identify q * (T α (β)) with the class of the Pic(X ϕ /B)-torsor which is induced from the Pic 0 (X ϕ /B) = X -torsor h 1 (mult m )(X β ) = X m·β . 
However by assumption m · β = 0 and so X m·β is trivial as a X -torsor. Therefore q * (T α (β)) = 0 as promised. To complete the proof of the theorem we need to realize the cocycle q * (T α (β)) as a coboundary: for some ψ ∈ C 1 an (X ϕ , O × ), and then check that the line bundle defined by ψ on X ϕ × Xα X ϕ is isomorphic to Φ β . In terms of the inclusion H 2 an (X ϕ , O × ) ⊂ H 1 an (B, Pic(X ϕ /B)) this amounts to writing the class q * (T α (β)) ∈ H 1 an (B, Pic(X ϕ /B)) as the coboundary of someČech cochain ψ ∈ C 0 an (b, Pic(X ϕ /B)) and then showing that the global section of Pic(X ϕ × Xα X ϕ /B) determined by ψ coincides with the global section given by Φ β . To carry this out we will need to first choose a cocycle representating of T β (α) ∈ H 1 an (B, Pic(X α /B)) or equivalently a cocycle representating for the X -torsor X β . Let U = {U i } be an analytic open covering of B which trivializes X β as an X torsor. Choose trivializing sections s i ∈ Γ(U i , X β ) over each U i . Then T α (β) ∈ H 1 an (B, Pic(X α /B)) is represented by theČech cocycle Here O X β (s j − s i ) is viewed as a line bundle of degree zero along the fibers of π α : X α|U ij → U ij via the canonical identification Pic 0 (X β /B) = X = Pic 0 (X α /B). In particular q * (T α (β)) ∈ H 1 an (B, Pic(X ϕ /B)) is represented by the cocycle In order to write this cocycle as a coboundary we will have to trivialize the X -torsor X m·β . Recall that in the construction of the line bundle Φ β we used a particular trivialization of X m·β , namely the relative line bundle Σ β ∈ Γ an (B, Pic m (X β /B)). Using Σ β we can construct a cochain namely, the section locally given by p * On the other hand the section corresponding to Φ β → X ϕ × Xα X ϕ can be described as follows. Recall from section 3.1.2 that stand for the difference maps. By construction p * 1 ψ i ⊗ p * 2 ψ −1 i lives naturally in Γ(U i , Pic 0 (X β × B X β /B)) (which we have identified with Γ(U i , Pic 0 (X ϕ × B X ϕ /B))). In view of this, it will be convenient if we rewrite all objects as line bundles on X β × X X β . To that end, choose a local section s : U i → X β and let t −s be the induced isomorphism of translation by s along the fibers. With this notation we have a commutative diagram Therefore, in order to compare p * for every section s : U i → X β . Equivalently, it suffices to show that On the other hand we have a commutative diagram Here O X (s − s i ) ∈ Γ(U i , Pic 0 (X/B)) denotes the relative line bundle on X corresponding to O X β (s − s i ) ∈ Γ(U i , Pic 0 (X β /B)) under the canonical identification Pic 0 (X/B)) = X = Pic 0 (X β /B). The formula (3.7) implies that t t t t t t t t and so (3.6) holds. The theorem is proven. ✷ Duality between the lifting and extension presentations We are now ready to prove TheoremB for a smooth elliptic fibration over a smooth space B satisfying Br ′ an (B) = 0. Let α, β, ϕ ∈ X an (X) satisfy mβ = 0, mϕ = α (in particular α and β are m-compatible). We want to compare the derived categories of coherent sheaves on α X β and β X α . The gerby Fourier-Mukai transform Let D b 1 ( α X β ) and D b −1 ( β X α ) denote the derived categories of coherent sheaves of weight one and minus one on the gerbes α X β and β X α respectively. Alternatively, as explained at the end of Section 2.1.2, we can view D b 1 ( α X β ) and D b −1 ( β X α ) as derived categories of T β (α)-twisted sheaves on X β and T α (−β) twisted sheaves on X α respectively. We want to construct a Fourier-Mukai functor which is an equivalence. 
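The displayed functor was lost in extraction; from the surrounding sentence it can only be the functor

\[
FM\colon D^{b}_{1}({}_{\alpha}X_{\beta})\ \longrightarrow\ D^{b}_{-1}({}_{\beta}X_{\alpha}),
\]

equivalently, in the twisted-sheaf language just recalled, a functor from T_β(α)-twisted sheaves on X_β to T_α(−β)-twisted sheaves on X_α.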
To achieve this we will work with the models α L β and β E α for α X β and β X α respectively. The idea is to use the explicit presentations for these models of the gerbes and construct the functor F M in terms of data on the atlases. for the lifting and extension presentations. The (derived) pullback by γ L gives a natural functor , which sends complexes of sheaves on α L β to objects in D b ( α LU β ) preserved by the relations. Explicitly, for a L ∈ D b 1 ( α L β ), the pullback γ * L L is given by a pair (L, f ) where: • L is a bounded complex of sheaves on the atlas α LU β = X ϕ × B X β . Here p ij is the projection of X ϕ × B X ϕ × B X β onto the product of the i-th and j-th components. Under the gerby Fourier-Mukai transform, L should go to an object Q ∈ D b −1 ( β E α ). To produce this object we will perform an integral transform from the derived category of the atlas α LU β to the derived category of the atlas β EU α . Again, we would like to use the fact that the pullback by γ E gives a functor which sends complexes on β E α to objects in D b ( β EU α ) preserved by the relations. In principle this is all we can say about the images of γ * E since even for schemes the derived categories of coherent sheaves do not necessarily glue, see e.g. [Har66]. However in the case of β E α we can be much more precise. It is known [Pol96, Theorem A] that given a scheme S and a finite flat morphism p : U → S, the derived category of coherent sheaves on S is equivalent to the category of pairs (F, φ), where F ∈ D b (U) and φ : p * 1 F →p * 2 F is an isomorphism in , ψ), where G ∈ D b (U) and . In particular, since by construction the morphism β EU α = X ϕ → X α is finite and flat, we conclude that γ * E identifies D b −1 ( β E α ) with the category of pairs (Q, g) where: • Q is a bounded complex of sheaves on the atlas β EU α = X ϕ . Here p i is the projection of X[m] × B X ϕ onto the i-th component. Remark 3.3 The reader may wish to focus on the case when L is a line bundle on α L β of fiber degree zero, i.e. L → X ϕ × B X β is a line bundle with deg(L |{x}× B X β ) = 0 and the existence of f is equivalent to having isomorphisms L |Xϕ× B {y} ∼ = O(my)⊗Σ −1 β for all y ∈ X β . In this case the object Q should be a spectral datum on the gerbe β E α whose support is of degree one over B, i.e. Q is a torsion sheaf on X ϕ supported on (q n ϕ ) −1 (s) for some section s ⊂ X α . We will construct the functor F M by first constructing a functor between the derived categories on the atlases and then checking that this functor preserves the relations. On the level of atlases, consider the functor We now have the following: Proposition 3.4 The functor p 1 * : preserves the relations defining α L β and β E α and so descends to a well defined exact functor Proof. Let γ * L L = (L, f ) be as above. Let Q := p 1 * L. We need to construct a quasiisomorphism of complexes on X[m] × B X ϕ which depends functorially on f . We start by noting that there is a natural commutative diagram Since the two bottom squares are fiber products we have the base change formulas: pr * 1 p 1 * L = p 12 * pr * 13 L pr * 2 p 1 * L = p 12 * pr * 23 L. (3.9) Also, using the isomorphism a ϕ × id : X[m] × B X ϕ → X ϕ × Xα X ϕ we reduce the problem of finding the map (3.8) to the equivalent problem of constructing a map (3.10) pr * 1 p 1 * L → pr * 2 p 1 * L ⊗ Φ β . 
Therefore, in order to reconstruct from f a map (3.11) (equivalently g), it suffices to exhibit a canonical isomorphism (3.12) As a first step in establishing (3.12) we note that both sides are pullbacks of sheaves on X[m]. On the right hand side, Φ β was defined as d * (M −1 β ⊗ M 0 ) for the difference map d : X ϕ × Xα X ϕ → X[m]. On the left hand side, it suffices (in view of the see-saw principle) to argue that, for a point ξ ∈ X[m], the restriction P 1−2, m·3|{ξ}× B Xϕ× B X β is trivial. But by the definition of P 1−2, m·3 (see Section 3.1.1), this restriction can be identified with ξ ⊗m . Since ξ has order m, we are done. To conclude the construction of the map (3.12) and the proof of the proposition, we need to show that the direct image R 0 p 1−2 * (P Thus, the existence of the isomorphism (3.12) is equivalent to the existence of an isomorphism is the projection and p 1,m·2 : Recall from Section 3.1.2 that by definition we have Here a β : X[m] × B X β is the action, p 2 : X[m] × B X β → X β is the projection on the second factor and Σ β is a line bundle on X β of fiber degree m, which corresponds to the 'multiplication by m' map q β : X β → X. Look at the embedding X[m] × B X β ⊂ X × B X β . The projections p 1 , p 2 and the maps p 1,m·2 and a β extend to the natural projections X × B X β → X and X × B X β → X β and maps X × B X β → X × B X and X × B X β → X β , which we will denote by the same letters. With this notation we have Lemma 3.5 Let P → X × B X denote the standard Poincare bundle. Then we have a natural isomorphism . Proof of the lemma. We will use the see-saw principle. Let ξ ∈ X and let b = π(ξ) ∈ B. Then by viewing ξ as a line bundle of degree zero on (X β ) b and using the fact that Σ β|X b is of degree m we compute Thus for every ξ we have and so by the see-saw principle D β : To compute the bundle D β we consider a point x ∈ X β . Let b = π β (x). Restricting to Next, by the defining relationship between q β and Σ β we have that q β (x) is the line bundle of degree zero on X corresponding to . Also, by the definition of a translation we have that t * x Σ β is the tensor product of In other words, we have for all x ∈ X β . This implies that up to a twist by a pullback of a line bundle on B we have D β ∼ = O X (mσ). Finally, to fix the choice of this line bundle on B we look at the restriction Hence the line bundle on B is trivial and the lemma is proven. However applying the same reasoning we used in the proof of Lemma 3.5 to the projections p 1 , p 2 : X × B X → X and the obvious maps p 1,m·2 : X × B X → X × B X and a 0 : X ×X → X, we see that On the other hand, from the definition of the Poincare bundle we have that p * 1,m·2 P |X[m]× B X ∼ = O and so M 0 ∼ = O X (mσ) |X[m] . This finishes the proof of the existence of (3.12). To complete the proof of the proposition, it only remains to note that since (Q, g) was constructed from (L, f ) by means of the pushforward via X ϕ × B X β → X β and the fixed isomorphism (3.12), it follows that g will satisfy the cocycle condition whenever f does. The proposition is proven. ✷ Categorical yoga for equivalences We have constructed a functor F M : We are going to prove that it is an equivalence. In order to do this, it is convenient to recall some general criteria, due to Bondal-Orlov and Bridgeland, for equivalences of triangulated categories. Throughout this subsection we let F : A → B be an exact functor between triangulated categories. 
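The displayed conditions in the two criteria recalled next did not survive extraction. For the reader's convenience, here is the standard form of the orthogonality condition of [BO95], quoted from the general literature rather than reconstructed from this text: F is orthogonal on a spanning class Ω (defined just below) when

\[
\operatorname{Hom}^{i}_{\mathcal A}(\omega_1,\omega_2)\ \xrightarrow{\ F\ }\ \operatorname{Hom}^{i}_{\mathcal B}(F\omega_1,F\omega_2)
\quad\text{is an isomorphism for all }\omega_1,\omega_2\in\Omega,\ i\in\mathbb Z .
\]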
A class Ω of objects in A is called a spanning class if for every a ∈ ob A , the left orthogonality condition Hom i A (a, ω) = 0, for all i ∈ Z, ω ∈ Ω implies that a = 0, and similarly on the right. Recall the following Theorem [BO95] Assume that Ω is a spanning class for A and that the functor F : A → B has left and right adjoints. Then F is fully faithful if and only if it is orthogonal: Assume now that our triangulated category A is linear. A functor S A : A → A is called [BK90] a Serre functor for A if it is an exact equivalence and induces bifunctorial isomorphisms satisfying compatibility with compositions. The basic example of a Serre functor is S : , where X is a smooth n-dimensional projective variety and ω X is the canonical bundle of X. If a Serre functor exists, it is unique up to an isomorphism of functors. We are now ready to state the main equivalence criterion we will be using: Theorem [Bri99,BKR01] Assume that A and B have Serre functors S A , S B , that A = 0, B is indecomposable, and that F : A → B has a left adjoint. Then F is an equivalence if it is fully faithful and it intertwines the Serre functors: F •S A (ω) = S B •F (ω) on all elements ω ∈ Ω in the spanning class. We want to show that our gerby Fourier-Mukai functor F M : is an equivalence. The results above suggest that we should first exhibit Serre functors for D b 1 ( α X β ) and D b 1 ( β X α ), and find a suitable spanning class for D b 1 ( α X β ). These results, which do not involve F M , are carried out in section 3.4.3 below. In section 3.4.4 we then complete the argument by showing that our F M preserves orthogonality and intertwines the Serre functors. Serre functors and spanning classes for O × X -gerbes Let X be an n-dimensional smooth projective variety. Let c : α X → X be an O × X -gerbe corresponding to an element α ∈ H 2 (X, O × X ). Claim 3.6 The functor is a Serre functor. Proof. For any a, b ∈ D b 1 ( α X), we need a natural isomorphism a, b have weight 1, RHom αX (a, b) has weight 0, so there exists a unique H (a, b) H (a, b). H (a, b)). Similarly, H (a, b) × H (b, a). But since It follows that is an equivalence of categories, this follows immediately from the non-degenerate pairing on RHom α X (a, b) × RHom α X (b, a) given by the trace. ✷ Since our functor F = F M was constructed as a push-forward on the atlases, it has an obvious left adjoint G corresponding to the pullback functor on the atlases. Therefore F M also has a right adjoint, namely . Fix a point p ∈ X β . Since the restriction of α X β to p is the trivial gerbe on p for any α, the torsion sheaf O p can be considered as a weight one sheaf on α X β for any α. For our spanning class Ω we take the structure sheaves of points on the space X β , viewed as sheaves of weight one on the stack α X β . Claim 3.7 Let c : α X → X be an O × X -gerbe on a smooth projective X. Then the class Ω consisting of structure sheaves O p of points on X, viewed as sheaves of weight one on α X, is a spanning class for D b 1 ( α X). Proof. In order to show that the class Ω is a spanning class, we need to show that for every a ∈ ob D b 1 ( α X), the left orthogonality condition Hom i D b 1 (αX) (a, O p ) = 0, for all i ∈ Z, p ∈ X implies that a = 0. We also need the analogous result on the right, but this follows using the Serre functor. We can also reduce to the case that a is represented by a single sheaf on α X, i.e. a is an α-twisted sheaf on X. 
Now such an a is specified in terms of its sections on an appropriate etale atlas U plus some α-twisted gluing. In order to conclude that a = 0, it suffices to show that every p ∈ X has a neighborhood U ′ on which a = 0. But by restricting to a small enough neighborhood U ′ of p in U, we can get the class α to vanish. The restriction of a to U ′ and the O p for p ∈ U ′ become ordinary sheaves. The group Hom i D b 1 (αX) (a, O p ) can be computed on either U or U ′ . Therefore, the orthogonality condition forces a to vanish on U ′ , which is what we need. ✷ Orthogonality and intertwining Now that we have a Serre functor and a spanning class, we are ready to apply the general results of subsection 3.4.2 to our gerby Fourier-Mukai functor F M . Claim 3.8 The gerby Fourier-Mukai functor F M : Proof. Recall (3.4) that our functor F M descends from p 1 * : D b (X ϕ × B X β ) → D b (X ϕ ). Let b 1 , b 2 ∈ B be the images of x 1 , x 2 ∈ X β . If b 1 = b 2 then the supports are disjoint, so the Hom i on both sides clearly vanish. Assume then that b 1 = b 2 = b. In this case, the structure sheaf O αLβ |x i is supported on the fiber C x i = X ϕ × B (x i ), and both supports map We note that both sides of the claim vanish unless x 1 , x 2 differ by a point of X[m], in which case they define isomorphic sheaves. The spanning class Ω may therefore be taken to be parametrized by X = X β /X[m] rather than by X β . Claim 3.9 The gerby Fourier-Mukai functor F M : Proof. Follows immediately from the fact that F M O x is supported on C b and that the canonical sheaf of X β restricts to the trivial line bundle on C b . ✷ Surfaces In case X is a surface, we can refine the previous results to include the singular fibers. On a surface, any pair of classes α, β ∈ X an (X) are complementary in the sense of subsection 2.3, by Corollary 2.17, so the gerbes α X β , β X α are always well-defined. When mβ = 0 we will construct the lifting presentation of α X β and the extension presentation of β X α . Then we will exhibit a Fourier-Mukai transform F M between these presentations. Finally, we will show that F M is an equivalence of categories by verifying the criterion of Bondal-Orlov and Bridgeland. We assume throughout that X is a smooth surface, B is a smooth curve, and the elliptic fibration π : X → B has at most singular fibers of type I 1 . Since every such elliptic surface is uniquely determined by its monodromy representation it is clear that we can always extend X to a smooth compact relatively minimal elliptic surface whose base curve is a suitable compactification of B. Furthermore, by Kodaira's classification of compact complex surfaces it follows that every smooth compact elliptic surface is Kähler (in fact algebraic). Therefore X must be Kähler as well. The lifting presentation Our first goal is to construct the lifting presentation of α X β , in a way that restricts to the previously constructed presentation on the non-singular fibers. We start with the second projection p 2 : X ϕ × B X β → X β . Unfortunately, this is not an atlas for the gerbe α X β . The problem can be traced back to the fact that the threefold X ϕ × B X β is singular. So let be a small resolution of X ϕ × B X β . Now Y is smooth and equipped with flat morphisms ν 1 : Y → X ϕ and ν 2 : Y → X β which lift p 1 and p 2 respectively. There is an induced map ν * 2 : Br ′ an (X β ) → Br ′ an (Y ). We claim that Y is an atlas for α X β , i.e. that α X β has a presentation: We call (4.1) the Lifting Presentation of α X β . 
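The display (4.1) itself was lost in extraction. Assembling it from the surrounding data (the atlas ν₂ : Y → X_β, and the relations given by the total space of P_{1−2, m·3} on Y ×_{X_β} Y, as recorded in Section 4.3 below), it should be the groupoid presentation

\[
\operatorname{tot}\!\big(P^{\times}_{1-2,\ m\cdot 3}\big)\ \rightrightarrows\ Y\ \xrightarrow{\ \nu_{2}\ }\ X_{\beta},
\]

with Y → X_ϕ ×_B X_β the small resolution fixed above.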
By Remark 2.4 (iv), the fact that (4.1) is indeed a presentation follows from: lead via the exponential sequence to the diagram: We have ∂ν * 2 (T β (α)) = ν * 2 ∂(T β (α)) = ν * 2 0 = 0, so ν * 2 (T β (α)) = exp Y (a) for some a ∈ H 2 an (Y, O Y )/H 2 (Y, Z). We know from 3.1 that Y o is an atlas for the restriction of T β (α) to X o , so is injective. Nevertheless, the analogous map: may fail to be injective. In our situation, all we were able to prove, and fortunately all that was needed, was that H 2 As an example, let C ⊂ P 3 be a smooth curve of genus ≥ 2, let A be the blowup of P 3 along C, and let A o := P 3 \ C. Then by the exponential sequence, The extension presentation Next, we want to construct the extension presentation of β X α , in a way that restricts to the previously constructed extension presentation on the complement of the singular fibers. Fix ϕ ∈ X an (X) satisfying m · ϕ = α. Let X o α , X o ϕ be the inverse images in X α , X ϕ of B o , the complement of the discriminant in B. In the non-singular case, our atlas was given by the multiplication-by-m map q ϕ : X o ϕ → X o α . Unfortunately, this does not extend to a morphism q ϕ : X ϕ → X α . Instead, we will construct another (singular!) surface X ϕ with a birational morphism X ϕ → X ϕ and a flat morphismq ϕ : X ϕ → X α which restricts to the previous q o ϕ . This data gives a commutative diagram: The exponential sequence shows that the two vertical maps are isomorphisms, as in the proof of Lemma 4.1: this uses the fact that H 2 an ( X ϕ , O Xϕ )/H 2 ( X ϕ , Z) is a birational invariant, and that ker[H 3 ( X ϕ , Z) → H 3 ( X ϕ , R)] = 0, ker[H 3 (X α , Z) → H 3 (X α , R)] = 0, which in turn follows from the observation that the third cohomology of a smooth 4-manifold has no torsion and that X ϕ and X α are Kähler surfaces. Since (q o ϕ ) * kills all classes of order m, it follows that so doesq * ϕ , so X ϕ is indeed an atlas. In order to construct X ϕ we have to resolve the rational map q ϕ : X ϕ X α . For that we can work locally in the complex topology on B near a point p ∈ B \ B o , i.e. we can replace B by a small disc centered at p. Over this disc the group scheme X ♯ [m] has a subgroup scheme I ⊂ X ♯ [m] of cycles invariant under the local monodromy around p. Since by assumption the singular fibers of X are of type I 1 , the group scheme I is isomorphic to B × (Z/m) and consists of all the sections in X[m] over the disc that pass through smooth points of the fiber X p . Translations by such sections give rise to a well defined action of I on X ϕ which fixes the singular point x p of the fiber (X ϕ ) p . Therefore over our disk the rational map q ϕ decomposes as where s denotes the quotient map. The surface X ϕ /I has a unique singularity at the image of the point x p and the map X ϕ /I → B is a flat genus one fibration. A straightforward local computation at the singular point of the I 1 fiber of X ϕ shows that in suitably chosen local coordinates (z, w) near x p the generator of I acts as (z, w) → (ζz, ζw), where ζ is a primitive m-th root of unity. This implies that the singularity of X ϕ /I is of type A m−1 . The minimal resolution X ϕ /I → X ϕ /I of X ϕ /I is a flat genus one fibration over B with a single I m fiber over p. On the other hand, over our small disk, X α retracts to the singular fiber (X α ) p whose fundamental group is Z. Therefore there is a unique m-sheeted etale cover X α → X α . 
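The local invariant-theory claim made above can be spelled out. For the I-action to preserve the local fibration coordinate t = zw near the node, the generator should act as (z, w) ↦ (ζz, ζ^{−1}w); the inverse on the second coordinate appears to have been lost in extraction. The ring of invariants is then

\[
\mathbb{C}[z,w]^{\mathbb{Z}/m}\ =\ \mathbb{C}[z^{m},\,w^{m},\,zw]\ \cong\ \mathbb{C}[u,v,t]\big/\big(uv-t^{m}\big),
\]

the standard equation of an A_{m−1} surface singularity, in agreement with the I_m fiber produced by the minimal resolution.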
By construction the covering map commutes with the projectionsπ α : X α → B and π α : X α → B and so the fiber ( X α ) p ofπ α over the point p ∈ B is a Kodaira fiber of type I m . The surfaces X α and X ϕ /I are clearly isomorphic outside of the preimages of p ∈ B and so are birational. Since as genus one fibrations X α and X ϕ /I are both relatively minimal, this implies that X α and X ϕ /I are actually isomorphic due to the uniqueness [Kod63] of the relatively minimal models. Recall next that by the work of Ito and Nakamura [IN99,BKR01] the minimal resolution X ϕ /I of X ϕ /I can be identified with the Hilbert scheme of I-clusters in X ϕ . In the spirit of [BKR01] consider the universal closed subscheme Z ⊂ X ϕ × X ϕ /I with its natural projection to X ϕ and X ϕ /I ∼ = X α . There is a commutative diagram of spaces where the morphism Z → X ϕ is birational and surjective and the morphism Z → X α is finite and flat. In particular the composite map is a finite and flat morphism which extends the multiplication-by-m map q o ϕ : X o ϕ → X o α . It is also helpful to observe that the two intermediate maps appearing in the construction of (4.2) are both Galois covers with Galois group isomorphic to Z/m. The first one is just the quotient of X ϕ × Xϕ/I X α by I and the second is the etale Galois cover X α → X α . Our extension atlas X ϕ is obtained by gluing the local surfaces Z defined over small discs centered at discriminant points p ∈ B \ B o to X o ϕ . We will write ε : X ϕ → X ϕ for the contraction map andq ϕ : X ϕ → X α for the finite flat map gluing each (4.2) to q o ϕ . Note that by construction the surface X ϕ is singular. It has isolated toroidal singularities sitting over the singular points of the singular fibers of X α . In particular X ϕ is a normal analytic surface. Duality for gerby genus one fibered surfaces With the description of the global lifting and extension presentations for gerby genus one fibered surfaces in place, we are now ready to construct the Fourier-Mukai functor between the derived categories of pure weight one and to show that it is an equivalence. The key property of our gerby surfaces, which makes the construction possible is the fact that the gerbes appearing in the picture become trivial when we restrict our attention to a piece of the surface sitting over a sufficiently small open disk in B. We recall our convention that all our direct and inverse images, as well as the tensor product, are taken in the derived category. Thus for any space Z, we will simply write ⊗ for the derived tensor product ⊗ L on D b (Z) and for any map of spaces p : Z → T we will write p * , p * , p ! , p ! , for the corresponding derived functors (whenever these functors make sense on the bounded derived categories). Following the pattern of the proof in section 3.4.1 we will construct a Fourier-Mukai functor −1 ( β X α ) by exhibiting an integral transform between the derived categories on the atlases of the presentations α L β and β E α and then checking that this functor preserves the relations. To avoid cumbersome notation we will write Y := X ϕ × B X β and S := X ϕ for the atlases of the presentations α L β and β E α respectively. With this notation the relations of α L β are given by the total space Here P 1−2, m·3 denotes an appropriate line bundle on Y × X β Y which extends the line bundle p * 1−2,m3 P → X o ϕ × B o X o β discussed in section 3.1.1. Note that we know that such a line bundle exists due to Lemma 4.1. 
Explicitly, the total space tot(P × 1−2,m3 ) is isomorphic to the stacky fiber product Y × α X β Y (this product is a space again by Lemma 4.1). We will write Y o = X o ϕ × B o X o β for the part of Y sitting over B o , and we put P o 1−2, m·3 := P 1−2, m·3|Y o = p * 1−2, m·3 P. Similarly the relations of the presentation β E α are given by the total space Here Φ β denotes an appropriate line bundle which extends the line bundle As before, the existence of the line bundle Φ β is guaranteed by the observation that the class T α (β) of the extension gerbe vanishes (see section 4.2) when we pull it back to S. We will write S o = X o ϕ for the part of S sitting over B o and we put Φ o β := Φ β|S o = d * (M β ⊗ M −1 0 ). Using the above setup we can now identify the category D b to the problem of constructing a functor F : D b (Y ) → D b (S) which maps descent data to descent data. We will define the functor F as an integral transform with respect to a suitable kernel object Π ∈ D b (Y × S). We proceed to construct Π by gluing together certain locally defined coherent sheaves on Y × S. We carry out this gluing in the analytic topology to obtain the general functor F we need. Note that in the algebraic case, the kernel Π still produces the correct functor Π in view of the GAGA principle. First we look at the smooth part of the genus one fibrations X α , X β , X ϕ , etc. As usual, we write B o ⊂ B for the complement of the discriminant of the map π : X → B. Similarly, for any space (or stack) Z → B mapping to B, we put Z o := Z × B B o . The atlases Y o and S o for the gerbes α L β o and β E α o can be described simply as and so over B o we recover the setup analyzed in section 3.4. In this setup the integral transform we need was defined as the pushforward p 1 * : with respect to the projection on the first factor. Equivalently, we can view this functor as the integral transform whose kernel is the sheaf In view of this we set: Let now p ∈ B \ B o . Choose small analytic discs p ∈ U p ⊂ B around each such p so that U p ∩ U q = ∅ for p = q and the genus one fibrations X ϕ and X β both admit analytic sections over each U p . For any space (or stack) Z → B mapping to B we will write Z p for the restriction Z × B U p . Note that and so, in order to extend Π o to a globally defined sheaf on Y × Xϕ S, it suffices to construct coherent analytic sheaves Π p on each Y p × X p ϕ S p and isomorphisms Π o the map (t −sϕ × q m β ) • ν makes sense as a line bundle defined on Y p \ n p . Combined with the observation that Y p is smooth and that n p ⊂ Y p is of codimension two, it follows that this pullback extends to a unique line bundle Q p on all of Y p . We can now use the seesaw principle in the same way we did in the proof of Theorem 3.1 to show that the isomorphism (4.3) exists and satisfies the cocycle condition. Similarly, to construct ψ p we note that the section s β gives rise to a relative line bundle . Under the canonical identification Pic 0 (X p β /U p ) = X p = Pic 0 (X p ϕ /U p ), this relative bundle corresponds to a relative line bundle ρ p of degree zero along the fibers of X p ϕ → U p and hence to a globally defined line bundle ψ p on X p ϕ which restricts to ρ p ∈ Pic 0 (X p ϕ /U p ). We normalize ψ p by choosing a trivialization s * ϕ ψ p ∼ = O U p . 
Now the argument used in Theorem 3.2 implies that on (S p × X p α S p ) |Up\{p} we can find an isomorphism Φ β ∼ = p * 1 ψ p ⊗ p * 2 ( ψ p ) −1 which (after possibly rescaling the trivialization s * ϕ ψ p ∼ = O U p ) will also satisfy the cocycle condition. Now since S is an atlas for the gerbe β E α we conclude that ψ p on (S p × X p α S p ) |Up\{p} extends to a unique line bundle ψ p on S p × X p α S p equipped with an isomorphism (4.4) satisfying the cocycle condition. We now define the coherent sheaf Π p ∈ Coh(Y p × X p ϕ S p ) by setting where p Y : Y × S → Y and p S : Y × S → S denote the natural projections. With this notation we now have In particular the sheaves {Π p } p∈B\B o glue to the sheaf Π o to yield a globally defined analytic coherent sheaf Π on Y × S. Proof. We have to show that Π p is naturally isomorphic to the trivial line bundle on . Write U po for the punctured disc U p \ {p} and for any space or stack Z → B write Z po for the fiber product Z × B U po . Using the isomorphisms t −sϕ and t −s β we can now identify π ϕ : X po ϕ → U po and π β : X po β → U po with the smooth elliptic fibration π : gets identified with the fiber product X po × U po X po . Using the same trivializations to recast P p , Q p and ψ p as sheaves on X po × U po X po we get that P p becomes p * 1,m·2 P, Q p becomes p * m·1,2 P and ψ p becomes O. In particular we get that Π p corresponds to p 1,m·2 P ⊗ p * m·1,2 P −1 and so it suffices to check that This however follows immediately from the universal property of P on X × B X and the seesaw theorem. ✷ The sheaf Π gives rise to a well defined integral transform between the derived categories of analytic coherent sheaves on Y and S respectively. If in addition α ∈ X(X) ⊂ X an (X) is also algebraic, then the atlases Y and S of the lifting and extension gerbes are proper separated algebraic spaces and so we can invoke the GAGA theorem [Art70, Corollary 7.15] to conclude that Π is an algebraic coherent sheaf on Y × S. This implies that in the algebraic context F makes sense as an integral transform between algebraic coherent sheaves. We now have the following Proposition 4.4 The functor F : D b (Y ) → D b (S) defined by (4.5) maps the descent data for the lifting presentation to the descent data for the extension presentation and thus defines a functor F M : . Proof. Let L be an object in D b 1 ( α X β ) represented by descent datum (L, f ) for the presentation α L β . In other words, L is an object in D b (Y ) and f : Consider the object F L ∈ D b (S). To prove the proposition we need to construct a quasiisomorphism g : p * 1 F L → p * 2 F L ⊗ Φ −1 β on S × Xα S which depends functorially on f and satisfies the cocycle condition on S × Xα S × Xα S. Let Γ := Y × Xϕ S and let p Y : Γ → Y and p S : Γ → S denote the natural projections. Then F L = p S * (p * Y L ⊗ Π), and so our problem boils down to finding a quasi-isomorphism which depends functorially on f . In other words we want to compare the objects p * 1 p S * (p * Y L ⊗ Π) and p * 2 p S * (p * Y L ⊗ Π) on S × Xα S. Since we have an obvious commutative diagram we see that equivalently we can compare the objects ι * pr * 1 p S * (p * Y L ⊗ Π) and ι * pr * 2 p S * (p * Y L ⊗ Π). 
To compute these objects, we would like to perform a base change in the commutative squares In view of the previous remark we will be able to treat the squares (4.7) as base change squares if we can show that for the maps To check this, note that In particular, in D b (S × B S) we get identifications: for all L ∈ D b (Y ) and for i = 1, 2. Furthermore since Now, using the commutativity of the top double square in (4.6) we get , we can use f to obtain an identification and so, in order to get the desired isomorphism g : We will construct the desired map (4.8) by gluing some locally defined but canonical identifications. Note that at this point we have completely eliminated the derived category from the picture. In particular, we are left with a question about sheaves, not complexes, and so gluing is a relatively simple matter. (ii) Over the part of Γ × Xα× B X β Γ sitting over U p ⊂ B, p ∈ B \ B o , we can use again the line bundles Q p → Y p and ψ p → S p appearing in the construction of Π p = Π |Γ p to trivialize our gerbes. Recall that Q p and ψ p come equipped with the natural isomorphisms (4.3) and (4.4). Furthermore Π p = P p ⊗ p * Y (Q p ) −1 ⊗ p * S (ψ p ) −1 and so establishing (4.8) over Γ p × X p α × U p X p β Γ p reduces to constructing an identification: or equivalently, after the obvious cancellations, an identification However P p ∈ Coh(Γ p ) was defined as a pullback of a sheaf on X p α × U p X p β and so we get a canonical identification p * 1 P p = p * 2 P p on Γ p × X p α × U p X p β Γ p . This yields the desired canonical identification (4.8) over the part of Γ × Xα× B X β Γ sitting over U p Finally it only remains to observe that the isomorphisms chosen in (4.3) and (4.4) were the ones used in the proof of Lemma 4.3 to glue Π p and Π o on the overlap Γ p ∩ Γ o . Therefore the identifications in items (i) and (ii) above glue on the overlaps (Γ × Xα× B X β Γ) × B U po and so we have found our global identification (4.8). This finishes the proof of the proposition. ✷ We are now ready to complete the Proof of Theorem A. The only thing left to show is that the gerby Fourier-Mukai transform F M : D b 1 ( α X β ) → D b 1 ( β X α ) constructed in Proposition 4.4 is an equivalence of categories. We will again use the criterion of Bondal-Orlov and Bridgeland applied to the spanning class Ω of gerby points in α X β described in Claim 3.7. As before we need to show that F M intertwines the Serre functors on the sheaves O x ∈ Ω and that F M satisfies the orthogonality property Since by definition the Fourier-Mukai image F M O x is supported on the fiber ( β X α ) b of β X α over the point b = π β (x) ∈ B, it suffices to check the intertwining and orthogonality properties of F M locally in the base B. Over B o these properties were established in Claims 3.8 and 3.9. To check the properties for the parts of our gerbes sitting over U p ⊂ B we note that the proof of Lemma 4.3 shows that over U p the functor F M fits in a commutative diagram of functors where the vertical arrows are equivalences. However the bottom arrow is the usual integral transform with respect to the Poincare sheaf on an elliptic surface having at most I 1 fibers. Such a transform is an equivalence, e.g. by [BM02]. Finally, the functor (ν * 1 (•) ⊗ Q p ) • t * −s β transforms a structure sheaf of a point x ∈ X p into a sheaf in the spanning class Ω p for α X β p and clearly every sheaf in Ω p is obtained this way. This implies that F M has the orthogonality and intertwining properties for sheaves in Ω p . 
The theorem is proven. ✷

From the statement of Theorem A one can derive a whole sequence of new cases of Căldăraru's conjecture. Indeed, suppose X is an elliptic K3 surface whose singular fibers are of type I_1 only. Note that for any element α ∈ X(X) = Hom(T_X, Q/Z) we have a natural Hodge isometry T_{X_α} ≅ ker(α) induced by the isogeny of X_α and X. In terms of the identifications X(X) = Br′(X) = Hom(T_X, Q/Z) and Br′(X_α) = Hom(ker(α), Q/Z), the surjective map T_α : X(X) → Br′(X_α) sends a homomorphism a : T_X → Q/Z to its restriction a|_{ker(α)} : ker(α) → Q/Z. Now we have:

Corollary 4.6 Let X be an elliptic K3 surface whose singular fibers are of type I_1 only. Let α, a ∈ X(X) = Hom(T_X, Q/Z) and let (b, β) ∈ X(X)^×2 be in the SL(2, Z) orbit of (α, a). Then (item (a) of the conclusion did not survive extraction): (b) D^b_1(a X α) and D^b_1(b X β) are equivalent.

Modified T-duality and the SYZ conjecture

The celebrated work of Strominger, Yau and Zaslow [SYZ96] interprets mirror symmetry of Calabi-Yaus in terms of special Lagrangian (SLAG) torus fibrations. If a CY manifold X (with "large complex structure") has mirror X′, [SYZ96] conjecture the existence of fibrations π : X → B and π′ : X′ → B whose generic fibers are SLAG tori dual to each other: each parameterizes U(1) flat connections on the other. In particular, each of these fibrations admits a SLAG zero-section, corresponding to the trivial connection on the dual fibers.

The analogy with the situation considered in the main part of our work is clear: the SLAG torus fibrations replace the elliptic fibrations, and mirror symmetry (interchanging D-branes of type B with D-branes of type A) replaces the Fourier-Mukai transform (which interchanges vector bundles with spectral data). In this context, the analogue of our gerbes and the Brauer group is given by the "B-fields" α ∈ H^2(X, R/Z). On the other hand, the SLAG analogue of the Tate-Shafarevich group of X′ is given by H^1(B, X′), which over the locus where π, π′ are smooth can be identified with H^1(B, R^1π_*(R/Z)). As in the proof of lemma 2.11, H^2(X, R/Z) is related via a Leray spectral sequence to the three groups H^i(B, R^{2−i}π_*(R/Z)), for i = 0, 1, 2. Now for i = 0, the local system H^0(B, R^2π_*(R/Z)) can be identified, over the locus where π is smooth, with the group of homotopy classes of sections of X → B. Therefore, if the fibration X → B is good in the sense of [Gro98, Gro99], H^0(B, R^2π_*(R/Z)) should be thought of as the analogue of the Mordell-Weil group of X → B.

Assume that the SLAG fibration X → B is generic, in the sense that the local system R^2π_*(R/Z) has no global sections. The Leray spectral sequence therefore gives a Brauer-to-Tate-Shafarevich map H^2(X, R/Z) → H^1(B, R^1π_*(R/Z)). Now a B-field β ∈ H^2(X′, R/Z) on X′ determines a point (X′, β) of M, hence a mirror point X_β := MS(X′, β). For small β, this X_β is a deformation of X, so the additional B-field α ∈ H^2(X, R/Z) on X determines a corresponding B-field T_β(α) ∈ H^2(X_β, R/Z) on X_β.

Conjecture 5.1
• The SYZ picture holds (near the large complex structure/large volume limit) on M_R.
• For a B-field B′ on X′, the deformed Calabi-Yau X_{B′} admits a SLAG fibration (generally without a section) whose Jacobian (i.e. double dual) is the original SLAG fibration (with section) on X.
• Mirror symmetry preserves the integrable system structure: for any pair α, β, the mirror of (X_β, T_β(α)) is (X′_α, T_α(β)) (summarized symbolically below).
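In symbols (nothing here goes beyond the definitions just given): the generic SLAG fibration yields

\[
T_{\beta}\colon H^{2}(X,\mathbb{R}/\mathbb{Z})\ \longrightarrow\ H^{2}(X_{\beta},\mathbb{R}/\mathbb{Z}),
\qquad X_{\beta}:=MS(X',\beta),
\]

and the third part of Conjecture 5.1 reads

\[
MS\big(X_{\beta},\,T_{\beta}(\alpha)\big)\ =\ \big(X'_{\alpha},\,T_{\alpha}(\beta)\big).
\]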
We note that this modification of [SYZ96] is consistent with recent interpretations in the literature (see [Hit01] and references therein) of D-branes on Calabi-Yaus in the presence of a B-field: a D-brane of type B on X is a coherent sheaf on the gerbe given by the B-field β, while a D-brane of type A on X′ is, roughly, a flat U(1) connection on the restriction of the gerbe α to a SLAG submanifold in X′. The third part of this conjecture is the exact SLAG translation of Theorem A.

Appendix A (by D. Arinkin): Duality for representations of 1-motives

In this appendix, we sketch a different approach to the Fourier-Mukai transform for O^×-gerbes over smooth genus one fibrations (Theorem B). In this approach, Theorem B claims that the dual commutative group stacks (of a certain type) have equivalent derived categories of coherent sheaves.

Let us review the duality for commutative group stacks (sometimes called the duality for generalized 1-motives). Recall that the dual X^∨ of an abelian variety X is the moduli space of line bundles with zero first Chern class on X. Equivalently, X^∨ parametrizes the extensions of the algebraic group X by G_m. In this form, the definition immediately generalizes to stacks: for a commutative group stack X, its dual X^∨ is the moduli stack of extensions of commutative group stacks 1 → G_m → G → X → 0. The sum of extensions defines a group operation on X^∨; actually, X^∨ is naturally a commutative group stack.

Remark A.1 For technical reasons, we use a slightly different definition of the dual stack (Definition A.2). This allows us to avoid the discussion of short exact sequences of group stacks; also, the group structure on X^∨ seems somewhat more natural.

Let P → X^∨ × X be the universal X^∨-family of extensions of X by G_m; in particular, P is a G_m-torsor on X^∨ × X (in fact, P is a biextension of X^∨ × X by G_m). Notice that we can also view P as an X-family of extensions of X^∨ by G_m; this defines a morphism X → (X^∨)^∨. The main idea of the Fourier-Mukai transform for commutative group stacks can be informally stated as follows:

(A.1) For a "good" commutative group stack X, the morphism X → (X^∨)^∨ is an isomorphism, and the Fourier-Mukai transform defined by P_C is an equivalence FM : D^b(X) → D^b(X^∨).

Here P_C is the line bundle on X^∨ × X associated to the G_m-torsor P.

Now let us explain how Theorem B fits into the framework of the duality for commutative group stacks. First, we notice that the O^×-gerbe α X 0 over X is a group stack. Then we see that α X β is a torsor over the group stack α X 0; more precisely, the gerbes constructed using the lifting presentation and the extension presentation (from Section 3.1) have a natural structure of α X 0-torsors. Torsors over a group stack can be thought of as extensions of Z by this group stack; in this way, α X β defines a commutative group stack, which we will write α X̃ β to distinguish it from the torsor α X β, and which fits into an exact sequence 0 → α X 0 → α X̃ β → Z → 0. The argument in section 3.4 shows that the constructions of the lifting presentation and the extension presentation are dual, so α X̃ β and −β X̃ α are dual commutative group stacks (provided that we use the lifting presentation for one of the two stacks and the extension presentation for the other). Moreover, these stacks are "good" in the sense of (A.1), and so the Fourier-Mukai transform gives an equivalence between D^b(α X̃ β) and D^b(−β X̃ α).
The Fourier-Mukai transform of Theorem B is the restriction of this equivalence to direct summands in the derived categories (see Section A.2). In the rest of the appendix, we discuss the notion of the dual of a group stack (Section A.1) and the special case when the group stack is an extension of Z (Section A.2). No proofs are given, but most statements are almost obvious. I learned about the duality for commutative group stacks from A. Beilinson, and I am deeply grateful to him for the explanation.

A.1 Duality for commutative group stacks

From now on, the word 'stack' means an algebraic stack locally of finite type over a fixed base scheme B. All results also have an analytic version.

Definition A.2 For a commutative group stack X, the dual stack X^∨ parametrizes 1-morphisms of commutative group stacks from X to BG_m (the classifying stack of G_m). Thus, for a B-scheme S, the category X^∨(S) is the category of 1-morphisms of commutative group S-stacks X ×_B S → BG_m × S. Notice that X^∨ does not have to be algebraic.

Remark A.3 For the definition to make sense, we need certain smallness assumptions. Indeed, if X and Y are stacks on a site B, the 1-morphisms from X to Y form a stack only if X, Y, and B are small. However, this problem can be avoided if we assume that X is an algebraic stack which is locally of finite type and replace the category of finitely presented B-schemes by an equivalent small category.

Example A.4 If X is an abelian scheme over B, then X^∨ is the dual abelian scheme.

Example A.5 Let X = G be an affine (or ind-affine) abelian group (over C). Then X^∨ is the classifying stack of the Cartier dual of G. In particular, if X = Z, we have X^∨ = BG_m.

Another example is provided by the stacks α X̃ β (constructed using either the lifting presentation or the extension presentation). It is clear from the construction that locally on B, the stack α X̃ β is isomorphic to X × BG_m × Z; globally, it carries a natural filtration 0 ⊂ X^(1) ⊂ X^(2) ⊂ α X̃ β with X^(1) = BG_m, X^(2)/X^(1) = X, and α X̃ β / X^(2) = Z. This implies the following statement:

Proposition A.6 α X̃ β is "good" in the sense of (A.1).

Proof. The property of being "good" is local on B, so it is enough to notice that the stacks X, BG_m, and Z are "good". ✷

In particular, we see that the Fourier-Mukai transform gives an equivalence between D^b(α X̃ β) and D^b(−β X̃ α).

A.2 Duality for torsors

Now suppose X is a commutative group stack which is "good", and let X′ be a torsor over X. Denote by X̃ the corresponding extension of Z by X: it fits into the exact sequence 0 → X → X̃ → Z → 0, and X′ is identified with the preimage of 1 ∈ Z. Notice that locally on B, the torsor is trivial, so X̃ is isomorphic to X × Z. Since both X and Z are "good", so is X̃. The dual stack X̃^∨ is isomorphic to X^∨ × BG_m locally on B (globally, it contains a substack isomorphic to BG_m, and the quotient equals X^∨). In particular, if X^∨ is actually a space (rather than a stack), then X̃^∨ is an O^×-gerbe over the space. The following statement is clear: the weight-one part D^b_1(X̃^∨) is the complete subcategory of objects F such that the action of G_m on H^i(F) is tautological. Here the action is induced by the morphism BG_m → X̃^∨. In the case of the duality between α X̃ β and −β X̃ α, both group stacks arise from torsors (over α X 0 and −β X 0, respectively), and so we get the equivalences summarized below. This is exactly what Theorem B claims.
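To summarize the appendix in displays (the final displayed equivalences are reconstructed here from the statements above, and the tilde notation for the group stacks is the one introduced earlier):

\[
0\ \to\ {}_{\alpha}X_{0}\ \to\ {}_{\alpha}\widetilde{X}_{\beta}\ \to\ \mathbb{Z}\ \to\ 0,
\qquad
\big({}_{\alpha}\widetilde{X}_{\beta}\big)^{\vee}\ \simeq\ {}_{-\beta}\widetilde{X}_{\alpha},
\]

and restricting the Fourier-Mukai equivalence D^b(α X̃ β) ≃ D^b(−β X̃ α) to the weight summands gives

\[
D^{b}_{1}\big({}_{\alpha}X_{\beta}\big)\ \simeq\ D^{b}_{-1}\big({}_{-\beta}X_{\alpha}\big),
\]

which is the statement of Theorem B.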
Signal Relay by Retinoic Acid Receptors α and β in the Retinoic Acid-induced Expression of Insulin-like Growth Factor-binding Protein-3 in Breast Cancer Cells*

Neither retinoic acid receptor-β (RARβ) nor insulin-like growth factor-binding protein-3 (IGFBP-3) is expressed in breast cancer cell line MCF-7. The expression of both proteins can be induced in response to all-trans-retinoic acid (atRA). By using an RARα-selective antagonist (Ro 41-5253), we demonstrated that RARβ expression was induced by atRA through an RARα-dependent signaling pathway and that RARβ induction was correlated with IGFBP-3 induction. However, MCF-7 cells transfected with sense RARβ cDNA expressed IGFBP-3 even in the presence of the RARα-selective antagonist Ro 41-5253. On the other hand, antisense RARβ cDNA transfection of MCF-7 cells blocked atRA-induced IGFBP-3 expression, indicating that RARβ is directly involved in the mediation of IGFBP-3 induction by atRA. Induction of IGFBP-3 expression by atRA occurs at the transcriptional level, as measured by nuclear run-on assays. Finally, we showed that atRA-induced IGFBP-3 is functionally active in modulating the growth-promoting effect of IGF-I. These experiments indicate that RARα and RARβ, both individually and together, are important in mammary gland homeostasis and breast cancer development. By linking IGFBP-3 to RARβ, our experiments define the signal intersection between the retinoid and IGF systems in cell growth regulation and explain why loss of RARβ might be critical in breast cancer carcinogenesis/progression.

Retinoids induce growth inhibition and apoptosis in a variety of tumor cells, including breast cancer cells (1). Recently, we proposed a mechanism by which all-trans-retinoic acid (atRA)1 synergizes with interferon to inhibit the growth of both estrogen receptor-positive and estrogen receptor-negative breast cancer cell lines (2). Here we studied mechanisms by which atRA counteracts the growth-promoting effects of insulin-like growth factors (IGFs) in breast cancer cells, focusing on the involvement by retinoic acid receptors (RARs).

It is known that the molecular actions of retinoids are primarily mediated by their nuclear receptors (RARα, β and γ, and the retinoid X receptors (RXRs) α, β, and γ), which function as liganded transcription factors (3). These receptors show both spatiotemporal patterns of expression during development and tissue-specific distribution in adults, suggesting that the various receptors play different roles in transducing retinoid signals. Among the RARs, RARα is expressed ubiquitously in adult tissues, RARγ is expressed mainly in skin, and RARβ is expressed primarily in epithelial cells, including those in mammary tissue (4). Expression of RARβ is lost in the majority of breast cancer cell lines; it can be induced by retinoic acid (RA) in estrogen receptor-positive breast cancer cell lines but not in estrogen receptor-negative cancer cell lines (4-7). The latter are believed to represent more advanced forms of breast carcinoma. Induction of RARβ expression correlates well with the growth-inhibitory and apoptotic effects of retinoic acid (8,9), suggesting that loss of RARβ expression may be one of the critical events involved in breast carcinogenesis/progression and in responsiveness of breast cancer cells to retinoid chemotherapy. At the same time, there is strong evidence that RARα is the mediator of the growth inhibition of breast cancer cells by retinoids (10,11).
In general, RARα expression is lower in estrogen receptor-negative breast cancer cell lines than in estrogen receptor-positive lines; this corresponds to the responsiveness of these cell lines to RA. Taken together, these observations raise the possibility that both RARα and RARβ are involved in the physiological action of retinoic acid in breast cancer cells. The insulin-like growth factor system includes IGF-I and IGF-II, their corresponding receptors, six IGF-binding proteins (IGFBPs), and four IGFBP-related proteins (12). IGF-I and IGF-II are thought to be important growth factors for breast cancer. IGF-I and -II receptors and IGFBP-2 and -4 proteins have been found in breast cancer cell lines and in tissue specimens (13). Although IGF-I and -II proteins are not expressed in breast cancer cell lines, they are expressed in breast cancer specimens, possibly by stromal cells (13), suggesting that IGFs, through a paracrine mechanism, promote breast cancer cell growth and underscoring the importance of IGFBPs for their ability to modulate IGF-I actions in the extracellular matrix. In addition to the well established roles of IGFBPs in regulating IGF bioavailability and IGF-I receptor responsiveness to IGF-I, IGFBP-3 has also been recently proposed to function as a negative regulator of growth, independently of the IGF-I receptor (14, 15). Supporting its role as a growth inhibitory regulator, IGFBP-3 expression is up-regulated by growth-inhibitory (and apoptosis-inducing) agents, such as retinoic acid (16-19), vitamin D (20), transforming growth factor-β (16, 21, 22), antiestrogens (23), tumor necrosis factor-α (24), and, most compellingly, the tumor suppressor gene p53 (25); IGFBP-3 expression is down-regulated by growth-promoting factors, such as estrogen (26) and epidermal growth factor (27). All of this information clearly indicates that IGFBP-3 is a common downstream effector of many growth regulatory agents. We report here that both RARα and RARβ, by relaying the atRA signal in MCF-7 cells, are involved in the induction of IGFBP-3, and our experiments suggest that lack of RARβ expression in the majority of breast cancer cell lines may result in the failure of IGFBP-3 induction and growth inhibition by retinoids. EXPERIMENTAL PROCEDURES Cell Cultures and Retinoids-Cells of the breast carcinoma cell line MCF-7 (American Type Culture Collection, Manassas, VA) were grown in phenol red-free Eagle's minimal essential medium (Sigma) supplemented with 5% charcoal-stripped calf serum (Sigma). Cells from <15 passages were used for experiments. atRA was purchased from Sigma. The RAR-specific agonist Ro 13-7410, the RXR-specific agonist Ro 25-7386, and the RARα-selective antagonist Ro 41-5253 were generously provided by Hoffmann-La Roche. Retinoids were dissolved in absolute ethanol under lights that were covered with a UV-blocking film (CLHC, Sydlin, Inc., Lancaster, PA). The integrity of atRA was routinely monitored by spectrophotometry. Preparation of Conditioned Medium-MCF-7 cells were grown as described above for 24 h, washed with phosphate-buffered saline, and then transferred to phenol red-free Eagle's minimal essential medium supplemented with 2 μg/ml fibronectin and 2 μg/ml transferrin (both from Sigma) for another 24 h before atRA treatment. The conditioned medium was then harvested with the addition of 0.2 mM phenylmethylsulfonyl fluoride and 10 μg/ml aprotinin (both from Sigma), dried under speed vacuum, and resuspended for analysis. 
Cell Growth Inhibition Assay-MCF-7 cells (4 × 10³ cells/well) were cultured in the conditioned medium described above in 96-well cell culture plates. Recombinant human IGF-I, recombinant human IGFBP-3 (both generous gifts of Celtrix, Palo Alto, CA), or medium from atRA-treated cell cultures was added alone or in different combinations to the cell cultures for 2 days. Cells were washed, fixed with 10% trichloroacetic acid for 1 h, and then stained with 1% sulforhodamine B for 1 h. Cells were washed again, and then 100 μl of 10 mM Tris-HCl, pH 10, was added to release the dye (28). The absorbance was measured at 562 nm. Immunodepletion-Conditioned medium from atRA-treated or untreated cells was incubated with 2 μg/ml of anti-IGFBP-3 antibodies (goat polyclonal antibodies against human IGFBP-3; Santa Cruz Biotechnology Inc., Santa Cruz, CA) or normal goat serum (Santa Cruz Biotechnology) for 2 h. Protein A/Protein G PLUS-Agarose (Santa Cruz Biotechnology) then was added, and the media were rocked at 4°C overnight followed by filter sterilization of the supernatants. Immunoprecipitates were boiled for 3 min in SDS gel loading buffer and were used in Western ligand blotting. Western Immunoblotting and Western Ligand Blotting-Fifty μg of protein from cell lysates or conditioned medium was loaded onto 8-12% SDS-polyacrylamide gels under nonreducing conditions. After transfer, nitrocellulose blots were incubated with rabbit polyclonal antibodies against human RARβ (Santa Cruz Biotechnology). The blots were then incubated with secondary antibodies and developed using an ECL kit (Amersham Pharmacia Biotech). For Western ligand blotting, nitrocellulose blots were initially washed in 3% Nonidet P-40 (Fluka Chemical Corp., Ronkonkoma, NY) for 30 min, followed by blocking in 1% bovine serum albumin (Sigma) for 2 h and 0.1% Tween 20 (Sigma) for 15 min. Blots were then probed with ¹²⁵I-labeled recombinant human IGF-II (Bachem California Inc., Torrance, CA) overnight followed by extensive washing with 1% Tween 20 before autoradiography. Transient Transfection-A luciferase reporter gene construct under the control of a retinoic acid response element (DR5-tk-Luc, provided by Dr. R. M. Evans, Gene Expression Laboratory, Salk Institute for Biological Studies, La Jolla, CA) was used to measure retinoid receptor-mediated gene activation. Ten μg of DR5-tk-Luc was co-transfected into MCF-7 cells with 2 μg of β-galactosidase expression vector (pCMVβ; CLONTECH, Palo Alto, CA) using Lipofectin reagent (Life Technologies, Inc.). Transfection efficiency was normalized to β-galactosidase activity. Stable Transfection-Plasmid constructs for stable transfection experiments were pRC/CMV-RARβ and pRC/CMV-antisense RARβ (generous gifts from Dr. X.-K. Zhang, La Jolla Cancer Research Center, La Jolla, CA). MCF-7 cells grown to 50% confluence were washed with serum-free growth medium. Two μg of either empty vector or construct was mixed with Lipofectin reagent and added to cells for 5 h. Selection was initiated with 400 μg/ml of G418 (Life Technologies, Inc.) on the third day and continued for 17-21 days until drug-resistant colonies emerged. Single colonies were cloned and assayed for the expression of the inserted genes by Northern blotting, and the expression of RARβ receptor protein was measured by Western blotting. Northern Blot Analysis-Total RNA was isolated using TRI Reagent (Sigma). 
RNA was separated on 1% agarose/1.1 M formaldehyde gels and then transferred and cross-linked to GeneScreen nylon membranes (NEN Life Science Products). Hybridization was carried out using the following probes: T4 polynucleotide kinase-labeled 40-mer antisense RARα, RARβ, or β-actin DNA (Oncogene Research Products, Cambridge, MA) or random primer-labeled IGFBP-3 cDNA (Genentech, Inc., South San Francisco, CA). The results were analyzed with a phosphorimager (Bio-Rad). RESULTS IGFBP-3 Expression Is Induced by atRA in a Dose- and Time-dependent Fashion in MCF-7 Cells-To determine the effects of atRA on the expression of IGFBP-3 in our experimental system, MCF-7 cells were grown in the presence of 0, 10⁻⁹, 10⁻⁸, 10⁻⁷, or 10⁻⁶ M atRA for 72 h followed by Northern blotting analysis of IGFBP-3 mRNA. As shown in Fig. 1A, MCF-7 cells did not express IGFBP-3 message in the absence of atRA, but as little as 10⁻⁸ M atRA was effective in inducing the expression of IGFBP-3 mRNA. Higher levels of IGFBP-3 mRNA were detected with increasing concentrations of atRA (Fig. 1A). Fig. 1B shows the temporal effect of 10⁻⁶ M atRA on the expression of IGFBP-3 message. IGFBP-3 mRNA was detected as early as 24 h after atRA treatment and was maximal at 48 h. atRA Activates IGFBP-3 Gene Transcription, and RAR, Rather Than RXR, Mediates This Process-We next wished to determine whether the retinoic acid-induced expression of IGFBP-3 in MCF-7 cells was mediated by RAR or RXR and whether atRA directly activates the transcription of the IGFBP-3 gene. The second point was of interest because it is known that retinoids can regulate gene expression post-transcriptionally (29, 30). For these experiments, MCF-7 cells were incubated for 48 h with 10⁻⁶ M of either atRA, the RAR-specific agonist Ro 13-7410, or the RXR-specific agonist Ro 25-7386. Nuclei were isolated, and nuclear run-on assays were performed. As indicated in Fig. 2A, both atRA and the RAR-specific agonist Ro 13-7410 activated IGFBP-3 gene transcription, but the gene was not transcribed in cells treated with vehicle only or with the RXR-specific agonist Ro 25-7386. The β-actin gene was transcribed normally under all of these experimental conditions. These results indicate that 1) RAR but not RXR is involved in transducing the atRA signal to induce IGFBP-3 expression, and 2) atRA and Ro 13-7410 directly activate IGFBP-3 gene transcription. IGFBP-3 mRNA was measured in parallel experiments following treatment of MCF-7 cells with the various retinoids for 72 h (Fig. 2B). IGFBP-3 mRNA was only present in cells treated with atRA and Ro 13-7410, the RAR-specific agonist. To verify the ability of the synthetic retinoids, Ro 13-7410 and Ro 25-7386, to activate retinoid receptors in our experimental system, a luciferase reporter gene under the control of a DR5 element, the canonical retinoic acid response element activated by RARs, was introduced into MCF-7 cells. Luciferase activity was measured 72 h later in the presence of 10⁻⁶ M of atRA, Ro 13-7410, or Ro 25-7386. As documented in Fig. 2C, Ro 13-7410, the RAR-specific agonist, activated the expression of the luciferase gene at a level similar to that of atRA, but Ro 25-7386, the RXR-specific agonist, was not effective in activating the expression of the luciferase gene. These results validate the use of the synthetic retinoids in our experimental system. 
RARβ Expression Is Induced by atRA in an RARα-dependent Pathway, and RARβ Relays the atRA Signal That Leads to the Induction of IGFBP-3 Expression in MCF-7 Cells-It has been shown that the transcription of the RARβ gene is induced rapidly after retinoid treatment, peaking by 6 h, and that it is independent of new protein synthesis (31, 32). Furthermore, the level of RARα expression in breast cancer cell lines appears to be correlated with the induced levels of RARβ expression (5, 8, 9). Thus, it is reasonable to postulate that RARβ is induced in MCF-7 cells by atRA through a signaling pathway mediated by RARα. To test this hypothesis, MCF-7 cells were grown in the presence or absence of 10⁻⁶ M atRA for 72 h. Total RNA was extracted, and 30 μg was used to measure mRNAs for RARα and RARβ by Northern blotting. As documented in Fig. 3, the levels of RARα expression in MCF-7 cells were similar in the presence or absence of atRA, whereas RARβ expression was detectable only after atRA treatment. These results indicate that RARα mediates the atRA-induced expression of RARβ. These experiments led us to ask whether the signal leading to the induction of IGFBP-3 expression was mediated by RARα, or if the induced RARβ mediates IGFBP-3 induction. In order to answer this question, MCF-7 cells were cultured for 72 h in the presence of 10⁻⁷ M atRA plus 0, 10⁻⁸, 10⁻⁷, or 10⁻⁶ M of Ro 41-5253, an RARα-selective antagonist. A lower concentration of atRA was used because we wanted to minimize the cytotoxicity of retinoids that is observed at high concentrations. After incubation, 30 μg of total RNA was used to assay RARβ mRNA. As documented in Fig. 4A, a 10-fold molar excess of Ro 41-5253 blocked the induction of RARβ expression. With decreasing concentrations of Ro 41-5253, RARβ expression increased, indicating that the process is mediated by RARα. IGFBP-3 expression was measured by Northern blotting in MCF-7 cells grown for 72 h in the presence of the same combinations of retinoids (Fig. 4B). Paralleling the diminished expression of RARβ in the presence of 10⁻⁶ M of Ro 41-5253, IGFBP-3 expression was also abolished, indicating that retinoid-induced IGFBP-3 expression is correlated with RARβ expression. In order to further document the direct involvement of RARβ in the atRA-induced expression of IGFBP-3, RARβ sense and antisense cDNA constructs were introduced into MCF-7 cells via expression vectors. Positive colonies were identified, cloned, and tested for RARβ expression by Western immunoblotting (Fig. 5). Three clones with average levels of expression of each sense (Fig. 5A, β3, β5, and β6) and antisense (Fig. 5B, As-β4, As-β6, and As-β9) RARβ were used for experiments similar to those described above. As exemplified by the results shown for β5 (Fig. 6A), the RARα-selective antagonist Ro 41-5253 was unable to block atRA-induced IGFBP-3 expression in the three clones of RARβ sense transfectants, indicating that RARα is not directly involved in this process. In contrast, in RARβ antisense transfectants, the induction of IGFBP-3 expression by atRA was totally blocked (Fig. 6B), indicating that RARβ is directly involved in IGFBP-3 gene activation. IGFBP-3 Is a Downstream Effector of RARβ in the Inhibition of Breast Cancer Cell Growth by atRA-The IGF growth factor system is believed to be actively involved in the growth of breast cancer (14). IGFBP-3 is a secreted protein that has been thought to primarily regulate the biological activities of IGFs extracellularly. 
In order to investigate the functional integrity of atRA-induced IGFBP-3 in modifying the actions of IGF-I, we first assayed IGFBP-3 secretion by MCF-7 cells after induction by atRA. For this purpose, MCF-7 cells were grown in conditioned medium for 6 days in the presence or absence of 10⁻⁶ M atRA. Conditioned medium was harvested at 2, 4, and 6 days, concentrated, and analyzed for IGFBP-3 secretion by Western ligand blotting. As shown in Fig. 7A, IGFBP-3 protein was secreted into the conditioned medium at a measurable level on day 2 of atRA treatment; higher amounts of IGFBP-3 were secreted on days 4 and 6. In addition to IGFBP-3, IGFBP-2 and IGFBP-4 were also secreted into the conditioned medium, in both the presence and absence of atRA (Fig. 7A). We next tested the responsiveness of MCF-7 cells to the IGF system. Exogenous IGF-I or IGFBP-3 was added alone or in different combinations to MCF-7 cells for 4 days, and cell growth was measured by sulforhodamine staining. As documented in Fig. 7B, MCF-7 cells were sensitive to the mitogenic effects of IGF-I, and recombinant human IGFBP-3 (rhIGFBP-3) inhibited such activity in a dose-dependent manner (0-10 nM). To investigate the biological activity of atRA-induced endogenous IGFBP-3, MCF-7 cells were maintained in conditioned medium for 4 days in the presence or absence of 10⁻⁶ M atRA, and the conditioned medium was collected. IGFBP-3 protein was immunodepleted in half of the conditioned medium from atRA-treated cells. The medium was then filter-sterilized and added to MCF-7 cells for 2 days in the presence of 1 nM IGF-I. Cell growth was measured by sulforhodamine staining and expressed as a percentage of absorbance relative to control MCF-7 cell cultures treated with control medium supplemented with 1 nM IGF-I (100%) or 1 nM IGF-I plus 10 nM rhIGFBP-3 (0%) (Fig. 7C). Similar to the results described in Fig. 7B, the conditioned medium from atRA-treated MCF-7 cells was able to block the growth promotion of MCF-7 cells by IGF-I (Fig. 7C, CM/RA/BP3). When IGFBP-3 was depleted (Fig. 7C, CM/RA), the medium was no longer effective in blocking the growth promotion by IGF-I, whereas the normal goat serum-treated control (Fig. 7C, CM/RA/BP3/S) did not remove the growth inhibition effect, suggesting that atRA-induced inhibition of IGF-I-stimulated cell growth is mediated rather specifically by IGFBP-3, not IGFBP-2 or IGFBP-4, because IGFBP-3-depleted medium (CM/RA) was unable to counteract IGF-I even when IGFBP-2 and IGFBP-4 were still present (Fig. 7D, lane 2). As shown in the Western blots in Fig. 7D, conditioned medium from the 4-day atRA-treated MCF-7 cells contained IGFBPs 3, 2, and 4 (Fig. 7D, lane 1). After immunodepletion, only IGFBP-2 and IGFBP-4 were present (Fig. 7D, lane 2); when the immunoprecipitate was examined, only IGFBP-3 was found (Fig. 7D, lane 3). These experiments clearly demonstrate that atRA-induced IGFBP-3 is able to function as a downstream effector of RARβ to block the growth promotion by IGF-I in MCF-7 breast cancer cells. A semiquantitative Western blot analysis utilizing rhIGFBP-3 as a standard indicated that the concentration of IGFBP-3 in atRA-treated conditioned medium was ~3 nM (data not shown). This result is consistent with a partial block of IGF-I action by recombinant IGFBP-3, which resulted in ~50% inhibition at 2 nM concentration (Fig. 7B). 
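The percent-of-control scale used for Fig. 7C follows directly from the two anchor conditions described above. The following is a minimal sketch of that rescaling, assuming nothing beyond the text; the function and variable names are ours for illustration, not the authors'.

```python
def percent_of_control(a_sample: float, a_igf_only: float, a_igf_bp3: float) -> float:
    """Rescale a 562 nm absorbance reading so that the IGF-I-only control
    maps to 100% and the IGF-I + 10 nM rhIGFBP-3 control maps to 0%."""
    return 100.0 * (a_sample - a_igf_bp3) / (a_igf_only - a_igf_bp3)

# A reading halfway between the two controls maps to 50% growth.
print(percent_of_control(0.60, a_igf_only=0.80, a_igf_bp3=0.40))  # -> 50.0
```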
DISCUSSION The tissue-specific distribution of retinoic acid receptors in adults and the spatiotemporal patterns of expression during development indicate that these receptors may play different roles. Yet the coexistence of two or three retinoic acid receptor subtypes in a specific tissue also suggests that some type of compensation/coordination may exist among retinoic acid receptors in transducing retinoid signals. Such a compensation/coordination of RARs in breast cancer cells was demonstrated in our experiments, which showed that RARβ expression can be induced in MCF-7 cells by atRA via RARα mediation. The levels of RARα expression were similar in the presence or absence of atRA in MCF-7 cells, but RARβ expression, which was undetectable in the absence of atRA, was strongly induced by atRA. When the RARα-selective antagonist Ro 41-5253 was used, the induction of RARβ expression by atRA was blocked, indicating that RARβ induction is dependent on RARα. Both RARα and RARβ have been implicated in tumor development. For example, it is well documented that acute promyelocytic leukemia is caused by a reciprocal chromosome 15;17 translocation in which the t(15;17) breakpoint occurs in the RARα gene (33). An involvement of RARβ in cancer development was originally suggested by the finding that RARβ is integrated by the hepatitis B virus in human hepatoma (34). Moreover, defective RARβ expression is believed to be an early event in epithelial carcinogenesis (35). Recently, it has been observed that RARβ expression is lost in many epithelial tumors and tumor cell lines, including breast cancer and breast cancer cells (5, 6, 35-39). Furthermore, transgenic mice carrying antisense RARβ2 develop carcinoma 14-18 months after birth (40), strongly supporting a role of RARβ in tumorigenesis. By demonstrating an RARα-dependent RARβ induction, our experiments further stress the importance of RARβ, which eventually becomes noninducible with progression of the tumor, as in estrogen receptor-negative breast cancer cells. The results of these experiments also allow us to clarify why both RARα and RARβ have been implicated in retinoid-induced growth inhibition of breast cancer cells. RARβ has long been suspected to be a tumor suppressor, and loss of RARβ expression has been thought to be a critical event in the development of breast cancer. We suggest that RARβ functions as a tumor suppressor by regulating the expression of other critical cell growth regulatory factors. Our experiments show that the regulation (induction) of IGFBP-3 in MCF-7 cells is mediated by RARβ, because blocking RARβ expression with an RARα antagonist, Ro 41-5253, also blocked the expression of IGFBP-3. When MCF-7 cells were transfected with the sense cDNA of RARβ, IGFBP-3 was expressed, even in the presence of Ro 41-5253. At the same time, when the antisense cDNA of RARβ was transfected into MCF-7 cells, those cells were no longer able to respond to atRA by expressing IGFBP-3. Using nuclear run-on assays, we showed that atRA directly activates IGFBP-3 gene transcription, supporting the recent finding that a major consensus sequence for retinoic acid is present in the promoter region of the IGFBP-3 gene (41). Whereas MCF-7 cells synthesize and secrete IGFBP-2 and IGFBP-4 into conditioned medium, the application of atRA not only induces the messenger for IGFBP-3, but also results in the appearance of secreted protein in the conditioned medium. 
IGFBP-3 secretion seems to occur while secretion of IGFBP-2 decreases and secretion of IGFBP-4 increases. Among their diverse biological activities, IGFBPs are able to negatively modulate the actions of IGFs by binding IGFs and preventing them from binding to the type 1 receptor (12, 42). Here, we demonstrated that the application of IGF-I stimulates cell growth in MCF-7 cells and that the application of exogenous rhIGFBP-3 can totally reverse this action. When we tested the biological activity of IGFBP-containing conditioned media for their ability to inhibit the IGF-I-stimulated growth of MCF-7 cells, we expected that changes in all of the IGFBPs (that is, IGFBP-3 induced by atRA along with an increase in IGFBP-4 and a decrease in IGFBP-2) would contribute to the growth inhibition effect. However, immunodepletion of IGFBP-3 from the conditioned medium removed all growth inhibitory activity, suggesting an IGFBP-3-specific growth inhibitory mechanism. The significance of atRA-induced changes in IGFBP-2 and IGFBP-4 and the reason why these changes do not help counteract the IGF-I stimulation effect are not clear. Although IGFBP-3 induction by retinoids has been consistently observed, inconsistencies exist about retinoid-induced changes in IGFBPs 2 and 4 (17-19). In addition, as mentioned earlier, the biological activities of IGFBPs are not limited to negative effects on IGFs. Thus, changes in IGFBP-2 and IGFBP-4 may be germane to other, as yet unidentified mechanisms. In fact, the co-presence of IGFBPs 2, 3, and 4 in the cell culture medium may well indicate that these proteins possess different functions rather than simply representing functional redundancy. Another explanation is that IGFBP-3 may act through an IGF-independent pathway. IGF-independent actions of IGFBP-3 have been reported (14, 15, 43), and underlying mechanisms are being pursued vigorously. Of particular interest is the recent observation that IGFBP-3 can be translocated into the cell nucleus (44-46). Both exogenous (47) and endogenous IGFBP-3 2 have been shown to be translocated into the nucleus of breast cancer cells. Given the extremely selective nature of nuclear protein localization, it is reasonable to speculate that IGFBP-3 exerts profound biological activity in the nucleus. In summary, our experiments show that both RARα and RARβ are involved in the growth inhibitory activity of retinoids by mediating the induction of IGFBP-3 expression. By linking IGFBP-3 to RARβ, our experiments have pinpointed an intersection between retinoid and IGF signals. This information also expands knowledge of the downstream effectors of RARβ and explains how RARβ might act as a tumor suppressor.
The Information Seeking Behavior of Undergraduate Education Majors: Does Library Instruction Play a Role? Objective – This study investigated the information seeking behavior of undergraduate majors to gain a better understanding of where they find their research information (academic vs. non-academic sources) and to determine if library instruction had any impact on the types of sources used. Methods – The study used a convenience sample of 200 students currently enrolled as undergraduates at the University of Central Florida’s College of Education. A chi-square test of association was conducted to determine if the proportion of undergraduate Education majors who use academic sources as compared to non-academic sources varied depending on whether the students had attended at least one library instruction session. Results – The majority of students surveyed find their research information on the freely available Web, even though they admit that academic sources are more credible. At an alpha level of .05, types of sources used for research were not statistically significantly related to whether the student attended library instruction sessions (Pearson χ²(2, N = 200) = 1.612, p = .447, Cramer’s V = .090). Conclusion – These results are supported by other studies that indicate that today’s college students are using freely available Internet sites much more than library resources. Little to no association appears to exist between “one-shot” library instruction sessions and the sources used by students in their research. Serious consideration needs to be given to multiple library instruction sessions and to for-credit library courses over one-shot classes. Introduction A February 2007 editorial in the Washington Post stated that judges had cited Wikipedia four times as often as the Encyclopedia Britannica in their judicial opinions over the previous year. The editorial goes on to praise wikis, YouTube, and other "open-source projects" as an "unstoppable movement toward shared production of knowledge" (Sunstein). While sites such as Wikipedia are valuable for a myriad of reasons, including the community creation of knowledge, librarians, teachers, and other information professionals must wonder at the reasons why judges, extremely learned men and women, would choose Wikipedia over an esteemed source such as the Encyclopedia Britannica for their opinions and what, if any, evaluation techniques they used when selecting this resource. Similar concerns arise regarding the information seeking behavior of students in higher education. College students' strong preference for quickly and easily accessible Web sites is an issue for librarians, college professors, and others in higher education. Opting for information quickly available on the Internet hinders the development of students' research skills and provides them with only a small fraction of the information available on any given topic. Students relying only on Internet resources will not only be deficient in their knowledge of a subject, but also in how to find more information on that subject. 
Information seeking can be defined as "the interactions between people, the various forms of data, information, knowledge, and wisdom that fall under the rubric of information, and the diverse contexts in which they interact" (Todd 27). Liao, Finn, and Lu divide information seeking into three broad categories: initiating, searching, and locating (9). Others have argued that information seeking should not be seen in such rigid and linear frames. Instead, they suggest that the process of finding information should be viewed as subjective and influenced by previous experiences, knowledge, and opinions (Weiler 51). However one approaches the concept of information seeking, it is clear that this is an important skill for students to possess. Those individuals who are deficient in information seeking skills have difficulty in knowing when information is needed, the value of libraries in finding information, and how to evaluate the sources they do find (Gross 155). Without these skills, students will perform poorly in the classroom, making the professor's job more difficult and ultimately reflecting poorly upon the university. The problems, however, extend beyond the classroom. These same skills are needed when graduates seek home or small business loans, research options for their retirement plan, or seek to make informed decisions in local or national elections. Research and evaluation skills learned in the classroom are needed throughout life. The information seeking behavior of "NextGen" or "Millennial" students is a matter of great concern for those in higher education. The difference in credibility between a Web source and a print source is negligible to these college students (Abram and Luther 34). Indeed, Long and Shrikhande report, "Students often simply type terms in Google and scan the results until information on their topic is found. No assessment of quality, reliability, or accuracy generally occurs" (358). While some have argued that the growth of the Internet should be seen simply as the development of a new research methodology, rather than as a decline of research skills, it appears that more and more students are forsaking the library altogether (O'Brien and Symons 411). Several studies lend legitimacy to these assertions. A study conducted in the United Kingdom in 2005 found that 45% of the students in that study began their academic research with Google, while another 23% used a different commercial search engine such as Yahoo!, Lycos, or AltaVista. Over two-thirds of the students in this study began their research on the Internet rather than in the academic library (Griffiths and Brophy 545). One reason for this may be that students simply find the Internet easier to use than the library. The study further found that students had difficulty using library resources and were willing to sacrifice quality for ease of use (Griffiths and Brophy 548). Today's college students are definitely at ease with the Internet. A report from the Pew Internet and American Life Project found that 86% of college students have gone online, and that 20% of today's students began using a computer between the ages of 5 and 8 (Jones 2). The study also found that college students are positive about the Internet, using it for both academic and social/recreational needs. The study found that 78% of the students used the Internet for fun, and 73% of them admitted using the Internet more than they used the library (Jones 2-3). 
In fact, 80% of the students stated they used the library less than three hours a week. Many remarked that finding information on the Internet was easier than using the library (Jones 12-3). Ease of use is an important component in the information seeking behavior of Millennial generation college students. Academia is filled with jargon that only the most experienced understand. Added to this difficulty is the archaic and technical language used in library catalogs and database subject headings (Bodi 111). These barriers make it difficult for time-pressed students to find what they need for their classes. While faculty are often ecstatic at not finding much or any information on their research topics, students become frustrated and opt for the Internet because it gives them the quantity they crave (Bodi 111). The preference for the Internet over the library is not limited to inexperienced researchers. One study found no real difference in library usage among freshman, sophomore, junior, and senior college students (Van Scoyoc and Cason 51). Although an OCLC study did find that 7 out of 10 students use the library's site for at least some of their research, 43% of those students who do not use the library's site for research do so because they think they can find better information elsewhere (OCLC 6). Millennial college students make heavy use of the Web in their class projects and research. A study that examined the bibliographies of student papers found that the number of citations for Web sites rose from an average of 11.3 per bibliography (or 9% of the total number of references per bibliography) in 1996 to 14.4 per bibliography (or 13% of the total number of references per bibliography) in 2001 (Davis 46). Web citations in student bibliographies peaked in 2000, with an average of 22% per bibliography. The decline in percentage is directly attributable to new restrictions placed by professors regarding the type and frequency of Web citations students were permitted to use (Davis 47). Davis found that faculty were not opposed to students using Web sites in their research, but that they now routinely apply restrictions on what and how many Web sources students may use in their papers (45). Most faculty agree that the Internet is an excellent source of information, but they are concerned that students are not able to properly evaluate the sources they have found (Herring 255). These concerns over Web sites and resource evaluation appear well founded. An OCLC study found that while two-thirds of students polled felt they could determine what sites were best to use, 58% of students believe that sites with advertisements are just as reliable as sites without advertisements (OCLC 4). A number of authors have attempted to determine the effectiveness of library instruction. In an oft-cited study, Lois Pausch and Mary Popp found that few critical assessments of library instruction exist in the literature. Most of what has been published are informal surveys of students that measure the students' satisfaction with a particular class (Pausch and Popp). Brettle reviewed the research on information skills training in the health sector during the time period 1995-2002 and found that many of the studies were poorly designed, executed, and reported (6). Further, Brettle found that many of the studies reviewed relied on subjective measures to test the efficacy of instruction rather than on validated instruments using objective measures. 
In 2006 Koufogiannakis and Wiebe undertook a review of the literature on teaching information literacy skills. Their findings report a lack of overall quality in the studies reviewed. Many of the published studies suffered from faulty reporting and failed to use a validated instrument. Twenty percent of the studies performed no statistical analysis (Koufogiannakis and Wiebe 19). A 2004 study conducted by Beile and Boote attempted to critically assess the effectiveness of library instruction using pre- and post-tests. While they found a statistically significant difference in test scores, the population was small (49 students) and consisted solely of graduate education majors (6). Most other studies, however, show that library instruction has a minimal impact on students' information seeking behavior. In her systematic review of the literature, Brettle wrote, "the results revealed very limited evidence to show that training does improve skills" (7). After reviewing the literature and performing a meta-analysis, Koufogiannakis and Wiebe reported only that library instruction was better than no instruction (19). Andrew Robinson and Karen Schlegl undertook a bibliometric analysis of student research papers. Their study found that library instruction had little impact on the types of sources cited. The students' choice of resources was most influenced by the instructors' directions to the students; when the instructors enforced penalties related to student grades, the students cited more scholarly sources (Robinson and Schlegl 280). In 2002 Emmons and Martin studied the effects of library instruction on an English writing class; they found a statistically significant increase in scholarly journal citations following library instruction (554). Their overall findings indicated that library instruction "made a small difference in the types of materials students chose and how they found them" (557-8). Aim The aim of this study was to examine the information seeking skills of undergraduate education majors at the University of Central Florida (UCF). Specifically, this study attempted to discern the types of sources (academic vs. non-academic) undergraduate education majors used to find information for their research. The study also sought to determine whether an association existed between library instruction sessions and the types of sources used. The research was funded with a $1,000 grant sponsored by the UCF Quality Enhancement Plan (<http://www.if.ucf.edu/>). Sample The University of Central Florida enrolled almost 49,000 students at the start of the Fall 2007 semester. Of those students, 3,605 were undergraduate education majors. The study used a convenience sample of 200 currently enrolled undergraduate education majors. Participants volunteered after seeing advertisements for the survey or after a Curriculum Materials Center (CMC) employee asked if they would like to take a short, online survey. An incentive of $5 was offered to all participating students. Those who agreed to participate were shown how to access the survey in the CMC. Once the survey was confirmed as complete by the principal investigator, participants each received $5. Instrumentation The survey consisted of 14 questions (Appendix A). The survey was administered online using Survey Monkey questionnaire software (<http://www.surveymonkey.com/>). 
The survey asked questions about four areas of information seeking behavior: • the research habits of students (questions 1, 10, and 11); • the ease of using the library's resources, and how important convenience is to the student in selecting resources (questions 2, 3, and 9); • where students find most of their research information (questions 4, 5, and 8); • evaluating sources (questions 6 and 7). Additional demographic questions asked participants about their class standing, the number of hours per day spent on the Internet, and the number of library instruction sessions attended. Results All 200 surveys were deemed usable, and no one from the original sample opted out of the research. Table 1 shows that the Internet was the predominant choice of almost three-fourths of the respondents for class-related research. Nearly 9 out of 10 used the Internet for personal research. Even though these students realized that library resources were more credible than Internet sources, they still chose to use Internet sources instead of academic library sources for both personal and class work. The question remains as to why these students would make that choice. Ease of access may be an answer, as may the students' high comfort level with the Internet. Table 3 addresses these ideas. While almost 90% of respondents felt that the library's resources were not hard to use, 78% were still more comfortable using the freely available Internet instead of the library's resources. Dishearteningly, 52% of the respondents based their decisions more on convenient access than on the authority of the resource. Effects of Library Instruction Another important question concerns the effect of library instruction on the students' choice of resources. A chi-square test of association was conducted to determine whether the proportion of undergraduate education majors who used academic sources in comparison to those who used non-academic sources varied depending on whether the students had taken at least one library instruction session. The null hypothesis (H0) states that the proportions are equal, while the alternative hypothesis (H1) states that they are not. The independent variable, library instruction, was assessed with question 14: "Not counting CMC tours, how many library instruction sessions have you attended?" Choices ranged from zero sessions to five or more. All the responses of zero (n=61) were grouped into the category "No Library Instruction." All the responses from one session to five or more (n=139) were grouped into the category "Library Instruction." The dependent variable, "Types of Sources Used," was assessed with the question "Which of the following sources do you use most in your research?" All the responses for "Internet Sites" were grouped into the category "Internet." All the responses for "Book and Academic Journal" were used for the category "Academic Sources." The responses for "I Use All These Sources Equally" were grouped together in the category "All Equally." No one selected the response "Newspapers or Popular Magazines." Table 4 presents the chi-square test of association. All observations were independent of each other, and all cells had at least five expected frequencies, so all assumptions for the chi-square test of association were met. 
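To make the test concrete, the following is a minimal Python sketch of a chi-square test of association with Cramer's V. The row totals match the study's groups (61 without instruction, 139 with instruction), but the individual cell counts are hypothetical, since the paper reports only the marginals and the summary statistics; for a 2 x 3 table the degrees of freedom are (2 - 1)(3 - 1) = 2.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: No Library Instruction (n=61), Library Instruction (n=139).
# Columns: Internet, Academic Sources, All Equally.
# Cell counts are illustrative only; the study does not publish them here.
observed = np.array([[42, 10, 9],
                     [88, 30, 21]])

chi2, p, dof, expected = chi2_contingency(observed, correction=False)

# Cramer's V = sqrt(chi2 / (N * (min(rows, cols) - 1)))
n = observed.sum()
cramers_v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))

print(f"chi2({dof}, N={n}) = {chi2:.3f}, p = {p:.3f}, Cramer's V = {cramers_v:.3f}")
print("All expected counts >= 5:", (expected >= 5).all())  # assumption check
```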
At an alpha level of .05, the type of information resource used for research was not statistically significantly related to whether the student attended library instruction sessions (Pearson χ²(2, N = 200) = 1.612, p = .447, Cramer's V = .090). Students who had attended a library instruction session were proportionally just as likely to use academic and non-academic sources as those students who had not attended a library instruction session. The measure known as 'effect size' evaluates the strength of the association being tested (Morgan, Reichert, and Harrison 15). It may be seen as the practical significance of a test result. In this study the Cramer's V value of .090 indicates a small effect size. Tables 5 and 6 illustrate the findings. Since the effect size is small, it can be thought of as having less practical significance to the field of information literacy and library instruction. Effect size considered with sample size determines power, which is the probability that a test will reject a false null hypothesis. A test with a small effect size might not generate enough power to detect statistical significance (Morgan, Reichert, and Harrison 16). Discussion Although this study is limited in that, without a true random sample and a larger sample size, the results cannot be generalized to the entire population, the results are nonetheless disappointing, if not surprising, for those interested in library instruction. The fact that students surveyed here performed most of their research, whether for class or personal reasons, on the freely available Web is supported by the findings of Griffiths and Brophy, Davis, and the Pew Internet report on the use of the library and the Internet by college students (Jones). Moreover, 79% of students surveyed stated that academic sources (e.g., books and journals) are more credible than the Internet, yet they still rely heavily on Internet sources for their research. Griffiths and Brophy concluded that students have difficulty using library resources, so they turn to the Internet, with which they are much more comfortable. This study found that while students did not find the library difficult to use, they were more comfortable using the freely available Internet. Some might argue that with so many library resources being online, the distinction between the Internet and library resources is blurred, and that students may have difficulty differentiating between the two. This may have been the case in this study, since definitions of "Internet resources" and "library resources" were not provided in the survey. However, personal experience at the library's reference desk and in library instruction sessions suggests that students do make the distinction between resources on the library's site and those available freely through a search engine such as Google. Additionally, while Google Scholar further erodes the separation of academic sources and the freely available Web, personal experience again suggests that undergraduate students are not using Google Scholar. Again, this may be due to students being unaware not only of the differences between academic and non-academic sources, but also of the appropriateness of using those sources. This study found no association between library instruction and the types of sources used by students. This is supported by the findings of Emmons and Martin and of Robinson and Schlegl. 
Furthermore, the studies by Davis and by Robinson and Schlegl found that instructor guidelines played a more significant role in student citations than did library instruction. This raises a crucial question as to how much students are learning about research from simply following the rules written in their class syllabi. If students are not citing Internet sources simply because they are told to use more academic sources, it is possible that they will revert to using the Internet when they are not specifically instructed to do so, and they would not have gained a deeper understanding of the critical importance of using academic sources. This is important, since almost 90% of the students in this study said they use the Internet as a primary tool for personal research. However, Beile and Boote found that the greatest increases in post-test scores occurred among students who had previous library instruction (6). A 2006 bibliometric study conducted by Wang found a statistically significant difference in the citations of students who had taken a for-credit library course as compared to citations listed by those students who had not taken the course. Those who had taken the course cited more scholarly sources (Wang 85). Further, Wang reviewed the guidelines set forth by the professors and found that none of them specified an academic penalty for having too many non-academic citations (Wang 87). This suggests that for-credit library classes or multiple library instruction sessions may prove more effective in changing students' information seeking behavior than the traditional "one-shot" library instruction class. These studies could have an important impact on how academic libraries approach library instruction. Libraries have long used the "one-shot" library instruction session, where a professor brings his/her class to the library for a session on how to use the library. While this approach does have some value as an introduction to the library for new students, perhaps it is time for libraries to seriously consider alternative practices. Academic libraries might be better served to invest their limited resources in for-credit library classes, mandatory multiple library instruction sessions, or in integrating librarians into the class curriculum. These changes in practice will not be easy. Not only would these approaches require more time and effort, but the devaluing or possible eradication of one-shot library instruction classes strikes at a core belief of academic librarians. While the vast majority of library instruction at the University of Central Florida Libraries consists of one-shot classes aimed mainly at freshman composition students, the library has made efforts to enhance the instruction program. In conjunction with Course Development and Web Services (CDWS), the library has created online information literacy modules that can be used by the teaching faculty in their online or face-to-face classes (<http://infolit.ucf.edu/faculty/>). These modules focus on different areas of research, and more are forthcoming. They include content, practice, and assessment, so an instructor can see how well students understand the information. The UCF Libraries also offer "embedded librarians" as an integral part of online classes. They answer questions, create tutorials, and work with the instructors on creating proper research assignments. During the seven academic years in which this service has been offered, UCF Librarians have been embedded in 187 classes reaching almost 5,600 students. 
Although no formal assessment of library skills has been made of students in classes with embedded librarians, further investigation is planned. Conclusion This study found no association between library instruction and the use of traditional academic library resources in student research. Academic libraries are currently investing staff and time in order to teach information literacy, and yet the truly important question of how to effectively change students' perception of research methodology remains unanswered. Are information literacy and its generic offshoot, library instruction, truly effective? Perhaps the solution lies outside the library, in the types of assignments students are given and how they are graded. Do libraries need to rethink and redesign how they organize and allow access to information? Or have we truly entered a new age of research where quantity, easy access, and keyword searching are more important than controlled vocabulary and peer review? This study makes no claim to answer these questions (no one study alone can), but it is important that, as the library profession moves forward, it develop a research agenda and theoretical foundation that will eventually answer these questions.
The relationship between healthy living-style behaviors and type-2 diabetes risk of students of health sciences The aim of this study was to determine the relationship between the healthy life-style behaviors of students and their risk of Type 2 Diabetes Mellitus, and also to compare the sub-dimensions of the Healthy Living-Style Behaviors Scale-II (HLBS-II) with anthropometry and general characteristics. A socio-demographic form, the HLBS-II and the Finnish Diabetes Risk Score (FINDRISC) were used, and anthropometric measurements were taken. With the increase in waist/height ratio, the physical activity sub-dimension of HLBS-II was affected (p<0.05). Medical check-up status affected every sub-dimension and the total score of HLBS-II (p<0.001). With the increase in the waist/hip ratio of female students, FINDRISC also increased (p<0.001). As the waist/height ratio increased, the mean FINDRISC scores also increased (p<0.001). Students with a BMI value ≥30 had higher FINDRISC scores (p<0.001). There is a negative relationship between the HLBS-II total score and its nutrition, self-actualization and stress management sub-dimensions on the one hand, and the FINDRISC scores of students of health sciences on the other. INTRODUCTION Diabetes is defined as a metabolic disease with a chronic course that occurs as a result of insufficiency in insulin secretion or in the use of insulin. This metabolic disease is based on a constantly high level of sugar in the blood 1. According to the TURDEP I and TURDEP II studies, conducted on approximately 25,000 people in 1997 and 2010 in Turkey, diabetes prevalence increased from 7.2% to 13.7% over a 12-year period 2,3. It is important for individuals to be able to understand health-related information and maintain their health, because diabetes is a disease that can be prevented and/or controlled before it occurs. Creating the correct perception and increasing awareness about the disease show that it is possible to curb the rate of increase in diabetes and all related complications 4. The main goal of the treatment of diabetic individuals should be to achieve glycemic control. In addition, other known risk factors, such as blood pressure and weight gain, should be monitored 5. In order to bring diabetic individuals' blood glucose levels to reference levels and to optimize their daily life activities, they should receive medical therapy and medical nutrition therapy and increase their physical activity 6. 
The basis of the healthy lifestyle choices and behaviors exhibited in adulthood is laid in childhood and adolescence 7. When young individuals start university life, a period that also includes adolescence, they must adjust to many changes that also affect their habits in adulthood. In this period, leaving the family home, adopting eating behaviors independent of the family, preferring food such as fast food over healthy food, inactivity, adjusting to university life, meeting new people, wanting to resemble one's peers, and an increasing tendency to use tobacco and tobacco products may pave the way for the emergence of many chronic diseases such as diabetes in the future, as well as causing many changes in individuals' private lives and healthy lifestyle behaviors 8-10. Important causes of diabetes include the social environment, individuals' lack of information and motivation, and an unhealthy understanding of lifestyle 11. The fact that university students are in a young age group may reduce the risk of diabetes, but the increase in obesity in recent years, due to students' sedentary and fast-paced lives, has caused the prevalence of Type 2 Diabetes Mellitus (T2DM) risk among university students to increase 12,13. Through the education they receive, students are expected to reflect these behaviors in their own lives, so that they gain healthy eating habits, recognize modifiable risk factors of diabetes, increase their physical activity, and make healthy lifestyle behaviors a habit. Health sciences students' application of healthy lifestyle behaviors to their own lives affects the lives of other people, both by increasing their quality of life and by making them role models for the society they live in 8,14,15. This study aimed to determine the relationship between the healthy lifestyle behaviors of health sciences students, who will play a key role in the future both in society and in health institutions, and their risk of developing T2DM. Study design and sampling This cross-sectional study was conducted at Marmara University Faculty of Health Sciences between November 2019 and May 2020. The sample size was calculated using the EpiInfo program. In this calculation, the expected frequency of the event was taken as 50%, the error level as 5% and the design effect as 2, and the sample size was determined as 648. To allow for losses that might arise during the research process, it was planned to invite 730 students to the study. The inclusion criterion for this study was being a registered student of the Faculty of Health Sciences during the study period. The exclusion criteria were pregnancy or lactation and a diagnosis of Type 1 or Type 2 Diabetes Mellitus prior to the study. Measures The data were collected by the researchers during face-to-face interviews. Participants completed a socio-demographic form, the Healthy Living-Style Behaviors Scale II (HLBS-II) and the Finnish Diabetes Risk Score (FINDRISC) form. The Healthy Living-Style Behaviors Scale II: HLBS-II was prepared by Walker et al. 
in 1987 and renewed in 1996 16. The scale measures health-promoting behaviors associated with an individual's healthy lifestyle, such as healthy eating, regular physical activity, positive relationships and reducing stress. The scale consists of 52 items in total and has 6 sub-factors: health responsibility, physical activity, nutrition, self-actualization, interpersonal support and stress management. The overall score of the scale gives the healthy lifestyle behaviors score. All items of the scale are positive. The rating is a 4-point Likert scale: never (1), sometimes (2), often (3), regularly (4). The lowest possible score for the entire scale is 52 and the highest is 208; higher scores are interpreted as better healthy lifestyle behavior. In Turkey, a validity and reliability study was carried out by Bahar and colleagues; the Cronbach alpha coefficient of the scale is 0.92, indicating a high degree of reliability. The reliability coefficients of the sub-dimensions of the scale are: health responsibility 0.77, physical activity 0.79, nutrition 0.68, self-actualization 0.79, interpersonal support 0.80, stress management 0.64 17. The Finnish Diabetes Risk Score: FINDRISC was developed in 2003 by Lindström and Tuomilehto to measure the 10-year risk of developing T2DM in Finland 18. FINDRISC is also used by the International Diabetes Federation, and its Turkish translation was made by the Turkey Endocrinology and Metabolism Society. It is recommended for research on the risk of developing diabetes over the following 10 years. FINDRISC consists of 8 questions. When the scores obtained to determine the diabetes risk of individuals are added together, those who score less than 7 points are considered to have a "low risk", 7-11 points a "mild risk", 12-14 points a "medium risk", 15-20 points a "high risk" and more than 20 points a "very high risk" 6. Evaluation of anthropometric measurements All anthropometric measurements were carried out by the researchers at the faculty. The height of the students was measured with a fixed height meter with 0.5 cm intervals; the measurements were taken without shoes. For body weight, a bioelectric impedance analysis device (Inbody 270 portable) was used. Students were asked to remove all heavy clothing and shoes before stepping on the device. The device was set to -1.0 kg for the remaining clothes. Waist circumference (WC) was measured after normal exhalation, with an inflexible tape at the umbilicus level and without clothes in the area 19, and hip circumference was measured around the largest part of the hips and recorded. Body mass index (BMI) was calculated as weight (kg) divided by height (m) squared and classified into four groups according to the World Health Organization: underweight if the BMI was <18.5 kg/m², normal if it was 18.5-24.9 kg/m², overweight if it was 25.0-29.9 kg/m², and obese if it was ≥30.0 kg/m² 20. 
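For illustration, the scoring and classification rules described above can be written out directly. The sketch below implements the HLBS-II total, the WHO BMI classes and the FINDRISC risk bands exactly as stated in the text; the function names and example values are ours, not part of the instruments themselves.

```python
def hlbs_total(item_scores):
    """Sum of the 52 HLBS-II Likert items (1=never ... 4=regularly); range 52-208."""
    assert len(item_scores) == 52 and all(1 <= s <= 4 for s in item_scores)
    return sum(item_scores)

def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def who_bmi_class(b):
    """WHO classification used in the study."""
    if b < 18.5:
        return "underweight"
    elif b < 25.0:
        return "normal"
    elif b < 30.0:
        return "overweight"
    return "obese"

def findrisc_band(score):
    """FINDRISC 10-year T2DM risk bands as described above."""
    if score < 7:
        return "low"
    elif score <= 11:
        return "mild"
    elif score <= 14:
        return "medium"
    elif score <= 20:
        return "high"
    return "very high"

print(who_bmi_class(bmi(70, 1.75)))  # 70 / 1.75**2 = 22.9 -> "normal"
print(findrisc_band(13))             # -> "medium"
```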
Statistical analysis
The data were evaluated statistically using the SPSS (Statistical Package for the Social Sciences) 28.0 package program. The Kolmogorov-Smirnov Z test was used to determine whether the mean scale scores were compatible with the normal distribution. Spearman correlation was used to determine the relationship between scale scores (sub-dimensions of HLBS II and FINDRISC); parametric (independent t-test, one-way ANOVA) or non-parametric tests (Mann-Whitney U test, Kruskal-Wallis test) were used to compare scale scores with independent variables. Statistical significance was accepted as p<0.05 in all analyses.

RESULTS and DISCUSSION
Of the 730 students invited, 9 were excluded for reasons such as not meeting the inclusion criteria, so the study started with 721 students. Five students were later excluded due to missing data. Overall, 716 (98.1%) students in the 2nd, 3rd and 4th grades of the Departments of Nutrition and Dietetics, Physiotherapy and Rehabilitation, Midwifery, Health Management and Nursing completed the study (Figure 1). Considering the risk of developing T2DM in the next 10 years, the majority (85.9%) of the participating students were in the low-risk group and only a few (0.7%) were in the high-risk group. In a study in which Çolak used FINDRISC, 72% of university students had a low risk of T2DM, 24.7% a mild risk, 2.8% a moderate risk and 0.5% a high risk; these results are similar to our findings 21.

The items of the FINDRISC scale and the distribution of students according to these items are shown in Table 2. Since all the students were under the age of 45, they received 0 points from the age item. Only 2.1% of the students had a BMI above 30, and 3.3% had a waist circumference above the reference values and got 3 points in these categories (see Table 1 for the FINDRISC category distribution of students). According to the data obtained by comparing the anthropometric measurements and the FINDRISC scores, presented in Table 3, statistically significant differences were found between T2DM risk scores and the waist circumference of both female (p<0.001) and male students (p=0.01). Students with a BMI of 30 and above had statistically higher FINDRISC scores (p<0.001). Recent studies on the waist/height ratio emphasize that this ratio is a better measure for determining cardiometabolic and T2DM risk than BMI, waist circumference or the waist/hip ratio [22][23][24]. In this study, a statistically significant difference was found between the students' waist/height ratios and their diabetes risk scores: as the waist/height ratio increased, the mean FINDRISC scores also increased.

In Gezer's study on the risk of diabetes among nursing students aged 19-24, the proportion of female students in the low-risk group for T2DM was 65.5%, while the proportion of male students in the same risk group was 77.0% 22. In our study, no relationship was found between the students' gender and their diabetes risk scores.

The relationship between the general characteristics of the participants and their HLBS II scores, shown in Table 4, was also examined.
The average score of the health responsibility sub-dimension was higher in female students with a waist circumference above 88 cm, and the average score of the interpersonal support sub-dimension was higher in those with a waist circumference below 80 cm (p=0.001 and p=0.037, respectively). The average physical activity sub-dimension score of the nursing students was higher than that of the other departments (p=0.021), and the nutrition and dietetics students' average nutrition sub-dimension score was higher than that of the other departments (p<0.001). Also, the mean nutrition sub-dimension score of third-grade students was statistically higher than that of the other grades (p=0.042) (not shown in table).

The correlations between the sub-dimensions of HLBS II and FINDRISC scores are shown in Table 5. In a study conducted with only female university students, the physical activity sub-dimension score of HLBS II was found to be the lowest of all sub-dimensions 25. In another study, male university students' physical activity and stress management sub-dimension scores of HLBS II were significantly higher than those of female students 26. Similarly, in our study the physical activity sub-dimension scores of male students were statistically higher than those of female students.

One study found that the average self-actualization, physical activity, nutrition, interpersonal support and total HLBS II scores of the group with a normal waist-to-height ratio (0.4-0.5) were significantly higher than those of students with a waist-to-height ratio lower than 0.4 25. Similarly, in our study the physical activity sub-dimension scores of HLBS II were statistically higher in students with a normal waist-to-height ratio (0.4-0.6). While some studies found no difference between the nutrition sub-dimension and BMI 15,27, Alkan et al. found that students with a normal BMI had higher nutrition sub-dimension scores than underweight students 25. In our study, the nutrition sub-dimension score was significantly higher in students in the normal and overweight BMI ranges.

In the current study, statistically significant differences were found between the students' mothers' educational status and the health responsibility, nutrition, self-actualization and interpersonal support sub-dimensions. Statistically significant differences were also found between the fathers' educational status and the nutrition and interpersonal support sub-dimensions and the total HLBS II score. In a study conducted in Mexico, as the mothers' educational level increased, the mean scores in the nutrition, physical activity, stress management and interpersonal support subscales and the total HLBS II score increased significantly 28. In the study of Tuğut and Bekar, which examined the health perception and healthy lifestyle behaviors of university students, the educational status of mothers and fathers was found to affect students' health perception 29. These results support our findings. Similar studies have reported that students mostly had three main meals 30,31.
In the study conducted by Mazıcıoğlu and Öztürk with third- and fourth-year university students, 48.9% consumed three meals a day, 24.8% consumed fewer than three meals and 26.1% consumed more than three meals a day 32. In our study, 51.96% of the students had three main meals, while 47.49% had fewer than three meals and 0.55% had more than three meals a day. Significant differences were found between the students' main meal consumption and the health responsibility, nutrition, self-actualization and interpersonal support subscales and total HLBS II scores. The average HLBS II score of those who consumed more than three meals was higher than that of those who consumed three meals or fewer. The fact that the majority of the students in this study consumed three or more meals may be because the study was conducted in a faculty of health sciences, where awareness of this issue is high.

In our study, statistically significant differences were also found between the students' medical check-up status and the health responsibility, physical activity, nutrition, self-actualization, interpersonal support and stress management sub-dimensions and the HLBS II total score. The average HLBS II score of the students who had medical check-ups was higher than that of the students who did not. In the study conducted by Cihangiroglu and Deveci with health school students, as the students' evaluation of their own health status improved, the total HLBS II score and the mean health responsibility, physical activity and stress management scores also increased 15. Similarly, Ayaz and colleagues reported a significant positive relationship between the importance given to health and the self-actualization, nutrition and stress management sub-dimensions and HLBS II scale scores 33. The students' fulfillment of these attitudes and behaviors and their high scores suggest that they care about their health: they take responsibility for their own care, monitor their own health, have regular medical check-ups, pay attention to the frequency and order of medical controls, and their behaviors in maintaining and improving health are sufficient.

The fact that the study was conducted at a single university and that female participants greatly outnumbered males can be counted among the limitations of the study. In addition, since the health awareness of students in health-related departments is high, similar studies should be conducted with students from other departments.

In conclusion, this student-based study indicates that healthy lifestyle behaviors have an important impact on the risk of type 2 diabetes mellitus. Students' BMI, waist/height ratio, waist-to-hip ratio and waist circumference affect their FINDRISC scores. Gender, parents' educational levels, the number of main meals and having medical check-ups affect their HLBS II scores. Moreover, the sub-dimensions of HLBS II (especially nutrition, self-actualization and stress management) can affect the FINDRISC total score. Taken together, our findings indicate that the risk of developing T2DM may be low but is still present in health sciences students, especially in terms of anthropometric measurements and socio-demographic characteristics.
STATEMENT OF ETHICS
This study was approved ethically by the Marmara University Faculty of Health Sciences Non-Invasive Clinical Studies Ethics Committee (protocol no. 31.10.2019/103), and the research was conducted following the principles stated in the Helsinki Declaration.

Figure 1. Modified CONSORT flow diagram for a single-arm, nonrandomized study.
Table 1. General characteristics and anthropometric measurements of students (n=716).
Table 3. Comparison of anthropometric measurements and FINDRISC Type 2 Diabetes Risk Scores (n=716).
Table 4. Comparison of general characteristics and anthropometric measurements of students and sub-dimensions of the Healthy Living-Style Behaviors Scale (n=716).
Table 5. Relationship between sub-dimensions of the Healthy Living-Style Behaviors Scale and FINDRISC Type 2 Diabetes Risk Assessment (n=716).
2024-01-11T16:04:44.633Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "a41a776933b85ed80eb99da2c5fac124a71a6c46", "oa_license": null, "oa_url": "https://doi.org/10.23893/1307-2080.aps6214", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "beb577f3a1009d82de7cf91bc5aec3bc9cf390e8", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
220543643
pes2o/s2orc
v3-fos-license
COVID19 and increased mortality in African Americans: socioeconomic differences or does the renin angiotensin system also contribute?

Introduction
The dawn of the new decade is marked by the emergence of the novel coronavirus SARS-CoV-2, whose spread has resulted in the COVID-19 pandemic, which has already affected millions of individuals and caused hundreds of thousands of deaths worldwide [1, 2]. While the pandemic situation is constantly evolving, alarming signals have arisen during the past few weeks from the United States of America, which now represents the world's most affected country, as disproportionately higher infection and mortality rates were reported in African-Americans compared to other races in some states [3, 4]. After these initial reports raised public awareness, most states gradually started sharing data on confirmed cases and deaths by race. Most of them have reported higher infection rates in African-Americans, although data on confirmed COVID-19 cases by race remain largely incomplete [5]. Furthermore, based on current estimates, African-Americans overall suffer a 2.4 and 2.2 times higher mortality rate when compared to Whites and to Asians or Latinos, respectively [6, 7].
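The kind of comparison behind such figures amounts to relating a group's share of deaths to its share of the population. The minimal sketch below (the helper function is ours and purely illustrative) makes that arithmetic explicit, using the Illinois and Michigan shares quoted in the next paragraph.

```python
def representation_ratio(deaths_share: float, population_share: float) -> float:
    """Ratio of a group's share of COVID-19 deaths to its share of the
    state population; values above 1 indicate overrepresentation."""
    return deaths_share / population_share

# Figures quoted in the text: African-Americans are 14% of the population
# in both states, but 36% (Illinois) and 43% (Michigan) of confirmed deaths.
print(round(representation_ratio(0.36, 0.14), 1))  # Illinois: ~2.6
print(round(representation_ratio(0.43, 0.14), 1))  # Michigan: ~3.1
```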
The first thing that needs to be addressed is whether this phenomenon is real or whether COVID-19 simply spread in states with relatively larger African-American populations. Available data suggest that the mortality rate is indeed higher in most cases, even when adjusted for the African-American share of each state's population. In Illinois, for instance, African-Americans make up 14% of the population but account for 36% of confirmed COVID-19 deaths. Similarly, in Michigan, 43% of deaths concerned African-Americans, who represent 14% of the state's population [7]. However, more data are urgently needed on this topic: first from every state, and second at the county level, since the population is not evenly distributed by race within each state. The higher mortality rate in African-Americans raises questions about the underlying mechanisms behind these racial disparities. Several known mechanisms might be implicated, including increased comorbidities, inequalities in healthcare access, and socioeconomic factors. However, we propose that another mechanism might also be implicated: the renin-angiotensin system.

Comorbidities
Cumulative evidence from China and Italy, two countries at the epicenter of the COVID-19 pandemic before the US, suggests that, besides age, comorbidities such as cardiovascular disease (mainly ischemic heart disease and stroke), hypertension, diabetes mellitus, chronic respiratory disease and atrial fibrillation increase the risk of mortality among affected patients [8, 9]. Indeed, hypertension increases the risk of developing acute respiratory distress syndrome (ARDS) by 82%, while diabetes mellitus augments the corresponding risk by 134% [10]. Hypertension is more prevalent among African-Americans than among other races, even at earlier ages, while control rates remain poorer [11, 12]. Besides, it has previously been demonstrated that black patients with diabetes mellitus have significantly greater odds of insufficient glycemic control compared to non-Hispanic whites [13], and they also have a significantly higher rate of in-hospital complications compared to nondiabetic white patients with hyperglycemia. However, no statistically significant difference in adverse in-hospital outcomes is observed when white and black patients with diabetes are compared [14]. Of note, African-American women have been shown to have a greater risk of stroke, heart failure and end-stage renal disease (ESRD) compared to white women, whereas African-American men have a greater risk of heart failure and ESRD, but a lower risk of coronary artery disease, compared to white men [15]. Collectively, this evidence might partially explain the greater burden of the COVID-19 pandemic among black patients.

Socioeconomic factors and healthcare access
Socioeconomic factors could also play a significant role. While billions of people worldwide are encouraged to telework, for many African-Americans this is not a matter of choice, since they work in essential industries, with less than 20% able to work from home, raising the likelihood of exposure and infection [16]. Social distancing has so far been recognized as the most effective measure for attenuating spread [17].
However, family structure differs in African-Americans, as family members share closer bonds and are more likely to share accommodation, resulting in close contact between the elderly and the young, who are also less likely to conform to social distancing. In fact, the spread of the disease in China and Europe during the previous months led several people to assume that African-Americans were "immune" to COVID-19, resulting in significant misinformation on this issue, with obvious consequences. Indeed, a recent report revealed that many African-Americans lack critical knowledge about COVID-19 and have not changed their daily routine [18]. Given that unemployment and uninsurance rates for African-Americans are higher than average, their access to healthcare facilities is significantly limited, probably resulting in underdetection of less serious cases [19]. Another significant factor is the relatively higher mistrust of African-Americans in the healthcare system [20]. Limited access to, combined with mistrust of, the healthcare system might result in significant delays in seeking assistance and thus in increased mortality rates in African-Americans. However, Latinos share some of the above-mentioned socioeconomic characteristics, at least partly, although early reports suggest that they exhibit lower mortality rates than African-Americans, as already mentioned [7].

Renin-angiotensin system
We should also highlight the potential role of renin-angiotensin-aldosterone system (RAAS) blocker use among black patients. Low plasma renin activity, associated with a salt-sensitive phenotype, has been documented in black patients compared to white individuals [21]. The hallmark ALLHAT trial, which enrolled a significant proportion of black patients, demonstrated for the first time the superiority of chlorthalidone over lisinopril in the prevention of surrogate endpoints, namely stroke, combined coronary artery disease, cardiovascular disease and heart failure [22]. Thus, according to the 2017 American College of Cardiology/American Heart Association Hypertension Guidelines, initial antihypertensive treatment in black adults with hypertension but without heart failure or chronic kidney disease, including those with diabetes mellitus, should include a thiazide-type diuretic or a calcium-channel blocker (CCB) [23]. However, a recent meta-analysis pooling data from a total of 38,983 hypertensive black patients did not reveal a significant difference between RAAS blockers and the other antihypertensive drug classes in the odds of hard endpoints, except for stroke: patients treated with RAAS blockers had over 50% higher odds of stroke compared to those treated with diuretics or CCBs [24]. In any case, the fact is that the use of RAAS inhibitors is less common in African-Americans than in Caucasians [25]. Based on the pathophysiologic background underlying SARS-CoV-2 infection, several preclinical studies raised concerns about the safety of RAAS blockers in patients with documented infection; however, there is no hard evidence to support the discontinuation of these agents, especially in high-risk patients [26].
For this reason, several scientific societies, such as the European Societies of Hypertension and Cardiology (ESH, ESC), the Heart Failure Society of America, the American College of Cardiology and the American Heart Association (HFSA/ACC/AHA), issued statements advising the continuation of RAAS blockers for indications known to be beneficial [27][28][29]. Furthermore, recent evidence suggests that their use is not associated with an increased risk of COVID-19 or of in-hospital mortality and has even been associated with improved survival, although these findings have limitations [30][31][32]. Of note, no differences have been reported between ACE inhibitors and angiotensin receptor blockers regarding major clinical outcomes (severity of SARS-CoV-2 infection, mortality) [33, 34]. Therefore, it remains to be elucidated whether the lower usage rates of RAAS blockers among black patients could partially contribute to the observed racial disparity in the severity of SARS-CoV-2 infection.

It has been demonstrated that SARS-CoV-2 invades human alveolar epithelial cells through the angiotensin-converting enzyme 2 (ACE2) receptor, leading to downregulation of ACE2 expression and rapid progression to ARDS [35]. It would therefore be interesting to know whether black patients exhibit greater genetic susceptibility to SARS-CoV-2, even though the genetic basis of ACE2 expression in different populations remains largely unknown [36]. What is more, specific ACE2 gene polymorphisms have been correlated with essential hypertension, atrial fibrillation, major adverse cardiovascular events, reduced left ventricular ejection fraction and increased left ventricular mass, mainly in Asian and Caucasian populations [37][38][39][40]. One could therefore speculate that there is a vicious circle between increased susceptibility to SARS-CoV-2, cardiovascular comorbidities and the eventual development of severe infection, although this remains to be proven. Unfortunately, there are no data so far on the interconnection between ACE2 polymorphisms and cardiovascular disease development in African-American populations. Existing gene databases, such as the Million Veteran Program, which includes over 825,000 Veteran participants, could serve as a valuable tool in this direction [41].

Conclusion
Undoubtedly, the COVID-19 pandemic will continue to strain health care systems worldwide. As our understanding of the pathophysiologic mechanisms implicated in this disease evolves, we may be able to better identify the demographic, genetic, behavioral and health factors associated with increased mortality in specific vulnerable groups, such as African-Americans.

Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
2020-07-16T15:08:41.564Z
2020-07-15T00:00:00.000
{ "year": 2020, "sha1": "f82232382c473e2389904338de932e99886589e8", "oa_license": null, "oa_url": "https://www.nature.com/articles/s41371-020-0380-y.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "f82232382c473e2389904338de932e99886589e8", "s2fieldsofstudy": [ "History" ], "extfieldsofstudy": [ "Medicine" ] }
129945262
pes2o/s2orc
v3-fos-license
Boundary Conditions and Vacuum Fluctuations in $\mathrm{AdS}_4$

Initial conditions given on a spacelike, static slice of a non-globally hyperbolic spacetime may not define the fates of classical and quantum fields uniquely. Such lack of global hyperbolicity is a well-known property of the anti-de Sitter solution and led many authors to question how it is possible to develop a quantum field theory on this spacetime. Wald and Ishibashi took a step toward healing that causal issue when considering the propagation of scalar fields on AdS. They proposed a systematic procedure to obtain a physically consistent dynamical evolution. Their prescription relies on determining the self-adjoint extensions of the spatial component of the differential wave operator. Such a requirement leads to the imposition of a specific set of boundary conditions at infinity. We employ their scheme in the particular case of four-dimensional AdS spacetime and compute the expectation values of the field squared and the energy-momentum tensor, which then bear the effects of those boundary conditions. We are not aware of any law of nature constraining us to prescribe the same boundary conditions to all modes of the wave equation. Thus, we formulate a physical setup in which one of those modes satisfies a Robin boundary condition, while all others satisfy the Dirichlet condition. Due to our unusual settings, the resulting contributions to the fluctuations of the expectation values do not respect AdS invariance. As a consequence, a back-reaction procedure would yield a non-maximally symmetric spacetime. Furthermore, we verify the violation of the weak energy condition as a direct consequence of our prescription for the dynamics.

Introduction
One of the most remarkable outcomes of string theory was the proposition of the AdS/CFT correspondence [1]. It conjectures that a theory of quantum gravity on n-dimensional AdS has an equivalent conformal quantum field theory without gravity living on the (n − 1)-dimensional conformal boundary of AdS. Accordingly, applications to high energy and condensed matter physics appeared within the efforts to test the limits of this conjecture, placing the anti-de Sitter spacetime under the scientific spotlight. Although most of the developments in AdS rely on string theory techniques, in a recent work [2] the authors focused on studying semiclassical properties of the spacetime. Using the mathematical apparatus of Quantum Field Theory (QFT) in curved spaces, they found the fluctuations of the expectation values of the energy-momentum tensor and of the field squared in AdS_n. However, they did not discuss in depth the implications of the causal structure of the spacetime, i.e., the effects of the lack of global hyperbolicity.

Since AdS has a conformal boundary, we may not be able to determine much about the history of a physical quantity without specifying its behavior at infinity. Such a circumstance poses a fundamental issue for the quantization procedure: solutions of the wave equation are not uniquely defined by initial conditions in AdS, i.e., the Cauchy problem is not well-posed. Thus, unless we give extra information at the conformal boundary, the lack of predictability makes it impracticable to build a quantized field whose dynamical evolution comprises the entire history of the spacetime.

Avis, Isham, and Storey [3] were the first to address the causal pathology of AdS when solving field equations.
They developed QFT on AdS4 by regulating, by hand, the information leaving or entering the spacetime. Their approach imposes boundary conditions at spatial infinity in order to control whether information flows through (or is reflected by) the conformal boundary. Even though Avis et al. provide physically consistent solutions to the wave equation, works by Wald [4] and Ishibashi [5, 6] reveal that a broader category of boundary conditions can be employed to obtain a physical dynamical evolution.

In [5], the authors present a prescription for the dynamics of fields in general non-globally hyperbolic spacetimes on the grounds of physical consistency. In order to fulfill some reasonable physical requirements (explained later), they argue that the spatial component of the differential wave operator must be self-adjoint. Moreover, in [6], they show that the prescription for dynamics in AdS translates into specifying boundary conditions at the conformal boundary. While Kent and Winstanley, in [2], impose the Dirichlet boundary condition at infinity, perhaps without realizing it they neglect an entire set of non-equivalent dynamical outcomes. According to Ishibashi and Wald [6], those outcomes correspond to the various boundary conditions that one could have specified at infinity.

In this paper, we study physical effects that may arise due to non-Dirichlet boundary conditions at the conformal boundary. We investigate those effects by computing the vacuum fluctuations of the expectation values of the quadratic field and the energy-momentum tensor for conformally coupled scalar fields in AdS4. We keep Ref. [2] as a basis for our results and shall return to it for comparison.

We have organized this article as follows. In Sec. 1, we briefly review some fundamental aspects of the anti-de Sitter solution. In Sec. 2, we display the systematic procedure, first presented by Wald and Ishibashi, that describes the dynamics of scalar fields in non-globally hyperbolic spacetimes such as AdS. With that scheme in hand, we show in Sec. 3 the implications their prescription has for scalar fields propagating on AdS. Our next step is to build the proper Green's functions in Sec. 4 and employ them in the computation of the renormalized quantities of interest, namely the fluctuations of the expectation values of the field squared and the energy-momentum tensor, both shown in Sec. 5. Finally, we discuss our results in Sec. 6.

Anti-de Sitter spacetime
Surfaces of constant negative curvature are well-known in geometry and comprise the set of hyperbolic spaces. In the context of General Relativity, the equivalent of those spaces is the n-dimensional anti-de Sitter space, which appears as a solution of the Einstein equations with a negative cosmological constant (Λ < 0) in the absence of matter and energy. Setting Λ := −(n−1)(n−2)/(2H²), we may write the Einstein equations as

$$R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = 0. \qquad (1)$$

The outcome is an n-dimensional maximally symmetric pseudo-Riemannian metric defined over a Lorentzian manifold with constant negative curvature, i.e., the AdS_n spacetime. In a suitable set of parametrized coordinates {x^µ}*, the line element for the induced metric g_µν on AdS_n is

$$ds^2 = \frac{H^2}{\cos^2\rho}\left(-d\tau^2 + d\rho^2 + \sin^2\rho\, d\Omega^2_{n-2}\right), \qquad (2)$$

where dΩ²_{n−2} is the line element on a unit (n−2)-sphere. (The displayed forms of Eqs. 1 and 2 are reconstructed here from the surrounding definitions.)

Topology
We may understand AdS_n as an isometric embedding of a single-sheeted n-dimensional hyperboloid in an (n+1)-dimensional flat space with metric diag(−1, 1, ..., 1, −1).
Timelike curves in AdS are transverse sections of the hyperboloid, and they are always closed. The periodicity of the timelike coordinate, τ, means that, given a point in spacetime, we can return to it by traveling along a closed timelike curve of length 2π in τ. Accordingly, the topology of AdS_n becomes apparent, namely S¹ × R^{n−1}, which is compatible with the existence of closed timelike curves. Thus, unphysical events can take place in the spacetime, such as a particle returning to the same position through a periodic motion in time.

* The radial coordinate, ρ, is defined over the interval [0, π/2). The polar and azimuthal coordinates on the unit (n−2)-sphere are θ_j (j = 1, ..., n−3) and ϕ := θ_{n−2}, respectively, each satisfying 0 ≤ θ_j ≤ π and 0 ≤ ϕ < 2π. The timelike coordinate, τ, ranges from −π to π.

Causal structure
Wald remarks in [7] that observers following closed timelike geodesics would have no difficulty altering past events, hence breaking causality. In an attempt to solve this primary issue, we can 'unwrap' the hyperboloid along the timelike direction and patch unwrapped hyperboloids together one after the other. In other words, we construct a spacetime spatially identical to AdS but extended in time: the temporal coordinate no longer ranges from −π to π but from −∞ to ∞. We refer to this procedure as taking the universal covering of AdS, and to the resulting spacetime as CAdS.

Even though the unwrapping of AdS prevents the existence of closed timelike curves, another fundamental causality issue remains, namely the lack of predictability associated with fields propagating on the spacetime. Indeed, no Cauchy hypersurfaces exist in AdS (or CAdS), which makes it a non-globally hyperbolic spacetime. The Cauchy problem is not well-posed, yielding non-unique dynamics for a given set of initial conditions. We can understand this scenario as a result of information leaking through the spatial infinity of the spacetime, i.e., flowing in from (or out through) the boundary. In order to resolve this pathological behavior, we discuss in the next sections how to adequately address the causality issues associated with field equations in non-globally hyperbolic spacetimes.

Scalar fields in non-globally hyperbolic static spacetimes
An extensive literature (see, for instance, [8] and references therein) provides a complete guide to QFT in curved spaces and conducts us through a quantization procedure generalized from that of QFT in Minkowski spacetime. Nevertheless, most of it was developed in a category of spacetimes whose causal structure is thoroughly well-defined, namely globally hyperbolic spacetimes. Indeed, as discussed previously, if a spacetime is not globally hyperbolic, then basic field equations might not have causal solutions, which jeopardizes the quantization of fields. In what follows, we use works by Wald [4] and Ishibashi [5, 6] to prescribe the appropriate dynamics of scalar fields in non-globally hyperbolic spacetimes.

Let us consider a static spacetime (M, g_µν), which admits the following decomposition of its metric [9]:

$$ds^2 = -V^2 dt^2 + h_{ij}\, dx^i dx^j. \qquad (3)$$

In Eq. 3, h_ij is the metric induced on a hypersurface Σ orthogonal to a given timelike Killing field τ^µ of the metric, and we define V² = −τ^µ τ_µ. In this particular case, the Klein-Gordon equation,

$$\left(\nabla^\mu \nabla_\mu - m^2\right)\phi = 0, \qquad (4)$$

reduces to

$$\frac{\partial^2 \phi}{\partial t^2} = -A\phi, \qquad (5)$$

where A = −V D^i (V D_i) + m²V² is the spatial component of the wave operator, and D_i is the covariant derivative on a spatial slice Σ. (Eqs. 3-5 are reconstructed here in the standard form of Wald's construction.)
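Since the displayed equations above were reconstructed from context, a quick symbolic sanity check is worthwhile. The sketch below is ours, not part of the paper: it verifies, for n = 4, that the line element of Eq. 2 solves Eq. 1 with Λ = −3/H², i.e., that R_µν = −(3/H²) g_µν.

```python
import sympy as sp

tau, rho, th, ph, H = sp.symbols('tau rho theta phi H', positive=True)
x = [tau, rho, th, ph]

# Reconstructed AdS4 line element of Eq. 2:
# ds^2 = (H^2 / cos^2 rho) * (-dtau^2 + drho^2 + sin^2 rho dOmega_2^2)
f = H**2 / sp.cos(rho)**2
g = sp.diag(-f, f, f*sp.sin(rho)**2, f*sp.sin(rho)**2*sp.sin(th)**2)
ginv = g.inv()

def Gamma(a, b, c):
    # Christoffel symbols of the second kind, Gamma^a_{bc}
    return sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                           - sp.diff(g[b, c], x[d])) for d in range(4))/2

def Ricci(a, b):
    # R_ab = d_c Gamma^c_ab - d_b Gamma^c_ac
    #        + Gamma^c_cd Gamma^d_ab - Gamma^c_bd Gamma^d_ac
    return sum(sp.diff(Gamma(c, a, b), x[c]) - sp.diff(Gamma(c, a, c), x[b])
               + sum(Gamma(c, c, d)*Gamma(d, a, b)
                     - Gamma(c, b, d)*Gamma(d, a, c) for d in range(4))
               for c in range(4))

# Vacuum Einstein equations (Eq. 1) with Lambda = -3/H^2 are equivalent to
# R_ab = -(3/H^2) g_ab; the residual below should be the zero matrix.
residual = sp.Matrix(4, 4, lambda a, b: sp.simplify(Ricci(a, b) + 3*g[a, b]/H**2))
print(residual)
```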
Wald points out in [4] that A is an operator defined on a Hilbert space H = L²(Σ), with domain D(A) = C₀^∞(Σ), whose self-adjointness properties are the key to examining the dynamical evolution appropriately. An extensive literature on Functional Analysis (e.g., see [10, 11]) discusses the properties of such operators and presents a systematic procedure, credited to Weyl and von Neumann, for obtaining their self-adjoint extensions.

It can be easily checked that (A, D(A)) defined above is symmetric. For such a symmetric operator, we denote by (A†, D(A†)) its adjoint. Symmetry of A implies that A† acts as A on D(A); however, we may have D(A) ≠ D(A†), in which case A is not self-adjoint, and it may then still be possible to find self-adjoint extensions of A. In order to find these extensions, let us define the deficiency subspaces of A, denoted N_± ⊂ H, by

$$N_\pm = \left\{ \Phi \in D(A^\dagger) :\ A^\dagger \Phi = \pm i\lambda\, \Phi,\ \lambda > 0 \right\}, \qquad (6)$$

and the deficiency indices as n_± = dim(N_±). There are three cases to be considered:
(i) If n₊ ≠ n₋, then A has no self-adjoint extensions.
(ii) If n₊ = n₋ = 0, then A is essentially self-adjoint, and its unique self-adjoint extension is obtained by taking the closure, Ā, of A.
(iii) If n₊ = n₋ = n ≥ 1, then infinitely many self-adjoint extensions of A may exist. They are in one-to-one correspondence with the isometries between N₊ and N₋, parametrized by an n × n unitary matrix U.

Certainly, the third case is more complex than the others, and we must follow a method for obtaining the self-adjoint extensions (see [11] for a proper description). They are given by A_E, with E a parameter labeling the extension, defined by

$$D(A_E) = \left\{ \Phi = \Phi_0 + \Phi_+ + U_E \Phi_+ :\ \Phi_0 \in D(\bar{A}),\ \Phi_+ \in N_+ \right\} \qquad (7)$$

and

$$A_E \Phi = \bar{A}\Phi_0 + i\lambda\,\Phi_+ - i\lambda\, U_E \Phi_+ \qquad (8)$$

for all Φ ∈ D(A_E). This procedure can always be followed to determine whether an operator has self-adjoint extensions and to identify them, in case they exist.

In particular, Wald [4] proposes that there exists a set of solutions of the wave equation 5 associated with each self-adjoint extension: given well-posed initial conditions for the Cauchy problem, namely (φ, ∂_t φ)|_{t=0} = (φ₀, φ̇₀), the corresponding evolution is

$$\phi_t = \cos\left(A_E^{1/2}\, t\right)\phi_0 + A_E^{-1/2} \sin\left(A_E^{1/2}\, t\right)\dot{\phi}_0. \qquad (9)$$

It is straightforward to notice that to each extension A_E there corresponds a dynamical evolution of the form of Eq. 9. Consequently, the dynamics of the field is not uniquely determined by initial conditions. We identify those non-equivalent solutions as the result of the various boundary conditions that one can impose at a region in space, such as a singularity or a boundary [4].

Ishibashi and Wald, in [5], argue that Eq. 9 is the only prescription that yields a physically sensible dynamics of scalar fields in non-globally hyperbolic static spacetimes. By comparison with the globally hyperbolic case, they establish a set of conditions that determine whether a time evolution is consistent or not, namely: (i) solutions of the wave equation must be causal; (ii) the prescription for dynamics must be invariant under time translation and reflection; (iii) there must exist a conserved energy functional, also respecting time translation and reflection invariance, in agreement with the globally hyperbolic case; (iv) solutions must satisfy a convergence condition, as proposed in [4].

Boundary conditions at infinity of anti-de Sitter
Let us now consider the Klein-Gordon equation 5 in AdS_n (Eq. 10), where m is replaced by the effective mass m_ξ of the field, defined by m²_ξ = m² − ξ n(n−1) H⁻², and where the angular part is governed by the Laplace-Beltrami operator on the unit (n−2)-sphere, whose eigenfunctions are the generalized spherical harmonic functions Y_l(θ_j, ϕ), with eigenvalues l(l + n − 3).
We may recall that a static slice of AdS_n can be decomposed into a real interval [0, π/2), labeled by the radial coordinate ρ, and an (n−2)-dimensional unit sphere S^{n−2}, parametrized by the angular coordinates θ_j and ϕ. It is also worth pointing out that, as the spacetime is static, there exists a timelike Killing field ∂_t, whose positive-energy eigenfunctions e^{−iωt}, ω > 0, can be used to expand the solution φ. Thus, φ is an eigenfunction of the quadratic operator ∂²_t with eigenvalue −ω². With those considerations in hand, we write the solution as a product of e^{−iωt}, a spherical harmonic Y_l, and a radial function (Eq. 12). Under a suitable rescaling of the radial function (Eq. 13), and omitting the temporal and angular dependence, Eq. 10 reduces to the eigenvalue problem

$$A f = \omega^2 f, \qquad (14)$$

upon the identification [6]

$$A = -\frac{d^2}{d\rho^2} + \frac{\nu^2 - 1/4}{\cos^2\rho} + \frac{\sigma^2 - 1/4}{\sin^2\rho}, \qquad (15)$$

which is a differential operator with domain C₀^∞(0, π/2) defined on the Hilbert space H = L²([0, π/2], dρ). The coefficients of the equation are defined as

$$\nu^2 = H^2 m_\xi^2 + \frac{(n-1)^2}{4} \qquad (16)$$

and

$$\sigma = l + \frac{n-3}{2}. \qquad (17)$$

The coefficient ν is taken to be the positive square root of ν² and depends on the mass and the coupling factor of the field. Under these conditions, there are four relevant cases to be analyzed, namely:
(i) ν² ≥ 1: in this case, the effective mass of the field satisfies the relation H²m²_ξ ≥ −(n+1)(n−3)/4, which comprises the minimally coupled, massless scalar field for n ≥ 3.
(ii) 0 < ν² < 1: in this case, the effective mass squared satisfies −(n−1)²/4 < H²m²_ξ < −(n+1)(n−3)/4.
(iii) ν² = 0: this is the case when the effective mass squared reaches the critical value, namely H²m²_ξ ≡ −(n−1)²/4.
(iv) ν² < 0: in this case, the effective mass squared is lower than the critical value, i.e., H²m²_ξ < −(n−1)²/4.

In [6], the authors examine the positivity of the operator A in terms of ν. They demonstrate that in all cases in which ν² ≥ 0, i.e., in (i), (ii) and (iii), A is a positive operator, while in case (iv) the operator is unbounded below. Consequently, A has no positive self-adjoint extensions in case (iv). On the other hand, at least one self-adjoint extension of A exists, namely the Friedrichs extension [10], in all the other cases: (i), (ii) and (iii).

The square-integrable solutions of Eq. 14 are given by Eq. 19; the other linearly independent solution is never square-integrable, so we neglect it here. According to Eq. 6, to construct the deficiency subspaces N_± we must take ω² = ±λi, so ω ∈ C. Under these conditions, as shown in [6], solution 19 fails to be square-integrable in case (i), i.e., ν ≥ 1; however, for 0 ≤ ν < 1, which corresponds to cases (ii) and (iii), f is square-integrable for all ω ∈ C.

In case (i), the deficiency subspaces are trivial, so n₊ = n₋ = 0, and the operator admits a unique self-adjoint extension. In other words, the repulsive effective potential in A, growing like (cos ρ)⁻² near infinity, prevents the fields from reaching spatial infinity. Hence they vanish there, and no additional boundary conditions are required. Conversely, in cases (ii) and (iii), the deficiency subspaces N_± are each spanned by an eigenfunction f_± of A† with eigenvalue ω² = ±2i. Thus, the deficiency indices in these cases are n₊ = n₋ = 1, so infinitely many positive self-adjoint extensions of A exist. Now the effective potential is not as strong as in case (i), so we may associate the extensions with boundary conditions prescribed at infinity.

A one-parameter family of self-adjoint extensions, A_β, of A therefore exists for 0 ≤ ν² < 1 (cases (ii) and (iii)). Equation 7 provides the appropriate domain of A_β.
Since the domain of A consists of functions in C₀^∞, all the additional information needed to prescribe a physically consistent dynamical evolution must come from the asymptotic behavior of f₊ and Uf₊, for all isometries U. Let U_β denote the isometries between N₊ and N₋, given by

$$U_\beta f_+ = e^{i\beta} f_-, \qquad (20)$$

for β ∈ (−π, π]. Let us consider the function

$$f_\beta = f_+ + U_\beta f_+, \qquad (21)$$

whose behavior near infinity (ρ = π/2) dictates the boundary conditions satisfied by all solutions φ_t of the form 9. For 0 < ν < 1, the asymptotic behavior at ρ = π/2 is

$$f_\beta(\rho) \approx a_\nu\, (\cos\rho)^{\frac{1}{2}-\nu} + b_\nu\, (\cos\rho)^{\frac{1}{2}+\nu}, \qquad (23)$$

where the coefficients of the leading terms, a_ν and b_ν, are functions of ν, σ, the spacetime dimension n and the parameter β. The leading powers of f₊ near infinity are the same, from which we can see that the asymptotic boundary condition depends on the ratio a_ν/b_ν, which may take any real value. For ν = 0 the two exponents coincide and a logarithmic branch appears (Eq. 24); an analogous procedure reveals that the asymptotic boundary condition depends on a₀/b₀ in this case as well. However, the function (sin ρ)^{−σ−1/2} (cos ρ)^{−1/2} f_β and its first derivative in ρ both scale with a₀ when approaching infinity, ρ = π/2. Setting a₀ = 0, we recover the Dirichlet and Neumann boundary conditions imposed simultaneously, which is precisely the Friedrichs extension.

In what follows, we denote the ratio a_ν/b_ν by α_ν, so that all self-adjoint extensions of the operator are parametrized by α instead of β, with α ≡ α(β). From Eq. 23, we can check that fixing the ratio of the two falloff coefficients to α_ν (Eq. 25) is what we identify as generalized Robin boundary conditions for 0 < ν < 1. One recovers generalized Dirichlet or Neumann boundary conditions by setting α_ν equal to 0 or ±∞, respectively. In the particular case ν = 1/2, Eq. 23 reduces to an even simpler form of the boundary condition, given by

$$f\!\left(\tfrac{\pi}{2}\right) + \alpha\, f'\!\left(\tfrac{\pi}{2}\right) = 0, \qquad (26)$$

which is the usual Robin boundary condition, mixing the Dirichlet (α = 0) and Neumann (α = ±∞) conditions. (From here on we exchange the index β for α and, in the case ν = 1/2, drop the index of α_{1/2}, writing simply α.)

Even though the extensions A_α are now parametrized by a real parameter α_ν, not all of them are positive. Except for ν² ≥ 1, whose unique self-adjoint extension is already positive, the remaining cases must satisfy the positivity conditions shown in [6] (Eqs. 27 and 28), which involve the Euler gamma γ and the digamma function ψ. It is worth pointing out that Eqs. 25 and 26 must be satisfied mode by mode: for each spherical label l (and thus, indirectly, for each σ; see Eq. 17), the conditions are satisfied by f_{β,ω,l}. Accordingly, there are infinitely many parameters α_{ν,l}, one associated with each f_{β,ω,l}, and they all satisfy different positivity conditions, given in Eqs. 27 and 28.

Green's functions in AdS
In [12], Allen and Jacobson show that, in a maximally symmetric spacetime, two-point functions such as G_F(x, x′) = −i⟨ψ|T{φ(x)φ(x′)}|ψ⟩, where |ψ⟩ is a maximally symmetric state, may be written in terms of the geodetic interval s(x, x′), i.e.,

$$G_F(x, x') = G_F\big(s(x, x')\big). \qquad (29)$$

(In AdS, s is constructed so that it goes to zero as x′ → x and to infinity as we approach the boundary.) This proposition simplifies the computations considerably, since the wave equation becomes an ODE in the variable s. They also require that the Green's function fall off as fast as possible at spatial infinity, which in AdS translates into G_F → 0 as s → ∞. In other words, they are choosing the Dirichlet boundary condition for the field φ. Kent and Winstanley, in [2], exploit this simplicity to find the fluctuations of the field squared and the energy-momentum tensor in all spacetime dimensions of AdS.
They also verify that their results are compatible with those of Burgess and Lütken, whose approach in [13] was to perform a summation over the modes of the wave solutions. We are not aware of any law of nature that restricts the boundary conditions of all modes to Dirichlet ones. Indeed, Ishibashi and Wald showed in [5] that there is an entire category of boundary conditions that prescribe a physically consistent dynamical evolution. Additionally, there is no guarantee that all modes must satisfy the same boundary condition.

Let us then consider a setup in which one of the modes of the wave equation, u_{ω_α,l_α}, is chosen so that its radial component f_{ω_α,l_α}(ρ) satisfies a generalized Robin boundary condition with parameter α, while the components f_{ω,l}(ρ) of all other modes u_{ω,l}(x) (l ≠ l_α) satisfy Dirichlet boundary conditions. The Green's function in this case is given by a mode sum over the products N_{ω,l} u_{ω,l}(x) u*_{ω,l}(x′) (Eq. 30; from now on, we consider τ > τ′), where the N_{ω,l} are normalization constants. We may complete the sum over the Dirichlet modes by adding and subtracting the corresponding l = l_α Dirichlet term in Eq. 30 (Eq. 31). The resulting G_F may not be a maximally symmetric function, so it seems reasonable to write

$$G_F(x, x') = G_F^{(D)}\big(s(x, x')\big) + G_F^{(\alpha)}(x, x'). \qquad (32)$$

Equation 32 illustrates the breaking of the AdS invariance of the Green's function, as it may no longer depend entirely on the geodetic interval s. We attribute the breaking of the maximal symmetry of G_F to the imposition of different boundary conditions on different angular modes.

Renormalized quantities for a conformal massless scalar field in AdS4
To shed light on what we have discussed so far, we shall specialize to four spacetime dimensions, AdS4. For simplicity in the computation of the quantities of interest, we restrict ourselves to a conformally invariant, massless scalar field φ, i.e., m = 0 and ξ = 1/6. In this case, from Eq. 16 we get ν = 1/2, and from Eq. 17 we find σ = (2l + 1)/2. Equation 14 becomes

$$-f'' + \frac{l(l+1)}{\sin^2\rho}\, f = \omega^2 f, \qquad (33)$$

and its solutions are linear combinations, with constants C₁ and C₂ to be determined, of the associated Legendre functions of the first and second kinds, P and Q (Eq. 34). Square integrability requires f to fall off at the origin ρ = 0, hence C₁ → 0. A complete set of eigenfunctions is then given by Eq. 35, with normalization constants N_{ω,l} to be determined.

As discussed in Sec. 3, boundary conditions at infinity are necessary to prescribe the dynamical evolution of the field in AdS_n; in the case ν = 1/2, the Robin boundary conditions 26 are the appropriate ones. To provide an example of the setups discussed in the last section, we consider that all non-spherically symmetric modes respect Dirichlet boundary conditions, while the l = 0 mode is chosen to satisfy the Robin condition with a parameter α. As discussed above, the vacuum is not AdS invariant in this case; however, since the non-trivial boundary condition is imposed on the l = 0 mode, spherical symmetry is preserved.

Identities from Ref. [14] allow us to describe the behavior of f_{ω,l} and its derivative at the boundary (Eqs. 36 and 37). For l > 0, all modes satisfy f_{ω,l}(ρ → π/2) = 0 (the Dirichlet boundary condition), and their positive quantized frequencies are given by Eq. 38. For l = 0, we take the ratio between the derivative, Eq. 37, and the function, Eq. 36, and use it in Eq. 26; this yields the quantization condition, Eq. 39. Positivity condition 27 then imposes the restriction of Eq. 40. In our analysis, we consider α ≥ 0, which includes the Dirichlet (α = 0) and Neumann (α → ∞) cases.

Equation 39 imposes a quantization condition on the frequencies ω in terms of the parameter α. Except for α = 0 and α = ∞, it cannot be solved analytically for an arbitrary value of α. One can readily verify that in the Neumann case (α → ∞) the frequencies are odd integers, while for Dirichlet they are even integers, consistent with Eq. 38. In our procedure, we employed the software Mathematica [15] to solve Eq. 39 numerically over a given range of ω for several values of α. As shown in Fig. 1, the solutions of Eq. 39 are given by the intersection points between the two functions. The values of ω for arbitrary α always lie between an odd number and the next even integer, which are precisely the frequencies for the Neumann and Dirichlet conditions, respectively. Thus, given a Neumann frequency ω_{N,r} = 2r − 1 and a Dirichlet one ω_{D,r} = 2r, for r > 0, we may denote an α frequency between them by ω_{α,r}, even though it is not an integer.
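The explicit form of Eq. 39 is not reproduced here, so the sketch below uses a stand-in condition, g(ω) = tan(πω/2) + α, chosen only because it reproduces the behavior described above: roots at even integers for α = 0, roots approaching odd integers as α → ∞, and exactly one root in each interval (2r − 1, 2r) for α > 0. The paper's actual condition may differ, but the bracketing-and-root-finding strategy is exactly the intersection-point search of Fig. 1.

```python
import numpy as np
from scipy.optimize import brentq

def robin_condition(omega, alpha):
    """Stand-in for the quantization condition (Eq. 39): roots at even
    integers for alpha = 0 (Dirichlet) and tending to odd integers as
    alpha -> infinity (Neumann), as described in the text."""
    return np.tan(np.pi * omega / 2) + alpha

def frequencies(alpha, r_max=10, pad=1e-6):
    """Bracket one root in each interval (2r-1, 2r), mirroring the
    intersection-point search of Fig. 1."""
    return [brentq(robin_condition, 2*r - 1 + pad, 2*r - pad, args=(alpha,))
            for r in range(1, r_max + 1)]

for alpha in (0.1, 1.0, 1000.0):
    print(alpha, [round(w, 3) for w in frequencies(alpha, r_max=3)])
# Small alpha gives omega_r just below the Dirichlet values 2r;
# large alpha pushes omega_r toward the Neumann values 2r - 1.
```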
One can readily verify that, in the Neumann case (α → ∞), the frequencies are odd integers. Meanwhile, for Dirichlet, they are even integers, which is consistent with Eq. 38. In our procedure, we employed the software Mathematica [15] to solve equation 39 numerically in a determined range of ω for several values of α. As shown in Fig. 1, the solutions of 39 are given by the intersection points between the two functions. We can see that ω values for arbitrary α always lie between an odd number and its next even integer, which are precisely the frequencies for Neumann and Dirichlet conditions, respectively. Thus, given a Neumann frequency, ω N,r = 2r − 1, and a Dirichlet one, ω D,r = 2r, for r > 0, we may denote an α frequency between them as ω α,r , even though it is not an integer number. Quadratic field fluctuations φ 2 Before computing the Green's function, it is useful to write solution f ωα,r,0 in a more convenient form and normalize it accordingly. Using Ref. [14], we find * Now, we recall our discussion from last section to construct the appropriate Green's function. We can decompose our Green's functions in two parts, i.e., and G (D) (43) * For convenience, we change the lower label in f ω,0 from ω α,r to r simply, and add an upper index α to denote our choice of boundary condition. s(x, x )). As Kent and Winstanley show in [2], approaching the coincidence limit s → 0, the function G (D) F diverges according to the Hadamard form. Thus, pointsplitting renormalization can be employed to compute finite quantities. Furthermore, they obtain the Hadamard forms in AdS for any spacetime dimension through a systematic method, based on [16]. In the particular case of AdS 4 , for a conformally invariant field, the Green's function G (D) F has the Hadamard form given by After renormalization, it may be written as [2] G We may find the expectation value of the quadratic field fluctuations as follows which is naturally in accordance with the results in Ref. [2]. Analogously, the effect of our Green's function 42 on φ 2 appears when taking the coincidence limit x → x. However, calculating G (α) F analytically is impossible, since the summation is taken over numerical values of frequencies. Hence, we adopt a numerical approach to find our results. We expect G F . On the other hand, we cannot perform the infinite sum in 42 numerically, so a residual divergent behavior might appear. Through our computations, we noted it was convenient to take the coincidence limit in the radial coordinate first, i.e., ρ → ρ, and then in the time coordinate. Thus, our final step would be to take the limit of τ → τ . It is more convenient though, to analytically extend the function on the complex plane and take the limit through the imaginary axis, i.e., τ → τ + i , hence τ − τ → −i . Finally, by multiplying G (α) F by i, we will have an entirely real-valued function that, in the limit → 0, yields directly the quadratic fluctuations of the field, and it is much simpler for us to handle it numerically. Before implementing the numerical routine, we considered the only case that can be treated analytically, which is the Neumann condition, α → ∞. In this situation, the frequencies are ω ∞,r = 2r − 1, for r > 0, and the Green's function reduces to the following summation iG (∞) which we calculated using Mathematica [15], resulting It is straightforward to find the expectation value φ 2 (N ) by simply taking → 0, i.e., The function φ 2 (N ) is finite because both terms inside the sum in Eq. 47 diverge with same strength. 
Naturally, their subtraction eliminates the infinities. In particular, the last term in Eq. 47, the Dirichlet counterpart of G (α) , appears for all values of α and dictates the divergent behavior at → 0. We find its form by calculating the infinite summation and expanding it in powers of , i.e., Our numerical approach to find the expectation value φ 2 (α) proceeded as follows: (i) Given a value for α, solve Eq. 39 to find the frequencies ω α,r up to r max = 5000; (ii) Given a value of ρ between 0 and π/2, compute numerically the truncated summation for 50 values of equally spaced in the range 0.002 to 0.1. * (iii) Fit the function f (α) ρ [ ] using a model that reproduces the divergent behavior in Eq. 50 followed by a Taylor expansion up to order 2 , i.e., As G (α) F is a finite quantity, we expect the divergent behavior of f (α) ρ [ ] to be extremely attenuated. We have found coefficients b ranging between 10 −9 and 10 −12 , recovering the expected almost-finite behavior. The coefficients c and d were effective on reducing the residuals of the fit. Finally, a gives the approximated finite numerical value of φ 2 (α) at the point ρ. (iv) Repeat steps 2 and 3 for as many values of ρ between 0 and π/2 as desired. (v) Repeat the entire procedure for another value of α. We followed the scheme described above for 14 values for the parameter α. We chose 80 equally spaced points in the range (0, π/2) to obtain a good resolution of the behavior of φ 2 (α) (ρ). Our results are plotted in Fig. 2. The curve corresponding to α = 1000 reproduces almost perfectly the analytic Neumann result 49. Accordingly, as we approach the other extreme, α = 0 -corresponding to Dirichlet conditions -we can see the curves getting closer to zero. Consistently, if α = 0, then G (α) F indeed vanishes, as one can see from Eq. 42. * Our choice for r max and the range of was made so the last term of the sum would be negligible with respect to the first one. Indeed, the first term is of order e −2·1·0.002 ∼ 10 −1 , while the last is e −2·5000·0.002 ∼ 10 −9 . Also, we needed small enough so the divergent behavior would appear. In [2], the authors obtain the renormalized energy-momentum tensor T µ ν ren in AdS n . They use the formula from Ref. [16] T µν ren = −[G] µν + 1 2 and Θ µν is a purely geometric tensor constructed to be conserved. Kent and Winstanley find that the non-geometrical component of the tensor is proportional to the metric tensor, which is completely consistent with the maximal symmetry of AdS. In our particular case of a conformally invariant field in four spacetime dimensions, we have and the geometric tensor Θ µν is identically zero. We may obtain this renormalized expectation value from Green's function G (D) F ren , hence is associated with Dirichlet conditions in all modes of the wave equation. Here, we want the contributions to the energy-momentum tensor coming from G (α) F . Our approach will be analogous to that of the Green's functions: we decompose T µ ν ren into two parts, one carrying the boundary condition, denoted T µ ν (α) ren with 16 components T µ ν , and another one reproducing the Dirichlet results as in Eq. 56. In our case, equations 54 and 55 may be written as and from which it follows that [G](ρ) ≡ φ 2 (α) (ρ). According to formula 53, we have here Considering all non-vanishing Christoffel symbols, the definitions for [G] and [G] µν , and the symmetric condition T µν ren , we readily verify that the only nonvanishing components are diagonal terms and the term T τ ρ (= T ρτ ). 
Let us recall the temporal inversion (τ → −τ) symmetry of AdS, denoted I, given in four dimensions by the transformation matrix I^µ_{µ′} = diag(−1, 1, 1, 1). As none of our quantities depends explicitly on τ, we expect this discrete symmetry to be preserved; in particular, the mixed components T^τ_{x^j}, for x^j = (ρ, θ, ϕ), should be invariant under I. On the other hand, ⟨T^µ_ν⟩^{(α)}_ren transforms as a tensor, T^µ_ν → I^µ_{µ′} I_ν^{ν′} T^{µ′}_{ν′}. That yields T^τ_ρ = −T^τ_ρ, which implies T^τ_ρ = T^ρ_τ ≡ 0.

At this point, we have a diagonal tensor whose remaining components may be calculated using Eq. 59. Our attempts to compute these numerical expressions directly were not successful; however, we found a way forward based on some properties that ⟨T^µ_ν⟩^{(α)}_ren must satisfy by virtue of the definition of ⟨T^µ_ν⟩_ren.

Let us first consider the effect of the trace anomaly. One can readily verify that it is respected by ⟨T^µ_ν⟩^{(D)}_ren [2, 16], so our tensor ⟨T^µ_ν⟩^{(α)}_ren must be traceless (Eq. 62), which is our first constraint on the remaining diagonal components.

We may use the symmetries of AdS as well. Although our Green's function breaks the AdS symmetries involving the radial coordinate ρ, all other symmetries should remain valid. In AdS4 there exist 10 Killing fields, corresponding to the following isometries: one temporal translation, three rotations, three boosts and three spatial translations. Of these, we expect only the first two kinds to be preserved after imposing Robin boundary conditions on only one of the modes. The temporal Killing field, t = ∂_τ, yields a conservation equation along its flow, given by the vanishing of the Lie derivative of the tensor with respect to t, £_t ⟨T^µ_ν⟩^{(α)}_ren = 0, which shows that all components of ⟨T^µ_ν⟩^{(α)}_ren are independent of τ. Additionally, we have the generators of spherical symmetry, the rotation Killing fields χ₁, χ₂ and χ₃. Since a combination of Killing fields is still a Killing field, we may use χ₂ and χ₃ to obtain χ₄ = ∂_θ. We can then use χ₁ and χ₄ to find two other conservation equations similar to that of t. These equations show that ⟨T^µ_ν⟩^{(α)}_ren can be a function of ρ only, i.e., ⟨T^µ_ν⟩^{(α)}_ren = ⟨T^µ_ν⟩^{(α)}_ren(ρ).

Finally, the conservation equation,

$$\nabla_\mu \langle T^\mu{}_\nu \rangle_{ren} = 0, \qquad (69)$$

provides us with the last set of constraints. As ⟨T^µ_ν⟩^{(D)}_ren is proportional to the metric, it is automatically conserved, since ∇_µ g^{µν} = 0. Hence, for ⟨T^µ_ν⟩_ren to be entirely conserved, we must impose Eq. 69 on ⟨T^µ_ν⟩^{(α)}_ren as well, which, using the properties found above, reduces to

$$\partial_\rho T^\rho{}_\rho - \tan\rho\, T^\tau{}_\tau + \big(4\csc(2\rho) + \tan\rho\big)\, T^\rho{}_\rho - 2\csc(2\rho)\big(T^\theta{}_\theta + T^\phi{}_\phi\big) = 0. \qquad (70)$$

Before discussing our numerical approach to the expectation value of the energy-momentum tensor, we treat the case α → ∞, i.e., the Neumann boundary condition; again, we were able to find an analytic result only in this situation. We used Eq. 59 to find the formulas for the components, but our attempts to compute [G]_{ρρ} and [G]_ρ analytically and numerically were not successful. Hence, we adopted another approach, combining the explicit formulas above with the constraints given by Eqs. 62 and 70. Let us conveniently define a function F(ρ) depending exclusively on the quantities we were able to compute, namely [G]_{ττ}(ρ) and [G](ρ) (Eq. 75). Using Eq. 62 and recalling that T^θ_θ = T^ϕ_ϕ, we find a relation among the diagonal components (Eq. 76), and applying it to Eq. 70 we obtain an equation of the form

$$u' + p(\rho)\, u = q(\rho),$$

upon the identifications u ≡ T^τ_τ,

$$p(\rho) = \frac{2\left[9 - \cos(2\rho) + 2\csc^2\rho\right]}{\left(5 - \cos(2\rho)\right)\cot\rho}$$

and

$$q(\rho) = \frac{2\left[\sin(2\rho)\, F'(\rho) + \left(7 - \cos(2\rho)\right) F(\rho)\right]}{\left(5 - \cos(2\rho)\right)\cot\rho}.$$

(The fractions above are reassembled from the flattened source text.)
83, and then using Eqs. 76 and 75, we find the remaining components. Now we have a result to compare our numerical ones with.

To compute the function $\mathcal{F}$ numerically, we used our previous results for $\langle\phi^2\rangle^{(\alpha)}$ ($= [G]$), but we also need $[G]_{\tau\tau}$. According to Eq. 55, we find it by taking the second derivative of $G^{(\alpha)}_F(\tau, \tau', \rho, \rho')$ with respect to $\tau$ and, then, taking the coincidence limit. In the convention we adopted, $\partial_{\tau\tau} = -\partial_{\epsilon\epsilon}$. It is then expected that the divergent behavior is again dictated by the Dirichlet counterpart $G^{(D)}$.

Our numerical procedure to find the expectation value of the energy-momentum tensor fluctuations was: (i) Given a value for $\alpha$, use the frequencies $\omega_{\alpha,r}$ found before; (ii) Given a value of $\rho$ between $0$ and $\pi/2$, compute numerically the truncated summation for $-\partial^2_\epsilon G^{(\alpha)}_F$, denoted $F^{(\alpha)}_\rho[\epsilon]$; (iii) Fit $F^{(\alpha)}_\rho[\epsilon]$ using a model that reproduces the divergent behavior followed by a Taylor expansion up to order $\epsilon^2$. As expected, the divergent behavior of $F^{(\alpha)}_\rho[\epsilon]$ is extremely attenuated, and the coefficient $b$ is negligible compared to the others. Again, the coefficients $c$ and $d$ were effective in reducing the residuals of the fit. Finally, $a$ gives the finite approximate numerical value of $[G]_{\tau\tau}$ at the point $\rho$; (iv) Repeat steps (ii) and (iii) for as many values of $\rho$ between $0$ and $\pi/2$ as desired to obtain the complete $[G]_{\tau\tau}(\rho)$; (v) Use our previous results for $[G]$ together with $[G]_{\tau\tau}$ in Eq. 75 to find a numerical interpolation of $\mathcal{F}(\rho)$, denoted $\mathcal{F}[\rho]$; (vi) Given a value of $\rho$ between $0$ and $\pi/2$, use $\mathcal{F}[\rho]$ in Eq. 83 and perform a numerical integration to obtain an approximate value of $T^\tau{}_\tau$ at that specific $\rho$; (vii) Repeat step (vi) for several values of $\rho$ to find a complete numerical function $T^\tau{}_\tau$. With that in hand, compute $T^\rho{}_\rho$ and $T^\theta{}_\theta$ using equations 76 and 75.

Similarly to our results for the expectation value of the field squared, we followed the numerical procedure for 14 values of $\alpha$. We have found all components of $\langle T^\mu{}_\nu\rangle^{(\alpha)}_{\rm ren}$. In Fig. 3, we can see $T^\tau{}_\tau$ for several values of $\alpha$; it is clear that the form of the function follows the analytic result for the Neumann condition (plotted in gray).

Discussion and further remarks

Avis, Isham, and Storey took a first step, in Ref. [3], towards the development of a quantum field theory in anti-de Sitter spacetime. They acknowledged that the conformal infinity poses a serious causality issue for the wave equation but solved it by regulating the information flow through the boundary 'by hand.' They imposed the so-called 'transparent' and 'reflective' boundary conditions at infinity in analogy to a box in Minkowski spacetime. In this way, they quantized the fields in the Einstein Static Universe and later restricted them to AdS. Conversely, in this article, we considered the developments made by Ishibashi and Wald in [5], where they propose a physically consistent prescription for the dynamical evolution of fields. In the particular case that we have considered, they show that the imposition of mixed boundary conditions at the spatial infinity is sufficient to determine the evolution of quantum fields uniquely. In the setup studied by Kent and Winstanley, in Ref. [2], all angular modes of the wave equation satisfy the same Dirichlet boundary condition at infinity. Their results are consistent with the maximal symmetry of AdS. Hence, the expectation values of
2019-04-24T16:53:46.000Z
2019-04-24T00:00:00.000
{ "year": 2019, "sha1": "994a735e087e0cba598af518009bd6c501d95b79", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10714-020-02672-4.pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "6932da36370930767931346c252168b66958e9ca", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
218641018
pes2o/s2orc
v3-fos-license
Pediatric reporting of genomic results study (PROGRESS): a mixed-methods, longitudinal, observational cohort study protocol to explore disclosure of actionable adult- and pediatric-onset genomic variants to minors and their parents.
Background Exome and genome sequencing are routinely used in clinical care and research. These technologies allow for the detection of pathogenic/likely pathogenic variants in clinically actionable genes. However, fueled in part by a lack of empirical evidence, controversy surrounds the provision of genetic results for adult-onset conditions to minors and their parents. We have designed a mixed-methods, longitudinal cohort study to collect empirical evidence to advance this debate. Methods Pediatric participants in the Geisinger MyCode® Community Health Initiative with available exome sequence data will have their variant files assessed for pathogenic/likely pathogenic variants in 60 genes designated as actionable by MyCode. Eight of these genes are associated with adult-onset conditions (Hereditary Breast and Ovarian Cancer Syndrome (HBOC), Lynch syndrome, MUTYH-associated polyposis, HFE-Associated Hereditary Hemochromatosis), while the remaining genes have pediatric onset. Prior to clinical confirmation of results, pediatric MyCode participants and their parents/legal guardians will be categorized into three study groups: 1) those with an apparent pathogenic/likely pathogenic variant in a gene associated with adult-onset disease, 2) those with an apparent pathogenic/likely pathogenic variant in a gene associated with pediatric-onset disease or with risk reduction interventions that begin in childhood, and 3) those with no apparent genomic result who are sex- and age-matched to Groups 1 and 2. Validated and published quantitative measures, semi-structured interviews, and a review of electronic health record data conducted over a 12-month period following disclosure of results will allow for comparison of psychosocial and behavioral outcomes among parents of minors (ages 0–17) and adolescents (ages 11–17) in each group. Discussion These data will provide guidance about the risks and benefits of informing minors and their family members about clinically actionable, adult-onset genetic conditions and, in turn, help to ensure these patients receive care that promotes physical and psychosocial health. Trial registration ClinicalTrials.gov Identifier: NCT03832985. Registered 6 February 2019
Keywords: Return of genomic results, Genomic medicine, Secondary findings, Pediatrics, BRCA1, BRCA2, Lynch syndrome
Background
Exome and genome sequencing are increasingly integrated into clinical care and research [1][2][3][4][5][6][7], providing an opportunity to examine sequence data for pathogenic/likely pathogenic variants in clinically actionable genes.
However, the potential benefits and harms of returning genetic results to minors (ages 0-17) and their parents/legal guardians (hereafter referred to as "parents") are matters of ongoing controversy - especially returning genetic results for adult-onset conditions that are not clinically actionable in childhood [8,9]. The debate intensified with the 2013 publication of recommendations from the American College of Medical Genetics and Genomics (ACMG) advising that clinicians notify their patients, regardless of age, when a variant known or expected to increase disease risk was identified incidentally through clinical sequencing in one of 56 clinically actionable genes not related to the test indication [10,11]. Examples of clinically actionable conditions included hereditary breast and ovarian cancer (HBOC) syndrome (BRCA1/2), Lynch syndrome (MLH1, MSH2, MSH6 and PMS2), and familial hypercholesterolemia (LDLR, APOB and PCSK9), all of which have Centers for Disease Control and Prevention (CDC) tier-one level of evidence for reducing morbidity and mortality in certain indications [12]. Of the 59 genes currently considered by the ACMG to be sufficiently actionable to merit patient analysis and notification, 52 are associated with conditions that have pediatric onset or initiation of recommended risk-reducing procedures in childhood [10,13]. The remaining seven genes and their three associated conditions - HBOC (BRCA1/2), Lynch syndrome (MLH1, MSH2, MSH6, PMS2), and MUTYH-associated polyposis (MUTYH) - do not typically lead to pediatric onset of disease [10,13], and thus, recommended surveillance and risk-reducing actions are postponed until adulthood [14,15].
Opposition to disclosure of adult-onset, clinically actionable results to minors
ACMG's recommendations and subsequent reaffirmations regarding disclosure of secondary findings regardless of age [10,11,13] contrast with long-standing recommendations and policy statements by professional societies - including the ACMG [16] - to defer clinical testing for adult-onset genetic conditions until minors reach adulthood and can decide for themselves whether to have testing. Professional guidelines recommending against testing for adult-onset genetic conditions are based on expert consensus and are focused on the traditional normative standard - the best interests of the minor - and cite concern about potential harms as well as absence of clear medical benefit in childhood [16][17][18][19][20][21][22][23]. Potential harms and wrongs include psychological impacts such as increased distress, negative impacts on self-image, feelings of guilt or blame towards a family member, and misattributing symptoms to the condition [16,[24][25][26]. Additionally, disclosing an adult-onset genetic result to a minor and their parent could disrupt family relationships through differential treatment by parents (including "vulnerable child syndrome") or increased parental anxiety and/or guilt [18,[24][25][26]. Of further concern are the potential for discrimination by life or disability insurers and stigmatization by peers [24,25]. Finally, some scholars have suggested that childhood testing fails to respect the minor's future autonomy by infringing upon their "right to an open future" in which they can decide for themselves whether or not to be tested [17,18,[24][25][26]. These ethical arguments underpinning the professional guidelines regarding genetic testing in childhood are reviewed extensively elsewhere [18,22,24,25].
Support for disclosure of adult-onset, clinically actionable results to minors
In contrast, authors of the ACMG secondary finding recommendations and other proponents of returning actionable clinical or research findings to all patients, regardless of age, advocate for the broader interests of the family and of the minor to be included in the risk-benefit analysis [27]. For instance, they say, identifying an adult-onset condition in a minor could prompt adult relatives, including parents, to be tested for a potentially life-threatening condition (hereafter referred to as "cascade testing"), thereby protecting the interests of dependent minors [10,28]. Other proposed benefits of disclosing adult-onset genetic results to minors include psychological benefits (e.g., the opportunity to adjust to hereditary disease), the ability to inform life planning (e.g., reproductive decision-making), and positive impact on family relationships (e.g., promotion of realistic parental expectations) [24,26]. Additionally, some argue that disclosing adult-onset, clinically actionable results promotes autonomy, given that parents are best placed to decide what is in their child's best interest [24], adolescents can contribute to informed decision-making [24], and failing to disclose the variant could prevent families from ever knowing their risk and, therefore, could deny the minor the opportunity to know about their risk in adulthood [27,29]. Finally, there could be legal incentives to disclose clinically actionable variants to minors in states where courts recognize the "loss of chance" doctrine [30,31], a medical malpractice doctrine that enables a plaintiff (patient) to bring suit against a defendant (medical provider) whose breach of duty substantially reduced the chance of a more favorable outcome (such as a delayed diagnosis diminishing the chance of recovery from a pre-existing medical condition such as a variant conferring genetic risk). This protocol paper focuses on the research components involving human participants. The PROGRESS study team also will be conducting legal research regarding the loss of chance doctrine that will be discussed separately.
Parent and adolescent stakeholder views
While genetics providers, laboratories, and ethicists have debated disclosure of clinically actionable results to minors and their parents, empirical studies have found interest by parents and adolescents in receiving genetic findings even if the minor's health care is not immediately affected [32][33][34][35][36][37][38][39]. For example, half of a sample of British adults felt that parents should be able to test their children for adult-onset conditions, even while acknowledging the validity of reasons for deferring testing until adulthood (e.g., stigma, fear of discrimination) [38]. Nearly all participants in focus groups of parents of pediatric participants in Geisinger's MyCode® Community Health Initiative wanted Lynch syndrome results for their children, explaining that the importance of these results to their children's future health outweighed the right of minors to make their own testing decisions once they reach adulthood [35]. Adolescents in several studies of stakeholders' views of receiving results from genome-scale sequencing also expressed interest in adult-onset results and in being involved in decision-making about whether to learn these results [33,36,37,40].
Furthermore, student participants in the 2016 American Society of Human Genetics (ASHG) DNA Day Essay Contest were asked to name an adult-onset genetic condition and defend or refute ASHG's 2015 recommendation [17] to defer testing for adult-onset conditions until adulthood. Of the 205 students who wrote about HBOC syndrome, 56% argued for BRCA1/2 testing before adulthood, citing reasons such as prevention and life planning [39]. As Mand et al. [24] note, "[m]ost arguments on both sides are testable empirical claims, so far untested, rather than abstract ethical or philosophical positions." The limited evidence that does exist from minors who underwent genetic testing has not substantiated the negative psychosocial impacts anticipated by those opposed to the return of genetic information prior to adulthood [41,42]. Specific to a clinically actionable, adult-onset condition, one study found that female adolescents (age 11-19 years) from BRCA1/2 families did not differ in their general psychosocial adjustment as compared to girls from breast cancer families without a BRCA1/2 pathogenic/likely pathogenic variant and peers without breast cancer in their family [43]. However, the available evidence concerning minors' psychosocial outcomes after receiving their own genetic results is limited by a general focus on pediatric- rather than adult-onset conditions, methodological differences that hinder comparisons, and a lack of longitudinal follow-up that would facilitate a clear understanding of how adult-onset genetic findings affect minors and their families over time [41,42,44]. There is less evidence still about the optimal way of disclosing adult-onset genetic risks to minors and their parents, should evidence about the risks and benefits of disclosure suggest such a policy.
Methods/design
The Pediatric Reporting of Genomic Results Study (PROGRESS) seeks to determine how best to use genetic information to guide care over the course of a minor's development in ways that maximize the physical and psychosocial health of the minor and their family. Specifically, the study aims to use a mixed-methods, longitudinal, observational cohort study to:
Aim 1: Determine whether anxiety, depression, family functioning, and health-related quality of life differ at 12 months post-disclosure among adolescents (participants ages 11-17), as well as among parents of minors (participants ages 0-17), who: 1) receive an adult-onset result; 2) receive a pediatric-onset result; or 3) do not receive a genetic result.
Aim 2: Assess cascade testing uptake and initiation of risk reduction behaviors among parents from whom the minor inherited their adult- or pediatric-onset genetic variant.
Based on the limited available literature on the effects of informing minors about their genetic condition or their hereditary risk, we hypothesize that there will be no differences in primary psychosocial outcomes in adolescents and parents of minors who receive an adult-onset finding, those who receive a pediatric-onset finding, and those who do not receive a genetic finding.
Geisinger's MyCode® community health initiative
PROGRESS will leverage experience from reporting clinically actionable genetic findings to adults enrolled in Geisinger's MyCode® Community Health Initiative (MyCode).
As described elsewhere [45][46][47], Geisinger's MyCode project was launched in 2007 and serves as a repository of blood, DNA, and serum samples from participants who consent to broad, health-related research use of their samples, including genomic analysis [48]. MyCode is a major resource for research that combines information obtained from DNA and serum with health information from the electronic health record and other sources with the intention of improving the prevention, diagnosis, and treatment of disease [47]. In 2012, MyCode began enrolling minors with parental or legal guardian consent and assent for enrollees age 7-17 years [47]. In 2013, Geisinger began developing a process to return clinically actionable results to adult MyCode participants through the Genomic Screening and Counseling Program (GSCP) [46,49]. This study will augment the existing GSCP to return clinically actionable results to minors and their parents, while collecting data to assess psychological and behavioral outcomes among the participants and their parents who receive a genetic result. Figure 1 summarizes the PROGRESS schema, which was approved by the Geisinger Institutional Review Board (IRB# 2018-0419). PROGRESS will use a mixed-methods, longitudinal, observational cohort study design to compare psychological outcomes and health-related quality of life among three groups of pediatric MyCode participants and their parent(s):
Group definitions
Group 1 - Those with a clinically confirmed, clinically actionable, pathogenic/likely pathogenic variant in a gene associated with one of four adult-onset diseases for which no risk-reducing interventions are available in childhood - HBOC, Lynch syndrome, MUTYH-associated polyposis, and HFE-Associated Hereditary Hemochromatosis.
Group 2 - Those with a clinically confirmed, clinically actionable, pathogenic/likely pathogenic variant in a gene associated with pediatric-onset disease or with adult-onset disease for which risk-reducing interventions begin in childhood - all other ACMG SF v2.0 genes (Additional File 1).
Group 3 - Those who do not have a potential pathogenic/likely pathogenic variant identified, and therefore do not receive a genetic result. Members of this group, who will be frequency matched to Group 1 and 2 participants based on age (+/− 2 years) and sex assigned at birth, will serve as controls to assess outcomes among members of Groups 1 and 2.
Recruitment/enrollment
Variant files from exome sequencing completed through the DiscovEHR collaboration with Regeneron Genetics Center [48] for any pediatric MyCode participants between the ages of 0-17 years will be assessed for pathogenic/likely pathogenic variants in 60 genes designated as actionable by MyCode (Additional File 1) [50]. This gene list includes the ACMG SF v2.0 list as well as biallelic HFE C282Y variants [13] (Additional File 1). Before clinical confirmation of variants in a CLIA-certified laboratory, a list of prospective pediatric participants will be generated. Prospective participants will include minors with a potential pathogenic/likely pathogenic variant (Groups 1 and 2) and age- (+/− 2 years) and biological sex-matched controls without such a variant (Group 3). The study team will mail the parents of these prospective participants a letter describing the study, elements of informed consent, and an opportunity to opt out of additional study contact. Two weeks later, research staff will call those who have not opted out of study contact and offer an in-person visit to discuss the study.
These staff, who will be blinded to potential participants' expected study group, will lead the in-person consent process and obtain written documentation of parental consent. Prospective participants and their parents will be unaware of their potential group status during recruitment and enrollment. Pediatric participants ages 7-17 years will be engaged in the discussion and have the opportunity to provide assent. If an additional sample is required for MyCode study participation or clinical confirmation of a potential pathogenic/likely pathogenic variant [47,49], this will be collected at the time of the study consent/assent visit. At the time of enrollment, study staff will also ask parents for guidance on how to disclose any results to their assenting children (e.g., at the in-person disclosure consultation or at a separate consult). If a minor is unable to assent due to such individual factors as a cognitive impairment, their parent(s) will be asked to consent, and if consent is obtained, the parent(s) will be included in the study. Participating minors who reach the age of majority (18 years) during the study will have the opportunity to participate in an informed consent process at age 18. Participants will be compensated for study participation after each completed quantitative survey. A subset of parents and adolescents will be invited to complete semi-structured interviews and will be compensated further.
Exclusion criteria
Parents who decline participation and/or minors who do not assent and their parents will be excluded from the study. If assent/consent for PROGRESS is not given and the minor is suspected to have an adult-onset result, their sample will be held until the individual reaches 18 years of age and has re-consented to MyCode. If a minor is suspected to have a pediatric-onset result but consent/assent for PROGRESS is not given, their sample will proceed to clinical confirmation and, if confirmed, will follow established return procedures of the MyCode GSCP without further quantitative or qualitative data collection. Minors with an already identified genetic result for one of the 60 genes designated as actionable will be excluded when generating the list of potential participants. Minors who have not undergone genetic testing but have a known family history of a clinically actionable variant in one of the 60 genes will be eligible to participate. Minors who have already undergone exome sequencing on a clinical basis will be excluded from Group 1 or 2 if a variant in one of the 60 genes was identified and will also be excluded from Group 3 in light of their experience with genetic testing and potentially complex medical history.
Sample size
Based on the current and anticipated pediatric participation in MyCode over the course of study enrollment, we estimate that 8500 minors will be eligible for the study. Given the expected rate of individuals with a pathogenic/likely pathogenic variant in one of the target genes - 2.3% of adult MyCode participants sequenced to date [50] - we anticipate 195 pediatric participants will have a genetic result. Of these, 98 are anticipated to be in Group 1 (adult-onset result) and 97 in Group 2 (pediatric-onset result). Based on experience recruiting MyCode participants for additional studies, we estimate that 65% of the families approached will consent to participate (internal data), leaving an estimated 64 minors in Group 1 and 63 in Group 2.
Assuming conservatively that one parent enrolls for each child, we anticipate there will be 64 parents in Group 1 and 63 parents in Group 2. Since roughly one-third of the minors receiving a result will be age 11-17 years, and therefore eligible to contribute to data collection, we anticipate an additional 42 adolescents in Groups 1 and 2 (21 in each group). The eligible pool of pediatric MyCode participants with no genetic results to return will be matched on an age (+/− 2 years) and biological sex distribution to Groups 1 and 2. We will approach 195 parents for inclusion in Group 3 and anticipate 65% to consent for participation (n = 127). We also anticipate an additional 42 adolescent participants in this group, for a total sample size of 254 parents and 84 adolescent participants across all three study groups. For psychosocial outcomes in Aim 1, we have specified a priori each pairwise comparison to be of interest. Therefore, all calculations assume 80% power and a 5% significance level. Using the sample sizes noted above with a 10% dropout, the minimum detectable effect size (change in standard deviation units) for the key quantitative psychosocial outcomes is 0.53, 0.45, and 0.46 for the comparison of Group 1 vs. Group 2, Group 1 vs. Group 3, and Group 2 vs. Group 3, respectively (see the power-calculation sketch below). If we are successful in recruiting a second parent for some of the minors, then we can expect the minimum detectable difference to decrease. These effect sizes are considered moderate in size and are less than the effect size seen in a previous study that used the Hospital Anxiety and Depression scale in a sample of adolescent girls from families with BRCA1/2 variants [43]. The primary outcomes in Aim 2 are cascade testing uptake and initiation of recommended risk reduction. From the literature on cascade testing uptake in male and female first-degree relatives of individuals with a genetic condition [51][52][53], we estimate that approximately 50% of parents will complete cascade testing. To account for the possibility that cascade testing in one parent will spur uptake or negate the need for testing in another, we will incorporate an intra-class correlation value of 0.20. Based on the above sample size estimates, the study will be able to detect a 21% difference in the percentage of parents in Group 1 compared to those in Group 2 who complete cascade testing (e.g., 50% vs. 73%). Based on a previous study among unaffected women with a pathogenic BRCA1/2 variant [54] and assuming that males pursue management behaviors at a similar rate, we estimate that 65% of the parents will initiate risk reduction. Therefore, the study should be able to detect an 18% difference in the percentage of parents in Group 1 compared to those in Group 2 who initiate risk reduction (e.g., 65% vs. 85%).
Clinical confirmation and results disclosure
After consent/assent, DNA samples from participants with a potential pathogenic/likely pathogenic variant will be sent to a CLIA-certified clinical laboratory for confirmation [55]. Parents of minors with a clinically confirmed pathogenic/likely pathogenic result in one of the actionable genes of interest will learn of their child's result during an in-person consultation conducted by a genetic counselor. Whether the minor learns of the result at the same disclosure consult as their parent(s) or during a separate consult will be dictated by the selections that the parent and minor made at the time of enrollment.
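As a cross-check of the sample size calculations above, the quoted minimum detectable effect sizes can be reproduced with a standard two-sample power calculation. A minimal sketch in Python, assuming the protocol's group sizes, a 10% dropout, a two-sided test, 80% power, and a 5% significance level:

```python
from statsmodels.stats.power import TTestIndPower

# Estimated parent enrollments per group, reduced by the assumed 10% dropout.
n_group1 = round(64 * 0.9)   # adult-onset result
n_group2 = round(63 * 0.9)   # pediatric-onset result
n_group3 = round(127 * 0.9)  # no result (controls)

analysis = TTestIndPower()
for label, n1, n2 in [("Group 1 vs 2", n_group1, n_group2),
                      ("Group 1 vs 3", n_group1, n_group3),
                      ("Group 2 vs 3", n_group2, n_group3)]:
    # Solve for the smallest standardized effect detectable at 80% power.
    es = analysis.solve_power(effect_size=None, nobs1=n1, alpha=0.05,
                              power=0.80, ratio=n2 / n1)
    print(f"{label}: minimum detectable effect size = {es:.2f}")
# Expected output is close to the protocol's 0.53, 0.45 and 0.46.
```

The agreement with the protocol's figures suggests the calculation rests on a simple independent-samples comparison; the study's actual analysis, as described below, uses mixed models that exploit the repeated measures.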
Parents of minors whose variants are not confirmed clinically and participants without a pathogenic/likely pathogenic variant (Group 3) will be scheduled for a study visit to notify them of their study group assignment and remind them to follow up with their healthcare providers if they have significant personal or family history of cancer or cardiovascular disease.
Data collection
Data will be gathered via quantitative surveys using validated measures, qualitative interviews with adolescents and parents of minors, and review of electronic health record and testing laboratory data to determine parents' cascade testing uptake and initiation of risk-reducing behaviors (Table 1). Parent-participants will be asked to assess psychosocial outcomes for themselves and for their children. Adolescents will also participate in quantitative surveys and qualitative interviews. Adolescents who are unable to assent due to individual factors will be excluded from quantitative and qualitative measures.
[Table 1, fragment - EHR review rows: cascade testing uptake and initiation of risk management behaviors, both primary outcomes assessed at T4. Footnote: T1 = baseline; T2 = 1-month post-disclosure; T3 = 6-months post-disclosure; T4 = 12-months post-disclosure; P = parent of minor (ages 0-17); A = adolescent (ages 11-17); EHR = electronic health record.]
Quantitative measures
Survey instruments that include published quantitative measures (including those for anxiety/depression, psychological flexibility, family functioning, quality of life, body image, self-esteem, decisional regret, perceived cancer/heart disease risk, genetic counseling satisfaction, health literacy and genomic literacy) will be administered at the time of enrollment (T1, Additional Files 2 and 3), one-month post disclosure/visit (T2, Additional Files 4 and 5), six-months post disclosure/visit (T3), and/or 12-months post disclosure/visit (T4) for all three study groups [43,54,56-72,76]. Additionally, Groups 1 and 2 will complete measures of condition-specific distress, adjustment to genetic information, family communication of genetic test results, and patient education and empowerment one-month post disclosure (T2), six-months post disclosure (T3), and/or 12-months post disclosure (T4) [73][74][75]. Longitudinal evaluation of a subset of these measures will enable exploration of changes over time. Table 1 summarizes the primary outcomes, covariates, and published measures collected in each study group. To ensure a satisfactory response rate, surveys will be offered via multiple modalities, including by phone, internet, and mail. Additionally, parents of minors in Groups 1 and 2 will be surveyed at 12 months post-disclosure (T4) to determine whether parents of minors with a genetic result had cascade testing for the familial gene variant and whether those found to carry the familial variant have performed disease risk management behaviors (e.g., breast MRI for women with a pathogenic BRCA1 variant). The study team will also query electronic health records to capture cascade testing and risk management behaviors among parents and will correspond with the genetic testing laboratory that confirmed the minor's clinically actionable result to verify completion of cascade testing in the family.
Qualitative measures
For Groups 1 and 2, the genetic counselor disclosing results will conduct a psychosocial assessment during the disclosure visit.
Genetic counselors are qualified to conduct psychosocial assessments and provide brief psychosocial counseling [77]. The study clinical psychologists will review the genetic counselor's approach to psychosocial assessments and provide input in accord with the psychologists' expertise. The disclosure will be audio recorded for future qualitative review by the study team. Semi-structured interviews with a subset of up to 45 participants (or until thematic saturation is achieved) will also be conducted by trained research staff using an interpretive phenomenological approach to elucidate the lived experience of adolescents and parents of minors receiving clinically actionable results [78]. Interviews will be conducted using an established interview guide with parents and adolescent participants from each group receiving results (Groups 1 and 2) at one-month (T2) and 12-months (T4) post disclosure (Additional Files 6 and 7). Approximately 15 interviews will be conducted among parents of younger children (age 0-10 years), 15 additional interviews will be conducted with parents of adolescents (age 11-17 years), and a final 15 interviews will be conducted among adolescents. The semi-structured format will enable data collection about preselected constructs for which established measures do not exist (such as "vulnerable child syndrome" and a "right to an open future") while allowing participants to inform the study team of constructs that might not have been considered. Interviews will be conducted throughout the study's duration to allow for assessment of changes in experience that could be related to modifications in practice for the target conditions (e.g., changes in risk management recommendations).
Data analysis
Aim 1: Analyses will focus on understanding if change in the primary and secondary psychosocial outcomes from pre- to post-disclosure differs significantly among groups. The analysis of psychosocial change of the children will employ linear mixed models (LMMs) with random effects to capture correlation due to repeated measures. We will use the parental reporting for this analysis. The model will include random effects for the intercept and slope, and an interaction between the group indicator and time. If, after plotting the data, it is found that the slope of each outcome variable is not linear, then the random slope parameter will be replaced with a categorical, fixed-effects time variable. In either model parameterization, contrasts can be set up to test for change from baseline and compared among groups. A priori it is of interest to compare each group to the others; no post-hoc adjustment will be made. The groups will be compared on baseline covariates. If any are found to vary significantly, then the LMMs will be extended to include the potentially confounding variables. If any of the primary psychosocial outcomes are found to violate the normality assumption, we will consider transforming those variables or using Generalized Estimating Equations (GEEs). As a secondary analysis, we will analyze the responses of adolescents aged 11-17 using the same approaches as above. Additional analyses of the secondary outcomes will use regression models appropriate for a given distribution: LMMs for continuous outcomes, logistic regression for binary/ordinal outcomes, and Poisson regression for discrete counts, all including random effects to capture the within-subject correlation due to repeated measures.
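As an illustration of the Aim 1 specification described above, here is a minimal sketch in Python using statsmodels. The file and column names (subject, group, time, anxiety) are hypothetical placeholders for a long-format dataset; the study's actual analysis would add baseline covariates as described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per assessment (T1-T4).
df = pd.read_csv("progress_outcomes.csv")  # columns: subject, group, time, anxiety

# Linear mixed model: random intercept and slope per subject, with fixed
# effects for group, time, and their interaction (change-over-time by group).
model = smf.mixedlm("anxiety ~ C(group) * time", data=df,
                    groups=df["subject"], re_formula="~time")
result = model.fit()
print(result.summary())
```

If plotting suggests a non-linear trajectory, the random slope can be dropped in favor of a categorical fixed-effects time term (e.g., `C(time)` in the formula), matching the fallback parameterization described above.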
Aim 2: In this aim it is anticipated that any loss to follow-up will have minimal impact on the outcomes, as those data will be obtained from surveys at 12 months post-disclosure and via the electronic health record (EHR). Either self-report of cascade testing uptake or presence of a cascade test result in the EHR will count as evidence of having had cascade testing. Initiation of recommended disease risk reduction, a dichotomous variable, will be calculated for each parent-participant who is found to carry the familial gene variant. As with the assessment of cascade testing uptake, initiation of risk reduction will be determined by parental self-report at 12 months post-disclosure and via query of the Geisinger EHR and of the Keystone Health Information Exchange. Participants will be considered to have performed recommended risk reduction if, at 12 months post-disclosure, they have had any of the risk reduction procedures recommended for individuals with their genetic condition. The analysis will use a random effects logistic regression model for cascade uptake. A random effect for family will be included to account for the inherent correlation of the clustered analysis design that collects data from parents. Comparisons between Groups 1 and 2 for initiation of recommended risk reduction will use a binary logistic regression model. Both models will include a covariate for Group membership. As described above under Aim 1, the models will be extended to include baseline covariates that were found to be different between groups.
Psychosocial support
Given concerns about the potential for adverse psychosocial outcomes of returning adult-onset genetic results to minors [16-18, 22-24, 26], genetic counselors returning results, study staff administering instruments and scoring quantitative measures, and those performing qualitative interviews will notify the study's pediatric clinical psychologists of any clinically relevant scale scores or psychological concerns that arise during data collection and/or results disclosure. Moreover, a clinical psychologist will check in with all participants receiving a genetic result one-month post-disclosure (T2, Groups 1 and 2), will conduct periodic psychosocial assessments with adolescents with an adult-onset genetic result (Group 1), and will schedule separate therapeutic interactions with participants who exhibit clinically significant distress or other psychological outcomes. The study genetic counselor will also contact parents of children and adolescents at one- and six-months post-disclosure (T2, T3) to assess additional informational and support needs. Any unanticipated adverse events will be reported to the IRB, and all adverse events (anticipated or unanticipated, serious or not, related or unrelated) will be reported to the funding agency. Additionally, an external, five-member Event Monitoring Committee (EMC) [79,80] has been convened to provide additional, independent study oversight and protection of the psychosocial wellbeing of pediatric participants. The EMC has multidisciplinary expertise relevant to the study (e.g., experts in adolescent health, bioethics, and pediatric genetics) and will work with the study team to address and prevent adverse events. In an effort to prevent adverse events, the EMC has reviewed study procedures and protocols and will have access to quantitative and qualitative data during the study to identify participant burden and psychosocial concerns.
The EMC also will have the capacity to respond immediately to any serious adverse events, recommend changes to address or mitigate the impact of those events, and identify events that should lead to immediate cessation of the study. The EMC will provide additional, independent oversight to further safeguard pediatric participants' welfare.
Study Status
As of March 16, 2020, 5212 pediatric participants have consented to MyCode and provided a sample for genomic analysis. Of those, 1878 have undergone exome sequencing as part of the DiscovEHR collaboration with Regeneron Genetics Center [48]. Review of research sequence data has shown that seven are eligible to be sent for clinical confirmation of an expected pathogenic/likely pathogenic variant in one of the 60 genes designated as actionable by MyCode. To date, seven parents of minors have been approached for the study; none have consented to participate.
Discussion
Integrating exome and genome sequencing into clinical care and research has resulted in increasing opportunities to examine sequence data for pathogenic/likely pathogenic variants in clinically actionable genes. At present, there is a discrepancy between ACMG's recommendation to return secondary findings without regard to age and various guidelines recommending against testing minors for adult-onset diseases due to concerns about negative impacts. Data are needed to inform this discussion and shape policies, protocols, and clinical care [16,44,81]. This mixed-methods, longitudinal, observational cohort study is designed to address this evidentiary gap. Psychosocial and behavioral data will allow for comparison of outcomes in adolescents and parents of minors who receive an adult-onset result, in those who receive a pediatric-onset result, and in those who do not receive a genetic result. This is the first study of which we are aware that will disclose adult-onset results to minors and their parents and compare outcomes among study groups with and without an adult-onset result in a real-world setting. This will provide several key opportunities to inform the debate regarding the disclosure of these results to minors and their parents through research and clinical testing (e.g., cascade testing, return of variants as secondary findings). First, the study will allow for examination of whether the psychological outcomes of adolescents and parents of minors receiving an adult-onset result through a supportive clinical encounter differ from outcomes among those who receive a pediatric-onset finding or those without a genetic finding. The study has also been designed to collect quantitative and qualitative data longitudinally, thereby allowing nuanced assessment of outcomes that have historically raised concerns among clinicians and ethicists (e.g., parents may treat their children as vulnerable, or actions taken in response to the result may restrict children's life choices). The study also allows us to determine whether returning adult-onset results to minors does, in fact, promote cascade testing among parents and to describe behavioral outcomes among parents. Finally, data collected to address the study's primary outcomes might also enable clinicians and researchers to proactively identify which parents and adolescents may benefit from additional supportive resources when receiving clinically actionable, adult-onset genetic results, should evidence about the risks and benefits of disclosure suggest such a policy. Several limitations are inherent in the study design and population.
The study population corresponds to the local population which, although socioeconomically diverse and geographically rural, is of primarily Northern European ancestry [47]. The age and sex distribution of minors receiving a result will reflect those in which a pathogenic/likely pathogenic variant is identified, and therefore might not mirror MyCode pediatric participants overall. Additionally, primary analyses will be conducted using sex assigned at birth; however, given that gender identity could affect psychosocial outcomes, gender identity will also be collected as part of the study. Although our study will contribute critical data, additional studies will need to replicate findings in other populations to resolve the debate of whether to provide adult-onset genetic findings to minors. Furthermore, the 12-month post-disclosure follow-up for all participants might not provide sufficient time for some of the psychosocial outcomes to manifest. Similar studies with lengthier time frames would provide information about psychosocial impact as younger patients transition to decisional maturity and as older minors transition to adulthood. In sum, the PROGRESS study will compare psychosocial outcomes over time among minors who receive an adult-onset genetic result and their parents, those who receive a pediatric-onset result, and those who do not receive a genetic result. It will also describe cascade testing and risk-reduction behaviors among parents of minors who receive a genetic result. The study will provide much-needed data on the risks and benefits of disclosing genetic results related to adult-onset conditions to minors and their parents, informing policy and practice in this contested area of genomic medicine.
Ethics approval and consent to participate
This project is approved by the Geisinger Institutional Review Board (IRB# 2018-0419). Prospective participants are pediatric enrollees in the MyCode® Community Health Initiative biobank at Geisinger. Parents of prospective participants will receive a letter describing the study, elements of informed consent, and an opportunity to opt out of additional study contact. Two weeks later, research staff will call those who have not opted out of study contact and offer an in-person visit to discuss the study. Parents of pediatric participants in this study will provide written informed consent and pediatric participants age 7-17 years will provide written assent prior to participation.
2020-05-15T12:40:24.978Z
2020-05-15T00:00:00.000
{ "year": 2020, "sha1": "eac1fd88e6adcc23bb7c599a3303422f834ce1c0", "oa_license": "CCBY", "oa_url": "https://bmcpediatr.biomedcentral.com/track/pdf/10.1186/s12887-020-02070-4", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "eac1fd88e6adcc23bb7c599a3303422f834ce1c0", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
55945564
pes2o/s2orc
v3-fos-license
Weather and resource information as tools for dealing with farmer–pastoralist conflicts in the Sahel. Conflicts between pastoralists and farmers in the Sahel mainly arise from competition over land and water resources or because of livestock damage to crops. Rather than being linked to larger environmental change processes such as climate change, conflicts are often caused by inappropriate zoning of land, governance and unequal power relations between stakeholders. However, conflicts may be affected by more short-term weather and resource information that guides the mobility of pastoralists. In this paper, we therefore explore whether improved weather and resource information and improvement in its communication could prevent conflicts or reduce their severity. Based on a survey of key stakeholders involved in dissemination of weather and resource information and studies on pastoral access to and use of information, we conclude that improved information may both reduce and increase the level of conflict, depending on the context. Communication of information will need to go beyond just the weather and resource information and also include the multiple options for herd movements as well as providing information on herd crowding and potential conflict areas.
Introduction
The history of conflicts involving pastoralists and farmers in the West African Sahel is long (Thébaud and Batterbury, 2001; Turner, 2004). Conflicts arise from agricultural encroachment on land and pastures traditionally used by pastoralists, or are associated with livestock damage to crops in rain-fed fields and in irrigated gardens. Moreover, there are also classic conflicts between pastoralists for access to and use of pastures and watering points (wells, boreholes). Especially in the dry season, when prices for accessing water can be high, conflicts may intensify. These conflicts often receive more attention than the well-known symbiotic relationships whereby farmers and pastoralists exchange crop residues and manure. In addition, herders and farmers are very heterogeneous and overlapping categories, both in terms of production systems, social organization and ethnicity. Across the Sahel, livestock is to an increasing extent owned by groups that are not usually considered to be pastoralists, and groups that are traditionally considered to be pastoralists are increasingly becoming involved in farming and other economic activities (Hesse, 2011), adding to the complexity of conflicts (Beeler, 2006).
Numerous explanations of the farmer-pastoralist conflicts have been suggested in the scientific literature (see overviews of this large literature by Hussein et al., 1999; Thébaud and Batterbury, 2001; Turner, 2004; Turner et al., 2011; Benjaminsen, 2016). In recent years, for example, it has been claimed in various political arenas that climate change has caused - or aggravated - conflicts, due to its alleged negative impacts on resource availability. Closer examination of these claims has largely caused them to be refuted, as the root causes of conflicts are found in socio-political events and conditions such as inadequate land policies and rent-seeking (Benjaminsen et al., 2012; Benjaminsen, 2016). Some studies do show that extreme weather events such as droughts - whether triggered by climate change or not - may exacerbate existing conflicts (Raleigh and Kniveton, 2012); but the fact that climate variability is perceived to have a much lower impact on livestock productivity in areas where policies of zoning of pastoral lands were implemented and enforced in the mid-2000s indicates that climate factors are secondary to policy drivers (Mertz et al., 2010). In the study by Mertz et al. (2010), these zoning policies, which typically entail delimitation of areas for pasture and movement of livestock, were mainly seen to have had a positive impact and conflict-mitigating effect; but they have been criticized for being too technocratic and top-down oriented with little pastoral involvement in their conception (Hesse and Thébaud, 2006). In Niger, the limited or inadequate implementation of zoning policies has in some cases had counterproductive effects, creating negative impacts for pastoralists and potentially exacerbating conflicts (Bonnet and Hérault, 2011; Oxby, 2011), as was also experienced in earlier attempts at controlling, for example, wells and boreholes in eastern Niger (Thébaud and Batterbury, 2001).

Whatever the reason behind farmer-pastoralist and pastoralist-pastoralist conflicts, they prevail in many areas in the Sahel. The underlying causes may be social, political or economic, but the direct drivers of specific conflicts are mostly a result of competition for concrete land areas, certain types of vegetation and water resources used for both farming and livestock. As all of these resources are influenced by climate variability, one may hypothesize that better information on the state of and changes in resources, and on the weather patterns that influence them, would be useful for mitigating conflicts, at least in the short term and even if it would only be treating the "symptoms" rather than their root causes.
Hardly any attention has, however, been devoted to what role information about climate, weather, and natural resources might play in conflict resolution. This is surprising, as both pastoralists and farmers have been shown to act upon the information available to them and are indeed able to understand more complex probabilistic forecasts, including the risks associated with following recommendations on, for example, sowing dates and length of the rainy season (Ingram et al., 2002; Roncoli et al., 2009; Rasmussen et al., 2015). A study in Senegal showed, on the other hand, that pastoralists are reluctant to support information sharing about pastures (Kitchell, 2016). Reasons include that pastures become a "common property" and this may compromise pastoralists' priority access to certain areas, potentially creating additional conflict. Yet, this was not found in northern Burkina Faso, where there was a demand for information and criticism was more directed at its value and the forms of communication (Rasmussen et al., 2014, 2015). In any case, when people are faced with increasing climate variability, their actions and management strategies will most likely differ depending on the level of knowledge gained about the weather and the resource availability. The question remains whether this knowledge will mitigate or exacerbate conflicts when decisions about resource use and mobility are made.

In the present paper, we discuss the possible linkages between small-scale, localized but common resource conflicts and various information dissemination systems. For example, we look into dissemination systems based on information from satellite data, traditional forecasting methods, and seasonal forecast models. The question asked is whether such information systems, apart from being useful as a basis for day-to-day decisions, will tend to lessen or increase competition for resources and thus the potential for conflicts. We use the limited existing literature to assess the role of information and complement this with a short questionnaire survey among local government and private stakeholders involved in dissemination of climate, weather, and resource information. The questionnaire survey was distributed to staff from key dissemination institutions in West Africa, including provincial agricultural and meteorological services and radio stations in Burkina Faso, Mali, and Niger. The staff from the government agencies were either regional directors or leaders of so-called "focal points" for weather and resource information dissemination, and the staff from radio stations were either directors or journalists working on popular dissemination of this information. All participants therefore had comprehensive local knowledge of the areas in which they worked within the three countries, but they were mainly policy implementers and agents for transmitting knowledge to the actual land users. The survey was conducted during the workshop on the dissemination of agro-hydro-climatological information to final users in the project Knowledge Based Climate Adaptation in West Africa (original French title: Atelier de Diffusion et de Dissémination de l'Information Agro-Hydro-Climatique aux Usagers Finaux du Projet ACCIC), held in Ouagadougou, Burkina Faso, 3-5 December 2015. A total of 24 participants were asked to complete the questionnaire and of these 16 participants responded anonymously. The questionnaire requested information on their knowledge of cases where weather or resource information had contributed to resolving or
aggravating conflicts, and on their opinion on the role of information as a conflict resolution tool, including how conflict resolution may take place. In addition to the survey, notes were taken of discussions during the workshop to capture more nuanced details in views on the utility of various types of information and the potential of different dissemination systems. This highlighted considerable differences in knowledge of pastoral strategies, which - not surprisingly - were most well known among participants from the driest regions, e.g., eastern Niger and northern Burkina Faso.

Before moving to the results of the survey, we start by identifying the information needs of pastoralists - the potential users of weather and resource information - as they have been largely neglected as recipients of such information (Rasmussen et al., 2014, 2015). We then discuss implications of the results for farmer-pastoralist conflict resolution and development of appropriate information systems in the Sahel.
Information needs of pastoralists
Pastoral societies still rely to a large extent on traditional agricultural and livestock production methods even though the sector, to an increasing extent, has become a supplier of meat to the coastal regions of West Africa, and thus partly commercialized. As pastoralists are becoming sedentary in many parts of the Sahel, such as the Ferlo of northern Senegal, the competition for land and resources in nearby areas gets more pronounced because pastoralists still rely on varying degrees and types of herd mobility (Adriansen and Nielsen, 2005). Ensuring appropriate and efficient mobility of livestock is thus the key element for which pastoralists need information about the state of and expected changes in climate, weather, and resources. Rasmussen et al. (2014) discuss the demand for information among pastoralists on the basis of field work in northern Burkina Faso and find that pastoralists seek information that would facilitate more informed decision making on herd management. Such information includes where to locate the herd in order for it to thrive and make the best of current - and expected future - vegetation and water resources, as well as information on markets for selling livestock and purchasing feed and veterinary services, especially if there are expectations of insufficient future availability of pastures and water.

The basis upon which these decisions are taken by pastoralists includes experience from the past, pastoralists' own observations (e.g., signs indicating the arrival of the monsoon) and information from family members, friends or hired scouts on vegetation and water resources - as well as prices - often conveyed by mobile phone (Rasmussen et al., 2015; Kitchell, 2016). These traditional information systems are now being complemented by satellite-based information on weather and resource availability, but the role of these new technologies - as well as the full potential of mobile phone technologies - in this decision making process and for preventing or resolving conflicts has yet to be fully explored.
Information on climate variability and seasonal forecasts
Weather patterns and climate variability are important for the availability of vegetation and water resources, and improvements in this information could potentially be beneficial for pastoralists (Rasmussen et al., 2015; Kitchell, 2016). The long-term effects of climate change, which are likely to include increasing temperatures and fewer but more violent rainfall events (Niang et al., 2014), will of course be relevant for the future survival of pastoralism and farming (Lambin et al., 2014), especially if the observed trends in rainfall anomalies in August, a crucial month for crops and vegetation in general, continue (Mertz et al., 2012; Nicholson, 2013). However, short-term seasonal forecasts are more useful for farmers and pastoralists. Since 1998, the Climate Outlook Forum PRESAO (Prévisions Saisonnières en Afrique de l'Ouest) has created seasonal rainfall forecasts (Tarhule and Lamb, 2003; Patt et al., 2007) and, although these have been criticized for their lack of reliability (Fraser et al., 2014), significant advances in the understanding of the West African weather systems have paved the way for better forecasts (Polcher et al., 2011). Such forecasts are mostly seen as an input to farmers' choices of which fields to cultivate and which crops or crop varieties to cultivate. Although farmers, as mentioned above, may use seasonal forecasts rationally, relatively few farmers actually use them (Ingram et al., 2002, 2008; Roncoli et al., 2009; Roudier et al., 2014), probably because of the inaccessibility of the information. The forecasts are therefore mostly used for national planning purposes and early warnings of crop failure. Analogously, pastoralists' use of seasonal forecasts appears very limited in the Sahel (Rasmussen et al., 2014).
Information on vegetation resources
Vegetation information may be provided by field observation or by satellite-based remote sensing. Obviously, pastoralists themselves monitor vegetation resources and share this information, often using mobile phones, but this information is limited in spatial extent and completeness. A number of methods for satellite-based monitoring of vegetation productivity in the Sahel have been developed and could be potentially useful for pastoralists. The current standard methodology is based on analysis of time series of coarse resolution satellite images, mostly from NOAA AVHRR, SPOT Vegetation and MODIS, using the normalized difference vegetation index (NDVI) as a proxy for vegetation productivity. Mbow et al. (2013) show that NDVI is sensitive to the species composition, limiting its precision for assessment of fodder production. While in cropped areas the summed NDVI is correlated to crop yield and therefore useful in early warning systems of crop failure, outputs from such monitoring systems are of limited value to pastoralists. For pastoralists the end of the rainy season is the most critical period of the year, as they must make decisions on herd location, selling of livestock, splitting of herds etc. based on information on dry season fodder resources. Unfortunately, satellite-based methods for providing information on available non-green fodder resources in near real time and with the necessary spatial detail are not presently operational, and a suitable method for distributing such information would have to be developed.
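To illustrate the standard NDVI methodology described above, the sketch below computes NDVI and the season-summed NDVI from red and near-infrared reflectances in Python; the numerical series is synthetic and merely stands in for MODIS- or SPOT-Vegetation-type time series over a single pixel.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Synthetic stand-in for a dekadal reflectance time series over one pixel
# (a real application would read satellite bands for the whole region).
nir_series = np.array([0.30, 0.35, 0.45, 0.50, 0.42, 0.33])
red_series = np.array([0.20, 0.18, 0.12, 0.10, 0.14, 0.19])

ndvi_series = ndvi(nir_series, red_series)

# Season-summed NDVI, the usual proxy for vegetation productivity / crop yield.
sum_ndvi = ndvi_series.sum()
print(ndvi_series.round(2), "summed NDVI:", round(sum_ndvi, 2))
```

As Mbow et al. (2013) note, such greenness indices are sensitive to species composition and say little about the non-green dry-season fodder that matters most to pastoralists, which is precisely the gap identified above.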
Information on water resources

Pastoralists also need real-time information on water availability for day-to-day decisions, especially during the dry season when ponds and lakes progressively dry out and only water from wells and boreholes is available. This can be provided by remote sensing methods that use high-resolution satellite images to monitor the gradual drying out of surface water resources. However, as wells and boreholes are not always operational, especially those operated by pumps that require maintenance, information on access, availability, and price of water is also crucial, and this is rarely collected and broadcast widely enough. A "pastoral decision support system" would ideally integrate such information, including information on the physical availability and management of the water resources.

Other information types: herd location and markets

Herding decisions are not only based on meteorological information and information on the availability of resources. Rather, the decisions also involve knowledge of - or expectations of - the competition for these resources from herds other than one's own. Such information is not publicly available and is therefore obtained through informal networks of family and friends, mostly by mobile phone. Moreover, as pastoralism becomes increasingly commercialized, decisions are to a greater extent guided by economic criteria, e.g., livestock prices and prices of supplementary feed. Such market information is nowadays available to a substantial proportion of pastoralists through the same informal networks based on mobile phones, which are gaining increasing importance as the key distribution method.

Communication of information to pastoralists: new communication technology

Since satellite-based crop and vegetation monitoring was first introduced in the 1980s, the information, e.g., in the form of NDVI maps, has been presented to end users such as pastoralists by radio and television broadcasts. Obviously, the impact on pastoralist decision making was quite limited (Rasmussen et al., 2014). The main users of previous dissemination efforts, such as the Famine Early Warning Systems Network (FEWS NET), were primarily government agencies and international donors involved in food relief (Boyd et al., 2013). As mobile phones have become widespread in West Africa, information distribution - and especially the speed of distribution - has been transformed, and this should be accounted for in new strategies for disseminating weather and resource information, especially for pastoralists, who rely on mobile phones more than any other sector in rural West Africa (Rasmussen et al., 2015). While smartphone technology may provide a promising avenue for delivering spatially detailed information, its use may be limited in the Sahel. Presenting information to pastoralists who are illiterate and do not have full command of national languages requires careful consideration in order to avoid misinterpretation and inequality in access to the information; the pastoralists themselves use almost exclusively oral communication, and services that employ voice messages in local languages are therefore by far the most promising (Rasmussen et al., 2015). Moreover, the use of smartphones rather than traditional mobile phones will demand more frequent charging, which might prove difficult in remote pastoral communities unless smartphone use is supported by the development of solar-panel-based chargers and/or by battery charging becoming a widely available commercial service.
The rapid expansion of mobile phone use among pastoralists also provides a basis for systematically crowd-sourcing localized information and feeding it back to the information service providers, e.g., as discussed by Muller et al. (2015). So far, however, there is limited experience with this in Africa.

Conflicts and the role of weather and resource information

As mentioned above, very few studies have explored whether weather and resource information can be used as a tool for resolving conflicts, or indeed whether better availability of this information may aggravate conflicts.

Results of survey with dissemination stakeholders

The 16 respondents from the workshop on dissemination provided a somewhat diverse picture of the role of information in conflicts. Three respondents were not aware of concrete cases where weather or resource information had played a role in conflict resolution or aggravation, but the remaining 13 provided a total of eight combinations of information types and conflict outcomes (Fig. 1). Most respondents provided cases where information resolved conflicts, which may not be so surprising given the role these agencies play in disseminating this type of information. However, there were exceptions, and these were particularly related to information on water and vegetation resources that could lead to aggravation of conflicts. The cases described were quite diverse and, in the words of respondents, included the following:

Biomass and water information to pastoralists will make them move to favorable areas, provoking conflicts both with farmers and other pastoralists. This is caused by a lack of areas for free passage of cattle and by competition for water in wells.

Too early movement of animals both north to south and south to north caused conflicts in transition zones.

These are thus cases both of correct information that led to clashes between farmers and pastoralists as well as among pastoralists - since favorable areas had not been adequately zoned to receive such a large number of pastoralists - and of wrong information to farmers that led to cultivation in areas less suitable for cultivation but where livestock would graze during the rainy season.

Interestingly, the survey revealed a new type of conflict arising from information dissemination: conflicts between farmers and institutions. It was, for example, expressed that "flooding forecasts led farmers to sow on higher and more dry lands and dry spells then caused yields to decline. This caused the farmers to criticize the meteorological department". Besides lower yields, the expansion into drylands also led to disruption of livestock corridors. This statement highlights the importance of communicating the uncertainty attached to the information: uncertain information clearly leaves great room for misunderstanding and miscommunication of risks, which can have huge repercussions for pastoralists' and farmers' livelihoods.

The larger number of responses related to positive impacts of information on conflict resolution was illustrated by explanations such as the following:

Information given on reduced water level in a dam allowed farmers and pastoralists to agree on the water management and use in the dam.

[Agro-meteorological] information helps pastoralists choose itineraries that avoid newly sown areas by farmers and helps farmers avoid planting in livestock corridors.
Information on the timing of retracting waters in Lake Chad gives pastoralists the option to avoid islands where farmers start cultivating.

Moreover, when asked whether improved information on weather and resources could in certain contexts assist in conflict resolution, all respondents who provided answers said yes. They illustrated their answers partly with their comments to the previous questions but also elaborated the following:

If information is given so that pastoralists have a variety of options, then they can plan and diversify their movements to avoid all going to the same places.

Pastoralists need to have their own information dissemination system improved through proper participation in information system development.

Continued information on, and zoning of, pastures, livestock corridors, and watering areas is needed to avoid further conflicts.

Improvement of the use of mobile phones and other new technologies accessible to pastoralists.

Systematic feedback of information needs to information providers is necessary for the systems to get better.

There was thus a strong emphasis on developing information systems that build on traditional ways of communicating information, that ensure the participation of pastoralists in their conception, and that feed actual on-site information on resources and weather back to improve the information provided. The use of mobile phone technologies was not seen as an obstacle at all, since mobile phones have already been appropriated by pastoralists.

There was, among representatives from the radio stations, a strong and not surprising emphasis on the use of radio transmissions as a way to disseminate information and thus also to contribute to the prevention of conflicts. However, with the exception of Mali, where radio broadcasts were mentioned to have alleviated concrete conflicts, it was not possible to establish whether radio has been successful in addressing this issue.

Perspectives for weather and resource information to contribute to resolving conflict

The participants in the workshop all agreed during discussions that there is a need to improve both the quality of information and how it is disseminated, as conflicts that could possibly have been avoided still occur. It is thus evident that farmers and pastoralists in the Sahel make decisions on their use of natural resources on the basis of incomplete information, both about current conditions, e.g., the spatial distribution of resources, and about probabilities of future events, e.g., the rainfall in the coming rainy season or next year's livestock prices. In this section we therefore discuss the possible consequences of making information on current and future resources more tailored to the needs of pastoralists as a user group, including how it may influence the occurrence of conflicts involving pastoralists.

A promising option is to produce real-time, spatially explicit information on the availability of fodder and water resources (particularly in the dry season) and distribute it to pastoralists, e.g., in graphical form by smartphone or, more appropriately given the limited use of smartphones so far, as voice messages in local languages via an automated phone service, as suggested by Rasmussen et al. (2015).
This would require investments and partnerships with the private telecommunication sector, but given that sector's success in developing affordable mobile services in Africa, it appears a feasible proposition. Access to and prices of water are also important for decision making, and information on all these elements would most likely affect decisions concerning the location (and possibly splitting) of herds. This would reduce the probability of making inappropriate short-term decisions that might cause increased livestock mortality, economic losses, and conflict with farmers and other pastoralists (Hesse, 2011).

Conflicts may arise in a situation where all pastoralists have identical, real-time information about where vegetation and water resources are currently available and about the access to and price of water resources. As mentioned by the workshop respondents, this may lead many pastoralists to pursue similar strategies, potentially increasing the risk of overuse and subsequent resource depletion and conflict if they all descend on the same areas. However, it may also allow pastoralists to learn about more options than they otherwise would have had, and thus contribute to spreading herds more widely and lowering the pressure in each area. The question is thus whether a structured satellite-based information system could provide, if not better, then more information across larger areas than the traditional systems, or whether it would just result in more people hearing about a limited number of favorable areas, creating more crowding than before. Or perhaps the traditional pastoral information systems mentioned earlier already capture all available resources, and a new system would make no difference. The only way to answer these questions is, of course, to increase the knowledge base on the information-conflict link between farmers and pastoralists. Moreover, attention will also have to be given to potential conflicts between information providers and their users, as increased information flows may also result in increased incidences of wrong or inadequate information, as in the flood forecasting example above. The probabilistic nature of the information needs to be very carefully explained to recipients, as too frequent losses of livestock or crops will undermine trust in the system and could escalate into conflicts if damages are severe.
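To make the idea of a "pastoral decision support system" from the previous sections concrete, the sketch below shows one kind of query such a voice-service backend might answer: given a caller's reported position, list the nearest operational water points. This is a hypothetical illustration, not a description of any existing Sahelian service; all names, coordinates, and prices are invented.

```python
import math

# Hypothetical registry of water points; in a real service this would be
# fed by crowd-sourced status reports and remote sensing of surface water.
WATER_POINTS = [
    # (name, lat, lon, operational, price per 20 L in CFA francs)
    ("Borehole A", 15.62, -13.25, True, 25),
    ("Pond B",     15.80, -13.10, False, 0),   # dried out
    ("Well C",     15.55, -13.40, True, 10),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_water(lat, lon, n=2):
    """Return the n nearest operational water points to the caller,
    as (name, distance in km, water price) tuples."""
    usable = [w for w in WATER_POINTS if w[3]]
    usable.sort(key=lambda w: haversine_km(lat, lon, w[1], w[2]))
    return [(w[0], round(haversine_km(lat, lon, w[1], w[2]), 1), w[4])
            for w in usable[:n]]

# A caller reporting position 15.60 N, 13.30 W; in the envisaged service
# the result would be read out as a voice message in the local language.
print(nearest_water(15.60, -13.30))
```

Whether wide access to such identical information reduces crowding or worsens it is, as argued above, precisely the empirical question that remains open.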
As mentioned in the introduction, it is clear that better information alone will not be enough to solve conflicts between farmers and pastoralists and among pastoralists. The underlying causes of conflicts are most often related to land policies (Benjaminsen et al., 2012; Benjaminsen, 2016), and implementation and enforcement of pastoral land zoning has been proposed as a way to reduce the number of conflicts, provided it clarifies land uses for all groups (Mertz et al., 2010; Hesse, 2011). While such land use policies have been implemented in many of the silvo-pastoral zones of the Sahel, they are much less prevalent in the semi-arid and sub-humid zones dominated by farming, and it is often in these areas that conflicts arise when pastoralists search for dry-season pastures and water resources. Moreover, there are, unfortunately, also many examples of how inadequate or limited implementation of such policies - however well intended they are - leads to more conflict or other negative outcomes (Thébaud and Batterbury, 2001; Bonnet and Hérault, 2011; Oxby, 2011). Pastoralists and pastoral organizations need to be sufficiently empowered to influence land use policies. Top-down technocratic approaches do not facilitate such empowerment (Hesse and Thébaud, 2006), and national and local governments will therefore have to truly engage in dialogue with farmers and pastoralists to ensure their involvement and participation. One important first step could be to ensure equitable access to information for both farmers and pastoralists. In particular, more tailored weather and resource information could play a role in raising the general information level of pastoralists and placing them in a stronger position to argue for their rights to traditional pastures in predominantly agricultural zones.

Conclusions

For centuries the Sahel has been a scene of fierce competition for land and natural resources, both among pastoralists and between pastoralists and farmers. The great variability in time and space of resource availability requires pastoralists to make decisions on the basis of incomplete information, sometimes with negative outcomes. The use of modern technologies - such as satellite-based Earth observation to collect, and mobile phones to distribute, information on weather, climate variability, vegetation, and water resources - is promising for reducing the conflicts that arise over land and access to pasture and water. However, more information may also lead to increased conflict in some cases if it is not managed or communicated in a way that avoids too many herds descending on areas of limited size.

Future information systems should not only entail actual improvements in access to real-time, spatially explicit weather and resource information. They should also integrate elements such as identification of areas with potential herd crowding, and in general be developed with the participation of pastoral communities in order to better target the most pressing needs.

The present paper arrives at these conclusions based on a small survey of stakeholders and a review of the literature. Hence, there is certainly a strong need for studies that take a more systematic look at how weather and resource information intersects with conflict mitigation. Such studies should aim to improve the understanding of the direct linkages be-